{"text": "Conserved protein sequence motifs are short stretches of amino acid sequence patterns that potentially encode the function of proteins. Several sequence pattern searching algorithms and programs exist for identifying candidate protein motifs at the whole genome level. However, a much needed and important task is to determine the functions of the newly identified protein motifs. The Gene Ontology (GO) project is an endeavor to annotate the function of genes or protein sequences with terms from a dynamic, controlled vocabulary, and these annotations serve well as a knowledge base. This paper presents methods to mine the GO knowledge base and use the association between the GO terms assigned to a sequence and the motifs matched by the same sequence as evidence for predicting the functions of novel protein motifs automatically. The task of assigning GO terms to protein motifs is viewed as both a binary classification and an information retrieval problem, where PROSITE motifs are used as samples for model training and functional prediction. The mutual information of a motif and a GO term association is found to be a very useful feature. We take advantage of the known motifs to train a logistic regression classifier, which allows us to combine mutual information with other frequency-based features and obtain a probability of correct association. The trained logistic regression model has intuitively meaningful and logically plausible parameter values, and performs very well empirically according to our evaluation criteria. In this research, different methods for automatic annotation of protein motifs have been investigated. Empirical results demonstrate that the methods have great potential for detecting and augmenting information about the functions of newly discovered candidate protein motifs. 
With the completion of many genome sequencing projects and advances in the methods of automatic discovery of sequence patterns, determining the functions of newly identified motifs has become an important task. The basic approach for determining the function of a motif is to study all the sequences that contain the motif (pattern). Intuitively, if all the functional aspects of the sequences matching a motif are known, we should be able to learn which function is most likely encoded by the motif, based on the assumption that every protein function is encoded by an underlying motif. This means that we would need a knowledge base of protein sequences, in which the functions of a sequence are annotated in as much detail as possible. In addition, we would also need prediction methods that can work on a given set of protein sequences and their functional descriptions to reliably attribute one of the functions to the motif that matches these sequences. To determine the function of any novel motif, we would first search the protein knowledge base to retrieve all the functional descriptions of the proteins containing the motif, and then use such prediction methods to decide which function is encoded by the motif. In this research, we use the Gene Ontology database as our protein knowledge base and explore statistical methods that can learn to automatically assign biological functions (in the form of GO terms) to a protein motif. However, such a database usually returns more than one GO term that may or may not describe the function of the motif in the query. Thus, we need methods to disambiguate which GO term describes the function of the motif (assign a GO term to a motif) and to determine how confident we are in the assignment. We use statistical approaches to learn from known examples and cast the disambiguation task as a classification problem. 
Furthermore, the probability output by the classifier can be used to represent its confidence in the assignment. Our approach is based on the observation that the Gene Ontology database contains protein sequences and the GO terms associated with the sequences. In addition, the database also contains information on known protein motifs, e.g. the PROSITE patterns that match the sequences. Thus, the protein sequences in the database provide a sample of potential associations of GO terms with motifs, among which some are correct and some are not. This provides us with an opportunity to perform supervised learning to identify discriminative features and use these features to predict whether a new association is correct or not. The current Gene Ontology database is implemented with a relational database system, which allows one to perform queries like \"retrieve all GO terms associated with the sequences that match a given motif\". Earlier work ranked such candidate terms by a p-value. The authors found that, in the database they worked with, most sequences only had one functional GO term. Therefore, they could assign the GO term of a sequence to the motif that matched with the highest score with fairly good accuracy. However, due to the restrictive assumption that each sequence has only one GO term, their approach cannot address the potential problem that a sequence matching a motif has multiple associated GO terms, which is now the common case, nor how to resolve such ambiguity. Recently, Schug et al published related work. We use the May 2002 release of the Gene Ontology sequence database (available online). Using the information stored in the Gene Ontology and PROSITE, we manually judged a set of 1,602 cases of distinct PROSITE-GO associations to determine whether each association is correct or not. The PROSITE-GO association set has been judged in two different ways. One way is to label an association as correct if and only if the definition of the GO term and the PROSITE motif match perfectly according to the annotator. 
Gene Ontology has the structure of a directed acyclic graph (DAG) to reflect the relations among the terms. Most terms (nodes in the graph) have parent, sibling and child terms to reflect the relation of \"belonging to\" or \"subfamily\". The second way of judging GO-PROSITE associations is to label an association as correct if the GO term and the PROSITE motif are either an exact match or the definitions of the GO term and the PROSITE motif are within one level of difference in the graph, i.e., they have either a parent-child relation or a sibling relation according to the GO structure. Thus we have two sets of labeled PROSITE-GO associations, the perfect match set and the relaxed match set (with neighbors). Both sets are further randomly divided into training (1128 distinct associations) and test (474 distinct associations) sets. Since the test sample size is fairly large, the variance of the prediction accuracy can be expected to be small. Thus we have not considered any alternative split of training and test sets. If all the sequences that match a motif are assigned a term T, and none of the sequences that do not match the motif is assigned the term T, then it is very likely that the motif pattern is encoding the function described by term T. Of course, this is only an ideal situation; in reality, we may see that most, but not all, of the proteins matching a motif pattern would be assigned the same term, and also some proteins that do not match the motif may have the same term. Thus, we want to have a quantitative measure of such correlation between GO terms and motif patterns. Intuitively, we may think of the GO terms assigned to a protein as one description of the function of the protein in one language (human understandable), while the motifs contained in the protein sequence are another description of the same function in a different language. We would like to discover the \"translation rules\" between these two languages. 
Looking at a large number of annotated sequences, we hope to find which terms tend to co-occur with a given motif pattern. A commonly used association measure is mutual information (M.I.), which measures the correlation between two discrete random variables X and Y. It basically compares the observed joint distribution p(X = x, Y = y) with the expected joint distribution under the hypothesis that X and Y are independent, which is given by p(X = x)p(Y = y). A larger mutual information indicates a stronger association between X and Y, and I = 0 if and only if X and Y are independent. For our purpose, we regard the assignment of a term T to a sequence and the matching of a sequence with a motif M as two binary random variables. The involved probabilities can then be empirically estimated based on the number of sequences matching motif M (NM), the number of sequences assigned term T (NT), the number of sequences both matching M and assigned T (NT-M), and the total number of sequences in the database. We set out to test whether we can use mutual information as a criterion to assign a GO term to a PROSITE motif. One approach is to define a simple decision rule: assign term T to motif M if and only if I \u2265 c for some cutoff c. For a given cutoff c, the precision of term assignment is defined as the ratio of the number of correct assignments to the total number of assignments according to the cutoff c. However, it is difficult to choose a single cutoff value most likely to be appropriate for every motif. To address this problem, we can use a different cutoff strategy and adopt a decision rule that assigns a GO term to a motif based on the ranking of mutual information, which is a common technique used in information retrieval and text categorization. More specifically, for each motif M in the annotated data set, all observed motif-term associations containing M are retrieved and ranked according to mutual information; the term that has the highest mutual information is then assigned to M. 
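As an illustration of how these count-based estimates combine into a mutual information score, the following Python sketch computes I for the two binary variables from NT-M, NT, NM and the database size (function and variable names are our own, not from the paper):

```python
import math

def mutual_information(n_tm, n_t, n_m, n_total):
    """Mutual information (in nats) between two binary variables:
    T = 'sequence is assigned term T', M = 'sequence matches motif M'.
    Counts: n_tm = sequences with both, n_t = sequences with T,
    n_m = sequences with M, n_total = all sequences in the database."""
    mi = 0.0
    for t in (0, 1):
        for m in (0, 1):
            # joint count for this (t, m) cell of the 2x2 table
            if t == 1 and m == 1:
                joint = n_tm
            elif t == 1:
                joint = n_t - n_tm
            elif m == 1:
                joint = n_m - n_tm
            else:
                joint = n_total - n_t - n_m + n_tm
            if joint == 0:
                continue  # 0 * log(...) contributes 0 by convention
            p_joint = joint / n_total
            p_t = (n_t if t else n_total - n_t) / n_total
            p_m = (n_m if m else n_total - n_m) / n_total
            mi += p_joint * math.log(p_joint / (p_t * p_m))
    return mi
```

With 100 sequences, a motif and a term that always co-occur (n_tm = n_t = n_m = 50) give the maximal value log 2, while n_tm = 25 (exactly what independence predicts) gives 0.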
Alternatively, if we use this approach to facilitate human annotation, we can relax the rule to include GO terms that have lower ranks, thus allowing multiple potential GO terms to be assigned to a motif, assuming that a human annotator would be able to further decide which is correct. In this method, the key in making a decision is to select a cutoff rank that covers as many correct associations as possible (high sensitivity) while retrieving as few incorrect associations as possible (high specificity). The optimal cutoff can be determined by the desired utility function. However, a drawback of such an approach is that, given a motif, sometimes many observed motif-term associations can have mutual information above the cutoff value, making it difficult to decide which pair is correct; in other cases, the mutual information of the observed motif-term pairs may all be below the cutoff value, but we still would like to predict which terms are most plausible. While the mutual information measure appears to give reasonable results, there are three motivations for exploring more sophisticated methods. First, the mutual information value is only meaningful when we compare two candidate terms for a given motif pattern; it is hard to interpret the absolute value. While a user can empirically tune the cutoff based on some utility preferences, it would be highly desirable to attach some kind of confidence value or probability of correctness to all the potential candidate motif-term associations. Second, there may be other features that can also help predict the function (term) for a motif. We hope that the additional features may help a classifier to further separate correct motif-term assignments from wrong ones. Third, there exist many motifs with known functions, and it is desirable to take advantage of such information to help predict the functions of unknown motifs. This means that we need methods that can learn from such information. 
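The rank-based decision rule, with an adjustable rank cutoff for human-in-the-loop annotation, can be sketched as follows (the data layout and names are illustrative assumptions):

```python
def assign_terms_by_rank(scored_associations, rank_cutoff=1):
    """Rank-based decision rule: for each motif, rank its observed
    motif-term associations by mutual information and assign the top
    `rank_cutoff` terms. Input: {motif: [(term, mi), ...]}.
    Returns {motif: [term, ...]} ordered from strongest association."""
    assignments = {}
    for motif, term_scores in scored_associations.items():
        # sort candidate terms by mutual information, highest first
        ranked = sorted(term_scores, key=lambda ts: ts[1], reverse=True)
        assignments[motif] = [term for term, _ in ranked[:rank_cutoff]]
    return assignments
```

With rank_cutoff=1 this reproduces the "assign the top-ranked term" rule; a larger cutoff trades specificity for sensitivity, leaving the final choice to a human annotator.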
In this section, we show that the use of logistic regression can help achieve all three goals. Specifically, we use logistic regression to combine the mutual information with other features, and produce a probability of correct assignment. The motifs with known functions serve as training examples that are needed for estimating the parameters of the regression function. We now discuss the features to be used in logistic regression, in addition to the mutual information discussed in the previous section. The goal is to identify a set of features that is helpful for determining whether the association of any pair of a GO term and a motif is correct or not, without requiring specific information regarding the function of the GO term and motif. For a distinct motif-term pair, we collect the following frequency-based features: (1) The number of sequences in which the GO term (T) and PROSITE motif (M) co-occur (NT-M). (2) The number of sequences in which T occurs (NT). (3) The number of sequences in which M occurs (NM). (4) The number of distinct GO terms (G) seen associated with M (NG|M). (5) The number of distinct PROSITE patterns (P) seen associated with T (NP|T). In addition, we also consider, as a feature, the similarity of the sequences that support a motif-term pair. Intuitively, if a motif is conserved among a set of diverse sequences, it is more likely that the motif is used as a building block in proteins with different functions. Thus, the average pair-wise sequence similarity of the sequence set can potentially be used as a heuristic feature in the logistic regression classifier. Given a set of sequences, we use a BLAST search engine to perform pair-wise sequence comparisons. We devised a metric AvgS to measure the averaged pair-wise sequence similarity per 100 amino acids (see Methods) and use it as an input feature for the classifier. The feature vector thus has k = 7 components: NT-M, NT, NM, NG|M, NP|T, AvgS, and M.I. 
To cast the prediction problem as a binary classification problem, we augment our data set of motif-term pairs with a class label variable Y. Suppose we have observed n motif-term pairs; then we have n samples (xi, yi), i = 1, 2, ..., n, where yi is the correctness label and xi is the feature vector for the corresponding motif-term pair. Our goal is to train a classifier which, when given a motif-term pair and feature vector X, would output a label Y with value 1 or 0. Alternatively, we can also consider building a classifier which outputs a probability that Y = 1 instead of a deterministic label. Thus, our task is now precisely a typical supervised learning problem, and many supervised learning techniques can potentially be applied. Here, we choose to use logistic regression as our classification model because it has a sound statistical foundation, gives us a probability of correct assignment, and can combine our features naturally without any further transformation. In order to build a model with only the truly discriminative features, it is common practice to perform feature selection for logistic regression. We use a combined forward and backward feature selection algorithm. Starting from the intercept, we sequentially add features into the model and test whether the log-likelihood increases significantly; we keep the current feature if it does. After the forward selection, we sequentially drop features from the model to see whether dropping a feature would significantly reduce the log-likelihood of the model; if it does, we keep the feature in the model, otherwise we exclude it. To test the significance, we use the likelihood ratio statistic G = 2[l(D|\u03b2f) - l(D|\u03b2f-)], where l(D|\u03b2f) and l(D|\u03b2f-) are the log-likelihoods of the model with feature f and the model without feature f, respectively. Since we add or drop one feature at a time, G follows a \u03c72 distribution with one degree of freedom. We use a p-value of 0.1 as the significance threshold. 
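A minimal sketch of combined forward/backward selection driven by the likelihood-ratio statistic G = 2[l(D|\u03b2f) - l(D|\u03b2f-)]; the critical value 2.706 corresponds to p = 0.1 for a \u03c72 distribution with one degree of freedom. The `loglik_of` callback is an assumption standing in for the actual model fitting: it fits a logistic regression on the given feature subset and returns its log-likelihood.

```python
def select_features(candidates, loglik_of, critical=2.706):
    """Combined forward/backward feature selection using the
    likelihood-ratio test G = 2 * (ll_larger - ll_smaller), compared
    against the chi-square(1) critical value (2.706 at p = 0.1).
    `loglik_of(features)` is assumed to return the log-likelihood of a
    logistic regression model fitted with those features."""
    selected = []
    ll_current = loglik_of(selected)  # intercept-only model
    # forward pass: keep a feature if adding it raises the
    # log-likelihood significantly
    for f in candidates:
        ll_new = loglik_of(selected + [f])
        if 2.0 * (ll_new - ll_current) > critical:
            selected.append(f)
            ll_current = ll_new
    # backward pass: drop a feature if removing it does NOT reduce
    # the log-likelihood significantly
    for f in list(selected):
        reduced = [g for g in selected if g != f]
        ll_reduced = loglik_of(reduced)
        if 2.0 * (ll_current - ll_reduced) <= critical:
            selected = reduced
            ll_current = ll_reduced
    return selected
```

Because one feature is added or dropped at a time, each test compares nested models differing by a single parameter, which is what justifies the one-degree-of-freedom \u03c72 reference distribution.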
After fitting the model using the training set, we tested the model on the test set, i.e., we used the model to compute p(Yi = 1|Xi) for each test case. As the output of the logistic regression is the conditional probability that an association of a GO term with a given motif is correct, we need to decide the cutoff threshold for making a decision. We calculated the sensitivity and specificity for thresholds from 0.1 to 0.9 with a step of 0.1 and plotted the resulting ROC curves. At high thresholds only a few cases are predicted as Y = 1; thus, a small change in the number of cases introduces a large change in percentage. For example, when the threshold is set at 0.9, only three cases are covered by the rule and two of them are correct, so the percent correct drops to 66%. To see whether the additional features are useful, we also performed ROC analysis using different mutual information cutoff thresholds on the perfect match test set. We see that using mutual information alone performs almost as well as logistic regression with additional features. However, its area under the curve (0.816) is smaller than that of logistic regression (0.875), indicating that logistic regression does take advantage of other features and has more discriminative power than mutual information alone. The coefficients \u03b21, \u03b22 and \u03b23 for the three features NT-M, NT and NM, which are also involved in the calculation of mutual information, have a very interesting interpretation \u2013 they indicate that the roles of these three variables in the logistic regression model are actually to compensate for the effect of mutual information. Indeed, according to the formula of mutual information, a strong correlation corresponds to a high NT-M, low NT, and low NM, but the fitted coefficients point in the opposite direction. Consider, for example, a term that occurs only once (NT = 1) with respect to any pattern matched by the sequence to which the term is assigned. 
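The threshold sweep behind this ROC analysis can be sketched as below; `predictions` pairs each test case's predicted probability p(Y = 1|X) with its true correctness label (the data format is our assumption):

```python
def roc_points(predictions, thresholds):
    """Sensitivity and specificity of the rule 'predict correct if
    p >= threshold', swept over a list of thresholds.
    `predictions` is a list of (probability, true_label) pairs with
    true_label in {0, 1}. Returns [(threshold, sens, spec), ...]."""
    points = []
    for c in thresholds:
        tp = sum(1 for p, y in predictions if p >= c and y == 1)
        fn = sum(1 for p, y in predictions if p < c and y == 1)
        tn = sum(1 for p, y in predictions if p < c and y == 0)
        fp = sum(1 for p, y in predictions if p >= c and y == 0)
        # sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        points.append((c, sens, spec))
    return points
```

Plotting sensitivity against (1 - specificity) over the swept thresholds yields the ROC curve; the instability at high thresholds arises simply because very few cases satisfy p >= 0.9.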
But, intuitively, one occurrence is very weak evidence, and at least should be regarded as weaker than when we have a term occurring 10 times in total and co-occurring 9 times with the same motif. The key issue here is that mutual information only reflects the correlation between variables, but does not take into account the strength of evidence, and therefore tends to over-favor the situation where there is a perfect correlation but very little evidence. The number of sequences in which the co-occurrence happens, which is called the \"support\" for the association, is also very important. The coefficients for NG|M and NP|T are also meaningful. Their negative signs indicate that the more terms a motif co-occurs with, or the more motifs a term co-occurs with, the less likely a particular association is correct. This also makes sense intuitively, since all those co-occurring terms can be regarded as \"competing\" candidate descriptions of the motif's function; the more terms a motif is associated with, the stronger the competition, and thus the smaller the chance that any particular term is a correct description of the function. Thus, the logistic regression model not only performs well in terms of prediction accuracy but also gives meaningful and logically plausible coefficient values. In this paper, we explore the use of the Gene Ontology knowledge base to predict the functions of protein motifs. We find that mutual information can be used as an important feature to capture the association between a motif and a GO term. Evaluation indicates that, even used alone, mutual information could be useful for ranking terms for any given motif. We further use logistic regression to combine mutual information with several other statistical features and to learn a probabilistic classifier from a set of motifs with known functions. 
Our evaluation shows that, with the addition of new features and with the extra information provided by the motifs with known functions, logistic regression can perform better than using the mutual information alone. This is encouraging, as it shows that we can potentially learn from the motifs with known functions to better predict the functions of unknown motifs. This means that our prediction algorithm can be expected to improve further as we accumulate more and more known motifs. Although we have so far only tested our methods on known motifs, which is necessary for the purpose of evaluation, the method is most useful for predicting the functions of new and unknown motifs. For future work, we can build a motif function prediction system and apply our algorithm to many candidate new motifs, e.g., those discovered using TEIRESIAS, SPLASH or other programs. This would further enable us to perform data mining from the Gene Ontology database in several ways. For example, if we hypothesize the functions of a large number of novel motifs probabilistically, then we will be able to answer a query such as \"find the five patterns that are most likely associated with the GO term tyrosine kinase\". This is potentially very useful because it is not uncommon that substantial knowledge about the functions and sub-cellular location of a given protein is available even though a structural explanation for the functions remains obscure. On the other hand, we believe that our methods will facilitate identifying potentially biologically meaningful patterns among the millions of patterns returned by pattern searching programs. A sequence pattern that associates with a certain GO term with high M.I. or probability is more likely to be a meaningful pattern than one with low scores. Furthermore, our methods can also be used in automatic annotation of novel protein sequences, as suggested in Schug et al and Rigoutsos et al [13,22]. 
Having stated the potential uses of our approaches, we also realize that there exist some limitations to our methods. For example, in order to predict the function of a newly identified sequence pattern correctly, we would require the functional annotations of the sequences in the GO database to be complete and accurate, which may not always be the case. In this paper, we mainly used the motifs with known function to evaluate the capability of the methods developed in this research. Our results show that the methods work well with known sequence patterns. Currently, the annotation of motif function with GO terms is carried out manually at the European Bioinformatics Institute (the GOA project). Such an approach is warranted because human annotation is more accurate than automatic annotation. However, as the amount of information regarding protein functions accumulates and a large number of new potential motifs are discovered, it will be very labor intensive to annotate the potential associations of protein functions and protein patterns. By then, the methods studied in this research will potentially prove useful for discovering the underlying protein motifs that are responsible for newly annotated functions. For example, the methods can be used as a prescreening step to narrow down to the most probable associations of protein functions and motifs, thus facilitating human annotation. In summary, we have developed methods that disambiguate the associations between Gene Ontology terms and protein motifs. 
These methods can be used to mine the knowledge contained in the Gene Ontology database to predict the function of novel motifs, discover the basis of a molecular function at the primary sequence level, and automatically annotate the function of novel proteins. Mutual information is defined as follows: I(X; Y) = \u03a3x \u03a3y p(X = x, Y = y) log [p(X = x, Y = y) / (p(X = x)p(Y = y))], in which the probabilities p(X = x, Y = y), p(X = x) and p(Y = y) can be empirically estimated from the data by counting occurrence/co-occurrence followed by normalization. The sensitivity and specificity of the rules are calculated as sensitivity = TP/(TP + FN) and specificity = TN/(TN + FP), where TP (True Positive) is the number of associations labeled as correct among the retrieved motif-term pairs meeting the ranking cutoff criteria, FN (False Negative) is the number of associations labeled as correct but not retrieved, TN (True Negative) is the number of associations labeled as incorrect and not retrieved, and FP (False Positive) is the number of associations labeled as incorrect but retrieved. The average pair-wise sequence similarity per 100 amino acids (AvgS) of a sequence set is calculated from all pair-wise BLAST comparisons; the formula uses a delta function which equals 1 if i = j and 0 otherwise. The logistic regression model is a conditional model that assumes the following linear relationship between p(Y = 1|X) and X1, ..., Xk: log [p(Y = 1|X) / (1 - p(Y = 1|X))] = \u03b20 + \u03b21X1 + ... + \u03b2kXk, where \u03b2 = (\u03b20, \u03b21, ..., \u03b2k) is the parameter vector. We can fit the logistic regression model using the Maximum Likelihood method \u2013 essentially setting the parameters to values at which the likelihood of the observed data is maximized. In our experiments, we use the iteratively reweighted least squares (IRLS) algorithm to fit the model."}
{"text": "The Mammalian Phenotype (MP) Ontology enables robust annotation of mammalian phenotypes in the context of mutations, quantitative trait loci and strains that are used as models of human biology and disease. 
The MP Ontology supports different levels and richness of phenotypic knowledge and flexible annotations to individual genotypes. It continues to develop dynamically via collaborative input from research groups, mutagenesis consortia, and biological domain experts. The MP Ontology is currently used by the Mouse Genome Database and Rat Genome Database to represent phenotypic data. Mammalian phenotypes are complex and the term itself is imprecise. Generally, we use the word phenotype to refer to the appearance or manifestation of a set of traits in an individual that result from the combined action and interaction of genotype and environment. Because the mouse is the premier model organism for the study of human biology and disease, the goal of comparative phenotyping and building new animal models through genetic engineering holds great promise. The mouse has distinct advantages for studies that translate to humans. It is a small, short-lived mammal with a fully sequenced genome in which all life stages can be accessed, and for which myriad tools are available for precise experimental manipulation of its genome. Further, the large collection of inbred strains of mice and the controlled environment in which the animals live provide the ability to confirm phenotype observations and to systematically perturb environmental factors and genetic input to measure effects under defined conditions. Current international efforts aim to 'make a mutation' in every gene through mutagenesis and genetic engineering. Mammalian phenotypes are frequently genetically complex. Mutation of even a single gene almost always produces pleiotropic effects. Conversely, non-allelic mutations can produce indistinguishable phenotypes. Modifier genes and epistatic interactions can markedly alter the phenotype. Combining different allelic combinations of different genes can produce unique phenotypes not found in the single-gene mutation genotypes. Imprinting of genes can dramatically affect phenotype. 
Mutations expressed in different inbred strains of mice can manifest as an increase or decrease of severity or penetrance of the corresponding phenotype. Quantitative trait loci (QTL) can contribute in complex nonlinear ways to the phenotype. In addition, mutations that are 'genomic' in nature, either disrupting or deleting multiple genes or occurring in intergenic regions, can produce distinct phenotypes and challenge us to think beyond gene effects to genomic effects. The outcome of these complex interactions can be dissected and reproducibly examined by characterizing inbred strains that represent the combined phenotype of the 'whole-genome' genotype in its environmental context. The Mouse Genome Database (MGD) at the Mouse Genome Informatics website serves as a primary community resource for these data. Written descriptions of phenotypes in higher organisms reflect the complexity of the subject, the richness of language, and the phenomenal diversity that these data represent. While text descriptions are commonly used in publications describing phenotype, and have been the basis of electronically accessible phenotypic descriptions, free-text records are poorly suited to consistent searching and computational comparison; consider the example in the accompanying table. A further detriment to database text records is the difficulty of updating and maintaining them. As new information is learned about a phenotypic mutant, the record must be continually rewritten. Although this practice might be sustained for a small number of records, it does not scale when thousands of mutant records are considered. The alternative of simply adding another paragraph to existing text records becomes confusing, with potentially conflicting information and different writing styles appearing in one textual description, and unwieldy, with more and more text that may no longer represent a logical synthesis. Formal nomenclatures for genes, mutant alleles and inbred strains of mice have existed since the 1940s [9]. 
Beyond nomenclatures, which are key to object identities and relationships, are vocabularies that can be used to describe broader concepts and categorizations. Vocabularies can take many forms, including simple lists of controlled terms, such as the cytogenetic band designations used to name the bands defined by chromosome staining, or the classes of genetic markers, such as gene, pseudogene, expressed sequence tag (EST), and so forth. The annotation of complex biological data and concepts requires more than lists and simple vocabularies. Ontologies, or 'descriptions of what there is', contain both concepts, with precise meanings, and relationships among those concepts. As such, ontologies are able to support descriptions of complex biology and are useful in making these data more amenable to computational analyses. The first widely used ontology developed and adopted in the biological domain is the Gene Ontology (GO) [13]. Although the need for vocabularies as key components of consistent phenotype annotations for mammals has been recognized for some time, and many groups have contributed toward this goal, a comprehensive mammalian phenotype ontology has been lacking. Our goal is to describe the richness of phenotypes as precisely as they are known, recognizing that phenotype data are by nature complex and usually incomplete. Taking advantage of the structural properties of a DAG, we have the ability to annotate phenotypes to the level of data resolution available, whether general or very specific, and the ability to query with a high-level term, returning all phenotypes containing annotations to that term or to terms more specific than the query term. Thus, one can query for 'respiratory signs/symptoms' and retrieve all phenotypes annotated to this term and its hierarchical 'children', or specifically request annotations to any of these sub-terms. The top-level terms of the MP Ontology include physiological systems, behavior, developmental phenotypes and survival/aging. 
Physiological systems branch into morphological and physiological phenotypes at the level immediately below. A browser to view the ontology is available online, as shown in the figure. Each MP Ontology term has a unique identifier, a definition and synonyms. In the term detail pages, these data and the number of hierarchical paths of the vocabulary where the term appears are displayed. A plus sign following a term indicates that children of the term exist. Displayed next to the term is a link indicating the number of annotation instances in MGD using this term or children of this term. This feature, due to be publicly available in early 2005, will greatly improve phenotype-centric searching in MGD. To initiate the vocabulary, we first developed a high-level categorization of phenotypes consisting of approximately 100 terms, such as heart/cardiovascular dysmorphology and skeletal axial defects. As we used this list for annotations, terms were refined and general organizing principles for the MP vocabulary were developed. An important component of our approach has been to address two practical implementation questions. From the biologist's perspective, the question is what term would be used to describe a specific phenotypic trait. From the curation perspective, we ask what terms reflect biological reality and maximize curator productivity. From a purely ontological perspective, every trait could be broken down into a core object, such as 'cornea' or 'gastrulation', defined by anatomical, behavioral or physiological terms, and a series of attribute vocabularies that describe the quality, quantity and character of the trait. For the practical reasons of needing robust terms to describe phenotypes up-front to speed curation, and of losing biological meaning, particularly for clinical or dysmorphology terms, when terms are completely deconstructed, we have chosen to use compound terms in the MP Ontology. 
A few examples of terms where it is difficult to preserve the full biological meaning once they are deconstructed are shown in Table . Three major strategies are being pursued to further develop the vocabulary itself. First and most important is the ongoing process of curating phenotype data. As new phenotypic traits are described and published, the need for new terms is recognized. New terms added in this way may be a simple addition to an existing hierarchical path or may result in the addition of entire new branches in the hierarchy. Second, collaborative efforts between the MGD phenotype curators, the mouse mutagenesis centers and the rat genetics community identify new specific terms and suggest improved organization of terms within particular hierarchical branches. Third, we are recruiting individuals with expertise in specific biological domains to review and evaluate sections of the vocabulary for accuracy, completeness and systematic arrangement. The MP Ontology is a work in progress and remains incomplete in some areas. We welcome the participation of the mammalian research community so that the most useful, definitive and universally applicable terms will be included. Information can be obtained by sending e-mail to pheno@informatics.jax.org. While common pathological and clinical terms are used in the MP Ontology, considerations for term placement within the structure and for precise terminology are often derived from comparison with other open biological ontologies (OBO). The MP Ontology was built as a DAG using the DAG-Edit software written by John Richter and Suzanna Lewis. Phenotypes are described in the MGD relative to the genotype of the individual. Genotype objects specifically consist of one or more allele pairs describing mutations or QTLs and the genetic background strain(s) where the phenotype was observed.
Each phenotype annotation associates an MP Ontology term with a genotype/strain and the reference or data source supporting this assertion. Additional modifying text may be annotated to describe detail that is not easily standardized. Examples include experimental conditions, age of onset and incidence, and trait penetrance, among others. The annotation note may also include specifics of the phenotype where such details are deemed too case-specific to be an MP term. In addition, genotypes are associated with OMIM where a particular mouse genotype is a model for human diseases and syndromes. The MP Ontology and annotation schema were designed to minimize curatorial time, yet remain precise enough to describe phenotypic data. They support robust phenotypic annotations and querying capabilities for mouse phenotype data. While this vocabulary is far from complete, we have designed strategies for its continued development as a collaborative effort supporting the representation of existing mutations and those that continue to be created. As of 1 November 2004, over 11,150 phenotypic alleles representing mutations in 5,214 unique genes had been catalogued in MGD. For these alleles, 9,696 genotype records exist, with 21,556 phenotypic annotation instances. The MP Ontology is also used in phenotypic data annotations at the RGD.

The function of a novel gene product is typically predicted by transitive assignment of annotation from similar sequences. We describe a novel method, GOtcha, for predicting gene product function by annotation with Gene Ontology (GO) terms. GOtcha predicts GO term associations with term-specific probability (P-score) measures of confidence. Term-specific probabilities are a novel feature of GOtcha and allow the identification of conflicts or uncertainty in annotation.
The GOtcha method was applied to the recently sequenced genome of Plasmodium falciparum and six other genomes. GOtcha was compared quantitatively for retrieval of assigned GO terms against direct transitive assignment from the highest scoring annotated BLAST search hit (TOPBLAST). GOtcha exploits information deep into the 'twilight zone' of similarity search matches, making use of much information that is otherwise discarded by more simplistic approaches. At a P-score cutoff of 50%, GOtcha provided 60% better recovery of annotation terms and 20% higher selectivity than annotation with TOPBLAST at an E-value cutoff of 10^-4. The GOtcha method is a useful tool for genome annotators. It has identified both errors and omissions in the original Plasmodium falciparum annotation and is being adopted by many other genome sequencing projects.

It is now often possible to obtain the complete genome sequence of an organism in a few months, but without a directed approach, determining the function of potential gene products through biological experimentation is inefficient. Accordingly, methods for function prediction are required to direct experiments in function verification. In the context of this paper the term function is used to refer to all aspects of a gene product's behaviour. This includes the concepts described by the Gene Ontology classifications for Molecular Function, Biological Process and Cellular Component. It is explicitly stated in the text when a more specific interpretation of function is intended. A powerful tool in the annotation of novel genomes is the prediction of function by similarity to a sequence of known function. Such 'transitive function assignment' can work very well where there is a clear match to a homologue with a well-established function. However, accurate functional assignment is difficult in cases where the match is less well defined, either due to lower sequence similarity or the presence of many candidates with differing functions.
Gerlt and Babbitt have reviewed these difficulties. Keywords and restricted vocabularies do not solve the problem of conflicting assignments. Unless some computable form of relationship is present between terms, it is not possible to provide any automated form of conflict resolution between terms or to identify computationally where one term is a more specific descriptor than another. An ontology represented as a graph can provide a solution to this problem. Ontologies are restricted vocabularies, or sets of terms, where each term is explicitly related to parent terms and child terms (and hence to sibling terms). The Gene Ontology (GO) is such an ontology. The availability of the Gene Ontology has provided, for the first time, a broadly accepted classification system for function assignment that can be analysed computationally. Previous work using other classification schemes, such as restricted vocabularies based on SwissProt keywords, suffered because of the lack of a distinct relationship between terms and/or due to typographical differences. Xie and coworkers have also developed methods for automated function annotation. Two tools based on BLAST searches have recently been presented in the literature, OntoBlast and GOblet, which make use of the E-value of the pairwise match. In this paper we present a novel method, GOtcha, that can be applied to any database search technique that returns scored matches. We have initially implemented this with BLAST searches and extend the analysis from the similarity match scores for a search in order to provide an empirical estimate of the confidence in each predicted function. We have applied this method to Malaria (Plasmodium falciparum) and six other genomes. The assessment of the global accuracy of a particular annotation method is extremely problematic in the absence of a computable annotation scheme. Gene Ontology provides such a computable scheme and we present here a quantitative measure for comparison of function annotations based on assignment to GO terms.
This provides a metric for direct, objective comparison of annotation methods that is independent of arbitrary cut-off values. The new accuracy measure encompasses true positives, false positives and false negatives, thus combining sensitivity and selectivity in one value. Two sets of annotation predictions were determined for each data set in the study. One was based on all available GO annotations and the other on a reduced set of GO annotations that excluded gene-associations with the evidence code IEA (Inferred from Electronic Annotation). IEA annotations are usually considered to be less reliable as they have not been assessed by a human curator. In contrast, ISS annotations (Inferred from Sequence Similarity) are annotations which, whilst being derived electronically, have been assessed by a human curator and can be considered sufficiently reliable. IEA annotations may however give a broader coverage than non-IEA annotations. On average, each dataset contained slightly more than 50% IEA annotations, though the vast majority of the sequences had some non-IEA annotation. The number of sequences for each dataset is listed in Table . [Figures: the y-axis indicates the proportion of annotations provided by the genome project (given annotations) that were recovered by either GOtcha or TOPBLAST; for GOtcha the x-axis represents the minimum P-score, where a low P-score represents low confidence and a high P-score high confidence in the annotation; for TOPBLAST the x-axis represents the maximum E-value, where a low E-value represents high confidence and a high E-value low confidence.] This E-value cutoff is at the top end of the E-values between 10^-4 and 10^-20 typically used as a threshold for confident function assignment.
In Figure , the curve approaches the maximum relatively quickly when moving from high P-score to low P-score, typically coming very close to the total number of sequences annotated well before the P-score has dropped to 50%. This represents a broad coverage of sequence space, assigning annotation at a relatively nonspecific level to most sequences. In terms of the total number of annotations, these rise steadily as the P-score cutoff drops. At very low P-scores (below 10%) the total number of annotations increases rapidly, indicating an increase in the spectrum of functions matched with only weak similarity. The number of annotations per sequence increases gradually as the P-score drops until a rapid rise at low P-scores. The corresponding figure for TOPBLAST plots annotations with an E-value below the cutoff on the x-axis. At an E-value of 10^-4 TOPBLAST shows a selectivity of 53.4% (43-60%, s.d. 5.7). Accordingly, GOtcha outperforms TOPBLAST with improved coverage and better selectivity for each genome examined. Both the GOtcha and the TOPBLAST analyses include gene associations that are children of obsolete (GO:0008369) and the three 'unknowns'. The obsolete terms comprise a very small proportion (mean 1.5%, range 0-3.1%, s.d. 1.1) of the total number of annotations recovered by GOtcha. Function assignment was repeated using the same BLAST search results but excluding the IEA-coded gene-associations. The number of annotations per sequence was reduced by comparison to the data for the full annotation set. GOtcha with a P-score cutoff of 50% shows a selectivity of 60% (35-79%, s.d. 14). TOPBLAST with an E-value cutoff of 10^-4 shows a selectivity of 49% (25-59%, s.d. 11). In all cases except that of Arabidopsis, GOtcha shows a clear improvement over TOPBLAST, with a mean improvement in selectivity of 1.2-fold (0.85-1.4, s.d. 0.17). One issue with excluding IEA annotations is that the coverage of functions in the genome is lowered.
This inevitably leads to a higher number of true positives being incorrectly assigned as false positives as a result of the incomplete sequence annotations. Despite excluding terms for which there is no annotation to the ontology under examination, the results are skewed by assigning a proportion of true positives as false positives. This indicates that the method is performing more poorly than is in fact the case. We have examined the nature of the false positives in more detail below. Comparing function assignment methods is difficult. Typically the standard against which they are assessed is an incompletely annotated dataset. Both a lack of experimental data confirming potential functions and a lack of knowledge about potential functions can lead to the standard data being less perfectly annotated than would be desired. It is not realistically possible in an automated analysis to cope with unrecorded true positives that are registered in the analysis as false positives. It is therefore the case that any analysis of accuracy can only give an estimate of minimum accuracy. Accuracy can also be difficult to compare between two methods that annotate to different subsets of GO. One method may only annotate to relatively general terms, allowing for a better claimed specificity than a method that attempts to annotate to a more specific level. GOtcha predicts at all levels of the GO hierarchy. It assigns a probability to every combination of GO term-sequence association and should be compared to other function assignment algorithms using a global metric, one which can account for over-specificity and under-specificity in a set of predictions as well as incorrect assignment. Ouzounis and Karp described the Transitive Annotation Based Score (TABS) scheme. When applied to annotation using a DAG such as GO, the number of potential categories is reduced from the eight described in TABS to three.
TABS was developed to compare annotations where the terms used are not implicitly related through a computable structure such as a DAG. As we are using a DAG where ancestor nodes are implicitly associated with the gene through direct association of a child node, the prediction for a particular sequence becomes a set of GO terms (the nodeset) comprising all nodes that match the prediction, rather than just the most specific terms. The accuracy of a prediction can then be assessed by observing the presence of nodes in both the node sets for annotations and for the predictions, rather than by assigning qualitative values. The more distant a given prediction node set is from the annotation node set, the smaller the proportion of nodes (GO terms) they will have in common. The effect of a quantitative approach on the TABS categories is as follows: TABS category 0 is unchanged. This is an exact match and is represented by the presence of the term in the node sets for both original annotation and current prediction. TABS category 1 is no longer relevant: a controlled vocabulary is being used, so there is no scope for typographical errors of the type described by Ouzounis and Karp or by Tsoka and Iliopoulos. Given two sets A and B corresponding to a given annotation set and a predicted set (each node in a set comprising a sequence-GO term association), we are interested in the true matches (n ∈ A ∩ B), the false positives (n ∈ B but not in A) and the false negatives (n ∈ A but not in B). The aim of any prediction method is to maximise the number of matches (true positives) whilst minimising the errors (false positives and false negatives). The number of true negatives does not need to be considered, as this number is very large and essentially constant over the analysis. We use an error quotient (REQ) based on these counts to assess prediction methods. The REQ has been determined for both the GOtcha and top BLAST hit annotation sets, both with and without the use of automated annotations (IEA evidence code) for transitive function assignment.
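The set operations above can be sketched directly. The exact error-quotient relation did not survive extraction of the original, so the quotient below, errors over all classified nodes, is only an illustrative stand-in with the same monotonic behaviour (it rises with false positives and false negatives and falls with true positives); the GO identifiers are invented.

```python
# Sketch of nodeset-based comparison between an annotation set A and a
# prediction set B, each a set of (sequence, GO term) associations already
# expanded to include all ancestral terms. The error quotient here is
# illustrative only, not the exact relation used in the paper.

def compare_nodesets(A, B):
    tp = A & B          # true positives: n in both A and B
    fp = B - A          # predicted but not annotated
    fn = A - B          # annotated but not predicted
    return tp, fp, fn

def error_quotient(A, B):
    tp, fp, fn = compare_nodesets(A, B)
    total = len(tp) + len(fp) + len(fn)
    return (len(fp) + len(fn)) / total if total else 0.0

# toy nodesets with ancestors already included (invented identifiers)
annotated = {("seq1", "GO:0003674"), ("seq1", "GO:0016787"), ("seq1", "GO:0004252")}
predicted = {("seq1", "GO:0003674"), ("seq1", "GO:0016787"), ("seq1", "GO:0016301")}

tp, fp, fn = compare_nodesets(annotated, predicted)
print(len(tp), len(fp), len(fn))
print(error_quotient(annotated, predicted))
```

A perfect prediction (B equal to A) yields a quotient of zero; disjoint sets yield one, matching the intuition that the quotient should grow as prediction and annotation nodesets diverge.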
It may be that the annotation set used as the reference in comparing these results was incomplete. This would result in some true positives being incorrectly assigned as false positives, with a corresponding increase in REQ. However, this would apply similarly to GOtcha and to the top BLAST hit analysis. Minimum REQ values are given in Table and the accompanying figures. Samples of the false positive function predictions by GOtcha with the highest P-scores from three P. falciparum chromosomes were assessed by hand to give an indication of the completeness of the curated annotations. Results for selected sequences in this set are shown in the additional file. In some examples, genes that were annotated as encoding hypothetical proteins could be re-annotated based on GOtcha predictions. GO terms had not been assigned during the manual curation phase of the P. falciparum genome project if no function had been identified during the first-pass automatic annotation. However, the addition of GO terms to sequences by GOtcha prompted the original annotation to be re-evaluated. For example, for PFL1875w a hit to the Pfam K+ tetramerisation domain supports the GOtcha annotation, although it is at a level that genome annotators may feel is marginal. In PFL1780w, stronger supporting evidence indicates again that GOtcha can suggest GO annotations that have been previously overlooked. In several examples, GOtcha predicted either additional functions or more specific GO terms to describe previously annotated functions. PFC0495w encodes a putative aspartyl protease. When all evidence codes were included, a molecular function of pepsin A activity was predicted. This protein matches pepsin A domains defined by the InterPro entry IPR001461 ('Peptidase_A1 pepsin A'), thus the term from GOtcha is likely to be correct. PFL2465 encodes a thymidylate kinase, which was correctly annotated by GOtcha as being involved in dTTP biosynthesis. GOtcha also indicates 'dTDP biosynthesis' as a suitable GO process term.
Thymidylate kinase catalyses the synthesis of dTDP, a necessary step in dTTP biosynthesis. However, the human annotator missed the fact that dTDP biosynthesis is not a 'part of' dTTP biosynthesis within the ontology structure, and in such cases terms describing both processes must be employed. Sometimes, GOtcha highlighted erroneous omissions in the GO annotation of the P. falciparum genome, many of which have arisen from retrospective corrections and amendments to gene models. For instance, GOtcha provides detailed annotation for a putative ATP synthase F1 alpha subunit (PFB0795w) almost completely lacking useful GO terms. GOtcha also suggested GO terms relating to translation elongation for PFL1710c. A highly significant hit to Pfam:00009 indicates that this GOtcha prediction may well be more accurate than the original genome annotation. Annotations performed with IEA terms appeared to be more specific than those where IEA terms were excluded. In many cases, such as PFC0495w, the difference was quite pronounced. Here the protein was implicated in 'proteolysis and peptidolysis' when all annotations were included, but filtering out IEA annotations resulted in the more general, and less useful, description of 'metabolism'. Out of the 20 genes inspected, PFL1825w was the only example where GO terms were incorrectly suggested for the biological process, molecular function and cellular component aspects of GO. In other cases, mis-annotations often had low I-scores (predictions made with P-scores > 50% but very low associated I-scores ≪ 0.1) or were due to terms taken from slightly too far down a branch in the ontology structure.
For example, 'ATP-binding and phosphorylation-dependent chloride channel' was predicted for PFB0795w, an ATP synthase. The cellular component of a gene product is hard to annotate: BLAST is often insufficient to recognise the targeting information encoded in signal and transit peptides, and specific signal sequence detection methods such as PSORT II must be used instead. It is hard to measure what proportion of the calculated false positives does in fact represent serious mis-annotation. Although the hand analysis may provide representative examples, it is too small to be of statistical significance. Genuine false positives (with high P- and I-scores) were fewer than would be expected from the P-score. Despite the small sample size, these results show that GOtcha performs well as a guide to the manual assignment of GO terms. Not only can it provide suggestions for more granular annotation, but it can highlight terms that would otherwise be missed by a human annotator. When IEA-based annotations are excluded from the TOPBLAST analysis, the REQ goes up. This may well indicate a degree of inaccuracy in the IEA-based annotations, or incomplete coverage by the human-curated annotations. GOtcha, however, makes significantly better use of the BLAST search result in the quality and coverage of the annotation. One of the major problems facing assessment of function assignment is the separation of annotation and test datasets. In this analysis we have tackled this issue by taking individual genome datasets as the test sets and using other genome datasets as the annotation source from which to transitively assign function. The scoring mechanism used for estimating accuracy values is independent of both test and annotation datasets, since it makes use of sequences that are found in neither. Whilst the sequences are independent, the annotations associated with these sequences may not be. Many of the computationally assigned annotations are derived from analyses involving the 'independent' datasets and can therefore not be regarded as entirely independent.
IEA annotations are primarily obtained from sequence similarity searches. As a consequence, it is not surprising that the results obtained for both GOtcha and TOPBLAST when IEA annotations are included are so similar. In order to examine the effect of weighting on the REQ, the GOtcha predictions for the human genome were compared to the genome consortium annotations with weights ranging from 0.5 to 15 applied to over-prediction errors. The metric presented here is an objective measure of method performance but has some drawbacks. Using the REQ as described in this paper, each term in the nodeset is weighted equally. This may not be the most appropriate measure. The granularity of terms in Gene Ontology is not constant across the ontologies, nor is it readily quantifiable. This may lead to bias in the metric, where differences in the presence or absence of closely related terms are weighted equally to the presence or absence of more distantly related terms, even though they have the same graph path distance between them. There is also the issue of prevalence. Some terms occur in almost every nodeset; others are less prevalent. The most appropriate form for a quantitative metric will need to be examined in future work. Transitive function assignment is limited by the sensitivity of the underlying search method and the scope of the dataset being searched. The GOtcha method of preparing a weighted composite view of the functions from a complete set of search results provides a significant improvement in the annotation of sequences when compared to a method that selects the most significant annotated hit. GOtcha also provides a confidence measure for the putative function assignments, allowing for the determination of an appropriate level of specificity for the annotation set.
Hennig and co-workers examined the ability of BLAST analysis to transitively assign function from distant taxa, concluding that for the majority of cases GO-based annotation would give a good result. The GOtcha method has several significant advantages over the transitive assignment of function by TOPBLAST. Firstly, each function assignment has a directly understandable accuracy estimate that can be interpreted without any knowledge of the prediction methodology. This accuracy estimate is function-specific, unlike general rules of thumb that are applied to the interpretation of BLAST search results. Secondly, the GOtcha method provides much greater coverage than a top annotated match approach, annotating more sequences with reasonable confidence. In many cases it provides annotations for sequences that otherwise would have no annotations. Finally, it provides term-specific annotation accuracy estimates. This is a significant advantage over TOPBLAST, where every term in the set predicted for an individual sequence has the same value and a biologist interpreting the results is given little indication of which terms can reasonably be accepted. In contrast, GOtcha provides individual P-scores for each term. This allows a rapid visual examination of the prediction as a graph or a list, indicating appropriate points at which experimental verification may best be directed. In order to assess the accuracy of annotations to tree-like ontologies we have developed an objective, flexible scoring metric that provides a global analysis, including assessment of both false positives and false negatives. This metric also provides a means for comparison of methods that is not dependent on the selection of any particular parameter threshold or cutoff in the scoring method used. The underlying mapping methodology applied in GOtcha can readily incorporate other search methods that provide a more sensitive similarity search.
Combining search methods should also provide better coverage of the sequence space occupied by distant homologues. All data were obtained in the same week to provide a consistent time point at which to perform the analysis. Malaria (Plasmodium falciparum) sequence data for the recently determined genomic sequence were obtained from the genome project. Fruit fly (Drosophila melanogaster) data were obtained from Flybase. Yeast (Saccharomyces cerevisiae) data were obtained from the Saccharomyces Genome Database. Cholera (Vibrio cholerae) data were obtained from The Institute for Genomic Research. Human (Homo sapiens) data were obtained from Swiss-Prot, using the conceptual complete human proteome from the Swiss-Prot/EnsEMBL collaboration dated 6 March 2003. Worm (Caenorhabditis elegans) data were obtained from Wormbase release 97. Thale cress (Arabidopsis) data were obtained from The Arabidopsis Information Resource. Results were stored in a relational database (PostgreSQL version 7.3) or as flat files where appropriate. BLAST result parsing was performed with the BioPerl toolkit (release 0.7). Sequence database searches were performed with BLAST (blastp, except when D. melanogaster was the query/subject set). Default parameters were used. Each sequence database search produces a ranked set of sequences similar to the query sequence. The search result for each genome database search is parsed and a list of pairwise matches between the query sequence and the subject database sequences obtained. The GOtcha method is illustrated by a cartoon in Figure . Each pairwise match is assigned a score R = max{ -log10(E), 0 }, where E is the expectancy score for that pairwise match. In this way the whole subtree to the root node is assigned the R-score. The GOtcha method allows mappings obtained from many sequence matches to be combined. For each node, R-scores for all pairwise matches which contain annotation to that node are summed and normalised to the total R-score for the root node of that ontology.
This normalisation gives an internal relative score (the I-score), producing a weighted composite subgraph of the GO. This normalisation effectively removes bias in the E-value due to database size or search program used. A confidence measure is calculated as log_e of the root node score (the C-score). Accordingly, this provides two measures for an individual predicted gene-association: a score relative to the other predicted gene-associations in the node set (the I-score) and a score for the function prediction as a whole (the C-score). For each similarity match between the query sequence and a database sequence, a set of GO terms corresponding to the gene-associations for the database sequence is retrieved from the appropriate gene-association dataset. The set of GO terms and all ancestral terms (the nodeset) is assigned the score R = max{ -log10(E), 0 }. Each genome was searched individually and the I-score and C-score for each GO term association were averaged across all genome searches that provided at least one annotated pairwise match. Averaging across genomes in this way provides some correction for individual genes with exceptionally high copy numbers in certain genomes. In this paper the term 'function prediction' relating to an individual sequence refers to a prediction of a set of GO term-sequence associations. Averaging of the individual search results avoids the over-representation of large genomes in the final annotation set and allows the final result to be weighted towards a particular taxonomic grouping should that be desired. Each gene association represents a function assignment of a gene product with a GO term and is annotated with an evidence code providing an indication of the reliability of a particular annotation.
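The scoring pipeline just described, R = max{-log10(E), 0} assigned to the nodeset, per-node summation over matches, normalisation to the root score to give I-scores, and the C-score as log_e of the root score, can be sketched as follows. The miniature ontology fragment and E-values are invented for illustration.

```python
import math

# Hedged sketch of the GOtcha scoring described in the text. The ancestor
# expansion, R = max(-log10(E), 0), per-node summation and normalisation
# to the root are from the paper; the toy data below are invented.

parents = {  # toy molecular-function fragment: child -> parents
    "GO:0004252": ["GO:0008233"],   # serine-type endopeptidase -> peptidase
    "GO:0008233": ["GO:0003824"],   # peptidase -> catalytic activity
    "GO:0003824": ["GO:0003674"],   # catalytic activity -> MF root
    "GO:0003674": [],
}
ROOT = "GO:0003674"

def nodeset(terms):
    """A set of GO terms plus all of their ancestors up to the root."""
    out, stack = set(), list(terms)
    while stack:
        t = stack.pop()
        if t not in out:
            out.add(t)
            stack.extend(parents.get(t, []))
    return out

def gotcha_scores(matches):
    """matches: list of (E-value, direct GO terms of the database hit)."""
    node_r = {}
    for e_value, terms in matches:
        r = max(-math.log10(e_value), 0.0)   # R = max{-log10(E), 0}
        for node in nodeset(terms):          # whole subtree to the root
            node_r[node] = node_r.get(node, 0.0) + r
    root_r = node_r[ROOT]
    i_scores = {n: r / root_r for n, r in node_r.items()}  # I-scores
    c_score = math.log(root_r)                             # C = log_e(root R)
    return i_scores, c_score

matches = [(1e-20, ["GO:0004252"]), (1e-5, ["GO:0008233"])]
i_scores, c_score = gotcha_scores(matches)
print(i_scores, c_score)
```

Note how the shared ancestor GO:0008233 accumulates the R-scores of both matches, so nonspecific terms supported by many weak hits can still earn high I-scores, which is exactly the 'twilight zone' behaviour the paper exploits.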
The GOtcha method allows specific classes of annotation, such as those derived exclusively from computational analyses, to be excluded from the analysis if required. Although higher C-score and I-score values correspond to greater confidence in the transitive assignment of function than lower C-score or I-score values, it is not immediately apparent how these values should be interpreted. Examination of preliminary results indicated that there was considerable variation between GO terms in the confidence that can be placed in a prediction with a given I-score and C-score (data not shown). Accordingly, we have created an empirically based estimate of accuracy that can be used to indicate confidence in the prediction of an association between a GO term and a gene product. A background set of 518,226 annotated sequences from the SwissProt gene associations was included in the accuracy estimate after excluding taxa corresponding to the search databases and their subspecies. All background sequences were subjected to a search against all seven species-specific datasets and a set of function predictions obtained as described above. A scoring table for each GO term was prepared by segregating all predictions for that GO term on I-score and C-score. I-scores were divided into ten rows by dividing the range (0-1) evenly. C-scores were divided into columns by unit ranges. This gave rise to approximately one hundred cells for each GO term table. Each prediction was assigned to a cell based upon its I-score and C-score. The overall accuracy of each cell was determined by comparison of the predicted associations in that cell to the annotations provided by the GO Annotation project (GOA), and calculated as the proportion of true positives to the sum of true and false positives. The table for a specific GO term was then used to deliver the P-score for any given I-score and C-score pair for a predicted association between that GO term and the query sequence.
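The table construction just described can be sketched as follows: I-scores fall into ten equal bins over (0-1), C-scores into unit-width columns, and each cell's P-score is the fraction of true positives among the background predictions landing in it. The background tuples are invented, and the fall-back for sparse cells is simplified here to returning None rather than pooling over same-depth terms.

```python
import math

# Sketch of the empirical P-score lookup for a single GO term. Background
# predictions are binned on I-score (ten equal bins over 0-1) and C-score
# (unit-width bins); cell accuracy = TP / (TP + FP). Data are invented.

def cell(i_score, c_score):
    i_bin = min(int(i_score * 10), 9)   # ten rows over the range 0-1
    c_bin = int(math.floor(c_score))    # unit-range columns
    return (i_bin, c_bin)

def build_table(background):
    """background: list of (i_score, c_score, is_true_positive)."""
    counts = {}
    for i, c, tp in background:
        k = cell(i, c)
        total, hits = counts.get(k, (0, 0))
        counts[k] = (total + 1, hits + (1 if tp else 0))
    return {k: hits / total for k, (total, hits) in counts.items()}

def p_score(table, i_score, c_score):
    # GOtcha falls back to a table pooled over terms with the same number
    # of ancestors for sparse cells; this sketch just returns None instead.
    return table.get(cell(i_score, c_score))

background = [(0.95, 3.4, True), (0.92, 3.1, True),
              (0.91, 3.7, False), (0.15, 3.2, False)]
table = build_table(background)
print(p_score(table, 0.97, 3.5))
```

A new prediction with I-score 0.97 and C-score 3.5 lands in the cell holding two true positives and one false positive, so its P-score is their proportion, about two thirds.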
A similar set of tables was constructed from background analyses from which terms with IEA associations were excluded. For GO terms where there are few datapoints with which to estimate accuracy reliably, accuracy estimation falls back to a scoring table that combines results over all GO terms from that ontology with the same number of ancestors. For the TOPBLAST comparison, the same BLAST searches used for function assignment with the GOtcha method were analysed. Function assignments for the nodeset corresponding to the top annotated BLAST match (TOPBLAST) for each genomic dataset were transferred to the query sequence with a score corresponding to the E-value for that hit. Abbreviations: DAG, Directed Acyclic Graph. URI, Uniform Resource Identifier. BLAST, Basic Local Alignment Search Tool. TOPBLAST, Top annotated BLAST match. Perl, Practical Extraction and Report Language. GO, Gene Ontology. TABS, Transitive Annotation Based Score. NCBI, National Center for Biotechnology Information. s.d., standard deviation. The GOtcha method was devised and implemented by DMAM, who also prepared the manuscript. MB performed the manual assessment of false positives and provided feedback on the presentation of results. GJB provided essential guidance for the performance assessment and revision of the manuscript. The supplementary data contain representative examples from the manual assessment of false positives, portrayed in tabular format, indicating the benchmark annotation, the highest-scoring incorrect annotation predicted by GOtcha and the lowest-scoring annotation predicted by GOtcha.

The current progress in sequencing projects calls for rapid, reliable and accurate function assignments of gene products. A variety of methods has been designed to annotate sequences on a large scale.
However, these methods can either only be applied to specific subsets, or their results are not formalised, or they do not provide precise confidence estimates for their predictions. We have developed a large-scale annotation system that tackles all of these shortcomings. In our approach, annotation was provided through Gene Ontology terms by applying multiple Support Vector Machines (SVMs) for the classification of correct and false predictions. The general performance of the system was benchmarked with a large dataset. An organism-wise cross-validation was performed to define confidence estimates, resulting in an average precision of 80% for 74% of all test sequences. The validation results show that the prediction performance was organism-independent and could reproduce the annotation of other automated systems as well as high-quality manual annotations. We applied our trained classification system to Xenopus laevis sequences, yielding functional annotation for more than half of the known expressed genome. Compared to the currently available annotation, we provided more than twice the number of contigs with good quality annotation, and additionally we assigned a confidence value to each predicted GO term. We present a complete automated annotation system that overcomes many of the usual problems by applying a controlled vocabulary of Gene Ontology and an established classification method on large and well-described sequence data sets. In a case study, the function for Xenopus laevis contig sequences was predicted and the results are publicly available at . Accurate annotation has traditionally been maintained manually with the experience of individual experts and the experimental characterisation of sequences.
However, the increasing gap between the amount of sequence data available and the time needed for its experimental characterisation demands computational function prediction to complement manual curation. Ongoing genome sequencing and recent developments in cDNA sequencing projects have led to an exponential rise in the amount of sequence information. This has increased the need for acquiring knowledge from sequences as to their biological function. Annotating a single sequence is the gateway to interpreting its biological relevance, but the usefulness of these annotations is highly correlated with their quality. A number of large-scale annotation systems have been developed, among them GAIA, Genotator, Magpie, GeneQuiz, GeneAtlas and PEDANT. However, current annotation, written in a rich, non-formalised language, complicates automated processing. We addressed this problem by applying a controlled vocabulary from Gene Ontology (GO). Others have predicted GO terms by intersecting domain profiles, and text mining has also been applied. Because many researchers are now focussing on the functional genomics of Xenopus laevis, a widely studied model organism in developmental biology, a demand exists for a quality annotation of its sequences. We have developed an automated system for large-scale cDNA function assignment, designed and optimised to achieve a high level of prediction accuracy without any manual refinement, and established a method to provide a confidence value for each annotation. Our system assigns molecular function GO terms to uncharacterised cDNA sequences and defines a confidence value for each prediction; applied to Xenopus laevis contig sequences (from the TIGR Gene Indices), it yielded annotation with good confidence values for more than half of these sequences. The cDNA sequences were searched against GO-mapped protein databases and the GO terms were extracted from the homologues.
In the training phase, these GO terms were compared to the GO annotation of the query sequences and labelled correspondingly. We applied Support Vector Machines (SVMs) as the machine learning method to classify whether the extracted GO terms were appropriate to the cDNA sequence or not. In order to classify the GO terms we used a broad variety of elaborated features (attributes) including sequence similarity measures, GO term frequency, GO term relationships between homologues, annotation quality of the homologues, and the level of annotation within the GO hierarchy. To enhance the reliability of the prediction, we used multiple SVMs for classification and applied a committee approach to combine the results with a voting scheme. The classifier (SVM) requires attribute values (features) for a broad list of samples and a class label for each of these samples. Through the training samples it learns the feature patterns and tries to group them according to their class labels. After training, the algorithm assigns class labels to new samples according to the class that they best match. We selected GO-annotated cDNA sequences for training the SVM classifier. The nucleotide sequences were searched against GO-mapped protein databases and GO annotations were extracted from the significant hits. Then, each GO term obtained was utilised as a sample for the feature table. The sample GO terms were then labelled as either correct ("+1") or false ("-1") by comparing them to the original annotation.
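The committee voting step might be sketched as follows; treating confidence as the fraction of agreeing classifiers is an assumption of this sketch, not necessarily the paper's exact scheme:

```python
def committee_vote(votes):
    """votes: list of +1/-1 predictions from the individual SVM classifiers.
    Returns (label, confidence), where confidence is the fraction of
    classifiers agreeing with the majority; ties go to +1 in this sketch."""
    pos = sum(1 for v in votes if v > 0)
    label = 1 if pos * 2 >= len(votes) else -1
    agree = pos if label == 1 else len(votes) - pos
    return label, agree / len(votes)
```

A usage example: 99 classifiers each vote +1 or -1 on a candidate GO term, and the returned confidence doubles as the per-prediction confidence value reported with the annotation.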
Note that we applied the relationships of the GO terms based on their graph structure: "correct" was assigned not only if they were exact matches but also if the GO terms were related as either "parent" or "child". For training and testing the SVM, we selected 39,740 GO-annotated cDNA sequences from the following organisms: yeast, Drosophila melanogaster (fly), Mus musculus (mouse), Arabidopsis thaliana (Arabidopsis), Caenorhabditis elegans (worm), Rattus norvegicus (rat), Danio rerio (fish), Leishmania major (Leishmania), Bacillus anthracis Ames (Bacillus), Coxiella burnetii RSA 493 (Coxiella), Shewanella oneidensis MR-1 (Shewanella), Vibrio cholerae (Vibrio) and Plasmodium falciparum (Plasmodium). Prokaryotic bacteria contributed 20.6%, and the remaining 24.1% of the sequences came from rat, fish, worm, Plasmodium, Leishmania and yeast. Yeast and fly are purely manually annotated datasets, whereas Bacillus, Coxiella, Vibrio, Shewanella, Leishmania and Plasmodium are mostly manually, and the rest mostly automatically, annotated datasets. Manual annotation tends to be conservative and sparse, since GO terms are assigned only if the annotator is highly confident; a GO term may therefore be missed, yielding a false negative. To reduce this critical problem, yeast and fly annotations are accompanied by an "unknown molecular function" term for sequences with questionable further functions, and we discarded all sequences with these tags for training and testing. Shewanella, Coxiella and Vibrio sequences had the lowest number of GO terms per sequence. The cDNA sequences were searched across protein databases covering a wide range of organisms from prokaryotes to eukaryotes, as well as SWISS-PROT. For 36,771 sequences we got hits with GO terms, contributing 856,632 sample GO terms and yielding an average of 23.29 GO terms per query sequence. These samples were split into 99 equal subsets.
Note that, amongst these 99 subsets, 96 contained data from a single organism and the remaining 3 from two organisms each. Subsequently, we built 99 classifiers with these subsets. Since the training sets were created organism-wise, the classifiers were trained from different ranges of data, based on purely manual annotation, mostly automated annotation or a mixture of both. For training each of these classifiers, we performed a model selection, which yielded varying accuracy values ranging from 78.81% to 96.03%, with an average accuracy of 85.11%. To test the classifiers' performance, we prepared 13 test sets (each set corresponding to a single organism) using the same 856,632 sample GO terms. The prediction quality of all 99 classifiers was assessed by an organism-wise cross-validation approach, i.e. for each organism (test set), we used all the classifiers for prediction except those that corresponded to the same organism. With this approach, we were able to simulate the annotation of a new organism. The number of classifiers used for predictions varied highly across organisms (maximum: Plasmodium and Leishmania, 98 classifiers; minimum: Arabidopsis, 74 classifiers). The quality of the predictions was estimated by comparing the predicted terms with the original annotation and the results were expressed in terms of precision and accuracy values (see Methods). The average-accuracy refers to the average of the accuracy values attained by all classifiers used for the prediction. The maximum average-accuracy was achieved for fly (81.51%), followed by yeast (80.50%), and the minimum for mouse (76.0%). Additionally, we compared the classification efficiency of the classifiers derived from automatic annotation with the manually annotated test sequences (yeast and fly): the prediction of the yeast and fly sequences with the 20 classifiers from the mouse sequences produced an average-accuracy of 79% and 80% respectively. Similar results were acquired with the 25 classifiers from Arabidopsis (79% and 80%).
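The organism-wise cross-validation scheme described above might be sketched as follows; the data structures and names are assumptions of this sketch:

```python
def organism_wise_predictions(classifiers, test_sets):
    """classifiers: {organism: predict_fn}; test_sets: {organism: samples}.
    For each test organism, apply every classifier trained on a *different*
    organism, simulating the annotation of a new genome."""
    results = {}
    for org, samples in test_sets.items():
        usable = [fn for other, fn in classifiers.items() if other != org]
        results[org] = [[fn(sample) for fn in usable] for sample in samples]
    return results
```

The per-organism lists of votes could then be passed to the committee voting scheme to obtain labels and confidence values.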
Likewise, the worm classifiers (11 classifiers) yielded an average-accuracy of 82% for yeast and 83% for fly. These values were comparable with the average-accuracy of 81% achieved both by using yeast test sequences against fly classifiers (16 classifiers) and vice versa (fly test sequences against yeast classifiers). Likewise, we classified the mouse test sequences against yeast classifiers (5 classifiers) and fly classifiers, yielding 69% and 71% average-accuracy respectively. The classification performances revealed by the ROC plots were comparable between IEA and non-IEA annotated test organisms. We extracted all Xenopus laevis contig sequences from the TIGR Xenopus laevis Gene Index (XGI). For comparison, the annotations of Xenopus, fly, yeast and mouse were mapped to the more generalised, high-level terms of the molecular function ontology ("GO slim" for molecular function). We compared our annotation of the Xenopus contigs (TIGR Xenopus laevis gene indices) with the TIGR GO annotation for molecular function. From 35,251 contig sequences, TIGR annotated 5,444 contigs with a total of 16,432 molecular function GO terms. In contrast, our approach was able to predict function terms for 17,804 contigs, i.e. more than three times that of TIGR. Our procedure did not annotate 295 of the TIGR-annotated contigs. For the remaining 5,149 contigs, 85% of all TIGR terms were exact matches with ours; 3.2% of the TIGR terms were at a higher level of the GO tree than our annotation, so in these cases we provided annotation at a deeper level; in 0.9% of the cases our annotation was at a higher level; 8.3% of the cases were completely different; and 0.6% of the TIGR terms were obsolete.
We compared the quality of TIGR annotation and that of our annotation by raising stringency, and found that when we applied a confidence threshold of 80% to our annotation, we lost 46.6% of the sequences. This included 1,492 sequences holding equivalent TIGR annotation, or 27.4% of the total TIGR annotation. With this stringency, our system annotated 9,510 contig sequences, i.e. twice the TIGR annotation at this quality. TIGR provides a GO mapping for endopeptidase activity and more specifically serine-type peptidase activity (98% and 97% confidence respectively). 2) TC209487 and TC190605 are predicted to be aminopeptidases; however, for the latter the more specific prediction of prolyl aminopeptidase activity is assigned with 86% confidence. 3) TC199713 is predicted as glutathione peroxidase at 100% confidence and TC194305 is annotated as protein kinase with the same confidence. 4) Both TC187949 and TC210151 are transmembrane receptors, but the latter is classified as frizzled receptor with 82% confidence. In most of these examples, the functional assignment and associated confidence were recorded at multiple levels of granularity. We were interested in novel annotated sequences with the highest confidence values and found we could predict GO terms for 557 contigs with a confidence value of 100%. Interestingly, 192 of these lacked any GO annotation by TIGR; of these, 184 had a descriptive TIGR annotation and the rest had none (Table ). In this paper, we presented an automatic annotation system that is able to cope with the expanding amount of biological sequence data. Our approach efficiently combines the ongoing efforts of Gene Ontology and the availability of GO-mapped sequences with a profound machine learning system. The GO-mapped databases provide annotation described in a controlled vocabulary and also a measure of reliability, as these GO entries are labelled with their type of origin.
Furthermore, GO terms are structured hierarchically, which allows a twofold use of the information: i) the level within the tree is taken as a classification criterion to distinguish low-level from high-level annotations during the learning procedure, and ii) the hierarchical structure allows us to extend hits by slightly moving up and down within a restricted local area of the tree. This may overcome fluctuations in annotation levels coming from varying annotation experts. Our annotation system exploits different combinations of attributes and yields functional transitivity: SVM learning and prediction are organism-independent and comparable to manual annotation, which may be supported by the nature of the attributes we utilise. Subsets and overlaps are counted in a balanced fashion to avoid biases due to the complexity of an organism and a potentially correlated complexity of its sequences. The committee approach allows us to improve the prediction quality as well as to assign confidence values for the new predictions in a straightforward manner. Our classifiers' performance is hardly limited by the varying quality of the training data, whether manually or automatically annotated. The prediction results of manually annotated test sets with the classifiers based on automated annotation, as well as with classifiers based on manual annotation, were comparable. Regarding the outcome of the overall classifiers, we achieve consistency with existing annotation from automatic annotations. This is the less complex part of our work and shows a comparable efficiency of our system. Additionally, our system reproduces annotation of purely manually annotated datasets. However, the performance results for these datasets are lower in terms of recall, i.e. 47.4% recall with 80% precision compared to 60.6% recall at the same precision on the complete test set.
Note that manual annotation tends to be conservative and sparse, yielding stringent true positive definitions, whereas automatically annotated sequences may accumulate information to a greater extent. We were interested in annotating Xenopus since it is a familiar model organism whose sequences were not very well annotated. Our system was applied to annotate the Xenopus contig sequences from TIGR. Through our approach, we annotated 50.5% of all contig sequences available at present and associated a confidence value with each prediction, yielding roughly three times more annotated sequences as compared to the currently available GO annotation. The coverage of annotation for a new organism like Xenopus is crucial. We were able to attain predictions for 50.5% of all Xenopus contig sequences (no singletons). This compares well to the applied databases, which contained satisfactory annotation for 53% of their sequences (not regarding sequences with unknown function terms), and is better than the organism-specific databases (36%). Obviously, improving the quality and quantity of annotation within the available databases goes along with the coverage achievable by machine learning algorithms for new organisms. In future we want to extend our method with information from other sources such as domain databases and protein family databases. We developed an automated annotation system to assign functional GO terms to an unknown sequence. We used the well-established technique of Support Vector Machines (SVM) for the classification of correct and incorrect GO terms. Our approach benefited from the broad variety of potential attributes used for the functional transitivity and the vast amount of data used for training and validation. The committee scheme exploited in our system provided a means to assign confidence values in a straightforward manner.
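The precision, accuracy and recall figures quoted above follow standard confusion-matrix definitions; as an illustrative helper (the function name and return structure are hypothetical):

```python
def metrics(tp, fp, tn, fn):
    """Confusion-matrix statistics: tp/fp/tn/fn are counts of true
    positives, false positives, true negatives and false negatives."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "precision": tp / (tp + fp),
        "sensitivity": tp / (tp + fn),          # also called recall
        "specificity": tn / (fp + tn),
        "false_positive_rate": fp / (fp + tn),  # = 1 - specificity
    }
```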
Our system performance was robust, organism-independent and reproduced the high-quality manual annotation. When applying it to Xenopus laevis contig sequences, we obtained a remarkably enhanced annotation coverage compared to the existing annotation. We used the following statistical terms. Accuracy was the rate of correct predictions compared to all predictions,

Accuracy := (TP + TN) / (TP + FP + TN + FN),     (1)

and precision was the portion of true positives with respect to all positives,

Precision := TP / (TP + FP),     (2)

where TP denotes true positives, FP false positives, TN true negatives and FN false negatives. Also used were sensitivity := TP / (TP + FN), specificity := TN / (FP + TN), and false positive rate := 1 - specificity. We defined the term "coverage-of-sequences" as the portion of query sequences for which the classifier delivers a prediction, and "precision-per-sequence" as the (average) portion of correct GO terms for a single query sequence, with respect to all GO terms assigned to it. Note that these terms were defined within our model, i.e. a good "accuracy" meant good consistency with respect to our training and test sets. The relation of GO1 with respect to GO2 was classified as "parent", "child", "sibling" or "different", where Pi denotes the set of nodes on the path from GOi to the root. GO2 is a "child" of GO1 if their paths P2 and P1 intersect such that GO1 ∈ P2,     (4) and GO2 is a "sibling" of GO1 if a common parent exists with a distance of one to both GO1 and GO2. We could apply the single path relationship for most of the GO terms (3,665 out of 5,391). However, for the remaining 1,726 terms more than one path to the root was found.
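The single path relationship can be sketched as follows; the function names and the single-parent map are assumptions of this sketch (GO terms with several parents are exactly the cases handled by the path-pair rules below):

```python
def path_to_root(go, parent):
    """Nodes from `go` up to the root, following a single-parent map."""
    path = [go]
    while go in parent:
        go = parent[go]
        path.append(go)
    return path

def relation(go1, go2, parent):
    """Classify go2 relative to go1 as 'equal', 'parent', 'child',
    'sibling' or 'different', assuming a single rooted hierarchy."""
    if go1 == go2:
        return "equal"  # convenience shortcut for exact matches
    p1, p2 = path_to_root(go1, parent), path_to_root(go2, parent)
    if go2 in p1:
        return "parent"  # go2 lies on go1's path to the root
    if go1 in p2:
        return "child"   # go1 lies on go2's path to the root
    pa1, pa2 = parent.get(go1), parent.get(go2)
    if pa1 is not None and pa1 == pa2:
        return "sibling"  # common parent at a distance of one
    return "different"
```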
For these cases we defined multiple path relationships, and each path was considered individually: the single path relationship was applied to each possible pair of paths (one path per GO term), henceforth referred to as "path-pairs". This could yield a list of several relations; to select the appropriate one, we considered the parent relationship most relevant, followed by the child relationship, with sibling least relevant, and implemented the following order: 1. The parent relationship was set if at least one of the path-pairs gave a (single path) parent relationship; 2. The child relationship was set if at least one of the path-pairs gave a child relationship. To avoid a bias due to an overwhelming number of path-pairs that did not match, we set a threshold: we considered this relationship only if the number of path-pairs with no child relationship was equal to or less than four times the number of path-pairs with a child relationship; 3. The sibling relationship was set if at least one of the path-pairs gave a sibling relationship. We again set a threshold: we considered this relationship only if the number of path-pairs with no sibling relationship was equal to or less than twice the number of pairs with a sibling relationship; 4. If none of these criteria could be applied, the relationship "different" was set. Note that we also implemented the hierarchy of these relations by tuning the stringencies for the fractions of path-pairs that must match. The cDNA sequences were searched against GO-mapped databases covering Arabidopsis, worm, rat, fish, Leishmania, Bacillus, Coxiella, Shewanella, Vibrio, Plasmodium, Oryza sativa, Trypanosoma brucei, and Homo sapiens; apart from this, the SWISS-PROT database was also included. A) GO level and path: the GO structure was exploited to derive the first two attributes. A.1. GO level: the distance of the sample GO term to the root (molecular function node); A.2. GO path: the number of paths from the sample GO term to the root. B) Alignment quality criteria: these attributes are based on the BLAST alignments.
For attributes B.1–B.4, the best value for the corresponding attribute was taken if a GO term occurred in more than one hit. B.1. Expectation value: the expectation value from BLASTX; B.2. Bit score: the bit score value provided by BLASTX. We wanted to reward alignment length and quality by combining features. This was done with respect to the length of the query and the hits, to offset biases due to different complexities of the query and subject organisms. Attributes B.3, B.4, C.3 and D.3 were obtained from initial trials with a small dataset and applying parameter optimisation to distinguish the samples. B.3. Query coverage score (QCS): combined measure of alignment size and quality concerning the query sequence,

QCS := (AL / QL) × (I + S),     (5)

where AL denotes the alignment length, QL the length of the query sequence, I the number of identities in the alignment, and S the number of positively contributing residues in the alignment; B.4. Subject coverage score (SCS): as in B.3, however with respect to the corresponding subject sequence (database hit),

SCS := (AL / SL) × (I + S),     (6)

where SL denotes the length of the subject sequence. Additionally, we decomposed these attributes into the following six further attributes (B.5–B.10). For these attributes, we considered the hit with the best coverage score if a GO term occurred in more than one hit. B.5. Query percentage (QPC): percentage coverage of the alignment region in the query sequence (with respect to QCS), i.e.

QPC := (AL / QL) × 100;     (7)

B.6. Subject percentage (SPC): percentage coverage of the alignment region in the corresponding subject sequence (with respect to SCS), i.e.

SPC := (AL / SL) × 100;     (8)

B.7. Query identity (QI): percentage of identical residues in the BLASTX alignment (with respect to QCS); B.8. Subject identity (SI): percentage of identical residues in the BLASTX alignment (with respect to SCS);
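The coverage scores of equations (5)–(8) are straightforward to compute; a sketch with assumed function names:

```python
def query_coverage_score(aln_len, query_len, identities, positives):
    """QCS := (AL / QL) * (I + S) -- equation (5)."""
    return (aln_len / query_len) * (identities + positives)

def subject_coverage_score(aln_len, subject_len, identities, positives):
    """SCS := (AL / SL) * (I + S) -- equation (6)."""
    return (aln_len / subject_len) * (identities + positives)

def query_percentage(aln_len, query_len):
    """QPC := (AL / QL) * 100 -- equation (7)."""
    return (aln_len / query_len) * 100

def subject_percentage(aln_len, subject_len):
    """SPC := (AL / SL) * 100 -- equation (8)."""
    return (aln_len / subject_len) * 100
```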
B.9. Query similarity (QS): percentage of similar or positively contributing residues in the alignment (with respect to QCS); B.10. Subject similarity (SS): percentage of similar or positively contributing residues in the alignment (with respect to SCS). C) GO frequency related attributes: we extracted information about the frequency of GO terms in the hits through the following attributes. C.1. GO frequency (FG): the number of hits that contained the sample GO term; C.2. Number of hits (TH): the total number of hits for the query; C.3. Frequency score (FS): the number of hits that contained the sample GO term; unlike C.1, we limited this score to emphasise differences in queries with few hits; C.4. Species frequency: the number of organisms contributing to a sample GO term for a single query sequence; C.5. Total GO (TG): total number of GO terms from all hits; C.6. Unique GO (UG): as C.5, except that GO terms occurring more than once (in the hits) were counted only once. D) GO frequency considering relationships: for these attributes we applied the structure of the Gene Ontology graph. Not only perfectly matching terms were considered, but also their defined parents, children or siblings. D.1. Relative frequency for all (RA): the relationships of the sample GO term with all GO terms that occurred in the hits were calculated; the sum of non-"different" relationships, i.e.
parent, child, or sibling, was used for this attribute; D.2. Relative frequency for unique (RU): similar to attribute D.1, with the exception that GO terms occurring more than once were counted only once; D.3. Relative frequency for all, limited (RAlim): same as attribute D.1, however this score was limited to emphasise differences between queries with few hits; D.4. Relative frequency for unique, limited (RUlim): same as attribute D.2, however this score was limited to emphasise differences between queries with few hits. E) Annotation quality related attributes: quality attributes were selected from the evidence codes provided by the gene association tables of the GO-mapped sequence databases. We selected 9 commonly used evidence codes, resulting in attributes E.1 to E.9. The entries of these attributes for each sample GO term were calculated by summing the occurrences of the corresponding evidence codes over all hits. Before training, normalisation was performed. We normalised the attributes by taking the logarithm (log), and the log of the log where necessary: we used log values for 16 attributes and log of log for 8 attributes. Furthermore, we converted the attribute values to mean 0 and standard deviation 1 by applying the Z-transformation. The feature table contained 856,632 samples and 31 attributes. We split the dataset into 99 training subsets, each comprising approximately 1% of the samples, i.e. 8,566 GO terms. This resulted in 96 organism-specific subsets and 3 hybrid subsets. We applied support vector machines in the implementation of LIBSVM. The annotation for the Xenopus laevis contig sequences is downloadable at . We followed the standard GO annotation style (using the Gene Ontology guidelines); the evidence code is always IEA, and a confidence value is included for each GO term. The main work was carried out by AV. RK and KG conceived the idea of the study. AV and RK drafted the manuscript. FS developed and JM applied the machine learning strategy.
KG implemented the databases in SRS. RE and SS supervised the work. All authors participated in reading, approving and revising the manuscript.

In prokaryotes, Shine–Dalgarno (SD) sequences, nucleotides upstream from start codons on messenger RNAs (mRNAs) that are complementary to ribosomal RNA (rRNA), facilitate the initiation of protein synthesis. The location of SD sequences relative to start codons and the stability of the hybridization between the mRNA and the rRNA correlate with the rate of synthesis. Thus, accurate characterization of SD sequences enhances our understanding of how an organism's transcriptome relates to its cellular proteome. We implemented the Individual Nearest Neighbor Hydrogen Bond model for oligo–oligo hybridization and created a new metric, relative spacing (RS), to identify both the location and the hybridization potential of SD sequences by simulating the binding between mRNAs and single-stranded 16S rRNA 3′ tails. In 18 prokaryote genomes, we identified 2,420 genes out of 58,550 where the strongest binding in the translation initiation region included the start codon, deviating from the expected location for the SD sequence of five to ten bases upstream. We designated these as RS+1 genes. Additional analysis uncovered an unusual bias of the start codon in that the majority of the RS+1 genes used GUG, not AUG. Furthermore, of the 624 RS+1 genes whose SD sequence was associated with a free energy release of less than −8.4 kcal/mol (strong RS+1 genes), 384 were within 12 nucleotides upstream of in-frame initiation codons. The most likely explanation for the unexpected location of the SD sequence for these 384 genes is mis-annotation of the start codon. In this way, the new RS metric provides an improved method for gene sequence annotation. The remaining strong RS+1 genes appear to have their SD sequences in an unexpected location that includes the start codon.
Thus, our RS metric provides a new way to explore the role of rRNA–mRNA nucleotide hybridization in translation initiation. More than 30 years ago, researchers first discovered a sequence of messenger RNA (mRNA) nucleotides in bacteria that ribosomes recognize as a signal for where to begin protein synthesis. Today, genome annotation software takes advantage of this finding and uses it to help identify the location of start codons. Because these sequences, now referred to as Shine–Dalgarno (SD) sequences, are always upstream from start codons, annotation programs look for them in the region 5′ to these candidate sites. In a comprehensive analysis of 18 bacterial genomes, the authors show that when looking for SD sequences, it sometimes pays off to analyze unlikely locations. By examining the region that immediately surrounds the start codon for SD sequences, the authors identify many mis-annotated genes and in so doing offer a method to help check for these in future annotation projects. In 1974, Shine and Dalgarno examined Escherichia coli's 16S ribosomal RNA (rRNA) and observed that part of the sequence, 5′–ACCUCC–3′, was complementary to a motif, 5′–GGAGGU–3′, located 5′ of the initiation codons in several messenger RNAs (mRNAs). They combined this observation with previously published experimental evidence and suggested that complementarity between the 3′ tail of the 16S rRNA and the region 5′ of the start codon on the mRNA was sufficient to create a stable, double-stranded structure that could position the ribosome correctly on the mRNA during translation initiation. The motif on the mRNAs, 5′–GGAGGU–3′, and variations on it that are also complementary to parts of the 3′ 16S rRNA tail, have since been referred to as the Shine–Dalgarno (SD) sequence.
Shine and Dalgarno's theory was bolstered by Steitz and Jakes in 1975. Since Shine and Dalgarno's publication, two different approaches have been used to identify and position SD sequences in prokaryotes: sequence similarity and free energy calculations. Methods based on sequence similarity include searching upstream from start codons for sub-strings of the SD sequences that are at least three nucleotides long. The second approach, using free energy calculations, is based on thermodynamic considerations of the proposed mechanism of 30S binding to the mRNA and overcomes the limitations of sequence analysis. Watson–Crick hybridization occurs between the 3′-terminal, single-stranded nucleotides of the 16S rRNA (the rRNA tail) and the SD sequence in the mRNA and has a significant effect on translation. Calculating Gibbs free energy (ΔG°) values for progressive alignments of the rRNA tail with the mRNA in the region upstream of the start codon identifies a trough of minimal ΔG° upstream of the start codon whose location is largely coincident with the SD consensus sequence. This second approach can both identify the SD sequence and pinpoint its exact location as that having the minimal ΔG° value. However, the exact location of the SD sequence is dependent on the nucleotide indexing scheme of the algorithm, i.e., on which nucleotide is designated as the "0" position. To address this, we created a new metric, relative spacing (RS). This metric localizes binding across the entire translation initiation region (TIR), relative to the rRNA tail, enabling us to characterize binding that involves the start codon as well as sequences downstream.
RS is also independent of the length of the rRNA tail, and this property allows for comparison of binding locations between species. RS normalizes indexing and further extends free energy analysis through the start codon and into the coding region of genes. By examining sequences downstream from start codons, we could also explore mRNAs that lack any upstream region, the leaderless mRNAs. In this study we use the RS metric to identify the positions of minimal ΔG° troughs for genes of 18 species of prokaryotes as a test of its usefulness as a means to improve existing annotation tools, i.e., by identifying SD sequences. We observe 2,420 genes where the strongest binding in the entire TIR takes place one nucleotide downstream from the start codon, at RS+1. Of these, 624 genes have unusually strong binding. We then determine whether these 624 genes were mis-annotated and conclude that 384 are. The average ΔG° value at each position of the TIR for each species shows ΔG° troughs upstream from RS 0 that are consistent with previous experimental studies on the location of the SD sequence. The ΔG° trough immediately after the first base in the initiation codon, at RS+1, is unexpected, but present in a significant portion of genes in all species examined. We examined the strongest SD-like sequence (ΔG° < −3.4535 kcal/mol; see the Materials and Methods section for more details) in each TIR for all genes within a species. For all genes that contain an SD-like sequence, we call genes where the lowest ΔG° value is at RS+1 "+1 genes", and +1 genes where ΔG° < −8.4 kcal/mol "strong +1 genes".
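As a toy illustration of locating the strongest binding site relative to the start codon: this sketch substitutes a simple Watson–Crick match count for the paper's INN-HB free-energy model, and all names, sequences and the spacing convention are illustrative assumptions.

```python
COMP = {"A": "U", "U": "A", "G": "C", "C": "G"}

def pairs(mrna_window, rrna_tail):
    """Count antiparallel Watson-Crick matches between an mRNA window
    (5'->3') and the rRNA tail (also given 5'->3', hence reversed)."""
    return sum(1 for m, r in zip(mrna_window, reversed(rrna_tail))
               if COMP.get(m) == r)

def best_site(tir, rrna_tail, start_index):
    """Slide the tail along the TIR; return (spacing, score) for the
    strongest site, with spacing measured naively from the start codon
    index (a stand-in for the paper's relative spacing metric)."""
    w = len(rrna_tail)
    best = max(range(len(tir) - w + 1),
               key=lambda i: pairs(tir[i:i + w], rrna_tail))
    return best - start_index, pairs(tir[best:best + w], rrna_tail)
```

For an SD-containing toy TIR, the strongest site falls the expected handful of bases upstream of the start codon; a real implementation would score each alignment with stacked-pair free energies instead of a match count.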
Genes where the strongest SD-like sequence is between RS-20 and RS-1, inclusive, are designated upstream genes; similarly, downstream genes are genes where the strongest SD-like sequence is between RS+1 and RS+20. These designations do not imply that other SD-like sequences do not exist in the TIR, but only that they do not bind with as low a ΔG° value to the rRNA. If a trough of minimal free energy can be definitive of the SD sequence, a site whose location is presumed to be upstream from the coding region, the +1 genes are unexpected in that they exist within, not upstream from, the coding region. Our study focuses on the characterization of the sequence interactions that give rise to strong +1 genes and on possible explanations for their presence; we have reserved the downstream genes for future analysis. We thought of four hypotheses to explain the unexpected RS+1 result. 1) The +1 site is an artifact of our model or implementation. 2) The +1 trough could result from known sequence bias around the start codon, assuming the start codon annotation is correct. 3) The start codon annotation could be incorrect: the presence of in-frame start codons downstream of the annotated start codons would be consistent with this interpretation. 4) If there were sequence errors in the start codon, they could potentially change the free energy calculation for alignments in which the three nucleotides of the start codon participated. All four of these hypotheses were examined. We were quickly able to dispose of our first hypothesis. The +1 site is not an artifact of the individual nearest neighbor–hydrogen bond (INN-HB) model or its implementation. Both the individual nearest neighbor (INN) and the INN-HB RNA secondary structure models are based on thermodynamics and use experimentally derived parameters.
Implementations of INN models using dynamic programming have a well-established history of accurately predicting secondary structures for short RNA sequences. The second hypothesis assumes that the significant negative free energy value at RS+1 results primarily from nucleotide biases in the first two codons of the coding region. Obviously there is extreme codon bias in the start codon for all genes and, therefore, for all species examined; previous studies of E. coli have shown considerable bias in the second codon, too. To examine this hypothesis, sequence logos (generated with WebLogo, http://weblogo.berkeley.edu/) were created for the region of mRNA that would be aligned with the rRNA tail for RS+1, i.e., the region of E. coli genes that includes the first two codons. This logo was representative of the sequence logos for all 18 organisms (unpublished data). For E. coli, the sequence logo gives two options for relatively abundant sequences that could bind to the rRNA tail: AUGA and GUGA. AUGA has a positive ΔG° value of 0.21 kcal/mol and cannot explain the trough of ΔG°. The alternate sequence, GUGA, has a negative ΔG° value of −1.88 kcal/mol. However, even if all 570 E. coli genes whose start codons are GUG had this value, the total would be too small to cause the average value of the 4,254 E. coli genes to be −0.79 kcal/mol. Using the same approach with the sequence logos for the remaining 17 organisms, sequence bias of the first two codons also failed to explain the average negative free energy trough associated with the RS+1 alignment. In examining the fourth hypothesis, changes introduced into start codons resulted, for many genes, in the initiation region having its most stable binding at RS+1. However, the ΔG° value at RS+1 in these modified start codon sequences was only marginally stronger than the free energy trough still present at the upstream SD site.
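The codon-bias argument above can be checked with a one-line calculation using only the numbers stated in the text: even in the extreme case where every GUG-start gene bound at −1.88 kcal/mol and all other genes contributed nothing, the genome-wide average would fall well short of the observed trough.

```python
# Back-of-envelope check: 570 GUG-start E. coli genes at -1.88 kcal/mol,
# averaged over all 4,254 genes, cannot produce the observed -0.79 kcal/mol.
gug_genes, dG_gug, total_genes = 570, -1.88, 4254
max_avg_from_gug = gug_genes * dG_gug / total_genes
print(round(max_avg_from_gug, 2))  # about -0.25 kcal/mol, far from -0.79
```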
The fourth hypothesis proposes that sequence errors might account for the presence of a minimal free energy trough at the RS+1 alignment. The small difference in energy values between the upstream SD site and the RS+1 site in these modified sequences contrasts with that seen using the actual sequences of RS+1 genes, where the difference in energy values is quite large. There is a long history of investigating SD sequences using approaches grounded in thermodynamics. Three major differences separate our method from prior methods. The primary difference is that we are examining both upstream and downstream sequences. Investigating downstream sequences allowed us to observe the large number of hybridization sites that include the start codon. The second main difference is our use of RS as a means to compare hybridization locations among species. The third difference is our use of the INN-HB model instead of the INN model. There are also many minor differences between our method and its predecessors. The most common are discrepancies in rRNA tail selection. We defined the 16S rRNA tails based on proposed secondary structures and conserved single-stranded 16S rRNA motifs. The sequences we used are the maximum number of single-stranded nucleotides available for hybridization based on accepted models of rRNA secondary structure. Osada et al. used the last 20 nucleotides of the 16S rRNA sequence without consideration of secondary structure models and the intramolecular helix formation in which a significant portion of their 5′ bases are involved. Among genes previously reported to lack a ΔG° trough indicative of SD sequences in Synechocystis (ΔG° ≥ 0 kcal/mol, and thus no discernible binding site for the rRNA tail), we were able to identify eight as +1 genes, and two as having stronger than average SD sequences between five and ten bases upstream from the start codons.
Of the eight +1 genes, two had in-frame start codons within 12 bases downstream from the annotated start codon. The remaining 28 genes were able to bind to the rRNA tail farther downstream from the annotated start codon. These results show the benefit of our approach by providing more resolution of the TIR in genes that have unusual nucleotide sequences relative to previous methods. As a result of these differences, our method improves SD sequence characterization. Our method is also useful for detecting errors in sequence annotation. B. longum's strong +1 gene rnpA, a ribonuclease P protein component, does not contain an in-frame start codon downstream from the annotated start site. CTC02285, a strong +1 gene in Clostridium tetani that codes for protein translation initiation factor 3 (IF3), is also without a downstream initiation codon. Bradyrhizobium japonicum has many strong +1 genes without downstream start codons: polE, which codes for the polymerase epsilon subunit, cycK, nah, and 52 others. Thus, while a large percentage of the strong +1 genes appears to be the result of sequence annotation errors, there remains a significant number that require an alternative explanation. That said, it is harder to understand the strong +1 genes that do not appear to be the result of annotation errors in the 17 other organisms we studied. For example, of the strong +1 genes in E. coli, none are experimentally verified, and they have no assigned function, making it likely that they are not true genes, but only vestigial ORFs. Two possible explanations for strong +1 genes that do not seem to be artifacts of annotation errors are: 1) the +1 site could stimulate translation initiation on leaderless genes, and 2) the binding site at RS+1 could be used as a translational standby site, i.e., sequences that hold the 16S rRNA close to the SD sequence. In E. coli, a large fraction of genes with more than 100 amino acids contained upstream SD sequences.
The two cyanobacteria in our study, Nostoc and Synechocystis, both have relatively small percentages of upstream SD sequences. These two organisms are believed to be closely related to the free-living predecessor of chloroplasts, which are thought to use SD sequences as well as alternative mechanisms to recruit ribosomes for translation. All genome sequences were downloaded from the National Center for Biotechnology Information (NCBI) GenBank database. Several observations determined the sequence window we scanned. In the majority of cases examined, SD sequences were within 10 nucleotides of the start codon, and although the hypothesis that a downstream box interacted with rRNA during translation initiation was rejected, we included the region downstream of the start codon in our scan. To determine the 3′ tails for the 16S rRNAs, we downloaded predicted secondary structures from The Comparative RNA Web Site (http://www.rna.icmb.utexas.edu). We defined the 3′ tail as the single-stranded terminal 3′ nucleotides, and then, to verify consistency, compared these sequences with all annotated copies of the 16S rRNA in the genome. The total hybridization free energy is computed as ΔG°total = Σn jn ΔG°n + mterm−AU ΔG°term−AU + ΔG°sym + Σk Loopk, where ΔG°n is the free energy released by the hybridization of a particular nearest neighbor doublet and jn is its number of occurrences in the duplex; mterm−AU is the number of terminal AU pairs and ΔG°term−AU is the free energy penalty for having a terminal AU pair; ΔG°sym is the penalty for internal symmetry; and Loopk is the penalty for the kth internal loop. free_scan's hybridization parameter values for Watson-Crick binding are from Xia et al. Bulges, where one of the two strands of RNA has intervening nucleotides between bases that bond with the other strand, as well as secondary structures involving only one of the two strands of RNA, are ignored due to uncertainty about how much space is available within the 30S ribosomal complex to accommodate these structures, as well as the limitations they put on our ability to calculate RS.
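The total-free-energy sum described above (doublet contributions plus terminal-AU, symmetry, and internal-loop penalties) can be transcribed directly. The numeric values passed in below are placeholders, not the published INN-HB parameters:

```python
# Plain transcription of the INN-HB-style total free energy: sum of
# nearest-neighbor doublet terms plus terminal-AU, internal-symmetry, and
# internal-loop penalties. Parameter values here are placeholders only.
def duplex_dG(doublet_counts, dG_doublet, m_term_AU=0, dG_term_AU=0.45,
              dG_sym=0.0, loop_penalties=()):
    dG = sum(j * dG_doublet[d] for d, j in doublet_counts.items())
    dG += m_term_AU * dG_term_AU  # penalty per terminal AU pair
    dG += dG_sym                  # internal-symmetry penalty
    dG += sum(loop_penalties)     # one penalty per internal loop
    return dG
```

For example, a hypothetical duplex with two AU/UA doublets, three GC/CG doublets, and one terminal AU pair would sum the five doublet terms and add a single terminal-AU penalty.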
Dangling 5′ or 3′ ends are not considered because of ambiguities about what constitutes a dangling end on the mRNA sequences and on the 5′ end of the 16S rRNA tail. Parameter values for G/U mismatches and loops are likewise taken from published experimental sets. free_scan allows ΔG° values to be computed before the start codon, one at the start codon, and 35 ΔG° values after. After the free energy value for the first alignment in the mRNA is calculated, free_scan shifts the rRNA tail downstream one base, and the second alignment is examined, and so on across the TIR. Xia et al. created the INN-HB model to improve on the ΔG° estimates obtained using the prior INN models; the INN-HB, unlike the INN model, does not assign all three hybridizations the same ΔG° value. Our programs, free_scan and free_align, are available at SourceForge: http://sourceforge.net/projects/free2bind. We located the SD sequence by the position of the lowest ΔG° value calculated within the initiation region. If ΔG° > −3.4535 kcal/mol, then the gene was assumed not to have an SD sequence. This threshold is based on the work of Ma et al. Accession numbers from the National Center for Biotechnology Information (NCBI) GenBank database for genes mentioned in this paper are: radC (948968); rnpA (1023245); CTC02285 (1060453); polE (1051409); cycK (1053038); nah (1053188); rpsA (945536); wecF (2847677); argD (947864); and hcaF (946997)."} {"text": "To identify as many different transcripts/genes in the Atlantic salmon genome as possible, it is crucial to acquire good cDNA libraries from different tissues and developmental stages, their relevant sequences (ESTs or full length sequences) and attempt to predict function. Such libraries allow identification of a large number of different transcripts and can provide valuable information on genes expressed in a particular tissue at a specific developmental stage.
These data are important for constructing a microarray chip, identifying SNPs in coding regions, and for future identification of genes in the whole genome sequence. An important factor that determines the usefulness of generated data for biologists is efficient data access; public searchable databases play a crucial role in providing such a service. Twenty-three Atlantic salmon cDNA libraries were constructed from 15 tissues, yielding nearly 155,000 clones. From these libraries 58,109 ESTs were generated, of which 57,212 were used for contig assembly. Following deletion of mitochondrial sequences, 55,118 EST sequences were submitted to GenBank. In all, 20,019 unique sequences, consisting of 6,424 contigs and 13,595 singlets, were generated. The Norwegian Salmon Genome Project Database has been constructed and annotation performed by the annotation transfer approach. Annotation was successful for 50.3% of the sequences, and 6,113 sequences (30.5%) were annotated with Gene Ontology terms for molecular function, biological process and cellular component. We describe the construction of cDNA libraries from juvenile/pre-smolt Atlantic salmon (Salmo salar), EST sequencing, clustering, and annotation by assigning putative function to the transcripts. These sequences represent 97% of all sequences submitted to GenBank from the pre-smoltification stage. The data have been grouped into datasets according to their source and type of annotation. Various data query options are offered, including searches on function assignments and Gene Ontology terms. Data delivery options include summaries for the datasets and their annotations, detailed self-explanatory annotations, and access to the original BLAST results and Gene Ontology annotation trees. The potential presence of a relatively high number of immune-related genes in the dataset was shown by annotation searches. The role of aquaculture in the world food industry has rapidly become more important in the last 20 years.
Atlantic salmon is an important aquaculture species with an interesting biology. It spawns in fresh water and develops through several stages before migrating to the sea to feed, a dramatic change of habitat that requires physiological, morphological and behavioural changes. In addition, the salmonids underwent a genome duplication event 25–100 Myr ago. The major goal of all farm animal genome projects is to identify the genetic mechanisms responsible for important and commercially interesting traits, such as disease resistance, growth, meat colour, fat deposition etc., in order to implement these results in the breeding and management programmes. Compared to most other farmed animals, there is still a large stock of wild fish for most aquaculture species, which means that there is also a great need for managing the wild populations. To identify these genetic mechanisms one needs access to various tools such as a genetic and physical map, polymorphic markers (both microsatellites and SNPs), cDNA libraries, ESTs and full-length gene sequences, and preferably the whole genome sequence. In addition, bioinformatics tools and databases are needed to extract biologically meaningful results from this large amount of data. For Atlantic salmon some of these resources have been developed, including genetic markers and maps, a BAC library, and the DFCI (TIGR) annotated Atlantic salmon Gene Index (AsGI). Of the Salmo salar EST sequences in GenBank, 55,118 have been generated and submitted as part of the Salmon Genome Project. We performed 5' sequencing for approximately 75,000 pre-smolt cDNA clones from these libraries. Approximately 68,500 sequences (raw data) were loaded into the Salmon Genome Project (SGP) database as described in Construction and content. After loading, all sequences were subjected to pre-processing in order to clean out poor quality sequences and to trim off vector and linker sequences.
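The sequence bookkeeping reported for this project can be cross-checked directly from the numbers given so far (raw loads, passed ESTs, contigs and singlets):

```python
# Consistency check of the reported EST counts: contigs plus singlets should
# equal the stated number of unique sequences, and the pass rate follows from
# the raw load and pre-processing counts.
raw_loaded = 68_500   # sequences loaded into the SGP database
passed = 58_109       # high-quality sequences after pre-processing
contigs, singlets = 6_424, 13_595

unique = contigs + singlets
pass_rate = round(passed / raw_loaded * 100, 1)
print(unique)      # 20019 unique sequences
print(pass_rate)   # pass rate in percent, roughly 84-85%
```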
After pre-processing there remained 58,109 (84%) high quality sequences, which have been marked as pre-processed (\"passed\") in the SGP database. The pre-processed sequences were submitted to the GenBank dbEST after removal of mitochondrial sequences. The project relies heavily on bioinformatics data processing and analysis, and we have constructed the SGP data resource (Construction and content), which was used for the sequence processing, contig assembly, annotation, and project data hosting. All sequences, as well as other data and results, can be accessed through the SGP data resource. The matches in the PDB and SWISS-PROT databases were considered to be potentially more informative. A match in the PDB database leads to the PDB entry, which contains a link to the UNIPROT entry and, depending on the length of the aligned query sequence, opens a possibility of further function prediction by protein structure modelling. A SWISS-PROT match allowed access to detailed annotation data for the match sequence, verified to high quality standards, including function assignment and cross-links with other databases. Where sequences in SWISS-PROT had GO terms assigned, these terms could be transferred to the matching SGP sequences. A salmonidae-specific search with the SGP dataset was performed on the NCBI NT database as an attempt to identify the SGP sequences which are similar to those that had previously been annotated as belonging to salmonidae. The results of this automatic annotation, loaded in the SGP database as the SGP-Sal dataset, inevitably include some mismatches in its 1,768 hits, but provide a useful estimate of the possibly known salmonidae genes in the SGP data. Another annotation was performed where the sequences with salmonidae hits were excluded from the SGP dataset. The dataset identified as SGP-noSal was also loaded.
These annotation datasets are available under the Clustered data datasets, Clustered data summary and Annotations menus, as well as via searches in the SGP database. Contig consensus sequences and singlet sequences were annotated by the BLAST-GO automatic annotation pipeline. The annotation focused on the specialised, detailed results, which provide novel information and extend the currently available annotation data, including putative GO term assignments. There are three major ways of accessing the SGP data resource: by the Clustered data datasets and Clustered data summary menus, and via the SGP database searches. The Clustered data datasets menu provides access to the datasets themselves. The detailed annotation output format includes the length of the matched segment of the query and the per cent identity with the aligned sequence. Annotation statistics are shown in the Table. Putative assignments of the GO terms (Construction and content), referred to here as GO-GOA annotation, were made for 30% of the SGP contigs and singlets, with almost 50% of the contigs returning GO-GOA hits. The GO-GOA annotations present a very uneven picture. Important function categories such as development, immune response and others were selected to assess the difference between annotation searches with keywords representing broader search terms and, alternatively, exact GO terms. We have constructed 23 tissue-specific cDNA libraries from pre-smolt Atlantic salmon (Salmo salar). Subsequent EST sequencing and clustering yielded 6,424 contigs and 13,595 singlets, resulting in a total of 20,019 unique sequences. All data on ESTs, clustering and annotation can be accessed via the SGP data resource.
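The annotation-transfer idea described above can be sketched as follows. All field names and the E-value cutoff are hypothetical; the SGP pipeline's actual criteria are not specified in the text:

```python
# Minimal sketch of BLAST-based annotation transfer: take each contig or
# singlet's most significant BLAST hit and, if it clears an E-value cutoff,
# copy the hit's GO terms onto the query. Field names and cutoff are invented.
def transfer_go(blast_hits, go_by_accession, evalue_cutoff=1e-10):
    annotations = {}
    for query, hits in blast_hits.items():
        best = min(hits, key=lambda h: h["evalue"])  # most significant hit
        if best["evalue"] <= evalue_cutoff:
            terms = go_by_accession.get(best["accession"], [])
            if terms:
                annotations[query] = {"hit": best["accession"], "go": terms}
    return annotations
```

Queries whose best hit is too weak, or whose hit carries no GO terms, simply remain unannotated, mirroring the partial annotation coverage reported above.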
On the whole, annotation searches in the SGP database and access to annotations as dataset summaries or as detailed results offer a powerful tool for exploring, at different levels of granularity, biological features reflected in the EST data. A database search, which can be done using sophisticated keyword search options, will produce an overview of the highly reliable sequence similarities (\"best hits\") and their gene and function annotations, including GO assignments. For each of the \"best hits\" displayed in the overview, a separate link will produce a detailed annotation output presented in a user-friendly format, listing all significant hits in all databases used in the annotation. Users wishing to explore further annotation details can do this via links to the source EST sequences, dissected alignments in the original BLAST format, target (hit) sequences in the source databases, and the original GO annotation tree. The Salmon Genome Project (SGP) data resource is available online; web access is optimised for Netscape 8 and Internet Explorer. AAA supervised SGP bioinformatics, designed software and data processing techniques, and developed the data resource. He carried out software development and data processing and drafted part of the manuscript. AVV, TAR, and JKL worked on software design and carried out development, implementation and data processing. AVV in addition was responsible for web design. HH-L did the construction and sequencing of the SSH libraries as well as the normal gills and intestine libraries. BH conceived, headed and coordinated the project and performed the construction and sequencing of all cDNA libraries except those done by HH-L. He loaded all sequences into the DB and performed the sequence processing using the preAssemble pipeline.
He drafted part of the manuscript. All authors have read and approved the final manuscript."} {"text": "Transposable elements (TEs) are mobile, repetitive sequences that make up significant fractions of metazoan genomes. Despite their near ubiquity and importance in genome and chromosome biology, most efforts to annotate TEs in genome sequences rely on the results of a single computational program, RepeatMasker. In contrast, recent advances in gene annotation indicate that high-quality gene models can be produced from combining multiple independent sources of computational evidence. To elevate the quality of TE annotations to a level comparable to that of gene models, we have developed a combined evidence-model TE annotation pipeline, analogous to systems used for gene annotation, by integrating results from multiple homology-based and de novo TE identification methods. As proof of principle, we have annotated “TE models” in Drosophila melanogaster Release 4 genomic sequences using the combined computational evidence derived from RepeatMasker, BLASTER, TBLASTX, all-by-all BLASTN, RECON, TE-HMM and the previous Release 3.1 annotation. Our system is designed for use with the Apollo genome annotation tool, allowing automatic results to be curated manually to produce reliable annotations. The euchromatic TE fraction of D. melanogaster is now estimated at 5.3% (cf. 3.86% in Release 3.1), and we found a substantially higher number of TEs than previously identified. Most of the new TEs derive from small fragments of a few hundred nucleotides long and highly abundant families not previously annotated. We also estimated that 518 TE copies (8.6%) are inserted into at least one other TE, forming a nest of elements. The pipeline allows rapid and thorough annotation of even the most complex TE models, including highly deleted and/or nested elements such as those often found in heterochromatic sequences. Our pipeline can be easily adapted to other genome sequences, such as those of the D. melanogaster heterochromatin or other species in the genus Drosophila. A first step in adding value to the large-scale DNA sequences generated by genome projects is the process of annotation—marking biological features on the raw string of adenines, cytosines, guanines, and thymines. The predominant goal in genome annotation thus far has been to identify gene sequences that encode proteins; however, many functional sequences exist in non-protein-coding regions and their annotation remains incomplete. Mobile, repetitive DNA segments known as transposable elements (TEs) are one class of functional sequence in non-protein-coding regions, which can make up large fractions of genome sequences and can play important roles in gene and chromosome structure and regulation. As a consequence, there has been increasing interest in the computational identification of TEs in genome sequences. Borrowing current ideas from the field of gene annotation, the authors have developed a pipeline to predict TEs in genome sequences that combines multiple sources of evidence from different computational methods. The authors' combined-evidence pipeline represents an important step towards raising the standards of TE annotation to the same quality as that of genes, and should help catalyze their understanding of the biological role of these fascinating sequences. Transposable elements (TEs) are mobile, repetitive DNA sequences that constitute a structurally dynamic component of genomes. The taxonomic distribution of TEs is virtually ubiquitous: they have been found in nearly all eukaryotic organisms studied, with few exceptions.
TEs represent quantitatively important components of genome sequences, and the predominant tool used to annotate them is RepeatMasker (http://www.repeatmasker.org/), which recent studies indicate may be “neither the most efficient nor the most sensitive approach” for TE annotation. One recurrent difficulty concerns the poly(A) tail of non-LTR retrotransposons: an annotation can run past the reference element's poly(A) tail because the reference sequence has a shorter poly(A) tail than a particular genomic copy. In general, these cases are easily identified by observing an overlapping poly(A) simple repeat at the 3′ end of the element. One solution to this problem is to extend the poly(A) tail of non-LTR retrotransposons in the reference set to the length of the longest observed genomic copy. The biggest pitfall we have encountered is the problem posed by simple repeats that exist in TE reference sequences. Without a specific treatment of this problem we would have included 3,040 spurious hits—approximately one-third of our original set of annotations. Filtering simple repeats on the genomic or reference sequences without affecting the sensitivity of TE detection is not easy. We have developed an effective (but ad hoc) two-step filtering strategy, but the magnitude of this problem leaves room for future improvements. Currently we employ RM to detect simple repeats, although refined parameter optimization may reveal that other more specialized simple repeat detection software, such as TRF or Mreps, performs better. Regardless of the best method or criteria to detect simple repeats, the existence of simple repeats in TE reference sequences raises an important problem, since it is difficult to unambiguously determine whether a simple repeat with homology to a TE is a spurious hit or reflects a true remnant of that TE in the genome. Our methods guarantee that if we leave a spurious hit in the annotation because of homology with a simple repeat, it is more than 170 bp long. Moreover, any potentially real TE labeled as spurious that did not survive our rescue strategy bears no unique hallmarks of being generated by a TE.
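A two-step filter of the kind described above might be sketched as follows. Only the 170 bp length comes from the text; the coverage cutoff and hit fields are hypothetical, and the authors' actual rescue criteria are more involved:

```python
# Hypothetical sketch of a two-step simple-repeat filter: drop TE hits that
# are mostly covered by simple-repeat masking, then rescue long hits. Only
# the 170 bp length is taken from the text; the 80% cutoff is invented.
def filter_hits(hits, min_rescue_len=170, max_sr_coverage=0.8):
    kept, spurious = [], []
    for hit in hits:
        length = hit["end"] - hit["start"] + 1
        covered = hit["simple_repeat_bp"] / length
        if covered > max_sr_coverage and length <= min_rescue_len:
            spurious.append(hit)   # short and mostly simple repeat: drop
        else:
            kept.append(hit)       # long enough, or mostly TE-like: keep
    return kept, spurious
```

Under this sketch, any hit that survives despite high simple-repeat coverage is necessarily longer than 170 bp, matching the guarantee stated above.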
Nevertheless, the possibility of the involvement of TEs in the genesis of microsatellites has been highlighted previously. We have shown in this work that a combined-evidence framework can improve the quality and confidence of TE annotations in the D. melanogaster genome. Our automated pipeline allows us to annotate TEs on a genomic scale quickly and accurately, and the integration of our pipeline with the Apollo annotation tool also allows rapid evaluation and manual editing of TE annotations for even complex TE models. Based on the lessons learned in this study, we are continuing to develop and improve our pipeline. We are automating several classes of the manual edits that we have identified and expect that progressively fewer manual edits will be necessary in the future, allowing application of our pipeline to larger genome sequences such as the human sequence. One possible solution to the simple repeat problem is to develop a “combined sensor” model that would seek to resolve competing signals between simple repeats and TE models. It may also be possible to predict nested elements that require manual edits by using a stochastic context-free grammar approach. We have observed several cases in the genome annotation where one or more de novo methods simultaneously support a potential sequence belonging to a new TE family, and results of our analyses with tools that detect anonymous TEs suggest further candidate families in Drosophila. Since the methods that support these predictions potentially suffer from a high false positive rate, we have chosen not to include them in our current annotation, as more work needs to be done to validate these potential new TE families. Nevertheless, the combined evidence for some of these elements is compelling, and such cases are available for mining in our current results.
We hope that the TE annotations presented here will serve to further the development and refinement of TE discovery and annotation methods in general, as the Release 3.1 annotations have served for the development of our current methods. In general, the problem of TE discovery remains a major challenge for TE annotation. A good TE annotation relies critically on an expertly assembled reference sequence set, data that currently cannot be obtained in an automatic fashion. This crucial step is now the bottleneck in any method or pipeline to annotate TEs in genome sequences. Finally, we are also developing our pipeline to include methods for the detailed annotation of the structural features in TE sequences. Development of such detailed annotation methodologies will allow a detailed evaluation of the coding and expression potential of individual TE annotations in genomic sequences. Moreover, the ability to automatically annotate structural features of TEs will facilitate the manual curation and validation of candidate TE sequences resulting from one or several different de novo TE discovery methods. The D. melanogaster genomic sequences and TE reference sets are available from BDGP (http://www.fruitfly.org/); TE reference sequence sets v.7.1 and v.9 were used. The Release 3.1 D. melanogaster genomic sequences and their TE annotations have been extracted from the GAME-XML files. The Release 4 D. melanogaster genomic sequences have been downloaded as fasta files. Sequences of the TEs were also obtained from the Repbase Update database release 8.12 (http://www.girinst.org); we used them to detect unknown families by similarity with TEs from other species. The NCBI BLAST (ftp://ftp.ncbi.nlm.nih.gov/blast/) programs were used with default parameters, using as a query genomic fragments of 50 kb, overlapping by 100 bp. We have improved three C++ programs: BLASTER, MATCHER, and GROUPER, previously presented in Quesneville et al.
BLASTER runs the underlying BLAST comparisons. MATCHER has been developed to map match results onto query sequences by first filtering overlapping hits. When two matches overlap on the genomic (query) sequence, the one with the best alignment score is kept; the other is truncated so that only nonoverlapping regions remain on the match. As a result of this procedure, a match is totally removed only if it is included in a longer one with a better score. All matches that have an E-value greater than 1 × 10−10 or a length of 20 or less are eliminated. Long insertions (or deletions) in the query or subject could result in two matches, instead of one with a long gap. Thus, the remaining matches are chained by dynamic programming. A score is calculated by summing match scores and subtracting a gap penalty (0.05 times the gap length) and a mismatch penalty (0.2 times the mismatch region length). GROUPER uses the chained matches to gather similar sequences into groups by simple link clustering. A match belongs to a group if one of the two matching sequence coordinates overlaps a sequence coordinate of this group by more than a given length coverage percentage threshold (a program parameter). If the two matches overlap with this constraint, their coordinates are merged, taking the extremum of both. Groups that share sequence locations, not previously grouped because of a too-low length coverage percentage, are regrouped into what we call a cluster. As a result of these procedures, each group contains sequences that are homogeneous in length. A given region may belong to several groups, but all of these groups belong to the same cluster. RepeatMasker (http://www.repeatmasker.org) screens for TEs and low-complexity DNA sequences.
It detects TEs in nucleic acid sequences by nucleic sequence alignment with previously characterized elements using the program Cross_match (http://www.phrap.org/phredphrapconsed.html) or WU-BLAST (http://blast.wustl.edu) with the script MaskerAid. A first Python script computed the sensitivity (1) and specificity (2) of each method from the counts of true positive (TP—correctly annotated as belonging to a TE), false positive (FP—incorrectly annotated as belonging to a TE), true negative (TN—correctly annotated as not belonging to a TE), and false negative (FN—incorrectly annotated as not belonging to a TE) nucleotides, with sensitivity = TP/(TP + FN) and specificity = TN/(TN + FP). A high sensitivity indicates that a method misses few TE nucleotides. A high specificity indicates that a method finds few false positive nucleotides. The second Python script compared the boundaries of predictions to the boundaries of the reference annotations. For each prediction under test, we searched the reference annotations that overlapped on the same genomic region. Different cases could be distinguished according to one-to-one, one-to-many, many-to-one, or many-to-many relationships. In some cases, a copy had a large insertion or deletion; the two fragments (flanking the indel) were predicted as two separate copies, and the fragments were not joined. We called this error class “method not joined”. We also found cases in which two predictions were falsely considered as one in the reference annotation. Here, a long region of mismatch separated two fragments and the most parsimonious explanation was the independent insertion of two copies. These were “annotation over-joined” cases. We also found cases considered as one copy by the reference annotation, but that were in fact copies with a self-duplicated region. If the duplication was nested we call it “same TE nested”; if not nested, “TE duplication”. One-to-many relationships were cases in which two annotations in the reference were found joined by the method.
We called this “annotation not joined”. One-to-zero relationships corresponded to cases in which a prediction did not correspond to a reference annotation. “New TE” cases were copies identified by the method under test but not present in the reference annotation, and “different TE” cases were those overlapping a reference annotation but with a different TE family name. A TE prediction included in a prediction of a different family already involved in a given relationship with reference annotations was called “new nest” if no corresponding reference annotation could be found. Annotation correspondences of the same TE family but on a different strand were called “other strand” if the relationship was one-to-one; otherwise they were “new TE”. Finally, we had a “complex structure” case when the relation was many-to-many. The script could also be used in an anonymous mode to test boundaries of de novo predictions that do not use a specific reference sequence. The information used for such comparisons is of poorer quality, since we do not have alignment coordinates on the reference sequence, which renders several categories meaningless.

The sequencing and analysis of ESTs is for now the only practical approach for large-scale gene discovery and annotation in conifers because their very large genomes are unlikely to be sequenced in the near future. Our objective was to produce extensive collections of ESTs and cDNA clones to support manufacture of cDNA microarrays and gene discovery in white spruce (Picea glauca [Moench] Voss). We used several sequence similarity search approaches for assignment of putative functions, including blast searches against general and specialized databases, Gene Ontology term assignment and Hidden Markov Model searches against PFAM protein families and domains.
In total, 70% of the spruce transcripts displayed matches to proteins of known or unknown function in the Uniref100 database. We identified multigenic families that appeared larger in spruce than in the Arabidopsis or rice genomes. Detailed analysis of the translationally controlled tumour protein and S-adenosylmethionine synthetase families confirmed a twofold size difference. Sequences and annotations were organized in a dedicated database, SpruceDB. Several search tools were developed to mine the data either based on their occurrence in the cDNA libraries or on functional annotations. We produced 16 cDNA libraries from different tissues and a variety of treatments, and partially sequenced 50,000 cDNA clones. High quality 3' and 5' reads were assembled into 16,578 consensus sequences, 45% of which represented full length inserts. Consensus sequences derived from 5' and 3' reads of the same cDNA clone were linked to define 14,471 transcripts. A large proportion (84%) of the spruce sequences matched a pine sequence, but only 68% of the spruce transcripts also matched the rice or Arabidopsis genomes. This report illustrates specific approaches for large-scale gene discovery and annotation in an organism that is very distantly related to any of the fully sequenced genomes. The ArboreaSet sequences and cDNA clones represent a valuable resource for investigations ranging from plant comparative genomics to applied conifer genetics. This collection of ESTs constitutes an important new resource for the genomics of white spruce and related species. In this paper, we report the sequence analysis of around 71,000 sequence reads obtained through 3' and 5' sequencing of cDNAs. Comparative analyses were conducted to assign a functional annotation based upon similarities. Spruce contigs were also correlated with terms derived from the Gene Ontology, and the sequences were deposited under [Genbank:CO472624-CO490610]. Sequence traces from the spruce EST libraries were analyzed and assembled with Phrap (version 0.990329).
The quality control of the resulting consensus sequences used a system developed at the CCGB. This system uses information that is included in the ace file generated by Phrap. From the ace file, several important characteristics of a consensus sequence and its member sequences can be determined. The first characteristic used in this process is the "shape" of the consensus sequence, or how the assembled reads overlap each other. This can be thought of as the profile of the consensus sequence member distribution. Consensus sequences are classified as being of block, staircase, or dumbbell shape. Contigs with a dumbbell shape are candidates for additional evaluation. Reads within a dumbbell-shaped contig are evaluated for their similarity to the consensus sequence of the contig. Phrap provides information on the quality regions of assembled sequences, which is used for this step. If the high quality region of the read (as defined by the Phrap ace file) has less than 95% consistency with the consensus sequence of the contig, or has more than 5 mismatched bases relative to the consensus, the read is flagged as a suspected chimera, provided it also shows evidence of either a polyA or polyT region. The process of chimera detection and removal is often repeated numerous times before arriving at a finished assembly. The final step of the quality control process is to examine the flagged reads visually to find chimeric qualities. Chimeric reads are selected and removed based on their similarity to the consensus sequence and to the individual reads in the contig.
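The chimera-flagging criteria above can be sketched as a single check. This is a schematic illustration with hypothetical names; only the 95% consistency threshold, the 5-mismatch limit, and the polyA/polyT condition come from the text.

```python
def is_suspected_chimera(consistency, mismatches, has_polya_or_polyt):
    """Flag a read as a suspected chimera: its high-quality region has
    less than 95% consistency with the contig consensus, or more than
    5 mismatched bases, provided the read also shows a polyA or polyT."""
    return (consistency < 0.95 or mismatches > 5) and has_polya_or_polyt
```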
A chimeric read may also be indicated if blast hits to different proteins are found to be adjacent in the read. Similarity searches were performed with the tblastx or blastx programs against the gene indices of Arabidopsis (AGI11), rice (OGI16) and pine (PGI5.0), retrieved from the TIGR web site, and against a Cycas EST assembly. Blast searches were also conducted against several other databases: the NCBI non-redundant database (nr) and the Uniref100 peptides set. Arabidopsis and rice coding sequences were downloaded from the TAIR web site. HMM searches were performed against the PFAM protein families and domains. To correlate the spruce consensus sequences to a Gene Ontology (GO) molecular function term, the annotations of homologous Uniref100 and Arabidopsis proteins were analysed. For each spruce consensus sequence, the blastx hits with a minimum similarity value of 0.75 and a minimum coverage of 0.5 were used in the GO assignment procedure. Similarity was defined as hsp positives / hsp alignment length (hsp: high scoring pair). Coverage was defined as the high scoring pair alignment length × 3 / query length. Among the retained hits, whenever a spruce sequence matched a protein with an associated GO term, this term was transferred to the spruce consensus sequence.
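The hit-filtering step of this GO assignment procedure can be sketched as follows. Function and field names are hypothetical; the 0.75 similarity and 0.5 coverage thresholds, and both definitions, come from the text.

```python
def go_transferable_hits(hits, query_length):
    """Keep blastx hits whose similarity (HSP positives / HSP alignment
    length) is at least 0.75 and whose coverage (HSP alignment length * 3
    / nucleotide query length) is at least 0.5; the GO terms of the kept
    hits may then be transferred to the spruce consensus sequence."""
    kept = []
    for positives, aln_length, go_terms in hits:
        similarity = positives / aln_length
        coverage = aln_length * 3 / query_length
        if similarity >= 0.75 and coverage >= 0.5:
            kept.append(go_terms)
    return kept
```

For a 500 nt query, a 100-residue HSP gives a coverage of 100 × 3 / 500 = 0.6, so the hit is retained whenever its similarity reaches 0.75.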
Two GO annotation lists were completed: one including Inferred from Electronic Annotation (IEA) evidence codes and one excluding IEA evidence. Authors' contributions: NP, coordination of bioinformatics activities, data analysis, preparation of the manuscript; CP, LP, JC, JEJ, ER, sequence processing, assembly and annotation, web publishing and database development; MJM, JC, ASé, plant material production, library synthesis, and evaluation; EN, CGC, protein family sequence analyses; YB, SB, GY, JS, ASi, RH, MM, high-throughput EST sequencing and quality assurance; CP, JB, preparation of manuscript; JM, overall project supervision, preparation of manuscript. Supplementary material: description of tissues used for cDNA library synthesis (genotype, treatments, organ, tissue and developmental stage); annotation of proteins related to the cell wall, based on similarities with sequences from the Cell Wall Navigator Database and with lignin biosynthesis enzymes, using tblastx searches with e-value < 1e-10.

With the exponential increase in genomic sequence data there is a need to develop automated approaches to deducing the biological functions of novel sequences with high accuracy. Our aim is to demonstrate how accuracy benchmarking can be used in a decision-making process evaluating competing designs of biological function predictors. We utilise the Gene Ontology, GO, a directed acyclic graph of functional terms, to annotate sequences with functional information describing their biological context. Initially we examine the effect on accuracy scores of increasing the allowed distance between predicted and a test set of curator assigned terms. Next we evaluate several annotator methods using accuracy benchmarking.
Given an unannotated sequence we use the Basic Local Alignment Search Tool, BLAST, to find similar sequences that have already been assigned GO terms by curators. A number of methods were developed that utilise terms associated with the best five matching sequences. These methods were compared against a benchmark method of simply using terms associated with the best BLAST-matched sequence (best BLAST approach). The precision and recall of estimates increase rapidly as the amount of distance permitted between a predicted term and a correct term assignment increases. Accuracy benchmarking allows a comparison of annotation methods. A covering graph approach performs poorly, except where the term assignment rate is high. A term distance concordance approach has a similar accuracy to the best BLAST approach, demonstrating lower precision but higher recall. However, a discriminant function method has higher precision and recall than the best BLAST approach and the other methods shown here. Allowing term predictions to be counted correct if closely related to a correct term decreases the reliability of the accuracy score. As such we recommend using accuracy measures that require exact matching of predicted terms with curator assigned terms. Furthermore, we conclude that competing designs of BLAST-based GO term annotators can be effectively compared using an accuracy benchmarking approach. The most accurate annotation method was developed using data mining techniques. As such we recommend that designers of term annotators utilise accuracy benchmarking and data mining to ensure newly developed annotators are of high quality. Genomics research is generating enormous quantities of DNA and protein sequence data.
GenBank, a major repository of genomic data, reports an exponential increase in sequence data; in the last 10 years the quantity of data has increased more than two-hundred-fold.

For each term-combination [i]:
    observed = number of occurrences where term-combination [i] terms were all assigned to a single sequence
    While observed number < 5:
        Drop lowest scoring term based on assignment method
        observed = number of occurrences where term-combination [i] terms were all assigned to a single sequence
    End while
    expected = ((n1/max) * (n2/max) * ... * (nn/max)) * max [where max ...]
    While test statistic < chi-square critical value AND terms > 1:
        ...
    If ...:
        Assign these terms to sequence
    End If
End for

CEJ undertook initial study design, software implementation, statistical analysis and interpretation, and drafted the initial manuscript. UB and ALB participated in the final study design, coordinated the study and contributed to the final manuscript.

Genome annotation can be viewed as an incremental, cooperative, data-driven, knowledge-based process that involves multiple methods to predict gene locations and structures. This process might have to be executed more than once and might be subjected to several revisions as the biological (new data) or methodological (new methods) knowledge evolves.
In this context, although a lot of annotation platforms already exist, there is still a strong need for computer systems which take charge of not only the primary annotation, but also the update and advance of the associated knowledge. In this paper, we propose to adopt a blackboard architecture for designing such a system. We have implemented a blackboard framework for developing automatic annotation systems. The system is not bound to any specific annotation strategy. Instead, the user will specify a blackboard structure in a configuration file and the system will instantiate and run this particular annotation strategy. The characteristics of this framework are presented and discussed. Specific adaptations to the classical blackboard architecture have been required, such as the description of the activation patterns of the knowledge sources by using an extended set of Allen's temporal relations. Although the system is robust enough to be used on real-size applications, it is of primary use to bioinformatics researchers who want to experiment with blackboard architectures. In the context of genome annotation, blackboards have several interesting features related to the way methodological and biological knowledge can be updated. They can readily handle the cooperative and opportunistic (the flow of execution depends on the state of our knowledge) aspects of the annotation process. The first complete genomic sequence of a living organism, that of the bacterium H. influenzae, was obtained in 1995. Ten years later, the number of fully sequenced genomes is steadily increasing: more than 350 bacterial and archaebacterial genomes and 20 eukaryotic genomes are presently available in public databases. However, the availability of the sequence is merely a starting point. The real challenge actually consists in interpreting and annotating the genomic text. When annotating a genome, biologists are especially looking for the genes, i.e. the regions of the chromosome containing the information to produce proteins or RNA, as well as regulatory signals.
Finding all genes and regulatory signals on a complete raw genomic sequence is still an open problem, especially in the case of eukaryotic genomes, where the coding regions are interspersed with non-coding regions called introns. Moreover, finding genes and signals is just the first step of the process. Once this has been done, the biologist must assign a putative function to the gene's product. This is done, for instance, by scanning databases of known proteins in order to pick up those that most resemble the protein to identify. Finally, once all this information has been collected, new and more complex questions arise, such as positioning the protein within its metabolic or gene regulation networks. All these steps compose the annotation process and involve computer programs as well as a lot of human expertise. Genome annotation can be seen as an incremental, cooperative, data-driven, knowledge-based process. The annotation is thus a long and tedious interpretation process. Moreover, it might have to be executed more than once and subjected to revisions. First of all, sequencing errors may be reported or manual corrections may be supplied by experts and will call for the reannotation of the corrected regions. Moreover, as new prediction methods appear, they should be applied to the already annotated genomes to produce up-to-date annotations. There is therefore a strong need for computer systems which take charge of not only the primary detection of genomic features, but the whole incremental annotation process. In this paper, we propose to adopt a blackboard architecture for designing such a system. To our knowledge, no annotation software has ever been designed as a blackboard system, but several existing automatic annotation platforms have adopted well-recognized architectures.
Our purpose is not to list hereafter all the existing platforms. Biopipe has been organized around the following components:
• A set of "analyses", which describe how a method can be accessed and what the adequate parameter values are;
• A set of rules, which specify when and how a method has to be executed; the set of rules thus defines the possible sequences of analysis methods;
• A manager, which is in charge of accessing the data.
Taverna has adopted a workflow architecture. ImaGene has adopted an object-oriented design. In rule-based systems, the biological data are represented by facts and the methodological knowledge is represented by rules. Rules express how facts in the current state of the system allow new facts to be inferred, which in turn allow the activation of rules, and so on. MagPie is a good example of such a rule-based system. In GeneWeaver, the annotation task is distributed among several kinds of agents:
• The Primary database agents maintain a shared sequence database up to date so that it can be read and used by the agents which need this information;
• The Non-redundant database agents rely on the information provided by the Primary database agents to maintain a curated non-redundant database;
• The Genome agents manage the information related to a particular genome;
• The Calculation agents are associated with sequence analysis methods;
• The Broker agents register and manage the information on all the other agents so as to facilitate their working.
One of the limitations of GeneWeaver is that it has never actually been deployed as a truly operational annotation system.
However, it recently gave rise to AGMIAL, a system dedicated to bacterial genome annotation. Other systems using multi-agent concepts have been proposed, including BioMAS/DECAF and EDIT:
• The Information Extraction agents provide access to databases as well as some calculation services (such as sequence similarity search or feature predictions);
• The Task agents are mostly generic middle agents, except for the Annotation agent, which orchestrates the collection of information for each sequence and, therefore, provides some reasoning capabilities about sequence features;
• The Interface agents communicate with other agents and provide a user interface for manual annotation and database querying.
These various architectures can be classified into two main categories. The first category, which includes pipeline, workflow and task-based systems, is characterized by a sequential method invocation scheme. The second class, which includes multi-agent, blackboard-based and rule-based systems, is characterized by an opportunistic, event-driven method invocation scheme. The advantages and drawbacks of these various approaches will be discussed later on in the Discussion section. Blackboard systems exhibit several similarities with the aforementioned rule-based systems: a shared working memory, a procedural representation of knowledge and an inference cycle. The blackboard architecture is well known to support cooperative data-driven interpretation processes. A blackboard system has three main components: the blackboard, the knowledge sources and the controller. The blackboard is a shared working space, hierarchically structured into layers. Each layer receives domain entities. These entities are produced by knowledge sources acting on entities belonging to lower layers. The bottom layer is directly populated with input data.
In Hearsay, the bottom layer contains the raw signal coming from the microphone; the second layer receives the segments into which this continuous signal was decomposed; the third layer stores the hypothetical phonemes associated with each segment; on the fourth layer, these phonemes are grouped into syllables, and so on, up to the last layer, which contains the formal database query. Since a knowledge source may produce more than one entity for a given set of input data, the entities are given a hypothesis status. Some of them will be confirmed later on and merged into new entities stored in upper layers. Others will be discarded. The management of hypotheses is therefore an important feature of a blackboard system. All layers of the blackboard are observed by knowledge sources (KS). A KS takes as input the entities of one or more layers and will infer new entities to be stored on one or more higher layers. Inference of a new entity may be the result of an algorithm, a set of expert rules, a formal neural network, or any executable code. From the system point of view, a KS is actually a black box which is only known by the pattern of entities it expects as input (the activation condition) and the type of entities it produces as output. In Hearsay, the KS working on the lowest layer is a signal processing method; other KSs deal with the succession of phonemes and attempt to merge them into syllables; other KSs look up lexicons to predict words from syllables or check the syntax of a predicted sentence. The inference process of a blackboard system follows a cycle. First, as an event, such as the creation of an entity, occurs on the blackboard, the controller will inform all the KSs that are concerned by this event. Each selected KS then checks whether its activation condition is satisfied or not. The resulting list of activable KSs is further sorted by the controller in order to prioritize the KSs that must be activated first.
The further execution of these KSs will then produce new entities on the blackboard, thus triggering new events, and a new cycle begins. The process ends when no KS can be further activated, so that the state of the blackboard remains unchanged. The role of the controller is essential in focusing the inference process. It maintains an agenda of the pending KSs and may change the priorities of the agenda entries according to any specified criteria. The efficiency of the overall problem solving process may strongly depend upon the strategy used by the controller to order the KSs. Born as an AI architecture, the blackboard indeed presents several interesting features from the knowledge and software engineering points of view. Most of these properties derive from the existence of a shared working space, which represents, at any time, the state of the system. First of all, the KSs do not interact with each other. A KS is only concerned with the events occurring on one (or more) layer(s). At this time, it checks whether some patterns of entities match its own input pattern and declares itself as applicable to the controller. It is eventually executed when requested by the controller and finally writes its results onto its associated output layers. A KS can therefore be added to or removed from the overall system without affecting the other KSs. Moreover, the inner part of a KS can be modified without affecting the system, as long as its activation pattern is not modified. Alternatively, the strategy of the controller can be independently modified and tuned up in order to make the inference process more efficient. The inference process is said to be opportunistic: the sequence of method invocations is not explicitly expressed before run time, but is decided according to the state of the system.
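The inference cycle just described can be sketched as a small control loop. This is a deliberately minimal, hypothetical illustration of the general technique, not the framework's actual implementation.

```python
def run_blackboard(layers, knowledge_sources):
    """layers: dict mapping a layer name to a list of entities.
    knowledge_sources: list of (priority, condition, action) triples,
    where condition(layers) says whether the KS is activable and
    action(layers) deposits new entities on some layer.
    The controller repeatedly fires the highest-priority activable KS;
    the process ends when no KS is activable (stable blackboard)."""
    while True:
        agenda = [ks for ks in knowledge_sources if ks[1](layers)]
        if not agenda:
            return layers  # no KS can be further activated
        agenda.sort(key=lambda ks: ks[0], reverse=True)
        agenda[0][2](layers)  # firing a KS may make other KSs activable
```

Because each firing may make new KSs activable, the order of execution emerges from the data on the blackboard rather than from a fixed pipeline.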
As a striking illustration of this opportunistic behavior, if data produced by external sources are laid onto the blackboard, they are taken into account as if internally produced by the KSs and will affect the inference process accordingly. Blackboard systems have been built for a large spectrum of applications in which the problems to be solved could not be linearly or hierarchically decomposed into sub-problems. This is the case of most interpretation problems, such as the seminal example of speech analysis and understanding. Examples of application of blackboards in biology include Protean, a system for the interpretation of protein structures. Apart from the striking similarities between genomic sequence annotation and speech analysis, the decision to adopt a blackboard architecture for an annotation system has been motivated by the very nature of the annotation process. As explained previously, the annotation process relies on multiple methods, the execution of which provides different clues on the presence of coding sequences and regulatory signals. These clues have to be confronted, possibly discarded, but hopefully merged at different levels to finally predict the location and the structure of genes. The annotation process can thus be seen as a cooperative, opportunistic, knowledge-based (the conditions under which a method may be applied have to be explicitly expressed) and data-driven (the problem solving process is directed by the occurrence of patterns on the input DNA sequence) problem solving process. This latest point is probably the most important and the most characteristic of blackboard systems as compared to more traditional architectures. In blackboard systems each KS is autonomous and responsible for recognizing a particular state of the shared working space (its activation pattern) and declaring itself as activable. Therefore, the sequence of execution of methods is not programmed in a procedural manner but depends upon the current state of the blackboard.
This aspect sometimes causes trouble to developers who want to have full control over the sequence of execution. With blackboards, one should rather think in terms of event-driven programming. We have implemented a blackboard framework for developing automatic annotation systems. By framework, we mean that the system is not bound to any specific annotation strategy or to any particular KSs. Instead, the user can specify a blackboard structure (layers and KSs) in a single configuration file and the system will actually instantiate and run this particular annotation strategy. This allows designing several strategies to target specific biological applications. However, all these strategies will share common mechanisms and properties that will be described by using the very simple prokaryotic annotation strategy depicted in Figure . Each feature, such as a Stop triplet, a ribosome binding site (RBS) or a coding region (CDS), is described by its origin, its end and its type, i.e. as an oriented and typed interval over the sequence. The blackboard itself is structured into a hierarchy of layers, all collinear with the input sequence (Figure ). For instance, KS:HypoCDS in Figure produces a hypothetical CDS starting from an ORF and an in-frame Start triplet located within this ORF. More generally, it turns out that a large number of such annotation rules can be expressed by considering the relative position of the intervals representing the features. That is, by considering, for instance, that a given interval is located "before" or "after", or "overlaps" another interval. More formally, the activation pattern of a KS can be expressed by using a set of relations between intervals adapted from Allen's work on temporal relations. The activation pattern of KS:HypoCDS states that this source should be activated when there exists an in-frame Start "during" an ORF. In the same way, KS:AnnotCDS will be activated when there exists a database hit "during" a predicted CDS. This KS will further ensure that the characteristic of the overlap (e.g.
the sequence identity) is sufficient to produce an annotated CDS in the upper layer. When the activation pattern of a KS matches a pattern of features, the KS is said to be applicable. It will be further triggered and its output (i.e. a new feature) will be deposited on the corresponding layer. Some KSs integrate feature detection methods relying on bioinformatics algorithms such as Markov modeling or pattern searching, while others merely confront and merge features into more complex structures. In practice both approaches may be mixed, e.g. by packing features in the lowest layers and by working with individual features at the higher levels. This approach will be further illustrated in the Results section. The prototype has not been developed with the ambition to outperform existing automatic annotation systems, but to demonstrate the appropriateness of the blackboard architecture for the development of genome annotation systems, both from the knowledge engineering and the software engineering points of view. Although the system is robust enough to be used on real-size applications, it is of primary use to bioinformatics researchers who want to experiment with blackboard architectures. To this purpose we provide, in addition to the core system, a graphical user interface allowing the user to load a blackboard configuration, to run it (possibly step by step) and to graphically visualize the creation of features on the chromosome during the execution. To instantiate a blackboard, the designer must declare, in a configuration file, the number of layers, the types of the different entities and the KSs, i.e. their input and output patterns and their executable body. This process will be exemplified now on the real annotation strategy depicted in Figure . The strategy first searches (KS:FindORFs) for long ORFs (ALL_ORFS). These ORFs are used to build (KS:LearnMatrix) a Markov transition matrix (matrix). This matrix is further used to actually find (KS:FindCDS) CDSs.
These CDSs are then checked (KS:SearchDB) against an enzyme database (database) whose entries are annotated with EC numbers (an EC number characterizes the function of an enzyme). Finally, when a CDS gets sufficient matches with annotated enzymes and when the majority of these matches have the same EC number, then this EC number is transferred to the CDS's product to eventually yield an enzymatic gene (ENZGENE). It is important to note that, in this example, we stopped at the gene level, but one could imagine continuing to use these genes in higher layers of the blackboard, representing for instance bacterial operons (sets of co-transcribed genes) or metabolic pathways. The first four layers (sequence, matrix, database and ALL_ORFS) are implemented as lists. They represent, respectively, the raw sequence, the Markov matrix, the database to scan and the list of all ORFs. The last three layers are implemented as intervals (TimeLine) and represent individual features: CDSs, Blast hits associated with one CDS and validated enzyme genes, respectively. The second part of the configuration file deals with the declaration of the KSs. Each KS has two parts: the activation pattern tag and the executable body tag. Moreover, the type of entities produced by a KS is specified by the tag attribute create. For instance, KS:FindORFs in Figure activates on the "sequence" layer. It produces a (single packed) entity of type ALL_ORFS through the call of the executable pkorf declared in the tag. All sub-tags of the tag are associated with a piece of Java code. Some of them perform internal operations (e.g. writing some text on the console) and others may call external executables. When adding a new KS to the system, one will also have to define the associated tag.
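The interval relations used in these activation patterns, such as "during", reduce to simple coordinate comparisons. A minimal sketch follows, with a hypothetical representation of a feature as a (start, end) pair; this is illustrative and not the framework's actual classes.

```python
def during(inner, outer):
    """Allen's 'during' relation: the inner interval lies strictly
    inside the outer one (e.g. an in-frame Start 'during' an ORF,
    or a database hit 'during' a predicted CDS)."""
    return outer[0] < inner[0] and inner[1] < outer[1]
```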
A more sophisticated KS example is given in the figure: KS:SearchDB is responsible for computing Blast hits associated with each CDS. Its activation pattern reads as "select all couples composed of one CDS and the database". This CDS will then be scanned against the database and the resulting hits will be further packed. Finally, an example of an Allen relation is given in the figure: KS:ECAnnotator activates on each couple composed of one CDS and a set of associated hits (DBHITS) that is included in ("during") the CDS. To configure a blackboard implementing this strategy, one has to edit an XML configuration file such as the one described in the figure, declaring the layers (CDS, DBHITS and ENZGENE). As a real-sized test case, we ran this particular annotation strategy on the whole chromosome of B. subtilis (~4 Mb). More than 97% of the actual genes (4106) were correctly found (with 7% of over-predicted genes). 399 genes were further annotated with EC numbers, most of them (95%) being correct as compared to the published annotation. On the other hand, 494 genes with EC annotations remained unpredicted, indicating that this particular strategy (or its parameters) was probably too conservative. The total running time on the whole chromosome was 9 hours, most of the time (99%) being actually spent in the Blast steps. As mentioned earlier, the Genepi standalone application also provides a graphical interface allowing the user to follow, step by step, the execution of the blackboard (Figure ). As already mentioned in the Background section, two main classes of architecture of automatic annotation systems can be identified. The first class, which includes pipeline, workflow and task-based systems, is characterized by a sequential method invocation scheme. The order in which the analysis methods have to be executed is static and predefined. Some variations may be accepted if alternative sequences can be described.
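The decision rule applied by KS:ECAnnotator, as described earlier (transfer an EC number to a CDS only when there are sufficient annotated matches and a majority of them agree), could look like the following sketch. The thresholds are assumptions for illustration, not the published parameters:

```python
from collections import Counter

def annotate_ec(hits, min_hits=5, majority=0.5):
    """Return the majority EC number for a CDS's enzyme-database hits, or None.

    hits: list of EC number strings from database matches to one CDS.
    min_hits and majority are illustrative thresholds, not Genepi's defaults.
    """
    if len(hits) < min_hits:
        return None  # not enough annotated matches to decide
    ec, count = Counter(hits).most_common(1)[0]
    # Transfer the EC number only if it is held by a strict majority of hits.
    return ec if count / len(hits) > majority else None

print(annotate_ec(["1.1.1.1"] * 6 + ["2.7.7.7"] * 2))  # -> 1.1.1.1
print(annotate_ec(["1.1.1.1", "2.7.7.7"]))             # -> None (too few hits)
```

Tightening `min_hits` or `majority` makes the strategy more conservative, which is the kind of parameter effect the B. subtilis test case hints at (494 enzymatic genes left unannotated).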
This is often the case in task-based systems, which allow the next task to be chosen according to the results of the previous one. The main drawback of such a sequential scheme is the lack of flexibility regarding the maintenance and the modification of the system, especially when new methods are to be added. On the other hand, the end user can easily follow the execution of the system. The second class, which includes multi-agent, blackboard-based and rule-based systems, is characterized by an opportunistic method invocation scheme. The order in which the methods are called is not preset but is determined at runtime by the state of the system. The major advantage is that a new knowledge chunk corresponding to a new method can easily be added or removed without much disturbing the other parts of the knowledge base. If the conditions of a method invocation have been properly defined, the method will be appropriately called when the corresponding state occurs in the system. Among these systems, blackboard architectures present decisive advantages: a shared working memory structured in layers matching the levels of hypothesis setting, a centralized control strategy which is easy to follow, and an intuitive description of the methods and their activation patterns as independent knowledge sources. In a multi-agent system, the entities, i.e. the agents, communicate directly with one another by sending messages: the control is therefore also distributed. On the contrary, in blackboards, KSs never communicate directly. They find their input on the blackboard and deposit the products of their execution on the same blackboard, which can thus be seen as a shared working memory. In both cases, the advantages of the architecture result from the modularity it induces, from both the software and knowledge engineering points of view.
However, we consider that the existence of a shared and structured working memory, together with a central controller, produces a reasoning system which is much easier to follow and understand, and therefore easier to maintain, tune and extend. Moreover, the KSs are truly independent modules that can be added or removed without affecting the others. On the other hand, multi-agent architectures are better suited to distributed environments. Despite their qualities, blackboard systems currently seem to be much less popular than multi-agent systems. An explanation for this situation probably lies in the development of object-oriented techniques, which provided the technology to efficiently implement agents as interacting concurrent processes. At the same time, the complexity of blackboard systems increased (it was not uncommon to read about systems which included multiple blackboards and highly sophisticated control strategies), and they thus lost some of their most appreciated properties. Multi-agent and blackboard systems are both part of the so-called distributed AI systems, in which the processing capacity is distributed among multiple entities: the KSs of the blackboard systems and the agents of the multi-agent systems. The main difference lies in the way the entities communicate. When new analysis methods are wrapped as new KSs, these additions are simple because the KSs communicate only via the blackboard and therefore do not interfere with the ones already integrated. From this point of view, the annotation of a genomic sequence can thus be updated after a new analysis method has been integrated. Conversely, if a sequencing error has been detected and corrected on the raw sequence, all the KS executions in which the corrected region was involved can be forgotten, their output erased from the layers of the blackboard and the inference cycle reactivated.
Finally, we would like to mention another important case where the update facilities offered by a blackboard architecture could be put into play. Usually, after a first pass of fully automatic annotation, the genomic features need to be manually reviewed by experts. For instance, the start position, the functional annotation or simply the presence of a gene may be modified. These manual modifications may have consequences on the overall annotation and need to be propagated, for instance if the modified CDS is involved in higher structures like operons or pathways. In the blackboard view, this means that the human expert plays the role of a new KS. The propagation and update of the modification can then be handled by the architecture. In the context of genome annotation, blackboards have several interesting features related to the way the methodological and biological knowledge can be updated. As research in bioinformatics produces new methods, they can be added to the system, wrapped as new KSs. Since KSs never communicate directly but only via the blackboard, these additions do not interfere with the KSs already integrated. Of course, besides these advantages, blackboard (and multi-agent) systems also have some known drawbacks. The first is the difficulty of making them carry out specific calculations as ordered tasks. As explained before, this is a natural consequence of their opportunistic behavior. Developers should therefore think in terms of event-driven actions rather than strictly ordered tasks. However, if such a pipeline behavior is desired, a solution is to embed the ordered tasks within a single KS. Indeed, the executable body of a KS can be seen as a small pipeline. Of course, this leads to a less declarative system where part of the annotation strategy becomes hidden in the KS. Depending upon the problem to be solved, there is therefore a tradeoff to find between "pure" blackboard and pipeline behavior. Another known, more technical, difficulty is related to the debugging of the system.
Again, because of the event-driven method invocation scheme, it may sometimes be difficult to pinpoint the source of a potential problem. The question of the reliability of bioinformatics software takes a slightly different form depending on whether one considers a single piece of software or a more complicated system such as an integrated platform. In the first case, as long as the software correctly implements algorithms that are well known and understood, the software designers may consider that the results do not need to be further explained or justified. On the other hand, for genome annotation platforms, the execution of sequence analysis algorithms merely provides clues that have to be confronted, filtered and merged according to some methodological knowledge. This knowledge can be either directly provided by the user or formally expressed and integrated into the system. However, the possibility to formally express this knowledge, as rules, objects, tasks or any other modeling entities, does not mean that the resulting system will yield pertinent results. Indeed, this highly depends upon the expertise of the designers, and the results may be further discussed and possibly refuted by the end users. In this context, we believe that an annotation system should not only allow the formal expression and integration of the methodological knowledge; it must also provide facilities for the user to follow and understand the annotation process, and to tune, adapt or even refute the content of the methodological knowledge base. The blackboard architecture appears to offer most of these software and knowledge engineering properties. The distribution includes all the Java sources as well as blackboard samples. The core system and graphical interface run on any platform supporting a Java VM. It has been tested on Linux and MacOSX. Some KSs (like Prokov or Blast) need external executables.
These executables are provided in the distribution for MacOSX and Linux platforms. The Genepi prototype has been implemented in Java and is freely available for download at the following url : AV and FR initiated the project. SDD, DZ and FR designed the architecture and software requirements. SDD wrote most of the Java code and AV provided the external toolbox. All authors participated in testing the software and in editing and proofreading the manuscript. All authors read and approved the final manuscript."} {"text": "Annotations that describe the function of sequences are enormously important to researchers during laboratory investigations and when making computational inferences. However, there has been little investigation into the data quality of sequence function annotations. Here we have developed a new method of estimating the error rate of curated sequence annotations, and applied it to the Gene Ontology (GO) sequence database (GOSeqLite). This method involved artificially adding errors to sequence annotations at known rates, and used regression to model the impact on the precision of annotations based on BLAST-matched sequences. We estimated the error rate of curated GO sequence annotations in the GOSeqLite database (March 2006) at between 28% and 30%. Annotations made without use of sequence similarity based methods (non-ISS) had an estimated error rate of between 13% and 18%. Annotations made with the use of sequence similarity methodology (ISS) had an estimated error rate of 49%. While the overall error rate is reasonably low, it would be prudent to treat all ISS annotations with caution. Electronic annotators that use ISS annotations as the basis of predictions are likely to have higher false prediction rates, and for this reason designers of these systems should consider avoiding ISS annotations where possible. Output from electronic annotators that use ISS annotations to make predictions should be viewed sceptically.
We recommend that curators thoroughly review ISS annotations before accepting them as valid. Overall, users of curated sequence annotations from the GO database should feel assured that they are using a comparatively high quality source of information. A major challenge facing bioinformatics today is how to effectively annotate an exponentially increasing body of publicly available sequence data. While using expert curators to assign functions to sequences might be considered the least error prone approach, this option is far slower than annotation by automated software approaches. On the other hand, automated function annotators often rely on curated sources of information from which to make predictions. It is a commonly held view that curated sequence annotations are of better quality than automated annotations; however, the error rate of curated annotations can be significant. Estimates of the error rate of curated bacterial genome sequence protein and gene-name annotations lie between 6.8% and 8% ,2. The quality of existing sequence annotations impacts on the quality of future sequence annotations through the commonly used practice of basing sequence annotations on sequence similarity. Errors in the use of sequence similarity based annotation strategies have been implicated in a number of commonly described annotation errors -6. There has been some discussion in the literature pinpointing the importance of annotation error propagation ,8. Sequence annotation data generated by numerous projects has been submitted to the Gene Ontology (GO) Consortium and is available for download in various database releases. In comparison to other forms of annotation, such as gene or protein name annotations, GO terms are used to describe the biological context of sequences. Indeed, GO term annotation has become the standard method by which functional information is attributed to sequence data.
As far as the authors are aware, at the time of writing there is no published account systematically examining the error rate of curated GO term annotations, although case-studies have been described -7. As such, the aims of this study are to a) develop an approach to estimating the error rate of GO term annotations, b) use this method to estimate the error rate of GO term sequence annotations submitted to the GOSeqLite database, and c) determine the impact, if any, of using sequence similarity based annotation methods on the error rate of annotations. The GOSeqLite database (revision 3rd March 2006) was downloaded from the GO Consortium site and imported into a local MySQL database. If we were to select two sequences and their associated GO term annotations at random, there are two broad reasons why their term annotations would differ. Firstly, two sequences may differ in their biological context, and differences in their GO term annotations reflect this. We will refer to this biologically relevant variation as 'semantic variation'. Secondly, annotations between two sequences may differ due to annotation errors. Such errors may be missing or incorrect GO term annotations. Owing to the fact that GO terms are related to each other via a directed acyclic graph (DAG), incorrect term annotations may be under-specialised (i.e. an ancestor) or over-specialised (a descendant), or not directly related to the correct GO term that should have been present. This second form of variation we refer to here as 'error variation'. Instead of two sequences chosen at random, consider the case where we have a sequence, referred to as a 'query sequence', that is used in a sequence similarity search to find similar sequences in a reference set of sequences ('reference sequences'). Such a search might result in a large number of sequence matches between the query sequence and reference sequences. For each such sequence match it is possible to assign a precision to its matching annotations.
If we consider that the GO terms associated with the matching reference sequence are being used to predict the GO terms assigned to the query sequence, then the precision of the sequence match is:

P = nm / na

where P is the precision of the sequence annotation, nm is the number of query and reference sequence GO terms that matched, and na is the number of GO terms that the reference sequence is annotated with in the database. This definition of precision allows only exact matches between query and reference sequence annotations to be considered correct. Because the GO is arranged in a DAG, it would be possible to count reference sequence annotations as correct if they are within some number of edges of a query sequence term, as opposed to exact matching. However, previous work has demonstrated that increasing the permitted distance between the query and reference sequence terms dramatically amplifies the precision and recall, calling into question the applicability of these accuracy metrics under such conditions. The precision of the annotations from any sequence match is determined by the semantic variation and the error variation. For this reason, if the impact of the semantic variation can be controlled, the annotation error rate can be estimated using a two-step method. Firstly, we must determine the relationship between annotation error rate and sequence-match annotation precision. To do this we will add annotation errors to reference sequence annotations at known rates, and use linear regression to determine the relationship between precision and annotation error rate. This will allow us to find a model that predicts the annotation error rate for a given precision value. Next, we must estimate the precision when no annotation error is present in the reference annotations. This precision value is referred to here as the 'maximal precision' because it is the highest precision that is possible for the sample of sequence matches.
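The precision definition above can be computed directly for a single sequence match. A minimal sketch, using exact GO identifier matching with no DAG expansion, as the study prescribes:

```python
def match_precision(query_terms, reference_terms):
    """P = nm / na: matched GO terms over all reference-sequence terms."""
    q, r = set(query_terms), set(reference_terms)
    if not r:
        raise ValueError("reference sequence has no GO annotations")
    return len(q & r) / len(r)

# Query annotated with 2 terms, reference with 3, 2 shared -> P = 2/3.
# The GO identifiers are made up for illustration.
p = match_precision({"GO:0003677", "GO:0005634"},
                    {"GO:0003677", "GO:0005634", "GO:0006355"})
print(round(p, 3))  # 0.667
```

Note the asymmetry: the denominator is the reference sequence's annotation count, so a sparsely annotated reference that shares all its terms with the query scores P = 1 even if the query carries many more terms.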
We assume that the semantic contribution to the precision is at its highest possible, and there is no error contribution, when the precision of the sample is equal to the maximal precision; all remaining differences between query and reference sequence annotations are then due to differences in biological contexts. As any such estimate is likely to have a large impact on the final annotation error estimate, we derive two independent approaches to estimating the maximal precision. Given that we know the basal precision of sequence-match annotations and the relationship between precision and annotation error rate, we can use the difference between the maximal and basal precisions to find the annotation error rate at the basal precision. This is then an estimate of the error rate of the reference set annotations. At a BLAST expect value cut-off of 1e-10, each query sequence has a potentially large number of matching reference sequences. For determination of precision scores we selected a sample of the total sequence-matches where, for each query sequence in the query set, the single sequence-match with the highest precision score was chosen. In other words, for each query sequence a single matching reference sequence was selected that had the highest precision observed among all matches to that query sequence. This reference sequence can be considered the best functional match to the query sequence. In this sample of sequence-matches, referred to as the 'highest precision sample', the semantic contribution to the precision must be at its greatest. It is probably a valid assumption that this sample also contains a representative error rate that can be used to estimate the annotation error rate of the entire population. The precision of the highest precision sample is determined by the highest observed semantic contribution and the annotation error. As such, precision estimates from this sample are comparable to the maximal precision estimates.
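Selecting the highest precision sample, one best-scoring reference match per query sequence, can be sketched as follows. The study did this with SQL over BLAST result tables; the tuple layout here is a hypothetical stand-in:

```python
def highest_precision_sample(matches):
    """Keep, for each query, the single match with the highest precision.

    matches: iterable of (query_id, reference_id, precision) tuples,
    one per BLAST sequence match (self-matches already excluded).
    """
    best = {}
    for query, ref, prec in matches:
        if query not in best or prec > best[query][1]:
            best[query] = (ref, prec)
    return best

sample = highest_precision_sample([
    ("q1", "r1", 0.40), ("q1", "r2", 0.75),   # q1 keeps r2
    ("q2", "r3", 1.00),
])
print(sample)  # {'q1': ('r2', 0.75), 'q2': ('r3', 1.0)}
```

One pass over the matches suffices, mirroring a `GROUP BY query_id` with a max-precision selection in SQL.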
If no error were present in the reference sequence annotations, the highest precision sample's precision scores would be approximately the same as the maximal precision estimate. Therefore, if we determine the relationship between precision and annotation error rate for the highest precision sample, the difference between the highest precision sample's precision and the maximal precision estimate can be used to derive an annotation error rate estimate. An important assumption here is that it is possible to compare the sample of sequence-matches used to derive the precision scores at artificially increased error rates to the sample that is being used to generate a maximal precision estimate. We adopted a BLAST expect value cut-off of 1e-10 for several reasons. Firstly, at this value both annotation error and semantic variation would be present. At lower expect value cut-offs, e.g. 1e-100, we would expect little semantic variation. Furthermore, annotation error would be difficult to detect, as few query sequences would have significant matches except to themselves. Such self-matches, as determined by sequence id, are excluded from the analysis. Also, the derived error rate estimate can be considered more applicable to biologists when using this BLAST cut-off value, as it represents a fairly common use-case. However, it is important to note that most sequences found to be similar to a query sequence by BLAST at this cut-off value are not necessarily orthologous. Often they are simply protein sequences with one or more significant regions of similarity, such as a structural domain. As described above, the maximal precision estimate is the maximum precision score possible for a sample of sequence-match annotations. When the precision is equal to the maximal precision estimate, the semantic contribution to precision is maximised and the annotation error is negligible.
The maximal precision estimate will depend on the expect value cut-off of the BLAST search used; here we adopted a cut-off of 1e-10. The importance of this estimate to the error estimation method cannot be overstated. The difference between the precision at the naturally occurring error rate and the precision at zero error rate is used to directly estimate the annotation error rate via a function derived from the regression coefficients. Therefore we developed two independent approaches to estimating the maximal precision for a sample. The first maximal precision estimation method is based on a number of simplifying assumptions concerning the distribution of semantic and error variation. If we assume that sequence-matches with no matching GO terms (i.e. a precision of 0) are purely due to semantic variation, we can then assume that cases that have a non-zero precision (i.e. at least one matching GO term between sequences) contain some semantic and some error variation. Therefore, if the error were removed from cases that have a non-zero precision, their precision could be 1. The assumption that all non-zero precision cases could have a precision of 1 if no error were present is very optimistic, and will result in a conservatively high maximal precision estimate. Using these assumptions we can derive a maximal precision estimate such that:

Mp1 = Nm / N

where Mp1 is the first maximal precision estimate, Nm is the number of sequence matches with at least one matching GO term, and N is the total number of sequence matches. Note that the assumptions made above are fairly gross approximations of how error and semantic variation may be distributed. Indeed, it is unlikely that cases with a precision of 0 are due only to semantic differences between sequences: as the error rate increases, more and more sequence matches are likely to have a precision of 0.
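The Mp1 estimate reduces to a simple ratio over the precision scores of the sample. A small sketch:

```python
def mp1(precisions):
    """Mp1 = Nm / N: the fraction of sequence matches with at least one
    matching GO term (precision > 0) over all matches in the sample."""
    n = len(precisions)
    if n == 0:
        raise ValueError("empty sample")
    n_m = sum(1 for p in precisions if p > 0)
    return n_m / n

print(mp1([0.0, 0.5, 1.0, 0.25]))  # 3 of 4 matches share a term -> 0.75
```

Because every non-zero-precision match is optimistically counted as if it could reach precision 1, Mp1 is an upper-leaning bound, as the text notes.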
For these reasons another maximal precision estimate was developed that is based on assumptions surrounding curated UniProt annotated sequences. The second maximal precision estimate, Mp2, is simply the precision of cases where both sequences in the sequence match were annotated by UniProt/Swiss-Prot, with self-matches (based on sequence id) excluded. In this case we are referring to UniProt/Swiss-Prot annotated sequences that are also present in the GoSeqLite database. UniProt is widely regarded as a carefully curated, high quality resource. While both estimates of the maximal precision require significant simplifying assumptions concerning the relative weights and distributions of semantic and error variation, we might gain some reassurance in their reliability if they give similar results. We might expect that Mp1 will generally provide a higher maximal precision estimate because it assumes that non-zero precision cases could have a precision of 1 where no error is present; semantic variation will decrease the average precision of these cases, such that an average precision of 1 is unlikely. Alternatively, Mp2 will tend to provide a more generous estimate, as it presumes that UniProt/Swiss-Prot sequence annotations contain absolutely no annotation error, and existing evidence concerning the error rate of other forms of annotation suggests that this assumption is optimistic. As such, our two independent maximal precision estimates provide a useful range, and will in turn result in a range for annotation error estimates. Once the relationship between precision and annotation error rate has been found using linear regression, it is a simple matter to rearrange the standard regression prediction formula to find the annotation error rate corresponding to any precision:

X = (Y - B) / m

where X is the artificially added annotation error rate, Y is the maximal precision estimate, B is the regression constant, and m is the regression slope coefficient. The above formula allows us to obtain the annotation error rate corresponding to a maximal precision estimate, given that we have already determined the values of the regression coefficients. The annotation error rate (X) will be the difference between the natural error rate and the annotation error rate corresponding to the maximal precision value. As a result the annotation error rate estimate is:

E = |X|

where E, the annotation error rate estimate, is the absolute value of X. Maximal precision estimates were calculated for all experiments (table). Both maximal precision estimates were similar (Mp1 mean 0.882, sd 0.005; Mp2 mean 0.851, sd 0.013). Both maximal precision estimates were derived independently using different methods: Mp1 was calculated based on the assumption that non-zero precision cases could have a precision of 1 if no error was present, while Mp2 was based on UniProt-to-UniProt matches. However, the small difference between the means of these estimates might be taken as an indicator of accuracy. For the ISS annotation error estimation the maximal precision estimates were roughly half those of the non-ISS annotation error estimation experiment; however, both maximal precision estimates again had very similar values. Lower maximal precision estimates indicate that a much larger degree of semantic variation exists between the query and reference set annotations for this group.
Data consisting of the precision of iterations at varying levels of artificially added GO term annotation errors were examined to determine a model for predicting precision from annotation error rate and vice versa. In all experiments the data showed a very high degree of linearity, with some gradual increase in variance as the annotation error rate increased. This increase was relatively small, and standardized scatterplots indicated that homoscedasticity (uniform variance in precision as error rate increased) was largely met. In all cases precision was normally distributed. Linear regression was used to determine the relationship between precision and annotation error for each experiment (table); in each case the fit was very strong (R2 > 97.5). For the cross-validation groups there was very little variation in regression coefficient values (constant (B) mean 0.758, sd 0.006; slope (m) mean -0.706, sd 0.006). Regression coefficients and maximal precision estimates were used to derive annotation error estimates for each sample (table). E1 and E2 are the error rate estimates derived from the maximal precision estimates Mp1 and Mp2 respectively. For the non-ISS annotation error cross-validation groups, all had highly similar annotation error estimates. Both ISS annotation error estimates were found to be identical. Using the relative proportion of both types of annotation and their respective error rates, we estimate that the error rate of all curated GO term sequence annotations is 28% to 30%. The E-value cut-off of 1e-10 was chosen for this study as it reflects the upper value that would generally be used by biologists when attempting to find similar sequences. As such it provides a generous but realistic estimate of the error rate of GO term annotations. It should be noted that our research indicates that the expectation cut-off value of the BLAST match influences the error rate estimate (data not shown). Decreasing the expectation value cut-off resulted in an increase in the annotation error estimate.
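Plugging the reported non-ISS regression coefficient means (B = 0.758, m = -0.706) and maximal precision estimates (Mp1 = 0.882, Mp2 = 0.851) into the rearranged prediction formula reproduces the quoted 13%-18% non-ISS range. This is a numerical check of the arithmetic, not code from the study:

```python
def error_rate(maximal_precision, B, m):
    """E = |X| with X = (Y - B) / m, rearranged from the fitted line Y = B + m*X,
    where Y is precision and X is the artificially added annotation error rate."""
    return abs((maximal_precision - B) / m)

B, m = 0.758, -0.706          # non-ISS regression coefficients (reported means)
e1 = error_rate(0.882, B, m)  # from Mp1
e2 = error_rate(0.851, B, m)  # from Mp2
print(f"E1 = {e1:.1%}, E2 = {e2:.1%}")  # roughly 17.6% and 13.2%
```

The negative slope encodes that precision falls as error is added; the higher maximal precision estimate (Mp1) therefore yields the higher error estimate, which is why the two estimates bracket the 13%-18% interval.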
This appears to have occurred because, even though the maximal precision and the regression constant coefficient (B) also decreased, there was a greater decrease in the regression slope coefficient (m), resulting in a greater annotation error rate estimate. The method developed and utilized here to estimate the GO term annotation error rate of GoSeqLite sequence annotations is based on a number of assumptions. Given that a query sequence has been found to be similar to a reference sequence (using BLAST) and both have an associated list of GO term annotations, we can calculate the precision of the sequence-match. This precision is determined by semantic variation and error variation (errors made during the curation of sequences). We have used a number of assumptions to pry apart the effects of semantic and error variation to arrive at a method of estimating the error rate of GO term sequence annotations. The GO term annotation error rate estimates for the GoSeqLite database were found to be 13% to 18% for curated non-ISS annotations, 49% for ISS annotations, and 28% to 30% for all curated annotations. Other studies that examined different forms of sequence annotation (e.g. protein names) have found single forms of error that accounted for between 6.8% and 8% of annotation errors alone. The magnification of sequence annotation errors through the use of ISS annotation methods has been identified as a possible major source of annotation error ,8. Before using GO term annotations, all users should first be familiar with the meaning of GO evidence codes. As far as the authors are aware, this is the first systematic study of GO term annotation error. We have found that the GO sequence database has a relatively low annotation error rate (28% to 30%), with non-ISS annotations having a much lower annotation error rate than ISS annotations (13% to 18% versus 49%, respectively).
As ISS annotations have an annotation error rate approximately 35 percentage points higher, they should be viewed more suspiciously, and used more cautiously, than non-ISS annotations. It is our recommendation that curators should only use ISS annotations after a thorough review. When a suitably similar sequence is found that is already annotated, it would be prudent to examine the evidence concerning each annotation in detail to ensure that it is relevant in the current case. For instance, an annotated protein sequence may contain different protein domains to the sequence to be curated, and thus not all GO terms may be applicable. Because the error rate of ISS annotations is high, using sequence similarity to a sequence as the basis for annotation, where that sequence was itself annotated based on sequence similarity, should be avoided. At the very least, the curator should search through these chains of annotations based on sequence similarity to find the instances where annotations were made for other reasons, and determine whether that evidence is applicable to the current sequence. There is a growing number of electronic annotators that predict GO terms for sequences based on sequence similarity to previously GO-annotated sequences ,13,15. We have developed a method to undertake systematic analysis of GO term annotation error in sequence annotation databases, and used this to estimate the GO term annotation error rate of the GoSeqLite sequence annotation database. We found that the overall error rate is 28%\u201330%, and that GO term annotations not based on sequence similarity (non-ISS) have a far lower error rate than those that are, with error rates of 13%\u201318% and 49% respectively. Based on the available evidence, the overall error rate of the GoSeqLite database can be considered to be low.
Because the error rate of ISS annotations is relatively high, we recommend that curators using ISS-annotated sequences as evidence for future annotations treat these with care to avoid propagating annotation errors. Furthermore, to ensure that the false prediction rate of electronic annotators is low, designers should avoid the use of ISS annotations when developing prediction algorithms. Indeed, it would be prudent to use curated, experimentally verified GO annotations as source data for annotators. We recommend against the unquestioning use of output from electronic annotators, especially those that use ISS-annotated sequences to make GO term predictions. The aims of this study were to estimate the annotation error rate of curated GO term sequence annotations and to determine the impact on error rate of using sequence similarity based annotation approaches. To accomplish this, annotations were assigned to two groups based on their associated evidence code. In all cases we utilised a reference set of sequences and used BLAST to find sequences similar to a set of query sequences. All query sets were the complete sample of sequences that had non-ISS annotations associated, and the annotations that were considered to be 'correct' for the purposes of precision scores were their associated non-ISS annotations. In the case of estimating the annotation error rate of non-ISS annotations, 10-fold cross-validation was used. The entire query set was randomly broken into 10 equal-sized groups. Each group was assigned to a query group, and the remaining 9 were assigned to a reference group. For example, the first cross-validation group was assigned to query group 1, and was used as query sequences against a BLAST reference database consisting of cross-validation groups 2–10.
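The 10-fold split described above can be sketched as follows. This is a toy illustration under stated assumptions (integer ids stand in for sequence records; the seed and function name are invented for the example):

```python
import random

def cross_validation_splits(sequences, k=10, seed=0):
    """Shuffle the sequences and yield (query_group, reference_group) pairs,
    one per fold, as in the 10-fold set-up described in the text."""
    seqs = list(sequences)
    random.Random(seed).shuffle(seqs)
    folds = [seqs[i::k] for i in range(k)]
    for i in range(k):
        query = folds[i]
        reference = [s for j, fold in enumerate(folds) if j != i for s in fold]
        yield query, reference
```

Each query group is then searched against a BLAST database built from the nine remaining groups.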
The mean from all cross-validation groups was used as an estimate of the overall non-ISS annotation error rate. When estimating the annotation error rate of ISS annotations, the reference set is the set of sequences that only had ISS annotations. This simplification allows the entire query set of non-ISS annotations and sequences to be used at one time. The reference set annotations are made up of ISS annotations only; in this case we are comparing non-ISS sequence annotations against ISS sequence annotations. Formatdb was employed to create a BLAST custom database for each reference sequence set, and the NCBI blastp application was used for the searches. Each query sequence had a potentially large number of matching reference sequences identified. SQL queries were written to extract the highest precision sample for each experiment. This involved assigning a precision to each query-reference sequence match according to the number of term annotations they had in common versus the number associated with the reference sequence. Then, for each query sequence, the query-reference sequence match with the highest precision was selected for inclusion into the highest precision sample. This resulted in 59,251 sequence matches selected for the later error insertion experiment. The maximal precision estimates were determined for each group of BLAST results using SQL queries of the MySQL database. Select statements were written to find the number of sequence matches with at least one matching GO term (Nm), and the total number of sequence matches for each group (N). These values were then used to calculate Mp1. Mp2 values were found with use of a select statement that found the average precision of GO term annotations where both the query and reference sequences were annotated by UniProt or UniProtKB. The Mp1 estimate is based on the highest precision sample, while only matching sequences that were provided by UniProt are used to estimate Mp2. These estimates were retained for later use with regression coefficients.
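The precision scoring, the highest-precision selection, and the error-prone copy step of the error-insertion experiment described in this method can be sketched in a few lines. Function names and data shapes are assumptions for illustration, not the authors' actual SQL/Java implementation.

```python
import random

def match_precision(query_terms, reference_terms):
    """Precision of a query-reference match: GO terms shared by both
    sequences, divided by the number of terms on the reference sequence."""
    return len(set(query_terms) & set(reference_terms)) / len(set(reference_terms))

def highest_precision_sample(matches):
    """matches: iterable of (query_id, reference_id, query_terms, reference_terms).
    Keep, for each query sequence, the single match with the highest precision."""
    best = {}
    for qid, rid, q_terms, r_terms in matches:
        p = match_precision(q_terms, r_terms)
        if qid not in best or p > best[qid][1]:
            best[qid] = (rid, p)
    return best

def error_prone_copy(terms, error_rate, rng):
    """Copy a list of GO term ids, replacing each one with an error flag
    with probability error_rate (the artificial error-insertion step)."""
    return [("ERROR" if rng.random() < error_rate else t) for t in terms]
```

Recomputing the sample precision after each error-prone copy gives one precision observation per replicate and treatment level.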
In order to determine the relationship between precision and annotation error rate for each highest precision sample, annotation errors were inserted into the reference set GO term annotations at known rates. Errors were added artificially to the annotations of matching sequences through an error-prone copy process. Firstly, a table with the same schema as the GoSeqLite annotation table was created. The annotations belonging to high precision sample reference sequences were copied to this table, with a random chance of the annotation's GO term id being changed to an error flag during the copy operation. This random chance corresponded to the artificial error rate applied to the sample. Subsequently the precision of the highest precision sample sequence annotations at this error rate treatment was calculated. The error rate was examined between 2% and 40% inclusive, at 2% intervals. At each given error rate level, 100 error-prone annotation table copy replications were completed, and the precision of each replicate was used in the final analysis. In total 20,000 replications were conducted over 20 treatment levels. A Java application was written to automate the process of error insertion (fig.). This error insertion experiment was performed for both non-ISS and ISS annotation error estimation. In the case of non-ISS error estimation, the error insertion experiment was conducted independently for each cross-validation group. For ISS error estimation, the error insertion experiment was conducted once. CEJ undertook initial study design, software implementation, statistical analysis and interpretation, and drafted the initial manuscript. UB and ALB participated in the final study design, coordinated the study and contributed to the final manuscript."} {"text": "Unsupervised annotation of proteins by software pipelines suffers from very high error rates.
Spurious functional assignments are usually caused by unwarranted homology-based transfer of information from existing database entries to the new target sequences. We have previously demonstrated that data mining in large sequence annotation databanks can help identify annotation items that are strongly associated with each other, and that exceptions from strong positive association rules often point to potential annotation errors. Here we investigate the applicability of negative association rule mining to revealing erroneously assigned annotation items. Almost all exceptions from strong negative association rules are connected to at least one wrong attribute in the feature combination making up the rule. The fraction of annotation features flagged by this approach as suspicious is strongly enriched in errors and constitutes about 0.6% of the whole body of the similarity-transferred annotation in the PEDANT genome database. Positive rule mining does not identify two thirds of these errors. The approach based on exceptions from negative rules is much more specific than positive rule mining, but its coverage is significantly lower. Mining of both negative and positive association rules is a potent tool for finding significant trends in protein annotation and flagging doubtful features for further inspection. There are currently over six million amino acid sequences known, and only a quarter of a million have been manually annotated. In silico annotation generated by bioinformatics methods has the advantage of being efficient and cheap, but at the same time suffers from a notoriously high error level. The most obvious and direct approach towards improving the reliability and coverage of unsupervised protein annotation entails the development of better bioinformatics tools.
Remarkable algorithmic advances of the past decade include, among others, more accurate gene prediction techniques. A complementary tactic to improve the quality of protein sequence databases involves retrospective search for errors in the total corpus of already available annotation. Under this approach protein annotation is considered to be a collection of records, one per gene, containing a varying number of attributes, ranging from just a few minimal descriptors for hypothetical proteins to dozens of annotation items for better characterized proteins. Modern data mining techniques can be used to identify statistically significant associations between individual attributes, and then to investigate exceptions from such associations that can potentially point to erroneous assignments. In our earlier work we applied the formalism of association rule mining to extract associations between annotation items in large molecular sequence databases. Consider rules of the form (A1 & ... & An) => Z, where A1 ... An and Z are different features, and the rule means "database entries that possess all features A1 ... An are likely to possess feature Z". The rules of this type are thus positive because they model a positive relation between two item sets. Each rule is characterized by its coverage, the number of entries in the database that possess all features A1 ... An; its support, the number of entries satisfying both the left and the right sides of the rule simultaneously; and its strength, which is essentially the probability that a given database entry will satisfy the right side of the rule given that it satisfies the left side. Our strategy for finding errors in annotation consisted of finding rules with a strength very close, but not equal, to 1.0, which means that such rules have a minor number of exceptions, and then identifying all proteins that constitute exceptions to these rules; this strategy was previously applied to the Swiss-Prot database. A negative rule has the form (A1 & ... & An) (LHS) => not Z (RHS), with A1 ...
An and Z being different features, and the rule means 'database entries that possess all features A1 ... An are unlikely to possess feature Z'. For negative rules, support is the number of database entries satisfying both the LHS and the RHS, i.e. those entries that possess all features A1 ... An and do not possess the feature Z. An additional very important parameter used in this work to characterize negative rules is leverage, which is defined as the difference of the rule support and the product of the supports of its LHS and RHS. Leverage measures the unexpectedness of a rule as the difference between the actual rule frequency and the probability of finding it by chance given the frequencies of its RHS and LHS. In this work we continue to explore the application of rule mining to correcting annotation errors and investigate the utility of negative association rule mining, which, as the name implies, represents the identification of negative relationships between item sets. A negative association rule is thus an implication from the union of several items to an item negation. An example of a trivial biologically relevant negative association rule is "Nuclear localization => not bacterial origin", i.e. every protein annotated as localized in the nucleus cannot have a bacterial origin. As with positive rules, negative rules are not necessarily absolutely strict. For instance, the rule "Operon structure => not eukaryotic origin" has a number of exceptions because bacterial-like operons were described in Caenorhabditis elegans. The PEDANT software suite is an automated genome annotation system; the following ten genomes were analysed with it: Helicobacter pylori, Arabidopsis thaliana, Saccharomyces cerevisiae, Thermoplasma acidophilum, Synechocystis sp., Parachlamydia, Mycobacterium tuberculosis, Aeropyrum pernix, Escherichia coli, and Bacillus subtilis.
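The coverage/support/strength definitions for positive rules, and the leverage definition for negative rules, can be sketched on toy data. This is an illustration only, not the modified Apriori implementation used in the study; entries are modelled as sets of feature labels.

```python
def positive_rule_stats(entries, lhs, rhs_item):
    """Coverage, support, and strength of the positive rule LHS => rhs_item,
    with entries given as sets of annotation features."""
    lhs = set(lhs)
    coverage = sum(1 for e in entries if lhs <= e)
    support = sum(1 for e in entries if lhs <= e and rhs_item in e)
    strength = support / coverage if coverage else 0.0
    return coverage, support, strength

def negative_rule_leverage(entries, lhs, rhs_item):
    """Leverage of the negative rule LHS => not rhs_item: the observed
    frequency of (LHS and not rhs_item) minus the product of the marginal
    frequencies of the LHS and of the negated RHS."""
    n = len(entries)
    lhs = set(lhs)
    f_lhs = sum(1 for e in entries if lhs <= e) / n
    f_not_rhs = sum(1 for e in entries if rhs_item not in e) / n
    f_rule = sum(1 for e in entries if lhs <= e and rhs_item not in e) / n
    return f_rule - f_lhs * f_not_rhs
```

A leverage close to zero means the rule occurs about as often as chance predicts; strongly positive leverage marks the 'non-random' rules of interest.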
The total of 55063 gene products were annotated with more than 1 million (1265974) annotation features suitable for association rule mining. As an example, consider an entry describing a protein with an acidic isoelectric point, a gene with high GC-content, bacterial origin (Bacteria), and low content of disordered regions (do:L); it does not possess any low complexity regions (lc:0), has a structural class of the 'alpha/beta' type, and matches the PFAM domain PF04014. It belongs to the IPR007159 InterPro family and the b.129.1 SCOP structural class, with the keyword "DNA-binding". According to the MIPS Functional Catalog, it is assigned fc16 (protein with binding function), fc16.03 (nucleic acid binding), fc40 (cell fate), fc40.01 (cell growth/morphogenesis), fc32 and fc32.05. This line describes the antitoxin of the ChpB-ChpS toxin-antitoxin system. Annotation attributes extracted from PEDANT can generally be subdivided into three types in terms of their intrinsic susceptibility to errors. • Type 1. Features that are definitely known. This group includes either inherent properties of genes and their products, such as their taxonomic origin, or features that can be unambiguously calculated from primary sequences, such as GC content, length, pI value, percentage of low complexity regions, and so on. • Type 2. Structural and functional properties of proteins predicted directly from their amino acid sequences by ab initio computational algorithms. • Type 3. Structural and functional properties of proteins derived by similarity searches against previously characterized gene products. These features include sequence domains, keywords, functional categories, enzyme classes, and functional and structural superfamilies. It is obvious that the features of type 1 are definitely known and cannot generally contain errors.
Features of type 2 are typically predicted with an accuracy in the order of 70% by machine learning and other computational methods. We are interested in applying negative association rule mining to identifying errors in the annotation attributes of Type 3, transferred by similarity from other proteins; in the annotation entry above such features are shown in italic. In our dataset there were a total of 848511 similarity-derived features, more than half of which were constituted by functional category assignments. The annotation set describing 55063 genes in ten PEDANT genomes served as input data to extract negative association rules using a modified version of the well-established Apriori algorithm for association rule mining; the basic Apriori algorithm is described in detail elsewhere. The application of the Apriori software to PEDANT annotation results in a file containing one negative rule per line. Each line lists the LHS and the RHS as well as several numerical characteristics of the rule delimited by commas. A typical rule line in the output file looks like this: "fc34.11 & fc36, not length:S, 0.028, 1560, 0.895, 49286, 0.028, 1558, 0.999, 1.116, 0.003, 161.669". This notation means that proteins possessing FunCat labels 34.11 ("Cellular sensing and response") and 36 ("Interaction with the environment (systemic)") are unlikely to be of small (less than 120 amino acids) length. The LHS items are joined by the "&" symbol and are followed by the RHS (here, a negation of the annotation feature) and the list of numerical attributes of the rule: coverage, coverage count, RHS coverage, RHS coverage count, support, support count, strength, lift, leverage, and leverage count. In addition to support count, coverage count and strength, which are important for positive association rule selection, leverage was used to select negative rules. A very large number of rules involved taxon-specific FunCat labels. Some of these rules may have exceptions due to annotation transfer by homology between proteins from different taxonomic groups.
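A small parser for this output format can be sketched as follows, using the field order given in the text; the function name and field identifiers are invented for the example.

```python
FIELDS = ["coverage", "coverage_count", "rhs_coverage", "rhs_coverage_count",
          "support", "support_count", "strength", "lift",
          "leverage", "leverage_count"]

def parse_rule_line(line):
    """Split one Apriori output line into LHS items, the RHS, and a dict
    of the ten named numerical attributes."""
    parts = [p.strip() for p in line.strip().strip('"').split(",")]
    lhs = [item.strip() for item in parts[0].split("&")]
    rhs = parts[1]
    numbers = dict(zip(FIELDS, map(float, parts[2:])))
    return lhs, rhs, numbers
```

Applied to the example line above, this yields LHS items fc34.11 and fc36, RHS "not length:S", and the numerical attributes keyed by name.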
We classify such cases as annotation errors according to the general procedure. For manual verification of negative association rules we randomly selected a limited sample of protein entries from the PEDANT annotation set that constituted exceptions from rules and could not be corrected by the taxon-specific analysis explained in the previous section. Annotation features of these proteins occurring either in the LHS or in the RHS of the rules were subjected to careful manual analysis by an experienced protein annotator according to the established procedures routinely used at MIPS for genome annotation. We filtered out wrongly assigned taxon-specific FunCat labels and selected randomly a limited sample among all remaining homology-transferred annotation features. The accuracy of the feature assignment was thoroughly verified by an experienced annotator. All verified annotation attributes were divided into 3 categories: true assignments, false assignments, or "not known". The latter category was selected if the evidence for a given assignment was not sufficient to make a judgment, but the feature did not obviously contradict the nature of the protein. Features of this category were excluded from further analysis and were not taken into account while estimating the error level. For example, if in a set of 100 features selected for manual verification 40 features were classified as 'errors', 56 as 'correct assignments', and 4 as 'not known', then the final estimate of the error level in this sample was 100*40/(100-4) = 42%. Application of the Apriori algorithm to the annotation set extracted from PEDANT resulted in 9591 negative rules. Exceptions can arise, for example, when prokaryotic proteins simply inherit the nuclear localization keyword from their eukaryotic homologs; in our example, the homolog is human oligoribonuclease, one of the alternatively spliced isoforms of which is localized to the nucleus. Some aspects of negative rule statistics differ significantly from positive association rules due to vastly different item frequencies.
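The error-level calculation above, excluding the 'not known' features, is simply:

```python
def error_level(n_false, n_true, n_unknown):
    """Error level (%) in a manually verified sample, with 'not known'
    features excluded from the denominator, as in the worked example."""
    total = n_false + n_true + n_unknown
    return 100.0 * n_false / (total - n_unknown)
```

For the worked example in the text (40 errors, 56 correct, 4 not known), this reproduces the quoted 42%.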
Because annotation items themselves are rare, and most items are in fact extremely rare (present in only a small fraction of the proteins analyzed), their negations used in negative association rule mining are unavoidably very frequent. This simple circumstance makes the calculation of negative rules computationally much more challenging compared to positive rules and necessitates the application of much stricter thresholds on the rules of interest. While analyzing rule strength distribution we considered only the rules exhibiting strength higher than 0.1. The number of weaker rules (strength below 0.1) is too high due to the combinatorial explosion caused by random feature combinations, making their analysis computationally prohibitive. However, even in the strength interval 0.1–1.0 the number of negative rules is several orders of magnitude higher than the number of positive rules. To make the task computationally tractable we additionally imposed a threshold on minimal leverage, which effectively helps to select only the most 'non-random' rules (see Methods) and eliminates all rules with strength below 0.97. The distributions of negative rule strength with different minimal coverage counts are plotted in Figure. The fraction of proteins in the PEDANT database constituting exceptions from strong rules in the strength interval between 0.97 and 1.0, as well as the fraction of relevant (homology-transferred) features participating in such rules, is very low. In total, we identified 6875 features (0.8%) in the annotation of 1031 proteins (1.9%) as potential annotation errors. In order to estimate the number of actual annotation errors among exceptions from strong rules, the first test was designed to exclude the influence of taxon specificity. It turned out that a very large number of rules combined FunCat labels on one side of the rule with the taxon of the protein origin on the other side. There were 3159 rules with such structure.
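The two-threshold selection described above can be sketched as a simple filter over parsed rules; the leverage cut-off value here is an illustrative assumption, not the one used in the study.

```python
def select_strong_rules(rules, min_strength=0.97, min_leverage=0.003):
    """Keep only rules above both the strength and the leverage thresholds,
    mirroring the selection of the most 'non-random' negative rules."""
    return [r for r in rules
            if r["strength"] > min_strength and r["leverage"] >= min_leverage]
```

Exceptions are then collected only from the surviving rules.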
Because many FunCat labels are taxon-specific (see Methods), these labels should ideally only be present in the annotation of genes belonging to the corresponding taxa; homology-based transfer of such annotation attributes is highly prone to error. Where a taxonomically specific FunCat label is incompatible with the known gene taxon, it is the FunCat assignment which is guaranteed to be erroneous, since the protein origin is doubtlessly known. This simple test resulted in automatic correction of almost 50% of all exceptions in our set of strong negative rules (see Figure). After the filter for taxonomy-specific FunCat labels was implemented, 6432 rules remained in the negative rule set for our annotation sample. These rules involved 4687 transferable features in the annotation of 822 proteins. To estimate the prevalence of errors among exceptions not corrected by the taxonomy procedure described above, we selected randomly a sample of 100 rules and analyzed their exceptions manually. In 96% of examined exceptions at least one of the features constituting the rule was assigned wrongly to the given protein. The overall specificity of the approach was estimated to be as high as 98%: practically all feature combinations associated with exceptions included at least one annotation error. The specificity of the negative rules is thus much higher than in the case of positive rules, and a smaller fraction of features (versus 6.7% for positive rules) participates in incompatible feature combinations. More than two thirds of these features do not get detected by positive rule mining. Our approach is designed to flag incompatible feature combinations for subsequent manual inspection rather than to automatically correct annotation errors in an unsupervised fashion. With the exception of taxon-specific rules, where FunCat labels incompatible with the taxonomic origin of a protein are guaranteed to be errors, we do not know exactly which feature of a flagged feature combination is wrong.
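The taxon-specificity test can be sketched as follows. The taxon-restriction table here is entirely hypothetical (the real FunCat taxon-specificity data is not reproduced in this text), so both labels and restrictions are assumptions for illustration.

```python
# Hypothetical taxon restrictions for illustration only; not actual
# FunCat taxon-specificity data.
TAXON_SPECIFIC = {
    "fc40.01": {"Eukaryota"},   # assumed eukaryote-only label
    "fc99.99": {"Bacteria"},    # assumed bacteria-only label
}

def incompatible_funcat_labels(annotation, taxon):
    """Return FunCat labels whose taxon restriction conflicts with the known
    taxonomic origin of the gene; such labels are guaranteed errors."""
    return [label for label in annotation
            if label in TAXON_SPECIFIC and taxon not in TAXON_SPECIFIC[label]]
```

Because the taxon itself is definitely known (a Type 1 feature), any label returned by this check can be removed automatically.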
Besides, there always exists a chance that all features constituting an exception from a strong negative rule are nevertheless correctly assigned and that the exception is in fact biologically motivated. It would be desirable to validate our predictions against high-quality manually curated databases such as Swiss-Prot or BRENDA, but this was not feasible here. We therefore attempted to estimate the fraction of actual annotation errors among those features flagged as suspicious and, for comparison, in non-flagged features, by careful manual inspection as described in Methods (see Table). Based on our assessment it becomes apparent that almost all incompatible feature combinations found by negative association rule mining include at least one wrongly assigned annotation term. The fraction of individual features flagged as suspicious is about 0.6% of the total number of features assigned by PEDANT, and it is significantly enriched in annotation errors. Moreover, roughly two thirds of such erroneous assignments are not identified by positive rule mining. We conclude that applying a combination of the positive rule mining described earlier and negative rule mining is advisable. IIA and DF conceived the study. IIA designed and executed the analysis of negative rules and rule statistics. GF conducted manual verification of predicted annotation errors. DF supervised the project. All authors participated in the drafting and revising of the manuscript, and read and approved the final manuscript. The set of all negative rules used in this work is provided as additional file Rules.out."} {"text": "Annotators can evaluate gene structure evidence derived from multiple sources to create gene structure annotations. Administrators regulate the acceptance of annotations into published gene sets. yrGATE is designed to facilitate rapid and accurate annotation of emerging genomes as well as to confirm, refine, or correct currently published annotations. yrGATE is highly portable and supports different standard input and output formats.
The yrGATE software and usage cases are available at the project website. Complete and accurate gene structure annotation is a prerequisite for the success of many types of genomic projects. For example, gene expression studies based on gene probes would be misleading unless the gene probes uniquely labelled distinct genes. Identification of potential transcription signals relies on correct determination of transcriptional start and termination sites. Characterization of orthologs or paralogs and other studies of molecular phylogeny are also compromised by incomplete or inaccurate gene structure annotation. Gene structure determination is particularly difficult for eukaryotic genomes. Here, we focus on protein-coding genes. In higher eukaryotes, most of these genes contain introns, and a large fraction of the genes appear to permit alternative splicing. A policy of 'open annotation', using the internet as the forum for annotation and bringing annotation into the mainstream, has been suggested as a means to eliminate the restraints of manual annotation and to develop high quality gene annotation. The yrGATE package consists of a web-based Annotation Tool for gene structure annotation creation and Community Utilities for regulating the acceptance of the annotations into a community gene set. The yrGATE Annotation Tool can be used without the Community Utilities for analysis of gene loci independent of a community. The Annotation Tool presents pre-calculated exon evidence in several summaries with different selection mechanisms and provides other methods for specifying custom exons, allowing thorough analysis and quick annotation of loci. Annotators access the tool over the web, where they create an annotation, decide to save the annotation in their personal account, or submit the annotation for review for acceptance into the community gene set.
The online nature of yrGATE permits a large and nonexclusive group of annotators, ranging in expertise from professional curators to students. The Annotation Tool of the yrGATE package is a web-based utility for creating gene structure annotations. The inputs and outputs of the Annotation Tool are depicted in Figure. Defining a gene's exon-intron structure is the central step in creating a eukaryotic gene annotation. The Annotation Tool provides two general categories to specify exons: pre-defined evidence-supported exons and novel user-defined exons. Pre-defined exons are provided by the Annotation Tool from prior computations and are supported by evidence derived from spliced alignments of expressed sequence tags (ESTs) and cDNAs, ab initio predictions, or a combination of sources. The evidence is filtered by stringent thresholds to provide exons suggestive of authentic genes. User-defined exons are exons not contained in the pre-defined evidence and are individually specified by the user. Annotators have several channels to designate both categories of exons. The Annotation Tool contains three representations of the evidence: the Evidence Plot, the Evidence Table, and links to evidence reference files. The Evidence Plot is a clickable graphic that presents evidence in a color-coded schematic (8 in Figure). User-defined exons are specified through portals to exon-generating programs or through entry of the genomic coordinates of an exon. As these exons are defined, they are listed in the User Defined Exons Table (2 in Figure). As an additional channel provided for designating gene structures, the tool allows pasting a coordinate structure into the mRNA structure field (6 in Figure). To document the annotator's procedure and parameters, the Exon Origins attribute of an annotation record automatically stores information about the source of each exon.
The following information is stored: the method of exon generation, a score associated with the method and exon, sequence identifiers used in the method, unique database identifiers to the specific output file or record, and a hyperlink to the program output yielding the exon. Exon Origins allows for complete re-creation of the gene structure annotation and for analysis of manual annotation procedures that could aid future manual annotation efforts and techniques. After a gene structure has been defined, a user can specify the protein coding region of the annotation through entry of genomic coordinates (4 in Figure). Coordinately with gene structure and protein coding region designation and edits, the mRNA and protein sequence fields are updated (3 and 5 in Figure). For cases in which genomic sequence requires editing, such as correction of sequencing errors or annotation of genes undergoing mRNA editing, the Sequence Editor Tool (7 in Figure) can be used. At the conclusion of a gene annotation session, an annotator decides the outcome of their annotation record (1 in Figure). The yrGATE package includes community annotation utilities for sharing annotations among a public or private community. These utilities form a process for annotation management and review (diagrammed in Figure). A typical annotation submission begins with an annotator logging in to their private account, which contains all of the annotations created by the annotator. Then, the annotator creates a new annotation using the Annotation Tool and decides to submit the annotation to the community. This newly submitted annotation is listed in the Administration Tool, where an administrator can 'check out' this annotation for review, so that other administrators do not review this annotation concurrently. The administrator accesses the 'checked-out' annotation in a review version of the Annotation Tool. Then, the administrator reviews the annotation and is able to edit any attributes of the record.
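The Exon Origins provenance record described above could be modelled roughly as follows; the field names and example values are illustrative assumptions, not the actual yrGATE schema.

```python
from dataclasses import dataclass, field

@dataclass
class ExonOrigin:
    """Provenance for one exon, as sketched from the text (names assumed)."""
    method: str                 # how the exon was generated (e.g. EST alignment)
    score: float                # score associated with the method and exon
    sequence_ids: list = field(default_factory=list)  # sequences used by the method
    record_id: str = ""         # database identifier of the source output record
    link: str = ""              # hyperlink to the program output yielding the exon

origin = ExonOrigin("EST_alignment", 0.98,
                    ["EST123"], "rec42", "http://example.org/out/42")
```

Storing one such record per exon is what allows the complete re-creation of an annotation and post-hoc analysis of annotation procedures.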
When satisfied with their analysis, the administrator accepts or rejects the annotation. If a decision cannot be reached, the annotation is returned to the to-be-reviewed group. Accepted annotations are added to the public community gene annotation database, where they are presented through the Community Annotation Central and Annotation Record facilities. Rejected annotations can be edited by the annotator to be resubmitted for review. For specific implementations, the described community annotation process can be adjusted by dropping any of the steps, such as eliminating the user log-in or eliminating the review process so that all submitted annotations are published. New steps can also be added to the review process, such as a voting utility for submitted annotations. The yrGATE package can be implemented in different configurations depending on the input and output (Figure). PlantGDB includes a family of species-specific databases, including AtGDB for Arabidopsis and ZmGDB for maize, as well as databases for rice and other species. The first case study is a novel maize annotation using the ZmGDB yrGATE implementation. An unannotated genome region, 158659-162032 of BAC 51315585, was chosen by the annotator using the genome browsing function of ZmGDB. A screenshot of the Annotation Tool shows the completed annotation (Figure). Similar proteins included the Arabidopsis protein NP_190282. These proteins provided a putative functional assignment of 'sugar transporter' for the annotation. The annotator was satisfied with the annotation and submitted it for review. Administrators reviewed the annotation and accepted it because it was novel and of good quality. The annotation, ZM-yrGATE-sugar_transporter, is now accessible from the ZmGDB Community Annotation Central. 
The second PlantGDB case study concerns alternative splicing and correction of an inaccurate published annotation of an Arabidopsis gene model, using the yrGATE implementation at AtGDB. A screenshot of the transcript view of AtGDB presents two accepted community annotations (green structures in interior window, Figure). DAS servers provide sequence and annotation information that can be queried and is in a standard format. The primary evidence also suggests an annotation on the reverse strand that contains the angiopoietin-2 gene within one of its introns. However, current annotations on the reverse strand are inaccurate and incomplete based on mRNA and EST evidence (3 in Figure). Links to these case study annotations are provided on the yrGATE website. The Annotation Tool was designed with emphasis on usability for annotators. Annotators can immediately select from high quality evidence that has a high likelihood of yielding an accurate annotation and can specify new custom evidence for cases where the evidence is inadequate. The two categories provide for a good annotation process in which high quality evidence is first examined and then additional evidence is checked, completed in a minimal number of mouse clicks and screen displays, achieved by the tool's design. The main components of the tool are contained in one standard 1,024 × 768 resolution screen. The tool is loaded once per genomic region, and the form fields are dynamically updated, which allows annotators to quickly evaluate the impact of different exon variants and combinations of exons on the gene structure, mRNA sequence, and protein sequence. yrGATE is compatible with several major operating systems, including Linux, Windows and Macintosh, on several web browsers, of which Mozilla Firefox has the best performance in terms of speed. yrGATE is available for download. yrGATE opens gene structure annotation to a large, nonexclusive community. The characteristics of yrGATE contribute to its potential for user appeal and community adoption.
Among other applications, it is particularly useful for annotating emerging genomes and for correcting inaccurate published annotations. yrGATE is easily adaptable to different input data and can support a community using the Community Utilities.

Vast progress in sequencing projects has called for annotation on a large scale. A number of methods have been developed to address this challenging task. These methods, however, either apply only to specific subsets, or their predictions are not formalised, or they do not provide precise confidence values for their predictions. We recently established a learning system for automated annotation, trained with a broad variety of different organisms to predict standardised annotation terms from the Gene Ontology (GO). This method has now been made available to the public via our web service GOPET. It supplies annotation for sequences of any organism, and for each predicted term an appropriate confidence value is provided. The basic method had been developed for predicting molecular function GO terms; it has now been expanded to predict biological process terms as well. This web service is available via . Our web service gives experimental researchers as well as the bioinformatics community a valuable sequence annotation device. Additionally, GOPET provides less significant annotation data, which may serve as an extended discovery platform for the user. The expanding amount of sequence data generated from genome and cDNA sequencing projects creates an ever-extending demand for automated annotation. Annotation represented in standardised formats like the ones designed by ontologies benefits from its straightforward operability across different analysis platforms. The Gene Ontology (GO) project is a collaborative initiative that provides consistent descriptions of gene products across different species. This work also involved establishing a method to provide a confidence value for each annotation.
We developed an automated system for large-scale cDNA function assignment, designed and optimised to achieve a high level of prediction accuracy without any manual refinement. With our system, Gene Ontology molecular function terms are predicted for uncharacterised cDNA sequences and a defined confidence value is calculated for each prediction. The performance of the system was benchmarked with 36,771 GO-annotated cDNA sequences derived from 13 organisms. Gene product prediction is confronted with a variety of challenges arising from ambiguities in the underlying input databases, e.g. sequence errors, erroneous and incomplete annotation, and inconsistent annotation across databases, or consistent but erroneous annotation across databases. A broad variety of excellent annotation systems have been developed to tackle these problems, e.g. RiceGAAS, GAIA and others. We have now extended our approach to predict biological process terms and implemented our method as an online sequence annotation tool. From a user-friendly front-end, the user can upload query protein and nucleotide sequences, for which the tool assigns Gene Ontology molecular function and biological process terms. It is implemented under the W3H task system, which provides a flexible way to configure program and data flow between different biocomputational methods. The underlying protein sequence databases cover Saccharomyces cerevisiae (Stanford University), Drosophila melanogaster (Berkeley Drosophila Genome Project), Mus musculus (Ensembl), Arabidopsis thaliana (MIPS), Caenorhabditis elegans (Sanger Center), Rattus norvegicus (NCBI), Danio rerio (SwissProt), Leishmania major (Sanger Center), Bacillus anthracis Ame (TIGR), Coxiella burnetii RSA 493 (NCBI), Shewanella oneidensis MR-1 (TIGR), Vibrio cholerae (TIGR), Plasmodium falciparum (Plasmodium Genome Research), Oryza sativa, Trypanosoma brucei (Sanger Center) and Homo sapiens, as well as the protein database SwissProt (the SwissProt part of the UniProt family of databases).
These databases are constantly updated to keep track of the latest information. The corresponding GO annotations were taken from the Gene Ontology database. Nucleotide or protein query sequences are blasted against the annotated protein sequences, and GO terms from the hits are extracted together with their corresponding attribute values. For training and testing the SVM, we selected 39,740 GO-annotated cDNA sequences from the following organisms: Saccharomyces cerevisiae, Drosophila melanogaster, Mus musculus, Arabidopsis thaliana, Caenorhabditis elegans, Rattus norvegicus, Danio rerio, Leishmania major, Bacillus anthracis Ame, Coxiella burnetii RSA 493, Shewanella oneidensis MR-1, Vibrio cholerae and Plasmodium falciparum (same database sources as for the protein sequences). During the training phase, each instance is compared to the GO annotation of the (known) query sequence. It is classified as "correct" if the GO term of the instance corresponds to one of the GO terms of the query sequence, and labelled as "false" otherwise. Support Vector Machines (SVMs) are applied to determine the separation between "correct" and "false" instances; they were chosen for their ability to learn any decision function. After training, the classifier is able to select GO terms for an unknown query sequence by the same procedure: the query sequence is blasted against the annotated protein sequences of the database, and GO terms from the hits are extracted together with their corresponding attribute values. Each instance is then transferred to the SVM and classified according to its attribute values. Note that this procedure yielded a high number of instances for training. Therefore, we could apply a voting scheme, consisting of an assembly of 99 classifiers corresponding to ≈8,600 training instances each. So, multiple classifiers are employed for the classification.
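The voting scheme above can be sketched as follows. This is a minimal illustration, not the authors' implementation: a trivial centroid classifier stands in for the SVMs, the data is synthetic, and 9 committee members replace the 99 used by GOPET.

```python
# Sketch of the classifier-committee voting scheme: disjoint training
# chunks, one classifier per chunk, vote fraction as a crude confidence.
# CentroidStub is a stand-in for an SVM; all data here is synthetic.
import random

class CentroidStub:
    """Stand-in for an SVM: predicts 1 if nearer the positive centroid."""
    def fit(self, X, y):
        pos = [x for x, lbl in zip(X, y) if lbl == 1]
        neg = [x for x, lbl in zip(X, y) if lbl == 0]
        self.cpos = [sum(col) / len(pos) for col in zip(*pos)]
        self.cneg = [sum(col) / len(neg) for col in zip(*neg)]
        return self
    def predict(self, x):
        d = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
        return 1 if d(self.cpos) < d(self.cneg) else 0

def train_committee(X, y, n_members=9):
    """Train one classifier per disjoint chunk of the training instances."""
    members = []
    step = len(X) // n_members
    for i in range(n_members):
        sl = slice(i * step, (i + 1) * step)
        members.append(CentroidStub().fit(X[sl], y[sl]))
    return members

def committee_predict(members, x):
    """Majority vote; the vote fraction doubles as a confidence value."""
    confidence = sum(m.predict(x) for m in members) / len(members)
    return int(confidence >= 0.5), confidence

random.seed(0)
# Synthetic instances: positive class centred at +1, negative at -1.
X = [[random.gauss(1 if i % 2 else -1, 0.5)] * 3 for i in range(900)]
y = [i % 2 for i in range(900)]
members = train_committee(X, y)
label, conf = committee_predict(members, [1.0, 1.0, 1.0])
```

In the real system each member would be an SVM trained on its own ≈8,600 BLAST-derived instances, and the committee's vote fraction would feed into the reported confidence value.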
The predicted results are combined by a committee approach. We applied the same approach to predict biological process terms and trained 99 new SVM classifiers specifically on GO terms for biological process. GO terms for each BLAST hit were extracted by considering GO terms corresponding to biological process and discarding GO terms that were prefixed with NOT (annotators state that a particular gene product is NOT associated with a particular GO term) or corresponding to "biological process unknown". We were able to select 27,109 sequences from 13 model organisms for training and validation, yielding 1,342,270 instances; each classifier was therefore trained with ≈13,558 instances (Table ). We compared our system with the well-established annotation tools GoFigure and GOtcha on Xenopus laevis contig sequences, which we had annotated with our system previously. Furthermore, we compared GOPET and GOTCHA in more detail: we manually selected 100 random Dictyostelium discoideum sequences (excluding IEA-annotated ones) from DictyBase. The W3H task framework has been developed in the HUSAR environment at the German Cancer Research Center. The GOPET web server is accessible via the web page . Confidence values may serve in several ways. Predictions with confidence values ≥ 80% can be used straight away for annotation. In contrast, predictions with low confidence values may serve as a basis for new hypotheses and research, e.g. to infer further relationships to the original function. Automated annotation fails for sequences without any annotated and known homologues, and the only remaining alternative is to analyse the sequence manually and in depth. We included IEA-annotated sequences (automatically annotated sequences) to improve the annotation coverage.
To compare the performance with and without IEA-annotated sequences, we calculated the respective prediction accuracies for yeast (non-IEA) based on the worm data set (IEA) and the fly data set (non-IEA). The results were quite similar.

Contact: genome@dkfz.de. All conceived the idea. AV carried out the work and drafted the manuscript. CD implemented the program into the W3H system. RK and FS contributed to developing the methodology and drafting the manuscript. KG implemented the databases in SRS. RK, SS, KG and RE supervised the work. All authors participated in reading, approving and revising the manuscript.

Table S2. Comparison of GOPET with the annotation of 100 randomly selected protein sequences of Dictyostelium discoideum. Click here for file.

Table S2. Comparison of GOTcha with the annotation of 100 randomly selected sequences of Dictyostelium discoideum. Click here for file.

In control experiments, we demonstrate that the method is able to correctly re-annotate 91% of all Enzyme Classification (EC) classes with high coverage (755 out of 827). Only 44 enzyme classes are found to contain false positives, while the remaining 28 enzyme classes are not represented. We also show cases where the re-annotation procedure results in partial overlaps for those few enzyme classes where a certain inconsistency might appear between homologous proteins, mostly due to function specificity. Our results allow the interactive exploration of the EC hierarchy for known enzyme families as well as putative enzyme sequences that may need to be classified within the EC hierarchy. These aspects of our framework have been incorporated into a web server, called CORRIE, which stands for Correspondence Indicator Estimation and allows the interactive prediction of a functional class for putative enzymes from sequence alone, supported by probabilistic measures in the context of the pre-calculated Correspondence Indicators of known enzymes with the functional classes of the EC hierarchy.
The CORRIE server is available at: . The explosion of genome sequencing technologies has resulted in an ever-increasing gap between the discovery of new gene sequences and their experimental characterization. The accumulation of raw sequence data has dictated the use of computational techniques for the inference of possible functional roles, based on the evolutionary conservation of structure and function. However, this widely used empirical process has not attracted sufficient attention as a fundamental problem in computational biology requiring rigorous analysis. The typical solution to annotation transfer involves the inference of functional properties based on sequence similarity. Our approach relies on the usage of a reference dataset such as the EC hierarchy, where protein sequences are pre-classified into (an arbitrary number of) functional classes. We introduced Correspondence Indicators (CIs) as a novel measure to quantify the relationship between a protein sequence and a functional class. A CI results from the combination of pairwise similarity scores between a query sequence of interest and all the members of a functional class. All the results reported herein concern assignments (re-annotations) obtained with an assignment probability of one (P = 1) using the univariate method with α → ∞, i.e. with a CI YΩj reduced to the best BLAST hit of the query protein with class Ωj. The databases used in the present work were the ENZYME database (date: 2006-07-12) and UniProt. We also examined class-specific error rates. More precisely, we consider the probability that a re-annotation is an error given the annotation made by our approach, regardless of the true class, i.e. P(annotation is wrong | annotation by CORRIE). This analysis can only be performed at the P = 1 level because there is not enough information at P levels < 1 (due to the very high coverage of the database at P = 1).
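The Correspondence Indicator described above can be sketched as follows. The generalized-mean combination is an assumption for illustration (the exact combination used by CORRIE is not reproduced here); as α → ∞ it tends to the best-hit score, matching the univariate special case used in the reported experiments. All scores and class labels below are hypothetical.

```python
# Sketch of a Correspondence Indicator (CI) between a query and a class.

def ci(scores, alpha=2.0):
    """Combine pairwise similarity scores of the query vs. class members.
    The power-mean form is an illustrative assumption; it tends to
    max(scores) as alpha grows."""
    return (sum(s ** alpha for s in scores) / len(scores)) ** (1 / alpha)

def ci_best_hit(scores):
    """The alpha -> infinity limit: CI = best pairwise (BLAST) score."""
    return max(scores)

def assign_class(scores_by_class):
    """Assign the query to the class with the highest best-hit CI."""
    cis = {c: ci_best_hit(s) for c, s in scores_by_class.items()}
    return max(cis, key=cis.get), cis

# Hypothetical bit scores of one query against members of two EC classes.
scores = {"1.10.2.2": [310.0, 295.5, 120.0], "2.7.7.7": [88.0, 74.2]}
predicted, cis = assign_class(scores)
```

With these made-up scores the query is assigned to EC 1.10.2.2, whose best member hit (310.0) dominates the CI.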
The results here are quite impressive: 799 (out of 827) classes have at least one assignment at level P = 1. For 755 of these classes, we did not observe any re-annotation error (again at P = 1). This corresponds to 51,131 out of 59,766 re-annotations, or a coverage level of 86%, with a specific error rate equal to zero. For the remaining 44 classes, there is at least one error recorded, which leads to non-zero specific error rates. These non-zero error rates vary across classes from 100% (1 error for 1 assignment) to 0.24% (4 errors for 1,673 assignments). The highest error rate among classes with more than one error is 13.6% (3 errors for 22 assignments); we report all nine cases where the number of errors is more than one. Third, we defined a distance measure in the re-annotation space in order to obtain a better understanding of the structure/function relationship for enzymes. This measure, denoted δ(i → j), is in general asymmetric, i.e. δ(i → j) ≠ δ(j → i). For i = j, the δ measure provides a measure of recall; in other words, it indicates whether there exists a high level of sequence specificity within class i. Typical example cases of low recall for two large families include EC 1.10.2.2 (ubiquinol-cytochrome c reductase). A downloadable version will follow soon. The format of the results is simple: by providing a query sequence, the user obtains the following information: the query sequence identifier, the original description (from the FASTA file format), an internal CORRIE protein identifier for retrieval purposes, the assignment probability, the predicted EC class, the EC description, and the local error rate for the specific class (Figure ). We have previously developed a framework for the probabilistic annotation of enzymes into the functional classes of the EC hierarchy.
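The class-specific error rate and coverage figures quoted above follow from simple ratios; a sketch, using the paper's own numbers, for checking them:

```python
# Class-specific error rate: P(annotation is wrong | class assigned),
# estimated per EC class as errors / assignments at level P = 1.
def specific_error_rate(errors, assignments):
    """Per-class error rate, in percent."""
    return 100.0 * errors / assignments

def coverage(assigned, total):
    """Fraction of re-annotations covered, in percent."""
    return 100.0 * assigned / total

# Figures quoted in the text:
worst_single = specific_error_rate(1, 1)         # 100.0
best_nonzero = specific_error_rate(4, 1673)      # ~0.24
multi_error_max = specific_error_rate(3, 22)     # ~13.6
zero_error_coverage = coverage(51131, 59766)     # ~86
```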
It is interesting to note that most errors reported in Tables 1 and 2 occur between closely related classes. For example, a DNA polymerase has been evolved towards RNA synthesis by in vitro compartmentalization, and a mutant with a single mutation was among the optimal mutants at synthesizing RNA. Also, a switch in ion specificity to Li+ was achieved by just four mutations. Beyond the issue of functional specificity, there is also an aspect of biological reality in the problematic cases, in terms of overlapping enzyme properties. In other words, these classes might represent activities that co-exist in the same enzyme. In the previous example of the DNA polymerase, it has also been reported that a mutant with just five mutations maintained a DNA polymerase activity, demonstrating that both these activities co-exist. These examples illustrate the intricate nature of the sequence-function relationship found among those few cases that CORRIE fails to annotate correctly, and point to the limitation of using sequence similarity as a distance measure between enzymes. Therefore, we envisage implementing other methods in CORRIE in the near future. For example, the sequences within each class could be used to create one or more sequence profiles against which a new sequence could be aligned to produce an alternative CI measure, possibly focusing on key residues. One shortcoming of CORRIE, since it is based on the ENZYME database for validation purposes, is the implicit assumption that the query sequences are enzymes. A possible future development would be the explicit detection of enzyme sequences from similarity information. Schemes that have addressed the issue of enzyme recognition have been previously proposed. The authors declare that they have no competing interests. BA, LG and CAO participated in the design and coordination of the study. BA, LG and EDL developed the software code and the web site.
All authors have drafted the manuscript, and subsequently have read and approved the final manuscript.

Like many other biological databases, the Gene Ontology Annotation (GOA) database gathers much of its content from the careful manual curation of literature. However, as both the volume of literature and the number of proteins requiring characterization increase, the manual processing capability can become overloaded. Consequently, semi-automated aids are often employed to expedite the curation process. Traditionally, electronic techniques in GOA depend largely on exploiting the knowledge in existing resources such as InterPro. However, in recent years, text mining has been hailed as a potentially useful tool to aid the curation process. To encourage the development of such tools, the GOA team at EBI agreed to take part in the functional annotation task of the BioCreAtIvE challenge. BioCreAtIvE task 2 was an experiment to test whether automatically derived classification using information retrieval and extraction could assist expert biologists in annotating the proteins in the UniProt Knowledgebase with terms from the GO vocabulary. The test corpus consisted of Journal of Biological Chemistry articles used to annotate 286 human proteins with GO terms. A team of experts manually evaluated the results of 9 participating groups, each of which provided highlighted sentences to support their GO and protein annotation predictions. Here, we give a biological perspective on the evaluation, explain how we annotate GO using literature and offer some suggestions to improve the precision of future text-retrieval and extraction techniques. Finally, we provide the results of the first inter-annotator agreement study for manual GO curation, as well as an assessment of our current electronic GO annotation strategies. GOA provided the training corpus of over 9000 manual GO annotations extracted from the literature.
For the test set, we provided a corpus of 200 new Journal of Biological Chemistry articles. The GOA database currently extracts GO annotation from the literature with 91 to 100% precision and at least 72% recall. This creates a particularly high threshold for text mining systems, which in the initial BioCreAtIvE task 2 results predicted GO terms precisely only 10 to 20% of the time. Improvements in the performance and accuracy of text mining for GO terms should be expected in the next BioCreAtIvE challenge. In the meantime, the manual and electronic GO annotation strategies already employed by GOA will provide high-quality annotations. The number of proteins requiring functional characterization in the UniProt Knowledgebase is still growing. Currently, one of the most important advances in database annotation, querying and interoperability is the development and use of structured vocabularies. In this regard, one of the most successful is the 'Gene Ontology' (GO), which describes gene products in terms of their molecular function, biological process and cellular component. With the success of GO's integration into the analyses of microarray and other large-scale data, many automated GO assignment tools have appeared. While some of these tools are useful, others demonstrate a lack of understanding of how GO is used and queried by a biologist. For example, the GO term 'cell adhesion' (GO:0007155) has been experimentally verified as a process involving the protein ICAM1, but to assign that GO term automatically to every paper that mentions the protein ICAM1 is simply incorrect. Every article that mentions ICAM1 will not experimentally verify that process within its text; instead, it might simply describe the sequence. Annotating GO terms to biomedical literature in this way is not useful to curators, as the GO term is often not attached to a 'relevant' paper. For developers of automatic information extraction and retrieval techniques, however, this strategy might form part of a useful intermediate step to limit the number of GO terms to be searched in a given piece of text. So what do GO curators really need?
A useful tool would allow curators to retrieve all 'relevant' papers which report on the distinct features of a given protein and species, and then to locate within the text the experimental evidence to support a GO term assignment. Given that GO is not designed for text mining, it is no surprise that exact text strings of many of the 18,000 GO terms will not be found verbatim in the literature. Despite these difficulties, GOA is often asked to evaluate various automatic GO retrieval and extraction systems. To encourage their comparison and development, and to save time in individually evaluating the different strategies, the GOA team was delighted to take part in task 2 of the BioCreAtIvE challenge. We manually evaluated 22,000 segments of text from the Journal of Biological Chemistry articles, which were provided to support the correct GO term and protein predictions. In this paper, we give a biological perspective on the evaluation, explain how we manually annotate GO using literature and offer some suggestions to improve the precision of future text retrieval and extraction techniques. Finally, we provide the results of the first inter-annotator agreement study for manual GO curation, as well as results assessing our current electronic GO annotation strategies, to help establish a threshold for the text mining technology. BioCreAtIvE task 2 was designed to assess whether automatically derived classification using information retrieval and extraction could assist biologists in the annotation of GO terminology to proteins in UniProt. For the training set, participants were provided with papers linked to GO annotations from human proteins that were already publicly available. One of the distinguishing features of the UniProt Knowledgebase is the high level of annotation and database cross-references that are integrated with each entry.
It therefore makes sense that the large-scale assignment of GO terms to the proteins in UniProt should exploit the existing knowledge stored in these entries. Electronic techniques are efficient in associating high-level GO terms to large datasets. On the other hand, manual curation provides more reliable and detailed GO annotation but is slower and more labour-intensive. It is clear that the manual curation process requires automatic assistance. However, before attempting to develop strategies to help curators make more rapid GO assignments, it is important to first understand current manual approaches. Each GO Consortium member uses slightly different techniques for locating papers suitable for manual GO annotation. In addition to the papers archived in the UniProt records, the NCBI PubMed advanced search is queried. Most GO Consortium members would agree that the most difficult task in searching the literature is finding papers that have experimental information for a given species. Often, the species 'name' (e.g. human) is not mentioned in the 'Title' or 'Abstract' and, occasionally, not directly mentioned in the full text. On these occasions, the methods section of the paper has to be read, and perhaps the taxonomic origin of a cell line identified, before any attempt at GO curation. Filtering 'Human entries only' via PubMed is not always accurate. In addition, authors do not always cite the most up-to-date gene nomenclature, e.g. the use of upper-case letters for human gene symbols. Once a relevant paper is found, the full text is read to identify the unique features of a given protein. The majority of papers will mention more than one protein; however, a curator will concentrate on capturing the information pertinent to the main protein chosen for annotation. Most curators still prefer to print out papers rather than view them online.
This is simply to limit computer eye strain and because a curator can quickly scan and select the most relevant parts of the document for curation. Words or short phrases which can be converted to GO terms are highlighted by hand, and the correct GO term identifier (ID) is documented in the paper margins for review. Terms under the obsolete nodes (obsolete molecular function (GO:0008369), obsolete cellular component (GO:0008370) and obsolete biological process (GO:0008371)) are not used in annotation. When electronic or manual GO annotations become obsolete, they are manually replaced with an appropriate term. GO terms are chosen by querying the GO files with the QuickGO web browser. The GO Consortium avoids using species-specific definitions for GO nodes; however, some functions, processes and components are not common to all organisms. Inappropriate species-specific GO terms (e.g. germination, GO:0009844) should not be manually annotated to mammalian proteins. Sometimes these inappropriate terms can be distinguished by the sensu (in the sense of) designation. Curators are cautious when manually assigning these terms. To avoid generating inappropriate GO term assignments, the text mining community should read the GO Consortium documentation on the subject. If a curator is unsure of which process term should accompany a function term, they can consult the 'Often annotated with' section of the QuickGO browser, where GO terms that are assigned in tandem are displayed. These are also referred to as common concurrent assignments and are calculated from our existing manual and electronic GO annotations. It is important to note that GO terms are often extracted from particular regions of a paper. Furthermore, according to GO Consortium rules, each GO annotation must be accompanied by a PubMed identifier and one of 10 manual GO evidence codes.
Table 1 lists these evidence codes. If no functional annotation can be found for a given protein after an exhaustive literature search, the GO terms molecular_function unknown (GO:0005554), biological_process unknown (GO:0000004) or cellular_component unknown (GO:0008372) can be assigned with the GO evidence code ND ('No Data'). It is clear from the above that the manual GO annotation effort has many steps which could be assisted by automatic information extraction techniques. For these reasons, the BioCreAtIvE organizers designed a biologically motivated task which asked systems to identify the proteins in the text, to check whether any functional annotation was present, and to return the GO term ID representing this information together with the evidence text that supported the annotation. To train systems to perform this task accurately, thousands of manual GO annotation examples were required. The training data provided to participants is documented online. Essentially, annotations with the evidence codes 'Inferred from Sequence/Structural Similarity' (ISS), 'Inferred by Curator' (IC) and 'No Data' (ND) were to be ignored. It is important to note that, historically, most of the human GO annotations in the GOA database were generated before 2002. Approximately 6000 manual annotations were integrated from the former Proteome Inc. (now Incyte Genomics), which may or may not have been extracted from full text, while an additional 3000 proteins were annotated by UniProt curators from abstracts only, as part of a fast-tracking strategy. These annotations can be identified in the GOA database by the GO evidence codes NAS or TAS. The 'Journal of Biological Chemistry' (JBC) (articles dated between 1998 and 2002) was chosen by the organizers because of an arrangement to use the full text openly and freely. We chose a set of JBC articles already associated with human proteins within the UniProt flat files. This set was then filtered for proteins that had no previous manual GO annotation.
These criteria ensured that the annotations created for the test set would be new to both the GOA database and the participants. In total, a list of 286 UniProt accessions, together with the PubMed ID of each article, was distributed to 3 curators. A new GO annotation tool was created to collect the GO associations and to ensure that they would not be released or touched by other UniProt curators not involved in the BioCreAtIvE challenge. The test set took the curators one month to complete (approx. 10–15 papers per day). During this period, 923 distinct GO terms were extracted from text within the papers. The evidence text was highlighted on paper and therefore not in a format for machine processing. On average, each protein had 9 GO annotations. During the curation process, these GO annotations were associated with proteins from 37 other mammalian species based on their sequence similarity to the human proteins. To prevent participants from back-extrapolating the test set annotations, associations with the evidence code 'ISS' were also suppressed from GOA releases (Table ). To create the BioCreAtIvE test set, GOA was asked to associate 200 papers with human proteins and GO terms. One difficulty in creating the test set was that curators were often restricted to a single article per protein. Normally, a curator would seek verification of author statements from more than one paper. As a result, some articles were slightly over-annotated compared to the normal curation process. The test set was released to the BioCreAtIvE organizers on 3 November 2003. It was advised that participants should not use versions of GO archived in the CVS repository beyond this date. This was to ensure that the same GO ontology files were available to both the annotators and the participants. The test set was suppressed from the monthly GOA release until January 2004. The BioCreAtIvE organizers created an online evaluation tool for task 2.
For subtask 2.1, the tool displayed the UniProt accession in the test set, along with the associated 'known' GO terms and documents. Participants were expected to return a segment of text (the evidence text) from the document that supported the annotation of the 'known' GO term. The provision of evidence text was critical for the evaluators, as it provided a basis for rejecting or accepting each finding. Evidence text was made visible to evaluators by means of a red text highlight, with the surrounding full text visible in black or blue font. The evaluation tool was easy to use and was designed with the evaluators to closely resemble a curation aid that might develop from this technology. Two GOA curators evaluated subtask 2.1. There were 9 distinct users for this task, but 21 separate runs were submitted for evaluation. In the second subtask (2.2), participants were given the document and the associated UniProt accession and asked to return evidence text to support their system's GO predictions for that protein (Figure ). The curators made two separate evaluations of the evidence text: did it support the correct GO term, and did it support the correct protein association? To ensure the consistency of evaluations, criteria were agreed amongst the 3 evaluators and the BioCreAtIvE organizers. It was clear that not all participants understood the content of GO or how it is used during annotation; the common mistakes collected by curators are presented in Table . At the end of the study, the 3 curators together evaluated the GO terms extracted in the 3 categories.
Each co-annotated term was classified as (a) an exact term match (the GO term exactly matched that chosen by the second curator), (b) same lineage (the GO term was a parent or child of that chosen by the second curator), or (c) new/different lineage (the GO term was not a parent or child of that chosen by the second curator). Results indicate that there is a 39% chance of curators interpreting the text identically and selecting the exact same GO term, a 43% chance that they will extract a term from a new/different lineage, and a 19% chance that they will annotate a term from the same GO lineage. It was of interest to GOA to also evaluate the precision of these annotation strategies. Taking the manual GO annotation created for the BioCreAtIvE test set, we again compared the number of times the different electronic techniques predicted GO terms exactly, with the same lineage and less granularity, with the same lineage and greater granularity, or with a new lineage. It should be noted that electronic predictions that exactly matched, or represented a parent term of, a manually annotated term were assumed to be correct. Electronic GO predictions that represented a new lineage, or a child term of those chosen manually, could be potentially correct or incorrect. This is because the GO annotations represented in the BioCreAtIvE test set were based on the curation of just a single article and were therefore not fully curated. In agreement with GOA release statistics, InterPro2GO (635 annotations) provided the most GO coverage of the test set, followed by SPKW2GO (385 annotations) and EC2GO (27 annotations) (data not shown). Because the GO function terms predicted by the EC2GO mappings were quite deep, final-node GO terms, it was not surprising that they exactly matched the manual GO annotation 67% of the time (Table ). To further evaluate how precise our electronic strategies were, we manually evaluated a random set of 44 proteins that had both electronic and manual GO annotation. This time, we verified whether the GO predictions were correct or incorrect.
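The lineage categories used in these comparisons reduce to an ancestor test in the GO graph. A minimal sketch, assuming the ontology is available as a child-to-parents mapping (the toy graph below uses a handful of real GO IDs purely for illustration):

```python
# Toy GO fragment as a child -> parents mapping; real analyses would
# load the full ontology from the GO files instead.
TOY_GO = {
    "GO:0007155": ["GO:0008150"],   # cell adhesion -> biological_process
    "GO:0016337": ["GO:0007155"],   # cell-cell adhesion -> cell adhesion
    "GO:0005554": [],               # unrelated term for the example
}

def ancestors(term, parents):
    """All terms reachable upwards from `term` in the DAG."""
    seen, stack = set(), [term]
    while stack:
        for p in parents.get(stack.pop(), []):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def agreement_category(term_a, term_b, parents=TOY_GO):
    """Classify a pair of curator-chosen terms as in the study above."""
    if term_a == term_b:
        return "exact term match"
    if term_a in ancestors(term_b, parents) or term_b in ancestors(term_a, parents):
        return "same lineage"       # one term is an ancestor of the other
    return "new/different lineage"
```

For example, `agreement_category("GO:0016337", "GO:0007155")` falls in the "same lineage" category, since cell-cell adhesion is a child of cell adhesion in the toy graph.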
There was little difference in the precision of each strategy, and our electronic annotation was between 91–100% precise (Table). The GOA database currently provides 69% GO coverage of the UniProt Knowledgebase using in-house electronic and manual annotations as well as annotations integrated from GO Consortium members including MGI, SGD and FlyBase. Although GO was not designed with text mining in mind, it does try to create a vocabulary for biological research that could be deciphered by both humans and machine processing. The complications in matching exact GO terms in the literature might be resolved when the GO Consortium implements its plans to decompose the GO phrases into individual words or concepts and properties, and by the mapping of more synonyms to GO terms. Improvements in the performance and accuracy of text mining should be expected in the next BioCreAtIvE challenge. In the future we hope it will offer a useful supplement to the manual and electronic techniques already employed by GOA. RA heads the UniProt Knowledgebase and organized the collaboration with the BioLink group. EBC coordinates the manual curation of the GOA database, drafted the manuscript and performed the statistical analysis (Tables 4, 5, 6). The supplementary file shows further details of the inter-annotator agreement; it contains individual counts for each UniProt accession and PubMed identifier that was co-curated.

Assignment of function to new molecular sequence data is an essential step in genomics projects. The usual process involves similarity searches of a given sequence against one or more databases, an arduous process for large datasets. We present AutoFACT, a fully automated and customizable annotation tool that assigns biologically informative functions to a sequence.
Key features of this tool are that it (1) analyzes nucleotide and protein sequence data; (2) determines the most informative functional description by combining multiple BLAST reports from several user-selected databases; (3) assigns putative metabolic pathways, functional classes, enzyme classes, GeneOntology terms and locus names; and (4) generates output in HTML, text and GFF formats for the user's convenience. We have compared AutoFACT to four well-established annotation pipelines. The error rate of functional annotation is estimated to be only between 1–2%. Comparison of AutoFACT to the traditional top-BLAST-hit annotation method shows that our procedure increases the number of functionally informative annotations by approximately 50%. AutoFACT will serve as a useful annotation tool for smaller sequencing groups lacking dedicated bioinformatics staff. It is implemented in PERL, runs on LINUX/UNIX platforms, and is freely available online.

Automatic functional annotation is essential for high-throughput sequencing projects. Typically, large datasets undergo annotation by means of "annotation jamborees", where groups of experts are assigned to manually annotate a designated portion of an organism's genome. More recently, various tools have become available to streamline this process. As a case study, AutoFACT has been applied to Acanthamoeba castellanii, a free-living soil amoeba and opportunistic human pathogen. This example highlights AutoFACT's performance, which yields a ~50% increase in functional annotations over a top-BLAST-hit approach against NCBI's non-redundant database or against UniProt's expert-annotated UniRef90 database. Unique to AutoFACT is its hierarchical filtering system for determining the most informative functional annotation. This paper describes AutoFACT's functional assignment capabilities, outlining the procedure for annotating unknown nucleotide or protein sequence data.
We assess the validity of AutoFACT by comparing annotations to four previously annotated and phylogenetically diverse organisms, including human, yeast and both eukaryotic and bacterial pathogens. AutoFACT has also been applied to the EST sequencing project of Acanthamoeba castellanii (see below). AutoFACT is a command-line-driven program written in PERL for LINUX/UNIX operating systems and uses BioPerl modules. AutoFACT takes a single FASTA-formatted sequence file as input, automatically recognizes the sequence type as nucleotide or protein, and proceeds to ask the user for preferences regarding which databases to use, the order of database importance and the bit score cutoff. The bit score is a measure of sequence similarity independent of the size of the database used. It is derived from the raw alignment score, in which the statistical properties of the scoring system used have been taken into account. Bit scores are normalized with respect to the scoring system and hence can be used to compare alignment scores from different searches. AutoFACT assigns classification information, based on a hierarchical system, from a collection of specialized resources, currently nine databases (Table). If an informative term matches the description line of the UniRef90 hit (e.g., 'ATP synthase' matches 'H+-pumping ATP synthase'), the description line of the UniRef90 hit is assigned to the input sequence. If there are no matches to UniRef90 terms, the informative terms from the informative hit of the next database are then queried in the same way, until a functionally informative description line has been assigned to the sequence. We prefer to use UniRef90 as the first database in the order of importance for two reasons. First, as a member of UniProt it is one of the better annotated and curated of the available databases. Second, because UniProt entries with 90% sequence similarity are combined into a single record, the description lines are species-independent and tend to be more general in their descriptions.
On the other hand, description lines from NCBI's nr database are often several lines long and contain repetitive information. Testing showed that using various database combinations does not significantly change the annotation results. A user's choice of database order is therefore dependent on the format of the description line one would prefer to assign to the sequence in question (Table). AutoFACT proceeds to step 4 when there are no common informative terms between any of the databases, or when only uninformative hits are found. In this step, a sequence with significant similarity to one or more sequences in the Pfam or SMART databases is classified as a '[domain name]-containing protein' or a 'multi-domain-containing protein'. A sequence containing no domains is simply classified as an 'unassigned protein'. A sequence is also classified as a '[domain name]-containing protein' when the only significant hit is to a domain database. It is considered 'unclassified' when no hits are found to any of the specified databases. When EST sequences are being annotated, the last step in the annotation pipeline is to check the sequence against NCBI's est_others database. If a significant match is found, the sequence is classified as an 'unknown EST'; otherwise it remains 'unclassified'. Enzyme classes are assigned via ExPASy's enzyme.dat file. Some sequences were annotated by AutoFACT only as '[domain name]-containing proteins'; we do not consider these annotations to be false positives, merely less specific annotations. In 1/10 of the assignments, AutoFACT was better than PEDANT (Table). AutoFACT and PEDANT annotations for a set of 200 cDNAs differed by 5% (10/200). We examined the original annotations for these 10 sequences in the expertly curated Saccharomyces Genome Database (SGD) (Table). We also compared TIGR's preliminary annotations of 200 Plasmodium falciparum cDNAs to annotations generated by AutoFACT. TIGR's preliminary annotations are automatically assigned by searching nucleotide and protein databases for "good" matches.
At this preliminary stage, none of the annotations are examined or verified by human annotators. We found that between the two fully automatic pipelines, 4% (8/200) of the annotations differed, half of which were annotated by AutoFACT as '[domain name]-containing proteins'. AutoFACT is currently used by the Protist EST Program (PEP), a pan-Canadian genomics initiative. Under the PEP initiative, 12,937 individual EST reads yielding 5,130 clusters (consensus sequences) have been obtained to date for A. castellanii. We compared AutoFACT annotations for these clusters to annotations taken from top BLASTx hits against NCBI's nr database and from top BLASTx hits against UniProt's well-annotated UniRef90 database. AutoFACT compared the A. castellanii sequences against a total of seven databases: UniRef90, KEGG, COG and NCBI's nr were searched using BLASTx; Pfam and SMART were searched using RPS-BLAST; and NCBI's est_others database was searched using tBLASTx. In each instance, a bit score cutoff of 40 was used and the top 10 BLAST hits were filtered for uninformative terms. The database order of importance was UniRef90, KEGG, COG, NCBI's nr (Figure). AutoFACT annotations for each organism mentioned above can be viewed online. To efficiently and fully exploit the wealth of sequence data currently available, thorough and informative functional annotations are paramount. Considering the ever-growing number of EST sequencing projects, it becomes increasingly important to fully automate the annotation process and to make optimal use of the various available annotation resources and databases. Because no two annotation systems are exactly alike, choice of system is very much dependent on the user's end goal. The A. castellanii case study shows that, in comparison to the 'quick and easy' top-BLAST-hit approach against either NCBI's nr or UniProt's UniRef databases, AutoFACT substantially improves functional annotations of sequence data.
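Two of the mechanics described above, the bit-score normalization used for cutoffs and the filtering of BLAST description lines for uninformative terms, can be sketched in a few lines of Python. This is a minimal illustration, not AutoFACT's actual code: the Karlin-Altschul parameters are illustrative values (roughly those reported for gapped BLOSUM62 searches), and the stop-word list is a hypothetical one.

```python
import math

def bit_score(raw_score, lam=0.267, k=0.041):
    """Karlin-Altschul normalized bit score: S' = (lambda * S - ln K) / ln 2.
    Default lam/k are illustrative; real values depend on the scoring system."""
    return (lam * raw_score - math.log(k)) / math.log(2)

# Hypothetical stop-word list; AutoFACT's actual list may differ.
UNINFORMATIVE = {"hypothetical", "unknown", "unnamed", "uncharacterized", "predicted"}

def is_informative(description):
    """A description line is informative if none of its words is a stop word."""
    words = [w.strip(".,;") for w in description.lower().split()]
    return bool(words) and not any(w in UNINFORMATIVE for w in words)

def pick_annotation(hits_by_db, db_order):
    """Walk the databases in the user's priority order and return the first
    description line that passes the uninformative-word filter."""
    for db in db_order:
        for desc in hits_by_db.get(db, []):
            if is_informative(desc):
                return db, desc
    return None, "unassigned protein"
```

With a bit-score cutoff of 40, as in the A. castellanii runs, hits scoring below that threshold would simply be dropped before `pick_annotation` is called on the surviving description lines.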
Comparisons to other well-established annotation pipelines show that AutoFACT performs equally well and in some cases better than the alternatives. We have also demonstrated that AutoFACT exhibits an equivalent level of performance (1–2% error rate) when it is used to annotate sequences across different domains of life. AutoFACT uses a hierarchical filtering system for determining the most informative functional annotation. It provides a means of classification by identifying EC numbers, KEGG pathways, COG functional classes and GeneOntology terms. AutoFACT supplies three different output formats and a log file, which are versatile and adaptable to user requirements. Importantly, it allows users to maintain data locally, whereas many other systems require sequence submission elsewhere for annotation. By combining multiple resources, AutoFACT associates sequences with a broad range of biological classifications and has proven to be very powerful for annotating both EST and protein sequence data. Finally, we caution that over-prediction is common when using sequence similarity to infer protein function; examples of similar sequences that do not share the same or even related functions have been documented.

Project name: AutoFACT
Project homepage:
Operating system(s): LINUX/UNIX
Programming language: PERL
Other requirements: BioPerl and BLAST
License: GNU General Public License (GPL)
Any restrictions to use by non-academics: None

LBK designed, developed and implemented AutoFACT. MWG provided the Acanthamoeba castellanii data used to test and validate AutoFACT. GB and BFL supervised the study, making significant design contributions. All authors read and approved the final manuscript.

Large-scale genomic studies based on transcriptome technologies provide clusters of genes that need to be functionally annotated.
The Gene Ontology (GO) implements a controlled vocabulary organised into three hierarchies: cellular components, molecular functions and biological processes. This terminology allows a coherent and consistent description of the knowledge about gene functions. The GO terms related to genes come primarily from semi-automatic annotations made by trained biologists (annotation based on evidence) or from text-mining of the published scientific literature (literature profiling). We report an original functional annotation method based on a combination of evidence and literature that overcomes the weaknesses and the limitations of each approach. It relies on the Gene Ontology Annotation database (GOA Human) and the PubGene biomedical literature index. We support these annotations with statistically associated GO terms and retrieve associative relations across the three GO hierarchies to emphasise the major pathways involved by a gene cluster. Both annotation methods and associative relations were quantitatively evaluated with a reference set of 7397 genes and a multi-cluster study of 14 clusters. We also validated the biological appropriateness of our hybrid method with the annotation of a single gene (cdc2) and that of a down-regulated cluster of 37 genes identified by a transcriptome study of an in vitro enterocyte differentiation model (CaCo-2 cells). The combination of both approaches is more informative than either separate approach: literature mining can enrich an annotation based only on evidence. Text-mining of the literature can also find valuable associated MEDLINE references that confirm the relevance of the annotation. Finally, networks of GO terms can be built with associative relations in order to highlight cooperative and competitive pathways and their connected molecular functions.
The numerous gene clusters identified thus far in molecular biology by high-throughput analyses such as transcriptomic or proteomic technologies need to be understood according to the biological conditions under study. However, often only highly specialized individual biologists have in-depth knowledge about a gene or gene product, and this knowledge is therefore limited to relatively narrow research fields. The functional annotation of groups of gene products identified by genomic studies is a major challenge, and new tools are needed to help in this task. Ontologies are widely used in informatics and are now becoming important in bioinformatics. They can make the large amounts of biological knowledge found in textbooks and research papers generally accessible in a structured way. The Gene Ontology has become the de facto standard for formalising our knowledge about biological processes, molecular functions and cellular components, in three independent hierarchies. Terms are linked by is_a or part_of relationships. As the defined terms can have more than one parent, the structure of this ontology is called a Directed Acyclic Graph (DAG). Furthermore, each GO term is associated with a unique identifier (GO ID) in order to allow a biological database to link to the GO and to ensure interoperability between different biological databases. These GO IDs are used in several biological databases, covering almost 20 experimental organisms such as animals, plants, fungi, bacteria and viruses, to tag gene products and assign functions, biological roles and sub-cellular locations to them. Therefore, a user can identify the gene products associated with a specific GO term, as well as all of the GO terms associated with a gene product, by using an appropriate browser such as AmiGO or GenNav.
Dedicated tools such as FatiGO, GoMiner, MAPPFinder, GOTree Machine, Onto-Tools and GOToolBox also exploit these annotations. Nevertheless, annotating genes with a controlled vocabulary is laborious and needs an expert to inspect carefully the literature associated with each gene to determine the appropriate terms. As our knowledge of biology increases, becomes more refined and expands into new areas, such a process will no longer be sufficient. We report here an original method for the functional annotation of gene clusters based on both evidence and literature profiles that aims to overcome the weaknesses and the limitations of each approach (annotation based on evidence and literature mining). We can functionally annotate a gene cluster by retrieving associated GO terms from two different sources of information. The first is an annotation database built on evidence: the Gene Ontology Annotation (GOA) database. The second is a literature index: the PubGene index. The PubGene method uses a probabilistic score to reflect the gene-term association strength. This score takes into account the frequencies of both gene and term in the 14 million article records of the database. We discard weak associations to improve the precision of the PubGene method. The two sets of GO terms are then merged, and GO terms having statistically enriched gene numbers are identified to aid the biological interpretation of the cluster. We evaluated the precision of each method and the overlap between them. We also evaluated the relevance of the bibliographic references associated with the gene cluster by the literature mining method. As we were seeking the biological meaning of a gene set, we focused on identifying metabolic pathways.
This therefore limited the annotations to the Process hierarchy of GO. Although subsumption (the is_a relationship) and meronymy (the part_of relationship) are the backbone of GO and make it a proper ontology in the computational sense, it lacks associative relations within and especially across its three hierarchies. These relations would be very helpful and informative. For example, they could show that a certain molecular function is involved in a certain biological process, and that a certain cellular component is the location of a certain biological process. We previously investigated three non-lexical approaches for identifying associative relations between GO terms. This paper is organised as follows. The Results section describes how we compared evidence (GOA) and literature (PUB) using an exhaustive reference set of 7397 genes annotated by both methods. We then explain how we evaluated the contribution of statistical dependences (DEP) on the same reference set. The methodology was quantitatively evaluated in a multi-cluster analysis concerning 14 clusters chosen from 7 independent studies (Table), including an in vitro model of enterocyte differentiation (Up and Down clusters). A qualitative evaluation was also performed for two oncogenomic studies: a glioblastoma cluster (glioGBM) and a leukemia cluster (bcr-abl). The Discussion section describes the benefits and limitations of the evidence and literature annotations; we then comment on the major contributions of the bibliographic aspects of our method and of the statistical dependence between GO terms. The Methods section details the technical and statistical aspects of our methodology. The evidence annotation of the reference set provided 1625 Process terms whereas the literature annotation provided 3226. The two methods shared 1079 terms (24.9%).
Although the reference set represented only 49.6% of the overall Process hierarchy of GO (8730 terms), we checked its relevance by evaluating its representativeness compared to all the GO Process terms available in the GOA Human database. In the evidence annotation, many gene-term associations were based on electronic inference: 38.5% of the terms retrieved in GOA were associated with the "Inferred from Electronic Annotation" evidence code (IEA); the remainder were supported by other evidence codes. PubGene retrieved 269172 gene-term associations, of which 38703 (14.4%) had a score below 0.01. Among the 3236 Process terms associated with the set, 10 (0.3%) were obsolete and 3075 (95.3%) had a score below 0.01. The generalised estimating equations (gee) showed that the annotation method did not affect the number of genes associated with a given term. Terms with significantly enriched gene numbers could then be compared between methods. The literature annotation provided more than twice as many terms for a given gene than the evidence annotation (χ² = 1886.863, df = 1, p-value < 2.2e-16). There were many more references associated with one gene in the literature annotation than in the evidence annotation. The median depth was seven and was consistent with the overall granularity of the GO Process hierarchy; we found no significant difference in the granularities of the two annotations. The PQI of a term is a measure of its relative number of annotated parent and child terms. PQIs for the combination of both evidence and literature were significantly different from evidence alone or literature alone, and for the dependent set versus the random set. In the annotation of cdc2, the terms missed by the evidence annotation were all found by literature profiling and/or associative relations. The literature annotation retrieved 154 Process terms, 23 of which had scores below 0.01. These significant terms were associated with 266 MEDLINE references.
A systematic reading of the title and abstract of these references showed that they were relevant for the associations brought out and related all the important steps of the cdc2 characterisation (in several species, descriptions of the various substrates and inhibitors of cdc2, etc.). Furthermore, half the references provided by the literature annotation were less than 5 years old. Terms from the evidence and literature annotations were also associated with 153 terms in the associative relation database. Only 38 of these had a non-zero PQI. A selective part of the network of terms associated by dependence is presented in Figure. A Venn diagram for the 37 down-regulated genes during CaCo-2 cell differentiation is shown in Figure. As for the reference set, genes from the Down cluster were primarily annotated with three evidence codes: IEA (47.1%), TAS (33.3%) and NAS (12.6%). TAS, NAS and IDA evidence codes were associated with 28 MEDLINE references. Manual inspection of the 87 gene-term associations confirmed the accuracy of the evidence annotation and the robustness of the inference methods used in building annotation databases. Less than 2% of the terms were unexploitable: these were either misassociated, for example "perception of sound" (GO:0007605) with ITM2B, or not very biologically informative, for example "biological_process unknown" (GO:0000004) for TRIP6. Direct gene-term links included, e.g., "copper ion transport" (GO:0006825) with ATP7B, "DNA replication initiation" (GO:0006270) with MCM3, "chromatin silencing" (GO:0006342) and "DNA packaging" (GO:0006323) with CBX1, and "ornithine catabolism" (GO:0006593) and "putrescine catabolism" (GO:0009447) with ODC1. There were very few false positive associations (1.2%). The remaining 17.3% of the associations were correct but imprecise.
In the imprecise cases, the gene symbol and the term were both found in the title/abstract, but there was either (i) no biological relationship between them, for example ATP7B associated with "mRNA metabolism" (GO:0016071) in a study of the mRNA expression levels (and thus the transcription) of ATP7B itself, or (ii) only an indirect relationship. The Down cluster was annotated with 259 significant GO terms associated with 3377 MEDLINE references. Manual inspection of all the 626 significant gene-term associations retrieved by PubGene showed that 81.5% had a direct link between the gene and the term. PQIs for the combination of evidence and literature were significantly different from evidence alone (χ² = 48.9203, df = 1, p-value = 2.666e-12) or literature alone (Figure). Enriched GO terms for the Down cluster are shown in Figure; PQIs were significantly higher (χ² = 255.7346, df = 1, p-value < 2.2e-16) for the dependent set versus the random set. Bad associations are mostly indirect rather than entirely false. For example, "perception of sound" (GO:0007605) associated with ITM2B comes from a spkw2go mapping. Although cdc2 was identified ten years ago, its action on CDKN1A and TOP2A was only recently characterised. Text mining of biomedical literature combined with probabilistic scoring of the gene-term associations is also a powerful annotation technique. For a given gene set, it retrieved more terms per gene than evidence annotation and with a similar precision. Although common terms highlighted the major pathways, supplementary terms were a valuable source of information for reinforcing those pathways, by adding parent, sibling and child nodes, as in the annotation of cdc2 (Figure). Moreover, the single article provided by GOA for cdc2 (based on an in vitro kinase assay) was linked by the annotators to the Cellular Component "nucleus" (GO:0005634), whereas it would be better associated with "negative regulation of cell cycle" (GO:0045786) and "negative regulation of cyclin-dependent protein kinase activity" (GO:0045736).
Likewise, HMGA2 was TAS-associated with "development" (GO:0007275); similar imprecisions were observed for several other genes (e.g., ODC1, CBX1, LAMB1). Scientific literature is the optimal resource for validating a functional annotation. However, GOA provides few MEDLINE references to support its annotations: despite there being abundant literature on cdc2, only one article was retrieved, a general study by Laronga et al. The significant increase in the associated MEDLINE references in the literature annotation corroborates the enrichment of the GO terms. The considerably higher number of associated references and their accuracy (up to 90% precision) make PubGene an excellent bibliographic tool for validating the biological interpretation of a cluster. In the Down cluster, PubGene was able to retrieve very informative references highlighting the main biological implications of one gene, for example the study by Chen et al. As expected, the primary source of errors found in the literature approach was linked to the ambiguity of gene symbols: CBX1 was associated with "secretion" (GO:0046903), whereas the abstract of Dodic et al. referred to a different use of the symbol. Our data strongly suggest that networks of statistically inter-dependent GO terms highlight the leading features of a gene or gene cluster: a synthetic and simplified interpretation of its annotation.
Most of the main processes identified in the functional annotation of the Down cluster were connected by associative relations: (i) between GO terms across hierarchies, such as the "signal transduction" (GO:0007165) process with the "receptor binding" (GO:0005102) function in the Down cluster annotation, or the "mitosis" (GO:0007067) process and the "cyclin-dependent protein kinase activity" (GO:0004693) function in the cdc2 annotation; and (ii) between GO terms belonging to different sub-DAGs of the same hierarchy, such as the "regulation of cell cycle" (GO:0000074) and "apoptosis" (GO:0006915) processes in both the Down cluster and cdc2 annotations. We have presented here an application of the associative relations to the functional interpretation of experimental results. We deliberately restricted their contribution to reinforcing the evidence- or literature-annotated pathways and to identifying, among annotated terms, the relationships across hierarchies. This improvement needs to be evaluated in terms of the precision and specificity of each non-lexical approach, and the term-term associations could also be filtered with respect to their similarity coefficient. The biological interpretation of a gene cluster will surely be facilitated by the identification of the GO sub-DAGs having a high number of annotated nodes. Each term in the gene cluster annotation has a PQI that measures its annotation degree: its relative number of co-annotated kinship terms. Using the distribution of the PQI within the DAG, it is therefore possible to identify statistically over-annotated sub-DAGs – possibly biological pathways – linked to a specific biological condition. Nevertheless, this measure needs to be normalised in order to be independent of the size of the gene cluster and, consequently, of the number of GO terms in the annotation. We used the associative relations to identify possible interactions between processes and functions, but this method is general to GO and not specific to a gene cluster.
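The PQI described above, the fraction of a term's kinship (parent and child) terms that are themselves annotated, can be sketched over a toy DAG. This is a minimal illustration under the definitions given in the Methods; the parent table and term names are invented, not real GO structure:

```python
def _reachable(term, edges):
    """All nodes reachable from `term` by following the given edge table."""
    seen, stack = set(), [term]
    while stack:
        for nxt in edges.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def pqi(term, parents, annotated):
    """PQI = (annotated parents + annotated children) / all kinship terms.
    `parents` maps each term to its list of parent terms (DAG, multi-parent)."""
    children = {}
    for child, ps in parents.items():
        for p in ps:
            children.setdefault(p, []).append(child)
    kin = _reachable(term, parents) | _reachable(term, children)
    return len(kin & annotated) / len(kin) if kin else 0.0

# Toy DAG: A is the root, D has two parents (B and C).
toy_parents = {"B": ["A"], "C": ["A"], "D": ["B", "C"]}
```

A term whose ancestors and descendants are all annotated gets PQI 1.0; an isolated term gets 0.0, which is why zero-PQI dependent terms are discarded when filtering the associative relations.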
At least two other approaches could be explored at the cluster level. The first and most obvious one is to link terms that share one or more genes (co-annotated genes); these terms are likely to be the enriched terms of the annotation. Such an approach can, however, be elusive, as most of the terms are annotated with only one gene. A second approach is to link sub-DAGs with high-PQI terms (co-annotated terms), if we consider the PQI as a quantification of the relevance of a sub-DAG (pathway) for the gene cluster. Despite their obvious differences, semi-automatic annotation based on evidence and literature mining combined with statistical scoring of the gene-term associations are both efficient methods for associating relevant GO Process terms with a gene cluster. The significantly higher PQIs obtained using a combination of both methods is an indication of their synergy: they do not contain the same information. We achieved a more robust and complete annotation by combining the coherence of GOA with PubGene's exploratory and bibliographic qualities. Finally, networks of GO terms can be built with associative relations in order to highlight cooperative and competitive pathways and their connected molecular functions. Our methodology is an effort to improve the current situation, which is clearly suboptimal; the precise degree of improvement remains to be determined, but this is outside the scope of the present paper. The GOA database aims to provide high-quality supplementary GO annotation to proteins in the UniProt (SWISS-PROT/TrEMBL) databases. Most of the GOA content comes from the manual curation of scientific literature, with semi-automatic and electronic techniques being used to support the annotation process. Therefore, an evidence code assesses the reliability of each gene-term association.
These codes are established by the GO Consortium. Obsolete terms (i.e., those flagged in the term_comment attribute) with no updated replacement were discarded from the literature annotation. Terms poorly associated with the gene cluster were also discarded. PubGene is a web-based database of gene-gene and gene-term associations based on co-occurrences in the biomedical literature. It provides a full-scale literature network for 25,000 human genes extracted from the titles and abstracts of over 14 million article records from the MEDLINE citation database of the National Library of Medicine (NLM). The method assumes that if two genes are mentioned in the same MEDLINE record, there should be an underlying biological relationship. Genes are linked to terms from the Gene Ontology, and a probabilistic score is computed that reflects the gene-term association strength, which can be used to assess the relevance of each individual term. The computation of this probabilistic score assumes that occurrences of the gene and the term are independent. Therefore, a binomial formula can be used to estimate the probability of finding the gene and the term together in an article based on their respective frequencies in the whole database. Assuming a normal distribution, the expected number of articles mentioning the gene and the term is then compared to the number of times they actually occur together. We wanted to determine what the method was able to retrieve for a minimal cluster, that is, for a single gene. We chose the cell division cycle 2 gene (cdc2), whose product is involved in the G2/M transition of the cell cycle, and a cluster of 37 down-regulated genes (Down cluster). Evaluations for both Up and Down clusters were quantitatively and qualitatively similar; we will therefore only detail here the results obtained for the down-regulated cluster (Down cluster). The methodology was quantitatively evaluated in a multi-cluster analysis concerning 14 clusters chosen from 7 independent studies.
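The two statistical devices used in this methodology, the binomial co-occurrence score under the independence assumption just described, and the hypergeometric enrichment test applied later to the merged term sets, can be sketched as follows. This is a minimal illustration with invented counts; at MEDLINE scale (14 million records) one would use the normal approximation mentioned above rather than these exact sums, and the function and variable names are our own, not the authors' notation.

```python
import math

def cooccurrence_pvalue(n_gene, n_term, k_together, n_total):
    """P(X >= k_together) with X ~ Binomial(n_total, p), where p is the
    product of the gene and term frequencies (independence assumption)."""
    p = (n_gene / n_total) * (n_term / n_total)
    return sum(math.comb(n_total, i) * p**i * (1 - p)**(n_total - i)
               for i in range(k_together, n_total + 1))

def enrichment_pvalue(k, n_cluster, big_k, big_n):
    """Hypergeometric tail P(X >= k): at least k of the n_cluster genes
    carry a term that big_k of the big_n reference genes carry."""
    upper = min(big_k, n_cluster)
    return sum(math.comb(big_k, i) * math.comb(big_n - big_k, n_cluster - i)
               for i in range(k, upper + 1)) / math.comb(big_n, n_cluster)
```

A gene-term pair seen together far more often than its expected frequency yields a small co-occurrence p-value, which is the sense in which associations scoring below 0.01 are treated as significant; the enrichment test plays the analogous role at the level of the whole cluster.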
Detailed information on these clusters can be found in Table. We looked for the GO terms that best characterise this set; these terms will be among those relevant to a high number of genes. We used the hypergeometric distribution to identify statistically significant enrichments. We carried out analyses with a generalized estimating equation (gee) model to estimate parameters for correlated data, assuming a Poisson error and a log-link function, using the R package 'geepack'. We measured, for each GO term and for each annotation method, the number of occurrences and the number of associated genes. We carried out statistical analyses only on the number of genes per term because these two variables were strongly correlated. For the terms found by both methods, we tested the number of genes per term against the annotation method. The PQI of a term is defined as (NPa + NCa)/N, where N is the total number of parent and child nodes in the sub-DAG, NPa is the number of annotated parent nodes and NCa is the number of annotated child nodes. We used the PQI to compare the evidence and literature annotations to a combination of both annotations. We also used it to evaluate the global relevance of the GO terms found only by dependence. In this case, we calculated the PQIs for the combination of the evidence and literature and compared them to the PQIs obtained after the addition of (i) the terms found only by dependence, and (ii) a random term set of equal size. We compared PQIs using the Kruskal-Wallis Rank Sum Test with a Bonferroni correction for multiple comparisons. Finally, we used the PQI to filter the associative relations and to limit them to reinforcing the annotation: dependent terms with a zero PQI were discarded. A Directed Acyclic Graph (DAG) is a hierarchy in which a node can have multiple parents and children. The highest node, the one having no parents, is called the root node, and the deepest nodes, those with no children, are the leaves. Thus, a node can be characterised by its position within the DAG.
The depth or granularity of a node is its minimum distance from the root node. We carried out a systematic inspection of each gene-term association retrieved from the GOA Human database. Some annotations are only indirectly linked to the gene product (e.g., a molecular mechanism implied by or associated with the gene activity but not the gene activity itself) or are inferred by sequence similarity, etc. Associations were sorted into three categories depending on their relevance: good associations, in which the GO term was directly linked to the gene product; bad associations, in which the GO term was misassociated with the gene product or non-informative; and doubtful associations, in which the GO term could be indirectly linked to the gene product, for example through Across Hierarchies (AH) relations. The version of GO used throughout this study is the February 2005 monthly release, available from the GO website. DAG graphical representations were achieved using dot v1.10 and Graphviz 1.13(v16). All other graphics and statistical analyses were done using the R language version 2.1.0. All computational tasks and statistical analyses were carried out by MA. The biological relevance of the method was primarily evaluated by AM, assisted by CC, JM and MA. Expertise on glioblastomas and leukemias was supplied by MdT and MG, respectively. AB and JM supervised this study and contributed to continuous discussions about its shortcomings. All authors have read and approved the final manuscript. GO terms associated with the Down cluster: Venn category, GO ID, Name, GOA Human annotation with evidence codes and genes, PubGene annotation with significant scores and genes. Up cluster. (A) Boxplots of the PQIs for the Evidence (GOA), Literature (PUB) and combination of both (GOAPUB). (B) Boxplots of the PQIs for the evidence and literature terms (GOAPUB), for the same set enriched with the associative relations (+Dep.) or a random term set of the same size (+Random). Glioblastomas clusters.
(A) Boxplots of the PQIs for the Evidence (GOA), Literature (PUB) and combination of both (GOAPUB). (B) Boxplots of the PQIs for the evidence and literature terms (GOAPUB), for the same set enriched with the associative relations (+Dep.) or a random term set of the same size (+Random). Acute Lymphocyte Leukemias (ALL) clusters. (A) Boxplots of the PQIs for the Evidence (GOA), Literature (PUB) and combination of both (GOAPUB). (B) Boxplots of the PQIs for the evidence and literature terms (GOAPUB), for the same set enriched with the associative relations (+Dep.) or a random term set of the same size (+Random). Circadian cluster. (A) Boxplots of the PQIs for the Evidence (GOA), Literature (PUB) and combination of both (GOAPUB). (B) Boxplots of the PQIs for the evidence and literature terms (GOAPUB), for the same set enriched with the associative relations (+Dep.) or a random term set of the same size (+Random). Lung cluster. (A) Boxplots of the PQIs for the Evidence (GOA), Literature (PUB) and combination of both (GOAPUB). (B) Boxplots of the PQIs for the evidence and literature terms (GOAPUB), for the same set enriched with the associative relations (+Dep.) or a random term set of the same size (+Random). Retina clusters. (A) Boxplots of the PQIs for the Evidence (GOA), Literature (PUB) and combination of both (GOAPUB). (B) Boxplots of the PQIs for the evidence and literature terms (GOAPUB), for the same set enriched with the associative relations (+Dep.) or a random term set of the same size (+Random). Alzheimer's disease cluster. (A) Boxplots of the PQIs for the Evidence (GOA), Literature (PUB) and combination of both (GOAPUB). (B) Boxplots of the PQIs for the evidence and literature terms (GOAPUB), for the same set enriched with the associative relations (+Dep.)
or a random term set of the same size (+Random). Enriched GO Process terms (p <= 0.05) associated with at least 4 genes in the GBM cluster. Color: red = terms found in GOA and PubGene, green = terms only found in GOA, blue = terms only found in PubGene. Shape: rectangle = significantly enriched annotated terms (p <= 0.05); ellipse = non-significantly enriched annotated terms (p > 0.05). Enriched GO Process terms associated with at least 4 genes in the bcr-abl cluster. Color: red = terms found in GOA and PubGene, green = terms only found in GOA, blue = terms only found in PubGene. Shape: rectangle = significantly enriched annotated terms (p <= 0.05); ellipse = non-significantly enriched annotated terms (p > 0.05). LocusLink IDs, symbols, aliases and names of the down-regulated genes."} {"text": "Genomic scale projects have compounded the need for rapid and reliable functional annotation methods. Traditional experimental approaches have been outpaced, resulting in an ever-increasing proportion of missing annotations. Computational approaches, including those based on sequence, expression, interaction and tertiary structure, have the potential to reduce the growing annotation deficit. Despite a recent increase in the number and variety of prediction methods, the computational annotation of protein function remains difficult. This stems from a combination of issues such as the inherent limitations of current tools and databases, the difficulty of assessing the predictive power of different methods, and more fundamental problems related to the ambiguity of the definition of function itself. Assessment experiments à-la CASP keep revisiting the problems [it presents]. Function prediction is indeed a challenging endeavor that is further hampered by the lack of a standard assessment framework [33].
For further information and updates on AFP meetings see:"} {"text": "Campylobacter jejuni is the leading bacterial cause of human gastroenteritis in the developed world. To improve our understanding of this important human pathogen, the C. jejuni NCTC11168 genome was sequenced and published in 2000. The original annotation was a milestone in Campylobacter research, but is outdated. We now describe the complete re-annotation and re-analysis of the C. jejuni NCTC11168 genome using current database information, novel tools and annotation techniques not used during the original annotation. Re-annotation was carried out using sequence database searches such as FASTA, along with programs such as TMHMM for additional support. The re-annotation also utilises sequence data from additional Campylobacter strains and species not available during the original annotation. Re-annotation was accompanied by a full literature search that was incorporated into the updated EMBL file [EMBL: AL111168]. The C. jejuni NCTC11168 re-annotation reduced the total number of coding sequences from 1654 to 1643, of which 90.0% have additional information regarding the identification of new motifs and/or relevant literature. Re-annotation has led to 18.2% of coding sequence product functions being revised. Major updates were made to genes involved in the biosynthesis of important surface structures such as lipooligosaccharide, capsule and both O- and N-linked glycosylation. This re-annotation will be a key resource for Campylobacter research and will also provide a prototype for the re-annotation and re-interpretation of other bacterial genomes. Campylobacter jejuni is the leading bacterial cause of human gastroenteritis in the developed world. C. jejuni infection has also been associated with post-infection sequelae including septicaemia and neuropathies such as Guillain-Barré Syndrome (GBS). The C. jejuni NCTC11168 genome project was published in 2000.
Since the publication of the C. jejuni NCTC11168 genome sequence in 2000, there has been a spectacular increase in research on this important human pathogen. One result of this has been significant revisions of the genetic loci that code for important surface structures on C. jejuni strains. The surface polysaccharide region has since been identified as a capsule locus (Cj1413c – Cj1448c). The O-linked glycosylation pathway (Cj1293 – Cj1342) has been characterised, and an N-linked glycosylation pathway has been identified in C. jejuni (Cj1119 – Cj1130). The N-linked general glycosylation system was initially thought to only be present in eukaryotes. To date, up to 30 proteins modified with the same heptasaccharide glycan structure have been identified. Research over the last 7 years on C. jejuni, coupled with the publication of a further 2 C. jejuni genome sequences and additional Campylobacter species, has heightened interest in this pathogen. Re-annotation is defined as the process of annotating a previously annotated genome. In this study, we describe the re-annotation and re-analysis of the C. jejuni NCTC11168 genome. Manual re-annotation of all coding sequences (CDSs) was carried out using current annotation techniques. Literature searches, updates to genome structure and additional unique genome searches were carried out to produce the most comprehensive annotation of any Campylobacter genome to date. The re-annotation of the C. jejuni NCTC11168 genome also represents a useful model for the re-evaluation of other bacterial genomes. A complete re-annotation of the C. jejuni NCTC11168 genome was performed, resulting in the reduction of the total number of CDSs from 1654 to 1643. This reduction was due to the merging of adjacent CDSs or the removal of CDSs. Three CDSs originally designated as pseudogenes were removed as a result of merging with adjacent pseudogenes.
CDSs designated as pseudogenes were also updated to reflect the complete amino acid sequence for the encoded protein regardless of expression. Phase-variable CDSs that contained an intersecting homopolymeric region between adjacent CDSs on separate frames were merged. This allowed the complete amino acid sequence for appropriate genes to be obtained regardless of phase. Re-interpretation of phase-variable CDSs resulted in the removal of seven CDSs. One CDS (Cj1520) was removed because of the recently discovered CRISPR structural moieties. A systematic re-annotation of all CDSs was performed. For the purpose of this re-annotation, all CDSs with additional information have had an 'updated' note qualifier attached. This qualifier contains consistent free-hand descriptions of recently identified motifs, relevant similarity search results and any characterisation work carried out within Campylobacter species/strains or any orthologs in similar microorganisms. Additionally, the 'updated' note qualifier also contains reasoning for including 'putative' or not within the product function. Putative designations infer an accepted product function without definitive evidence. For each CDS, a full literature search was performed. In total, 64.5% of CDSs have had one or more literature qualifier added. Interestingly, of all the literature added (2092 references), 50.5% have been published after the year 2000. Considering there was no literature qualifier in the original annotation, this illustrates the depth of research that has been carried out since 2000 and further supports the need to make use of this information in a re-annotation. Detailed statistics on genome modifications are given in Table . The C. jejuni N-linked glycosylation pathway has been fully characterised; this work defined the pglA-K (protein glycosylation) genes and has updated all product functions for genes Cj1119c – Cj1130c.
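The intersecting homopolymeric tracts behind these phase-variable merges can be located with a simple run-length scan; a minimal sketch, where the tract base (G) and minimum length are illustrative assumptions rather than values from the annotation:

```python
def homopolymeric_tracts(seq, base="G", min_len=8):
    """Return (start, length) pairs for every run of `base`
    in `seq` that is at least `min_len` long."""
    tracts = []
    i = 0
    while i < len(seq):
        if seq[i] == base:
            j = i
            while j < len(seq) and seq[j] == base:
                j += 1
            if j - i >= min_len:
                tracts.append((i, j - i))
            i = j
        else:
            i += 1
    return tracts

# A poly-G tract of length 10 starting at 0-based position 2.
print(homopolymeric_tracts("ATG" + "G" * 9 + "CTA"))  # → [(2, 10)]
```

Adjacent CDSs whose boundaries intersect such a tract on separate frames would then be candidates for merging.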
Since the original annotation, significant new information has been derived on the genetic loci encoding the four main carbohydrate surface structures. The LOS locus (Cj1131c – Cj1152c) described in the original annotation was updated to include recent product functions and gene names including neuA1, B1, C1 and hldDE. The O-linked glycosylation loci (Cj1293 – Cj1342), involved in flagellar glycosylation, have been updated to include the neu, pse and maf genes. Finally, the capsule locus (Cj1413c – Cj1448c) has now been updated to include the kps and hdd genes. Additional genome-wide updates were also carried out, of which a large proportion entailed adding specificity to existing product function. For example, the identification of a new PFAM or PROSITE motif has allowed the product function to become further specified, e.g. putative transport protein modified to putative MFS (Major Facilitator System) transport protein. A complete list of changes throughout the C. jejuni NCTC11168 genome is provided in Additional File . We carried out a re-analysis on all pseudogenes in the NCTC11168 genome. Pseudogene identification is a challenging process where discrepancies exist between pseudogene assignment techniques. The majority of revisions we carried out incorporated multiple features created from different coordinates on more than one frame. This process is often complicated, with support needed from FASTA and TBLASTX search results. Completion of this re-analysis resulted in modification of 19 out of 20 pseudogenes (Table ), one example being Cj0968/Cj0969. An example of the difficulty and complexity associated with pseudogene designation is observed when viewing the CDSs Cj0522, Cj0523 and Cj0524 within C. jejuni NCTC11168. These three CDSs are represented as one whole CDS on a single frame within C. jejuni RM1221 (Cje0628). The three CDSs are large enough to be represented as individual CDSs and in C. jejuni NCTC11168 have been represented on more than one frame.
The question can be asked as to whether these CDSs (which are intact in C. jejuni RM1221) represent a pseudogene in C. jejuni NCTC11168. Given the fact that in C. jejuni RM1221 these three CDSs do actually code for a product, it is more likely that they represent a pseudogene in C. jejuni NCTC11168. In this re-annotation, our intention was to carry out a full mark-up of existing pseudogenes; however, the potential for a pseudogene has been noted. The pseudogenes of C. jejuni NCTC11168 and C. jejuni RM1221 are compared in Additional File . The C. jejuni 81–176 genome has not been fully annotated so could not be used in this comparison. This is also the case for C. coli RM2228, C. lari RM2100 and C. upsaliensis 3195, which only have an estimation of pseudogene numbers based on a subset of genes. In C. jejuni NCTC11168, 63% (12/19) of the pseudogenes are shared with C. jejuni RM1221. In contrast to the 19 pseudogenes in C. jejuni NCTC11168, C. jejuni RM1221 contains 47 pseudogenes. Assuming these are genuine pseudogenes, this would imply that C. jejuni NCTC11168 and C. jejuni RM1221 share a core set of ancestral pseudogenes. Even with the variation of isolation dates, source and geographical location, there is substantial conservation of pseudogene type. It is speculative to suggest when and how the additional pseudogenes in C. jejuni RM1221 arose, or when and how the C. jejuni NCTC11168 genome lost CDSs as pseudogenes since divergence occurred. The frequency and importance of pseudogene formation in microorganisms has attained added significance in recent years with the emergence of genome reduction theories and enhanced virulence through pathoadaptive mutations. The significance of pseudogenes in early genome annotations was frequently ignored, as they were considered sequencing artefacts. However, given the recent realisation of the importance of pseudogenes in pathoadaptive mutations, a greater significance is placed on their identification. An example is Escherichia coli K-12, for which an additional 161 pseudogenes have been predicted beyond the original single pseudogene identified. Analysis of further Campylobacter strains and species, along with additional epsilon proteobacteria species, will aid our understanding of this emerging area of interest. Also, greater understanding of pseudogene dynamics, and in particular innovative pseudogene identification techniques, will yield more information about the actual number and purpose of these entities within microorganisms. Phase-variable CDSs containing hypervariable regions were also analysed. The initial annotation identified a number of hypervariable sequences within the C. jejuni genomic shotgun sequence, notably in the O-linked glycosylation and capsule loci. Further research on these loci has illustrated the impact of phase-variation on microorganism pathogenicity. As well as CDS updates, novel features were also added to the re-annotation. For example, the recently identified Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) regions within Campylobacter have now been incorporated within the C. jejuni genome. As a result, one CDS (Cj1520) has been removed. This CDS was previously annotated as having five repeat regions. Thus, the genome now contains a CRISPR repeat region in place of the removed CDS. A signal recognition particle (SRP) RNA was identified upstream of Cj0046; the SRP is a universally conserved ribonucleoprotein involved in the co-translational targeting of proteins to membranes. Another motif was identified upstream of Cj0453 (thiamin biosynthesis protein ThiC).
The RFAM motif is a conserved structure (THI element), involved in thiamin-regulation. These motifs were found through additional genome searches, which included an RFAM database search to discover any non-coding RNAs; this search identified two new non-coding RNA structures, one being RFAM RF00169, the bacterial signal recognition particle (SRP) RNA. The final step of the re-annotation process was the incorporation of Gene Ontology (GO) annotation. GO annotation attempts to link three structured, controlled vocabularies (ontologies) that describe gene products in terms of their associated biological processes, cellular components and molecular functions in a species-independent manner. This was achieved by comparison with C. jejuni RM1221 and adopting the GO annotation of orthologous CDSs. The re-annotation also included a complete literature search of all CDS numbers and gene names using PubMed and HighWire. In summary, the re-annotation and re-analysis of the C. jejuni NCTC11168 genome sequence has led to substantial updates across the entire genome, incorporating a vast amount of research information produced since the original annotation in 2000 and also integrating data from additional Campylobacter species and strains. Major updates include noteworthy modifications to the 4 main surface structure loci in the genome, 18.2% of genome product functions being updated and 90.0% of all CDSs now having additional information. The inclusion of literature searches and a GO annotation alongside genome-wide structural modifications has resulted in C. jejuni NCTC11168 being the most comprehensively annotated Campylobacter genome to date. Manual re-annotation of all previously annotated C. jejuni NCTC11168 CDSs was carried out using sequence database searches such as FASTA, motif databases such as PFAM and PROSITE, and additional supporting programs such as SIGNALP. Advances in genome annotation techniques that were unavailable during the original annotation have led to updated interpretation of pseudogenes and phase-variable CDSs. Using guidance from TBLASTX search results, we carried out a full re-analysis of all pseudogenes. CDSs designated as pseudogenes have been updated to reflect the complete amino acid sequence for the encoded protein regardless of expression. This has caused differences from the amino acid sequence of the previous annotation. Some pseudogene modifications entailed merging two or more adjacent, in-frame CDSs (previously annotated as separate pseudogene CDSs) to create a single pseudogene containing internal stop codons. In other cases, pseudogene features were created with multiple coordinates representing one or more frameshifts in the CDS – these had previously only detailed the start and stop coordinates, so did not reflect the true position of the non-mutated CDS. In both cases the assignment of coordinates was based on matches to homologues determined through FASTA searches. CDSs containing an intersecting homopolymeric tract were merged to reflect the complete amino acid sequence for appropriate genes regardless of phase. This is analogous to the scenario described above for frameshifted pseudogenes. This modification was carried out for two CDSs with an intersecting homopolymeric tract. The joining of such CDSs was not undertaken in the original annotation. OG carried out the re-annotation process and drafted the manuscript. SDB assisted with the re-annotation process. MTH assisted with running additional programs used in the re-annotation. JP, ND and BWW participated in the conception and supervised the design of the study. All authors submitted comments on drafts and read and approved the final manuscript. Additional files: C. jejuni functional classification (created at Sanger Institute); distribution of functional classification before/after re-annotation; changes to functional classification categories before and after the re-annotation; changes to CDS functions and functional classifications; CDSs modified in the C. jejuni NCTC11168 re-annotation; pseudogene comparison between C. jejuni NCTC11168 and C. jejuni RM1221."} {"text": "The number of sequences compiled in many genome projects is growing exponentially, but most of them have not been characterized experimentally. An automatic annotation scheme is urgently needed to reduce the gap between the amount of new sequences produced and reliable functional annotation. This work proposes rules for automatically classifying fungus genes. The approach involves elucidating the enzyme classifying rules that are hidden in the UniProt protein knowledgebase and then applying them for classification. The association algorithm, Apriori, is utilized to mine the relationship between the enzyme class and significant InterPro entries. There were five datasets collected from the Swiss-Prot for establishing the annotation rules. These were treated as the training sets. The TrEMBL entries were treated as the testing set. A correct enzyme classification rate of 70% was obtained for the prokaryote datasets and a similar rate of about 80% was obtained for the eukaryote datasets. The fungus training dataset, which lacks an enzyme class description, was also used to evaluate the fungus candidate rules. A total of 88 out of 5085 test entries were matched with the fungus rule set. These were otherwise poorly annotated using their functional descriptions. The feasibility of using the method presented here to classify enzyme classes based on the enzyme domain rules is evident.
The rules may also be employed by protein annotators in manual annotation or implemented in an automatic annotation flowchart. The number of sequences generated by many genome projects is soaring exponentially, but most of them have not been characterized experimentally. Manual annotation methods have been proposed by experts and are popular at genome centers, but their annotation capacity is exceeded by the fast-growing genome data. An automatic annotation scheme is urgently needed to speed up reliable functional annotation of newly produced sequences. Automatic annotation provides an efficient procedure for analyzing gene sequences. Most automatic solutions used to characterize gene sequences are based on a high-level sequence similarity search against known protein databases, for example using the BLAST or FASTA programs. The correlation between sequence composition and functional characterization provides the foundation for transferring functional knowledge from a biochemically characterized protein to a homologous but uncharacterized one. However, sequence composition bias and database updating commonly influence the results of similarity searches, and they do not yield the exact relationship between biological function and domain composition based on the similarity threshold used. In the post-genomic era, functional annotations are of great importance in understanding real cellular processes. A variety of enzyme and pathway databases, including EcoCyc, ENZYME and KEGG, have been built to facilitate the prediction of metabolic pathways. Such databases are supplied as reference databases in the virtual construction of the metabolic networks of other organisms. On the pathway map, enzymes are the main components used for linking the metabolic networks. The fundamental units of enzyme structure governing folding and function are the domains of a protein.
[This work proposes a machine learning method for identifying enzyme classes according to the rules that are related to the protein domain composition. Using rules generated by machine learning algorithms, Kretschmann et al. and Bazz [et al. have suc [et al. ,8. In th [et al. and medi [et al. . This inThis work seeks to annotate unknown genes and establishes virtual metabolic pathways using the bioinformatics approach based on progress made in the Monascus genome project at the authors' institute. Only few Monascus genes have been biochemically characterized so far. Numerous well-characterized proteins have been stored in a public database so that it is feasible to mine the classified rules from a protein knowledgebase. The BLAST is a fast but insufficient method for annotating unknown genes because it does not provide information on the functional domain. Analyzing the constituent domains of a gene enables the determination of possible functions of the gene. However, making a decision regarding the annotation of a multi-domain protein is difficult. In this study, an annotation model was established by applying rules derived from the domain compositions in some well-characterized proteins. The concept of annotation using the domain composition was further investigated. Five datasets Table were useMany data mining methods have been applied in the biological researches. For example, a decision tree has been used in keyword annotation in the Swiss-Prot and PIGSAs presented in Table Table Furthermore, the accuracy of the presented method was compared with the rules obtained from the InterPro database. These rules were parsed where the IPR Acc's were cross-referenced to ENZYME in the entry_xref table of the InterPro database. The rules such as {IPR001711, EC 3.1.4.11} were retained for providing the enzyme identification. There were five testing datasets used to evaluate the parsed ones. 
As shown in Table , the precision and confidence of each EC class were also evaluated in the fungus dataset. Both quantities varied among the EC classes tested (data not shown). However, a precision of greater than 75% was obtained for 60% of the EC classes tested. In this study, the Swiss-Prot entries were chosen as the training set while the TrEMBL entries served as the test set. We aimed to find the EC classifying rules that are hidden in the protein knowledgebase and to estimate the accuracy of the classifying method. The rules mined and presented here can be used by an annotator to perform manual annotation. They can also be implemented in an automatic annotation flowchart, and are feasible to use in identifying enzyme classes based on their IPR signature. This report proposed an alternative approach to employing the association algorithm. The association algorithm is commonly used to identify large and frequent item sets and to mine hidden relationships among items. The concept can be applied in many fields other than market basket analysis. The method is extended here to mine association rules which are then applied to identify enzyme classes. The current prediction scheme emphasizes identifying enzymes in taxonomically close datasets. Rule sets generated from the eukaryote training datasets can be used to assign EC classes accurately to poorly annotated entries whose real enzyme function remains unknown. Extending the method to predict other types of data, including transcription factors and structural proteins, is also worthwhile. However, low coverage is a shortcoming of the presented scheme. The matching coverage depends on the quality of the training dataset, which may be extended as a combination of various datasets, each being close in taxonomic relationship.
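The rule-quality measures used by the association algorithm (support, confidence and lift) can be estimated directly from transaction counts; a minimal sketch with toy transactions, which are illustrative and not entries from the training sets:

```python
# Each transaction is a protein's InterPro domains plus its EC class.
TRANSACTIONS = [
    {"IPR000873", "IPR006163", "IPR010080", "1.2.1.31"},
    {"IPR000873", "IPR006163", "1.2.1.31"},
    {"IPR000873", "IPR999999"},
]

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in TRANSACTIONS) / len(TRANSACTIONS)

def confidence(antecedent, consequent):
    """Estimate of P(consequent | antecedent) over the transactions."""
    return support(antecedent | consequent) / support(antecedent)

def lift(antecedent, consequent):
    """Confidence normalised by the consequent's baseline frequency;
    lift > 1 indicates positive correlation."""
    return confidence(antecedent, consequent) / support(consequent)
```

Rules are then kept only when they clear the chosen support, confidence and lift thresholds.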
Moreover, more rules may be generated using other association algorithms besides the Apriori one. There were five distinctly taxonomic datasets, referring to the NEWT UniProt taxonomy. The WEKA machine learning package was used for rule mining. Its Apriori module generates candidate association rules whose strength is measured by support and confidence:

Support (A ⇒ B) = P(A+B)    (1)

Confidence (A ⇒ B) = P(B|A)    (2)

where P(B|A) was the conditional probability of B given A, and P(A) or P(B) was the probability of A or B over all instances. The probability was defined as the observed frequency in the data set. The support of the rule was the relative frequency of transactions containing both A and B. The lift was a related measure of the strength of the association: positive correlation was indicated by lift > 1, while negative correlation was indicated by lift < 1. Large frequent itemsets were subdivided into smaller ones in numerous ways to generate the candidate association rules. The candidate association rules were redundant and many of them were subsets of larger frequent itemsets. However, the rules mined herein were of the form {A,B,C}⇒{D} but not {A,B}⇒{C,D} or {A}⇒{B,C,D}. For example, {IPR000873, IPR006163, IPR010080} ⇒ {1.2.1.31}. Because most of the support values of items were between 1 and 50, a minimum support value of 0.09% was set herein to indicate that the attribute must appear 2.7 times in 3000 instances. The threshold of confidence was 0.6 and the corresponding lift value was between 10 and 30. The rules satisfying these criteria were stored in the MySQL database for further evaluation. The testing dataset was used to evaluate the candidate rules governing the enzyme domain composition. Each test datum (separated by commas) was treated as a single string and matched with the set of rules to find the corresponding EC class. The precision of EC class matching (testing dataset to rule set) and the confidence were evaluated using the equations given by Kretschmann et al., with

n = TP + FP    (6)

where TP represents the "True Positives", FP represents the "False Positives" and z is a constant, 1.96 (for 95% confidence). SHC implemented the computational approach, performed the analysis and drafted the manuscript. CCC and GYF participated in the design of this study. THL participated in the design of this study, interpreted the results, and wrote the manuscript. All authors read and approved the final manuscript. All rules generated from the fungus training dataset."} {"text": "We present a statistical graphical model to infer specific molecular function for unannotated protein sequences using homology. Based on phylogenomic principles, SIFTER accurately predicts molecular function for members of a protein family given a reconciled phylogeny and available function annotations, even when the data are sparse or noisy. Our method produced specific and consistent molecular function predictions across 100 Pfam families in comparison to the Gene Ontology annotation database, BLAST, GOtcha, and Orthostrapper. We performed a more detailed exploration of functional predictions on the adenosine-5′-monophosphate/adenosine deaminase family and the lactate/malate dehydrogenase family, in the former case comparing the predictions against a gold standard set of published functional characterizations. Given function annotations for 3% of the proteins in the deaminase family, SIFTER achieves 96% accuracy in predicting molecular function for experimentally characterized proteins as reported in the literature.
The accuracy of SIFTER on this dataset is a significant improvement over other currently available methods such as BLAST (75%), GeneQuiz (64%), GOtcha (89%), and Orthostrapper (11%). We also experimentally characterized the adenosine deaminase from Plasmodium falciparum, confirming SIFTER's prediction. New genome sequences continue to be published at a prodigious rate. However, unannotated sequences are of limited use to biologists. To computationally annotate a hypothetical protein for molecular function, researchers generally attempt to carry out some form of information transfer from evolutionarily related proteins. Such transfer is most successfully achieved within the context of phylogenetic relationships, exploiting the comprehensive knowledge that is available regarding molecular evolution within a given protein family. This general approach to molecular function annotation is known as phylogenomics, and it is the best method currently available for providing high-quality annotations. A drawback of phylogenomics, however, is that it is a time-consuming manual process requiring expert knowledge. In the current paper, the authors have developed a statistical approach, referred to as SIFTER, that allows phylogenomic analyses to be carried out automatically. The authors present the results of running SIFTER on a collection of 100 protein families. They also validate their method on a specific family for which a gold standard set of experimental annotations is available. They show that SIFTER annotates 96% of the gold standard proteins correctly, outperforming popular annotation methods including BLAST-based annotation (75%), GOtcha (89%), GeneQuiz (64%), and Orthostrapper (11%). The results support the feasibility of carrying out high-quality phylogenomic analyses of entire genomes. The post-genomic era has revealed the nucleic and amino acid sequences for large numbers of genes and proteins, but the rate of sequence acquisition far surpasses the rate of accurate protein function determination.
Sequences that lack molecular function annotation are of limited use to researchers, so automated methods for molecular function annotation attempt to make up for this deficiency. But the large number of errors in protein function annotation propagated by automated methods reduces their reliability and utility. Most of the well-known methods or resources for molecular function annotation, such as BLAST and GOFigure, rely on sequence similarity, using the E-value as an indicator of homology: a functional annotation is heuristically transferred to the query sequence based on reported functions of similar sequences. SIFTER takes a different approach to function annotation. Phylogenetic information, if leveraged correctly, addresses many of the weaknesses of sequence-similarity-based annotation transfer. Other approaches, referred to as context methods, predict protein function using evolutionary information and protein expression and interaction data. Phylogenomics is a methodology for annotating the specific molecular function of a protein using the evolutionary history of that protein as captured by a phylogenetic tree. Phylogenomics applies knowledge about how molecular function evolves to improve function prediction. Specifically, phylogenomics is based on the assertion that protein function evolves in parallel with sequence. It is broadly recognized that this method produces high-quality results for annotating proteins with specific molecular functions. Bayesian methodologies have influenced computational biology for many years. Three properties of the Bayesian approach make it uniquely suited to molecular function prediction. First, Bayesian inference exploits all of the available observations, a feature that proves to be essential in this inherently observation-sparse problem.
Second, the constraints of phylogenomics\u2014that function mutation tends to occur after a duplication event or that function evolution proceeds parsimoniously\u2014are imposed as prior biases, not as hard constraints. This provides a degree of robustness to assumptions that is important in a biological context. Third, Bayesian methods also tend to be robust to errors in the data. This is critical in our setting, not only because of existing errors in functional annotations, but also because phylogeny reconstruction and reconciliation often imperfectly reflect evolutionary history.The current instantiation of SIFTER uses Bayesian inference to combine all molecular function evidence within a single phylogenetic tree, using an evolutionary model of molecular function. A fully Bayesian approach to phylogenomics would integrate over all sources of uncertainty in the function annotation problem, including uncertainty in the phylogeny and its reconciliation, and uncertainty in the evolutionary model for molecular function. It is important to be clear at the outset that the current instantiation of SIFTER stops well short of full Bayesian integration. Rather, we have focused on a key inferential problem that is readily treated with Bayesian methods and is not accommodated by current tools in the literature\u2014that of combining all of the evidence within a single inferred tree using probabilistic methods. 
Technically, this limited use of the Bayesian formalism is referred to as "empirical Bayes." Extensions to a more fully Bayesian methodology are readily contemplated; for example, we could use techniques such as those used by MrBayes to integrate over uncertainty in the tree. All automated function annotation methods require a vocabulary of molecular function names, whether the names are from the set of Enzyme Commission (EC) numbers, Gene Ontology (GO) molecular function names, or free-text descriptions. SIFTER builds upon phylogenomics by employing statistical inference algorithms to propagate available function annotations within a phylogeny, instead of relying on manual inference, as fully described in the Methods section below. A "duplication event" captures a single instance of a gene duplicating into divergent copies of that gene within a single genome; a "speciation event" captures a single instance of a gene in an ancestral species evolving into divergent copies of a gene in distinct genomes of different species. Each of the internal nodes of a phylogeny represents one of these two events, although a standard phylogeny does not distinguish between the two. The reconciled phylogeny for a protein family, which discriminates duplication events from speciation events, makes this distinction explicit. The available, or observed, function annotations, associated with individual proteins at the leaves of the phylogeny, are propagated towards the root of the phylogeny and then propagated back out to the leaves of the phylogeny, based on a set of update equations defined by the model of function evolution. The result of the inference procedure is a posterior probability of each molecular function for every node in the tree (including the leaves), conditioned on the set of observed functions. The posterior probabilities at each node do not actually select a unique functional annotation for that node, so functional predictions may be selected using a decision rule based on the posterior probabilities of all of the molecular functions.
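The up-then-down propagation described above can be sketched as a two-pass sum-product algorithm on a small rooted tree. Everything below is a toy: the tree, the two-state function vocabulary, the transition matrix, and the 0.9 annotation reliability are stand-ins chosen only to show the mechanics, not SIFTER's actual model or parameters.

```python
# Toy two-pass (sum-product) propagation of sparse annotations on a tree.
from math import prod

TREE = {"root": ["anc1", "anc2"], "anc1": ["p1", "p2"],
        "anc2": ["p3", "p4"], "p1": [], "p2": [], "p3": [], "p4": []}
STATES = [0, 1]                    # two hypothetical molecular functions
T = [[0.9, 0.1], [0.1, 0.9]]       # P(child state | parent state)
PRIOR = [0.5, 0.5]
OBS = {"p1": 0, "p2": 0, "p4": 1}  # sparse "experimental" annotations
LIK = 0.9                          # probability an annotation is correct

def obs_factor(v, s):
    if v not in OBS:
        return 1.0
    return LIK if OBS[v] == s else 1.0 - LIK

def posteriors():
    order, stack = [], ["root"]
    while stack:                   # pre-order: every parent before its children
        v = stack.pop(); order.append(v); stack.extend(TREE[v])
    up = {}
    for v in reversed(order):      # upward pass: leaves to root
        up[v] = [obs_factor(v, s) *
                 prod(sum(T[s][sc] * up[c][sc] for sc in STATES)
                      for c in TREE[v]) for s in STATES]
    down = {"root": PRIOR[:]}
    for v in order:                # downward pass: root to leaves
        for c in TREE[v]:
            down[c] = [sum(down[v][s] * obs_factor(v, s) * T[s][sc] *
                           prod(sum(T[s][sb] * up[b][sb] for sb in STATES)
                                for b in TREE[v] if b != c)
                           for s in STATES) for sc in STATES]
    post = {}
    for v in order:                # combine passes and normalize
        unnorm = [up[v][s] * down[v][s] for s in STATES]
        z = sum(unnorm)
        post[v] = [x / z for x in unnorm]
    return post
```

Running `posteriors()` yields a distribution at every node, observed or not: the unannotated leaf p3 is pulled toward function 1 by its annotated sibling p4, while the root is pulled toward function 0 by the two annotations under anc1, mirroring how evidence flows up and back down the phylogeny.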
This procedure gives statistical meaning to the phylogenomic notion of propagating functional annotations throughout each clade descendant from a molecular function mutation event. We do not require that the mutation event coincide with a duplication event.The inference algorithm used in SIFTER has linear complexity in the size of the tree and thus is viable for large families. The complexity of SIFTER is exponential in the number of possible molecular functions in a family, owing to the fact that we compute posterior probabilities for all possible subsets of functions. In the families that we studied, the number of functions was small and this computation was not rate-limiting; in general, however, it may be necessary to restrict the computation to smaller collections of subsets. The rate-limiting step of applying SIFTER is phylogeny reconstruction; a full-genome analysis, given limited computational resources, might use lower-quality or precomputed phylogenies along with bootstrapping, or a subset of closely related species for the larger protein families. We found that lower-quality trees do not significantly diminish the quality of the results (results will be detailed elsewhere).In this report, we use only GO IDA- and IMP-derived annotations as observations for SIFTER, because of the high error rate and contradictions in the non-experimental annotations . However, SIFTER can also incorporate other types of annotations, weighted according to their reliability.We first present results for SIFTER's performance on a large set of proteins to show general trends in prediction and to evaluate the scalability of SIFTER. We then present results for a single protein family with a gold standard set of function characterizations to evaluate prediction quality in detail. We also describe results for the lactate/malate dehydrogenase family, although it does not have a gold standard dataset. 
The decisive benefit of a statistical approach to phylogenomics is evidenced on each of these different datasets. To evaluate the scalability, applicability, and relative performance of SIFTER, we predicted molecular function for proteins from 100 protein families available in Pfam. For each family in our 100-family dataset, we ran SIFTER on the associated reconciled tree with the experimental annotations (IDA and IMP) from the GOA database. SIFTER produced a total of 23,514 function predictions; we selected the subset of 18,736 that had non-experimental annotations from the GOA database and applied BLASTC, GOtcha, and Orthostrapper to this set. We chose these 100 families to meet one of the following two criteria: (1) greater than 10% proteins with experimental annotations (and more than 25 proteins), or (2) more than nine experimental annotations. Families with fewer than two incompatible experimental GO functions were excluded. The families had an average of 235 proteins, ranging from 25 to 1,116 proteins. On average, 3.3% of the proteins in a family had IDA annotations, and 0.4% had IMP annotations. Both SIFTER and Orthostrapper relied on this particularly sparse dataset for inference; evaluative techniques involving the removal of any of these annotations from inference tended to trivialize the results. Selecting well-annotated families via these criteria assists SIFTER, but it should also enhance the performance of all of the function transfer methods evaluated here. Note also that SIFTER does not require this level of annotation accuracy to be effective, as discussed below. Finally, it is important to note that many of the IEA annotations from the GOA database may come from one of the assessed methods, so we can expect consistency to be quite high. Of the 8,501 SIFTER predictions that were either identical or incompatible to the GO non-experimental annotations, 83.1% were identical.
The average percentage of identical function predictions by family was 82.9%, signifying that the size of the family does not appear to impact this percentage. The median identity by family was 90.7%, and the mode was 100% (representing 25 families). The minimum identity was 14.4% (Pfam family PF00536). We estimate that 38 of the families contained non-enzyme proteins, and we found no difference in the identity percentage of SIFTER on enzyme families versus non-enzyme families. Similarly, the total number of functional annotations used as observations in SIFTER does not appear to impact the identity percentage. These data suggest that a large percentage of incompatibility is concentrated within a few families. It is not entirely clear what property of those families contributes to the greater incompatibility; it may reflect how well studied the families are relative to the number of proteins in the family. Not all of the annotation methods predict functions for 100% of the proteins. The percentage of Orthostrapper predictions that were identical or compatible with the non-experimental GOA database function annotations in the 100-family dataset was 88%, but only 7% of proteins received Orthostrapper predictions. The difficulties encountered by Orthostrapper arise from the small number of proteins that are placed in statistically supported clusters, and the lack of annotations in these clusters. The latter limits the usefulness of the method to protein families with a high percentage of known protein functions, or to observed annotations with a low error rate. These results highlight the impact of the modeling choices in SIFTER and Orthostrapper. SIFTER uses Bayesian inference in a single phylogeny, addressing uncertainty in the ancestral variables in the phylogeny but presently not addressing uncertainty in the phylogeny itself.
In contrast, Orthostrapper's approach of bootstrapped orthology addresses uncertainty in the phylogeny, but neglects uncertainty in the ancestral variables. Our results indicate that the gains to be realized by treating uncertainty within a tree may outweigh those to be realized by incorporating uncertainty among trees, but it would certainly be of interest to implement a more fully Bayesian version of SIFTER that accounts for both sources of uncertainty. We compared SIFTER's prediction (the function with the single highest posterior probability) to the top-ranked prediction from BLAST-based methods, the top-ranked prediction from GOtcha, the unranked set of non-experimental terms from the GOA database, and unranked Orthostrapper predictions. On this broad set of proteins, SIFTER's predictions were compatible or identical with the non-experimental annotations from the GOA database for 80% of the predictions, while 67% of BLAST-based predictions, 80% of GOtcha predictions, and 78% of GOtcha-ni predictions were compatible or identical to the non-experimental GOA database annotations. It is not entirely clear what these numbers represent, in particular because some unknowable fraction of the IEA annotations in the GOA database were derived using these or related methods. Orthostrapper predictions achieved 88% (Ortho) and 92% (Ortho-ns) compatibility or identity with the GOA database, but because of the small percentage of proteins receiving predictions using Orthostrapper, the absolute number of compatible or identical annotations is much lower. The number of incompatible annotations is noteworthy: exact term agreement ranges from 16% to 91%, and the percentage of compatible or identical terms ranges from 45% to 95%. Collectively the methods must be producing a large number of incorrect annotations, as evidenced by the high percentage of disagreement in predictions.
It appears that there is no gold standard for comparison in the case of electronic annotation methods other than experimental characterization. We selected a well-characterized protein family, the adenosine-5′-monophosphate (AMP)/adenosine deaminase family, for evaluation of SIFTER's predictions against a gold standard set of function annotations. We assessed these using experimental annotations that we manually identified in the literature, accepting only first-hand experimental results that were successful in unambiguously characterizing the specific chemical reaction in question. The AMP/adenosine deaminase Pfam family contains 128 proteins. Based on five proteins with experimental annotations from the GOA database, we ran SIFTER to make predictions for the remaining 123 proteins. Of these remaining proteins, 28 had experimental characterizations found by the manual literature search. SIFTER achieved 96% accuracy (27 of 28) for predicting a correct function against this gold standard dataset. SIFTER performed better than BLAST, GeneQuiz, GOtcha, GOtcha-exp, and Orthostrapper. (Loss of adenosine deaminase activity in Homo sapiens results in one form of severe combined immune deficiency syndrome.) SIFTER's primary role may be to reliably predict protein function for many of the Pfam families or more generic sets of homologous proteins. The argument can be made that no automated function annotation method should be used in some of these cases because the data within a family are too sparse to support annotation transfer. Thus, a second role for SIFTER may be to quantify the reliability of function transfer in under-annotated sets of homologous proteins, by using the posterior probabilities as a measure of confidence in annotation transfer. A third role may be to select targets for functional assays so as to provide maximum coverage based on function transfer for automated annotation techniques.
Because of its Bayesian foundations, SIFTER is uniquely qualified to address these alternate questions in a quantifiable and robust way. Molecular function predictions cannot replace direct experimental evidence for producing flawless function annotations; however, they can usefully narrow the space of plausible functions and guide experimental work. In this section we first present the modeling, algorithmic, and implementational choices that were made in SIFTER. We then turn to a discussion of the methods that we chose for empirical comparisons. Finally, we present the protocol followed for the deaminase activity assays. In classical phylogenetic analysis, probabilistic methods are used to model the evolution of characters along the branches of a phylogenetic tree and to make inferences about ancestral states. SIFTER borrows much of the probabilistic machinery of phylogenetic analysis in the service of an inference procedure for molecular function evolution. The major new issues include the following: (1) given our choice of GO as a source of functional labels, functions are not a simple list of mutually exclusive characters, but are vertices in a DAG; (2) we require a model akin to Jukes–Cantor but appropriate for molecular function; (3) generally only a small subset of the proteins in a family are annotated, and the annotations have different degrees of reliability. We describe our approach to these issues below. The first step of SIFTER is conventional sequence-based phylogenetic reconstruction and reconciliation. Phylogenetic reconstruction is the computational bottleneck in the application of SIFTER. Thus, in the current implementation of SIFTER we have made use of parsimony methods instead of more computationally intense likelihood-based or Bayesian methods in phylogenetic reconstruction. This "empirical Bayes" simplification makes it possible to apply SIFTER to genome-scale problems. In detail, the steps of phylogenetic reconstruction implemented in SIFTER are as follows.
Given a query protein, we (1) find a Pfam family of a homologous domain and extract the family alignment, (2) build a phylogenetic tree for the family using parsimony, and (3) estimate branch lengths and reconcile the tree against a species tree. The result of this procedure is a "reconciled phylogeny," a rooted phylogenetic tree with branch lengths and duplication events annotated at the internal nodes. Subsequent stages of SIFTER retain these structural elements of the phylogeny, but replace the amino acid characters with vectors of molecular function annotations and place a model of molecular function evolution on the branches of the phylogeny. We use the following process to define a vector of candidate molecular function annotations for a given query protein and for the other proteins in the phylogeny. Given a Pfam family of a homologous domain for a query protein, we index into the GOA database associated with some of the proteins in the family. To accommodate the fact that IDA annotations may be more reliable than IMP annotations according to the experiments by which they are generated, and to allow users to make use of other, possibly less reliable, annotations, SIFTER distinguishes between a notion of "true function" and "annotated function," and defines a likelihood function linking these variables. In particular, the current implementation of SIFTER defines expert-elicited probabilities that an experimentally derived annotation is correct given the method of annotation: IDA annotations are treated as having a likelihood of 0.9 of being correct, and IMP as having a likelihood of 0.8. Evidence at ancestor nodes is spread over subsets of descendants according to the distribution Q(S) = 1/(η|S|), where S is an arbitrary subset of the nad nodes, |S| is the cardinality of S, and the value of η is fixed by the requirement that the distribution sum to one. GOA database annotations are not restricted to the leaves of the ontology but can be found throughout the DAG. To incorporate all such annotations in SIFTER, we need to propagate annotations to the nad subset. In particular, annotations at nodes that are ancestors to nad nodes need to be propagated downward to the nad nodes.
We do this by treating evidence at an ancestor node as evidence for all possible combinations of its descendants, according to the distribution Q. We turn now to a description of the model of molecular function evolution that SIFTER associates with the branches of the phylogeny. For each node in the phylogeny, corresponding to a single protein, this model defines the conditional probability for the vector of function annotations at the node, conditioning on the value of the vector of function annotations at the ancestor of the node. We chose a statistical model known as a loglinear model for the model of function evolution. We make no claims for any theoretical justification of this model. It is simply a phenomenological model that captures in broad outlines some of the desiderata of an evolutionary model for function and has worked well in practice in our phylogenomic setting. Let X_i denote the Boolean vector of candidate molecular function annotations for node i, let X_i^m denote the mth component of this vector, and let M denote the number of components of this vector. Let π_i denote the immediate ancestor of node i in the phylogeny. The transition probability from π_i to i takes the form

P(X_i^n = 1 | X_{π_i}) = 1 − ∏_m (1 − d_i q_{m,n})^{X_{π_i}^m},

where d_i and q_{m,n} are parametric functions of branch lengths in the phylogeny and path lengths in GO, respectively. This functional form is known as a "noisy-OR" function. Suppose that X_{π_i}^m is equal to one for a single component m and equal to zero for all other components, and that d_i is equal to one. Then the probability that node i has the nth function is q_{m,n}. To capture the notion that a transition should be less probable the less "similar" two functions are, we take q_{m,n} to be a decreasing function of the path length l_{m,n} between functions m and n in GO. Specifically, we let q_{m,n} = (1/l_{m,n})^s, where s is a free parameter. This parameter is taken to be different for speciation and duplication events; in particular, it is larger in the latter case, corresponding to the phylogenomic assumption that evolutionary transitions are more rapid following a duplication event.
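A minimal sketch of the noisy-OR transition just described. The path-length matrix, s = 3, and r = M below are illustrative assumptions for a three-function family, not SIFTER's fitted values.

```python
# Noisy-OR transition probability with q_mn = (1/l_mn)^s and
# q_mm = (1/r)^(s/2); all numbers here are illustrative.
M = 3                                    # number of candidate functions
l = [[1, 2, 3], [2, 1, 2], [3, 2, 1]]    # hypothetical GO path lengths l[m][n]
s = 3.0                                  # free parameter (e.g., a speciation branch)
r = M                                    # self-transition normalizer (assumed = M)

def q(m, n):
    if m == n:
        return (1.0 / r) ** (s / 2.0)    # normalized self-transition
    return (1.0 / l[m][n]) ** s          # decreasing in GO path length

def p_has_function(n, parent_vector, d):
    """P(X_i^n = 1 | parent annotation vector), noisy-OR with branch factor d."""
    p_no_cause = 1.0
    for m, x_m in enumerate(parent_vector):
        if x_m:                          # each parent function is a potential "cause"
            p_no_cause *= 1.0 - d * q(m, n)
    return 1.0 - p_no_cause

# With a parent annotated only with function 0 and d = 1, the child acquires
# function 1 with probability q(0, 1) = (1/2)^3 = 0.125.
```

The special case in the text falls out directly: with a single active parent component m and d_i = 1, the product has one factor and the probability reduces to q_{m,n}.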
To set the parameters s_speciation and s_duplication, we can in principle make use of resampling methods such as cross-validation or the bootstrap. In the case of the deaminase family, however, the number of observed data points (five) is too small for these methods to yield reasonable results, and in our analyses of this family we simply fixed the parameters to the values s_speciation = 3 and s_duplication = 4 and did not consider other values. For the 100-family dataset, we ran each family with a few different parameter settings, because the number of annotations available for the families was in general prohibitively small, and fixed them at the set of values that produced predictions most closely aligned with the non-experimental annotations from the GOA database. We define q_{m,m} = (1/r)^{s/2} for self-transitions; this normalizes the self-transition probability with respect to the number of components of the annotation vector. We also need to parameterize the transition rate as a function of the branch length in the phylogeny. This is achieved by defining d_i to be a nonlinear function of the branch length b_i, where b_i is the most parsimonious number of amino acid mutations along the branch from π_i to i. Having defined a probabilistic transition model for the branches of the phylogeny, and having defined a mechanism whereby evidence is incorporated into the tree, it remains to solve the problem of computing the posterior probability of the unobserved functions in the tree conditional on the evidence. This problem is readily solved using standard probabilistic propagation algorithms. Specifically, all posterior probabilities can be obtained in linear time via the classical pruning algorithm, also known as the sum-product algorithm. For the BLAST comparisons, we ran BLAST with an E-value cutoff of 0.01.
We transferred annotation from the highest scoring non-identity protein (BLASTB), which was determined by checking the alignment for 100% identity and identical species name. We also transferred annotation from the highest scoring annotated non-identity protein (BLASTA), which was the highest scoring non-identity protein that had a functional description. Phrases modifying a functional annotation such as "putative" and "-related" were ignored. An annotation including an EC number was considered unambiguous. We used BLAST version 2.2.4, run between August 22, 2004, and September 1, 2004. To build the ROC plots for the BLASTC comparison, for each protein in the selected families we searched the BLAST output for the highest scoring sequence (by E-value) that had a function description from the appropriate set: for the deaminase family we searched for "adenosine deaminase," "adenine deaminase," "AMP deaminase," and, for the results on multiple functions, "growth factor activity." A reference could also be in the form of an EC number or unambiguous phrase. We plotted the false positives (one minus specificity) versus true positives (sensitivity) as the acceptance cutoff for E-values ranged from 0.01 to zero, where proteins were annotated with a function if the most significant E-value for a protein with that particular function was less than the acceptance cutoff. For GOtcha, we ran the first publicly available version of the GOtcha software, kindly provided by its authors. For Orthostrapper, we ran the program on each family to obtain bootstrap-supported orthologous clusters. In each cluster, we transferred all experimentally derived GO annotations from member proteins onto the remaining proteins without experimentally derived GO annotations. If a cluster did not contain a protein with an experimentally derived GO annotation, no functions were transferred; if a protein was present in multiple clusters, it would receive annotations transferred within each of those clusters.
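The cluster-based transfer rule described in this paragraph can be sketched directly; the protein identifiers and GO terms below are invented for illustration.

```python
# Transfer experimentally derived GO terms within clusters, as described:
# clusters without any annotated member transfer nothing, and a protein in
# several clusters accumulates terms from each of them.
def transfer_annotations(clusters, experimental):
    """clusters: list of sets of protein ids.
    experimental: dict protein id -> set of experimentally derived GO terms.
    Returns dict protein id -> unranked set of transferred GO terms."""
    predicted = {}
    for cluster in clusters:
        annotated = [p for p in cluster if p in experimental]
        if not annotated:
            continue                      # nothing to transfer in this cluster
        pooled = set().union(*(experimental[p] for p in annotated))
        for p in cluster:
            if p not in experimental:     # only unannotated members receive terms
                predicted.setdefault(p, set()).update(pooled)
    return predicted

preds = transfer_annotations(
    [{"P1", "P2", "P3"}, {"P3", "P4"}, {"P5", "P6"}],
    {"P1": {"GO:0004000"}, "P4": {"GO:0019239"}},
)
# P3 sits in two clusters and receives terms from both; P5 and P6 receive none.
```

The sketch also makes the method's limitation visible: the cluster {P5, P6} contains no experimentally annotated member, so its proteins get no prediction at all, which is exactly why only a small fraction of proteins received Orthostrapper predictions.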
This method yields an unranked set of predictions for each protein. Purified Q8IJA9_PLAFA was the kind gift of Erica Boni, Chris Mehlin, and Wim Hol of the Structural Genomics of Pathogenic Protozoa project at the University of Washington. Adenosine and adenine were from Sigma-Aldrich, AMP was from Schwarz Laboratories, and monobasic and dibasic potassium phosphate were from EMD Chemicals. The loss of absorbance at 265 nm was monitored with an Agilent Technologies 8453 spectrophotometer. The Δε between substrate adenosine and product inosine is 7,740 AU M⁻¹ cm⁻¹. Dataset S1 is available as an additional file. The Swiss-Prot (http://www.ebi.ac.uk/swissprot/) accession number for H. sapiens adenosine deaminase is P00813, and for P. falciparum adenosine deaminase it is Q8IJA9. The Pfam (http://www.sanger.ac.uk/Software/Pfam/) accession number for the AMP/adenosine deaminase family is PF00962.

The goal of the Sequence Ontology (SO) project is to produce a structured controlled vocabulary with a common set of terms and definitions for parts of a genomic annotation, and to describe the relationships among them. Details of SO construction, design and use, particularly with regard to part-whole relationships, are discussed, and the practical utility of SO is demonstrated for a set of genome annotations from Drosophila melanogaster. The Sequence Ontology (SO) is a structured controlled vocabulary for the parts of a genomic annotation. SO provides a common set of terms and definitions that will facilitate the exchange, analysis and management of genomic data. Because SO treats part-whole relationships rigorously, data described with it can become substrates for automated reasoning, and instances of sequence features described by the SO can be subjected to a group of logical operations termed extensional mereology operators.
Genomic annotations are the focal point of sequencing, bioinformatics analysis, and molecular biology. They are the means by which we attach what we know about a genome to its sequence. Unfortunately, biological terminology is notoriously ambiguous; the same word is often used to describe more than one thing and there are many dialects. For example, does a coding sequence (CDS) contain the stop codon or is the stop codon part of the 3'-untranslated region (3' UTR)? There really is no right or wrong answer to such questions, but consistency is crucial when attempting to compare annotations from different sources, or even when comparing annotations performed by the same group over an extended period of time. At present, GenBank and model organism databases such as the Arabidopsis Information Resource (TAIR) and the Saccharomyces Genome Database (SGD) house large collections of annotations. Each of these resources describes features using terms such as gene and intron, and the properties of these features describe an attribute of the feature; for example, a gene may be maternally_imprinted. The goal of the SO is to provide a standardized set of terms and relationships with which to describe genomic annotations and provide the structure necessary for automated reasoning over their contents, thereby facilitating data exchange and comparative analyses of annotations. SO is a sister project to the Gene Ontology (GO). Like other ontologies, SO consists of a controlled vocabulary of terms or concepts and a restricted set of relationships between those terms. While the concepts and relationships of the sequence ontology make it possible to describe precisely the features of a genomic annotation, discussions of them can lead to much lexical confusion, as some of the terms used by SO are also common words; thus we begin our description of SO with a discussion of its naming conventions, and adhere to these rules throughout this document. Numbers in term names are spelled out in full, for example five_prime_UTR, except in cases where the number is part of the accepted name.
If the commonly used name begins with a number, such as 28S RNA, the stem is moved to the front, for example RNA_28S. Symbols are spelled out in full where appropriate, for example prime, plus, minus, as are Greek letters. Periods, points, slashes, hyphens, and brackets are not allowed. If there is a common abbreviation it is used as the term name, and case is always lower except when the term is an acronym, for example UTR and CDS. Where there are differences in the accepted spelling between English and US usage, the US form is used. Wherever possible, the terms used by SO to describe the parts of an annotation are those commonly used in the genomics community. In some cases, however, we have altered these terms in order to render them more computer-friendly so that users can create software classes and variables named after them. Thus, term names do not include spaces; instead, underscores are used to separate the words in phrases. Synonyms are used to record the variant term names that have the same meaning as the term. They are used to facilitate searching of the ontology. There is no limit to the number of synonyms a term can have, nor do they adhere to SO naming conventions. They are, however, still lowercase except when they are acronyms. Throughout the remainder of this document, the terms from SO are highlighted in italics and the names of relationships between the terms are shown in bold. The terms are always depicted exactly as they appear in the ontology. The names of EM operators are underlined. To facilitate the use of SO for the markup of gene annotation data, a subset of terms from SO consisting of some of those terms that can be located onto sequence has been selected; this condensed version of SO is especially well suited for labeling the outputs of automated or semi-automated sequence annotation pipelines.
This subset is known as the Sequence Ontology Feature Annotation, or SOFA. SO, like GO, is an 'open source' ontology. New terms, definitions, and their locations within the ontology are proposed, debated, and approved or rejected by an open group of individuals via a mailing list. SO is maintained in OBO format, and the current version can be downloaded from the CVS repository of the SO website. The terms describing sequence features in SO and SOFA are richer than those of the Feature Table. SO is not a database schema, nor is it a file format; it is an ontology. As such, SO transcends any particular database schema or file format, which means it can be used equally well as an external data-exchange format or internally as an integral component of a database. The simplest way to use SO is to label data destined for redistribution with SO terms and to make sure that the data adhere to the SO definition of each data type. Accordingly, SO provides a human-readable definition for each term that concisely states its biological meaning. Usually the definitions are drawn from standard authoritative sources such as The Molecular Biology of the Cell. CDS, for example, is defined as a contiguous RNA sequence which begins with, and includes, a start codon and ends with, and includes, a stop codon. According to SO, the sequence of a three_prime_utr does not contain the stop_codon - and files with such sequences are SO-compliant; files of three_prime_utr containing stop_codons are not. This is a trivial example, illustrating one of the simplest use cases, but it does demonstrate the power of SO to put an end to needless negotiations between parties as to the details of a data exchange. This aspect of SO is especially well suited for use with the generic feature format (GFF). SO can also be employed in a much more sophisticated manner within a database.
Chado, for example, is a modular database schema that uses SO to type its features, and, like GFF3, Chaos-XML is a file format whose feature types are drawn from SO. The basic types in SOFA, from which the other types are defined, are region and junction, equivalent to the concepts of interiors and boundaries defined in the field of topological relationships. A region is a feature with extent, such as an exon or a transposable_element. A junction is the space between two bases, such as an insertion_site. Building on these basic data types, SOFA can be used to describe a wide range of sequence features. Raw sequence features such as assembly components are captured by terms like contig and read. Analysis features, defined by the results of sequence-analysis programs such as BLAST, are captured by terms like nucleotide_match. Gene models can be defined on the sequence using terms like gene, exon and CDS. Variation in sequence is captured by subtypes of the term sequence_variant. These terms have multiple parentages with either region or junction. SOFA (and SO) can also be used to describe many other sequence features, for example, repeat, reagent, and remark. Thus, SOFA together with GFF3 or Chaos-XML provides an easy means by which parties can describe, standardize, and document the data they distribute and exchange. The SO and SOFA controlled vocabularies can be used for de novo annotation; several groups, including SGD and FlyBase, now use either SO or SOFA terms in their annotation efforts. SO is not restricted to new annotations, however, and may be applied to existing annotations. For example, annotations from GenBank may be converted into SO-compliant formats using Bioperl. SO cannot, however, be used to state that an ncRNA gives rise to a polypeptide, because no derives_from relationship unites these two terms in the ontology. This fact illustrates an important aspect of how SO handles relationships: children always inherit from parents but never from siblings. An ncRNA is a kind_of transcript, as is an mRNA. Labeling something as a transcript implies that it could possibly produce a polypeptide; labeling that same entity with the more specific term ncRNA rules that possibility out.
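The region/junction distinction maps naturally onto interbase coordinates, in which positions fall between bases. The following is a hypothetical sketch; the class and field names are ours, not SO's:

```python
from dataclasses import dataclass

# Hypothetical models of SOFA's two basic located-feature kinds, using
# interbase (0-based, end-exclusive) coordinates. Names are illustrative.
@dataclass(frozen=True)
class Region:
    """A feature with extent, e.g. an exon or a transposable_element."""
    start: int
    end: int        # start < end; length in bases = end - start

    def length(self):
        return self.end - self.start

@dataclass(frozen=True)
class Junction:
    """The space between two bases, e.g. an insertion_site."""
    pos: int        # the junction sits between base pos-1 and base pos

    def within(self, region):
        # A junction lies inside a region if it separates two of its bases.
        return region.start < self.pos < region.end

exon = Region(100, 200)
insertion_site = Junction(150)
print(insertion_site.within(exon))   # True
print(exon.length())                 # 100
```

Interbase coordinates make the junction concept exact: a junction has no extent of its own, only a position between two bases of a region.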
Thus, a file that contained ncRNAs and their polypeptides would be semantically invalid. SO uses the term part_of for relationships that pertain to meronomies, that is to say, 'part-whole' relationships. An exon, for example, is a part_of a transcript. part_of relationships are not valid in both directions: while an exon is a part_of a transcript, a transcript is not a part_of an exon. Instead, we say a transcript has_part exon. SO does not explicitly denote whole-part relationships, as every part_of relationship logically implies the inverse has_part relationship between the two terms. part_of relationships are transitive - an exon is a part_of a gene, because an exon is a part_of a transcript, and a transcript is a part_of a gene. Not every chain of part-whole relationships, however, obeys the principle of transitivity, because parts can be combined to make wholes according to different organizing principles. Winston et al. have distinguished six types of part-whole relation according to three criteria: configuration, whether the parts have a structural or functional role with respect to one another or the whole they form; substance, whether the part is made of the same stuff as the whole (homomerous or heteromerous); and invariance, whether the part can be separated from the whole. These six relations and their associated part_of subclasses are detailed in Table . Transitivity is a more complicated issue with regard to part-whole relationships than it is for the other relationships in SO. Winston et al. argue that transitivity holds across chains of part_of relationships only if they all belong to the same subclass. In other words, an exon can only be part_of a gene if an exon is a component_part_of a transcript, and a transcript is component_part_of a gene. If, however, the two statements contain different types of part_of relationship, then transitivity does not hold. The distinctions drawn by Winston et al. solve many of the problems associated with reasoning across part_of relationships; thus, we are adopting their approach with SO.
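The subclass-restricted transitivity rule can be made concrete with a small sketch. The edge set below is invented for illustration, not drawn from the SO release files, and the graph is assumed to be acyclic:

```python
# Sketch of Winston-style typed part_of reasoning: a chain of part_of
# links is transitive only if every link carries the same subclass.
# The example edges are invented for illustration (acyclic graph assumed).
PART_OF = {
    ("exon", "transcript"): "component_part_of",
    ("transcript", "gene"): "component_part_of",
    ("read", "contig"): "member_part_of",
}

def is_part_of(part, whole, edges=PART_OF):
    """True if `part` reaches `whole` via part_of links of one subclass."""
    def walk(node, subclass):
        for (p, w), kind in edges.items():
            if p != node or (subclass is not None and kind != subclass):
                continue    # skip edges that would mix subclasses
            if w == whole or walk(w, kind):
                return True
        return False
    return walk(part, None)

print(is_part_of("exon", "gene"))   # True: both links are component_part_of
print(is_part_of("read", "gene"))   # False: no uniform chain exists
```

An exon reaches a gene because both links are component_part_of; a read does not reach a gene, because its member_part_of link cannot be chained with a component_part_of link.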
The parts contained in the sequence ontology are mostly of the type component_part_of, such as exon is a part_of transcript, although there are a few occurrences of member_part_of, such as read is a part_of contig. By subclassing the vague English term 'part of' in this way, reasoning across part-whole chains can be kept sound. Genomic annotations are substrates for a multitude of software applications. Annotations, for example, are rendered by graphical viewers, or, as another example, their features are searched and queried for purposes of data validation and genomics research. Using an ontology for sequence annotation offers many advantages over the traditional Feature Table approach. Because controlled vocabularies do not specify the relationships that obtain between their terms, using the Feature Table has meant that relationships between features have had to be hard-coded in software applications themselves; consequently, adding a new term to the Feature Table, or changing the details of the relationships that obtain between its terms, has meant revising every software application that made use of the Feature Table. Ontologies mitigate this problem because all of the knowledge about terms and their relationships to one another is contained in the ontology, not the software. An application need not hard-code the fact that a tRNA is a kind_of transcript; it need merely know that kind_of relationships are transitive and hierarchical, and be capable of internally navigating the network of relationships specified by the ontology. Likewise, an application can use the EM operators to ask questions about gene parts. Although new to genomics, EM operators are well known in the field of ontology, where they provide a basis for asking and answering questions pertaining to how parts are distributed within and among different wholes.
EM is a formal theory of parts: it defines the properties of the part_of relationship and then provides a set of operations (Table ) that can be performed over parts. Among its axioms: if A is a proper part of B, then B is not a part of A; and if A is a part of B and B is a part of C, then A is a part of C - that is, part_of relationships are transitive. Accordingly, we have restricted our analyses (see Results and discussion) to component parts. The annotations analyzed contained 13,539 genes, 18,735 transcripts, and 61,853 exons. These data afford many potential analyses, but our motivation was primarily to demonstrate the practical utility of SO as a tool for data management, rather than comparative genomics. An alternatively spliced gene will have overlapping transcripts if at least one of its exons is shared between two of its transcripts, and will have disjoint transcripts if one of its transcripts shares no exons in common with any other transcript of that gene. For the purposes of this analysis, we further classified disjoint transcripts as sequence-disjoint and parts-disjoint. We term two disjoint transcripts sequence-disjoint if none of their exons shares any sequence in common with one another, and parts-disjoint if one or more of their exons overlap on the chromosome but have different exon boundaries. Note that the three relations are pairwise, and thus not mutually exclusive. To see why this is, imagine a gene having three transcripts, A, B, and C. Transcript A can be disjoint with respect to B but overlap with respect to C; thus, we can speak of a gene as having both disjoint and overlapping transcripts. As we had characterized the parts of the annotations using SO, we were able to employ the EM operators over these parts. This proved to be a natural way to explore the relative complexity of alternative splicing, as alternatively spliced transcripts have different combinations of parts, that is, exons. We grouped alternatively spliced transcripts into two classes: disjoint and overlapping.
The mix of disjoint and overlapping transcripts in a genome says something about the relative complexity of alternative splicing in that genome. A gene may have any combination of these types of disjoint and overlapping transcripts, so we created a labeling system consisting of the seven possible combinations. We did this by asking three EM-based questions about the relationships between pairs of a gene's transcripts: How many pairs of sequence-disjoint transcripts are there? How many pairs of parts-disjoint transcripts? How many pairs of overlapping transcripts? Doing so allowed us to place each gene into one of seven classes with regard to the properties of its alternatively spliced transcripts. We also kept track of the number of times each of the three relationships held true for each pair combination. For example, a gene having two transcripts that are parts-disjoint with respect to one another would be labeled 0:1:0. Keeping track of the number of transcript pairs falling into each class provides an easy means to prioritize them for manual review. These results are summarized in Figure . Of the alternatively spliced fly genes, none has sequence-disjoint transcripts, 275 have parts-disjoint transcripts, 2,664 have overlapping transcripts, and 53 have both parts-disjoint and overlapping transcripts. The percentage of D. melanogaster genes in each category is shown in Table . The absence of sequence-disjoint transcripts in D. melanogaster is due to annotation practice; in fact, current FlyBase annotation practices forbid their creation, the reason being that any evidence for such transcripts is evidence for a new gene. The frequencies of genes that fall into each of the seven classes are shown in Table . The EM operators can likewise be applied to exons: those exons present in both of two transcripts constitute the binary product of the two transcripts, whereas those exons present in only one of the transcripts constitute their difference. Exons of alternatively spliced genes in D. melanogaster are highly enriched for 5-prime untranslated exons compared with single-transcript genes. Most of these exons are UNIQUE rather than ALWAYS_FOUND; thus, there seems to be a strong tendency in D. melanogaster for alternative transcripts to begin with a unique 5' UTR region. This fact suggests that alternative transcription in the fly may, in many cases, be a consequence of alternative-promoter usage and perhaps tissue-specific transcription start sites. The high percentage of untranslated 5-prime UNIQUE exons in D. melanogaster may also be a consequence of the large numbers of 5' ESTs that have been sequenced in the fly. To investigate these conclusions in more detail, we further examined each exon with respect to its EM-based class and its coding and untranslated portions. These results are shown in Figure . Most D. melanogaster ALWAYS_FOUND exons are coding. This makes sense, as it seems likely that one reason for an exon's inclusion in every one of a gene's alternative transcripts is that it encodes a portion of the protein essential for its function(s). As with our previous analyses of alternative transcripts, our analyses of alternatively transcribed exons also illustrate the ways in which basic biology and annotation-management issues intersect. The fact that most ALWAYS_FOUND exons are entirely coding, for example, may have something important to say about which parts of a protein are essential for its function(s), whereas the over-abundance of untranslated UNIQUE exons probably has more to say about the resources available to, and the protocols used by, the annotation project than it does about biology. Such considerations make it clear that the evidence used to produce an annotation is an essential part of the annotation.
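The pairwise classification and the colon-separated gene labeling described above can be sketched as follows. Transcripts are modeled as sets of (start, end) exon intervals, and the example data are invented:

```python
from itertools import combinations

def intervals_overlap(a, b):
    """True if two (start, end) genomic intervals share any sequence."""
    return a[0] < b[1] and b[0] < a[1]

def classify_pair(t1, t2):
    """Classify a pair of transcripts (each a frozenset of exon intervals)."""
    if t1 & t2:
        return "overlapping"        # at least one identical exon is shared
    if any(intervals_overlap(e1, e2) for e1 in t1 for e2 in t2):
        return "parts-disjoint"     # exons overlap but boundaries differ
    return "sequence-disjoint"      # no exon shares any sequence

def gene_label(transcripts):
    """Count pairs per class, formatted sequence:parts:overlapping."""
    counts = {"sequence-disjoint": 0, "parts-disjoint": 0, "overlapping": 0}
    for t1, t2 in combinations(transcripts, 2):
        counts[classify_pair(t1, t2)] += 1
    return "{}:{}:{}".format(counts["sequence-disjoint"],
                             counts["parts-disjoint"],
                             counts["overlapping"])

a = frozenset({(0, 100), (200, 300)})
b = frozenset({(0, 100), (400, 500)})   # shares exon (0, 100) with a
c = frozenset({(210, 290)})             # overlaps an exon of a, new bounds
print(gene_label([a, b, c]))            # 1:1:1
```

In this toy gene, transcripts a and b overlap (shared exon), a and c are parts-disjoint (overlapping sequence, different boundaries), and b and c are sequence-disjoint, giving the label 1:1:1; a gene with a single parts-disjoint pair would be labeled 0:1:0, as in the text.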
In this regard SO has much to offer, as it provides a rational means by which to manage annotation evidence in the context of gene parts and the relations between those parts. We have sought to provide an introduction to SO and to justify why its use to unify genomic annotations is beneficial to the model organism community, and we have illustrated some of the ways in which SO can be used to analyze and manage annotations. Relationships are an essential component of SO, and understanding their role within the ontology is a basic prerequisite for using SO in an intelligent fashion. Much of this paper revolves around the part_of relationship, because SO is largely a meronomy - a particular kind of ontology concerned with the relationships of parts to wholes. Extensional mereology (EM) is an area that is largely new to bioinformatics, for which there are several excellent reference works available. Using all of the relationships in SO allows us to automatically draw logical conclusions about data that have been labelled with SO terms and thereby gain useful insights into the underlying annotations. We have shown how SO, together with the EM-based operations it enables, can be used to standardize, analyze, and manage genome annotations. Given any standardized set of genome annotations described with SO, these annotations can then be rigorously characterized. For our pilot analyses, we focused on alternatively transcribed genes and their exons, and explored the potential of EM operators to classify and characterize them. We believe that the results of these analyses support two principal conclusions.
First, EM-based classification schemes are simple to implement; second, they capture important trends in the data and provide a concise, natural, and meaningful overview of annotations in these genomes. One criticism that might justifiably be leveled against the SO- and EM-based analyses presented here is that they are too formal, and that simpler approaches could have accomplished the same ends. As our discussion of part_of relationships made clear, however, reasoning across diverse types of parts is a complicated process; ad-hoc approaches will not suffice where the data are complex. The more formal approach afforded by SO means that analyses can easily be extended beyond the domain of transcripts and exons to include many other gene parts and relationships as well - including evidence. It seems clear that over the next few years both the number and complexity of annotations will increase, especially with regard to the diversity of their parts. Drawing valid conclusions from comparisons of these annotations will prove challenging. That SO has much to offer such analyses is indisputable. SO and SOFA provide the model organism community with a means to unify the semantics of sequence annotation. This facilitates communication within a group and between different model organism groups. Adopting SO terminology to type the features and properties of sequence will give both the group and the community the advantages of a common vocabulary to use for sharing and querying data and for automated reasoning over large amounts of sequence data. SO and SOFA have been built and are maintained using the ontology-editing tool OBO-Edit. The ontologies are available at the SO website. The D. melanogaster annotations type each transcript, for example as mRNA or tRNA, making it possible to restrict the analysis to given types of transcript. CGL tools were used to validate each of the annotations, iterate through the genes, and query the features.
EM operators were applied to the part features of genes. The FlyBase D. melanogaster data were converted to Chaos-XML, and a Perl class was used to type the feature_relationships according to SO relationship types. The EM analysis was performed over the Chaos-XML annotations using the CGL suite of modules to iterate over the parts of each gene. Other organism data were derived from the genomes section of GenBank and converted using Bioperl."} {"text": "Comparative sequence analysis is considered the first step towards annotating new proteins in genome annotation. However, sequence comparison may lead to the creation and propagation of function-assignment errors. Thus, it is important to perform a thorough analysis of the quality of sequence-based function assignment using large-scale data in a systematic way. We present an analysis of the relationship between sequence similarity and function similarity for the proteins in four model organisms, i.e., Arabidopsis thaliana, Saccharomyces cerevisiae, Caenorhabditis elegans, and Drosophila melanogaster. Using a measure of functional similarity based on the three categories of Gene Ontology (GO) classifications, we quantified the correlation between functional similarity and sequence similarity, measured by sequence identity or the statistical significance of the alignment, and compared such a correlation against randomly chosen protein pairs. Various sequence-function relationships were identified from BLAST versus PSI-BLAST, sequence identity versus Expectation Value, GO indices versus semantic similarity approaches, and within-genome versus between-genome comparisons, for the three GO categories. Our study provides a benchmark to estimate the confidence in assignment of functions purely based on sequence similarity. Large-scale genome sequencing projects have discovered many new proteins.
Of all the proteins whose sequences are known, functions have been experimentally determined for only a small percentage; annotation of the rest depends largely on sequence comparison. Despite the central role that sequence-comparison programs play in functional annotation, a thorough analysis of the quality of such methods based on a large-scale dataset has not been performed. Improvements in the sensitivity of sequence-comparison algorithms have reached the point that proteins with previously undetectable sequence relationships, for instance with 10–15% identical residues, may be classified as similar. A number of studies of the sequence-function relationship have been carried out, among them those of Shah et al. and Devos et al., using data such as the Saccharomyces cerevisiae and Arabidopsis thaliana protein sets acquired from the Website of Clusters of Orthologous Groups of proteins (COGs) and the E. coli genome. However, at least one of these studies is limited only to within-genome comparisons and lacks any analysis based on inter-genome comparisons. Here, we use four genomes (Arabidopsis thaliana, Saccharomyces cerevisiae, Caenorhabditis elegans, and Drosophila melanogaster) and the controlled vocabularies of function-annotation terms from the three categories of the Gene Ontology. The sequence comparisons within and across the four genomes provide a global view of the relationship between sequence similarity and function similarity (Figure ); comparable analyses have previously been reported for the E. coli genome. Functional-conservation measures derived from GO annotations that are themselves based on computational techniques, such as electronic annotation from sequence similarity, show a behavioral pattern completely different from these figures. We also plotted the conservation of subcellular localization versus sequence similarity, in terms of E-value and percentage of sequence identity, for intra-genome comparisons within the four genomes; in this case localization is measured by five types as described in Section 4.4, instead of the GO Cellular Component annotation, a detailed level that no existing software can predict reliably.
Subcellular-localization conservation shows similar results when compared in terms of E-value or sequence identity. Inter-genome comparisons based on the predicted subcellular localizations also behave in a manner similar to the intra-genome comparisons (data not shown). It is interesting to note that the behavior of the curves for the four genomes is similar with respect to E-value using BLAST. We have also computed the results described above for random pairs of proteins with known function annotation, and then calculated a normalized ratio of function similarity in terms of sequence identity by comparing the results in these figures. It has long been recognized that genome annotation using computational methods produces many false function assignments. Many such methods have been applied to function prediction; they often provide valuable hypotheses, but none is perfect. As a result, it is known that many databases contain incorrect function assignments, and these erroneous assignments propagate from one database to another. Nevertheless, up until now there has been no systematic study of this critical issue. The question whether two proteins are functionally similar is very complex to answer. Function is a complex notion involving many different aspects, including chemical, biochemical, cellular, organism-mediated, and developmental processes. Qualitatively, it is expected that with higher sequence similarity, two proteins are more likely to have related functions. However, the quantitative relationship between function similarity in the different categories and sequence similarity has not been studied deeply. Such a quantitative study is fundamentally important, as it can provide an assessment of gene-function prediction quality and insights into the underlying mechanisms of new functions evolving through changes in sequence. Our study confirms that sequence comparison often provides good suggestions for gene functions or related functions.
These suggestions serve as useful hypotheses for further experimental work to confirm, refine, or refute the predictions. Such a process can substantially increase the speed of biological knowledge discovery. On the other hand, when assigning function based purely on similarity to proteins of known function (as annotated in databases), it is important to be aware of incomplete or wrong annotations. Notwithstanding the value of computational function annotation, our study also shows that a significant portion of gene annotations of biological process, molecular function, and cellular component based solely on sequence similarity, in particular when the sequence similarity is low, are unreliable. Our study also provides a numerical benchmark for the extent to which one can trust computational annotation. It is possible that a confidence score can be derived from our study for any annotation based on sequence similarity; with this score in the annotation file, the user can gain better insight into the quality of the annotations. Furthermore, our analyses highlight the different sequence-function relationships identified from BLAST versus PSI-BLAST, sequence identity versus Expectation Value, GO indices versus semantic-similarity approaches, and within-genome versus between-genome comparisons, for the three GO classification types. There are some limitations to our current study. It can reflect only certain aspects of protein function: protein function variation may result from factors other than sequence, such as alternative splicing and post-translational modification, and our method does not address these factors. Another limitation is that when we assess gene-function prediction, we consider only one hit at a time in a database. In many cases, sequence comparison yields multiple hits for one query protein, and these hits may have different functions.
In our future study, we will develop a new method to assess the function prediction for a query protein by combining the functions of multiple hits while considering the dependence among these functions and the E-values of the hits. We selected the genomes of Arabidopsis thaliana, Saccharomyces cerevisiae, Caenorhabditis elegans, and Drosophila melanogaster for the study; all four are well-studied eukaryotic model organisms. The complete set of Arabidopsis thaliana protein sequences for 27,288 ORFs was acquired from The Arabidopsis Information Resource (TAIR). We also acquired 21,588 Caenorhabditis elegans ORFs, 6,350 Saccharomyces cerevisiae ORFs, and 13,665 Drosophila melanogaster ORFs from NCBI (Table 1). The Gene Ontology (GO) functional classification has three categories. We assume that the functional relationship between two proteins is reflected by the number of index levels that they share; we have demonstrated the usefulness of such an assumption in our early studies of gene-function prediction. Gene Ontology annotation is based on various types of evidence for annotating functional categories, and the evidence codes were taken into account towards quality control of all the plots. For example, two proteins whose GO indices share the first two levels (for example, indices beginning 1-1, such as 1-1-3) will have functional similarity equal to 2. The functional similarity defined this way can assume values from 1 to 12. We also calculate functional similarity in terms of semantic similarity between the GO functional-annotation terms, defined from the information content of the terms as sim(t1, t2) = -ln(pms), where pms is the probability of the minimum subsumer of terms t1 and t2. The minimum subsumer of terms t1 and t2 is defined as the common parent at the deepest GO index level shared by t1 and t2. Subcellular localizations were predicted for the proteins of Saccharomyces cerevisiae, for 27,288 proteins in Arabidopsis thaliana, 21,588 in Caenorhabditis elegans, and 18,498 in Drosophila melanogaster.
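The GO-index measure, and a semantic-similarity measure of the kind referred to above, can be sketched as follows. The Resnik-style formula is an assumption for illustration, since the exact variant is truncated in the source, and the probabilities passed in would come from term frequencies in an annotation corpus:

```python
import math

def go_index_similarity(idx1, idx2):
    """Depth of the shared leading levels of two GO indices (1..12 scale).

    GO indices are modeled as tuples, e.g. (1, 1, 3) for index 1-1-3.
    """
    depth = 0
    for a, b in zip(idx1, idx2):
        if a != b:
            break
        depth += 1
    return depth

def semantic_similarity(p_ms):
    """Information content of the minimum subsumer (Resnik-style; the
    exact formula used in the study is an assumption here)."""
    return -math.log(p_ms)

print(go_index_similarity((1, 1, 3), (1, 1, 5)))   # 2
print(go_index_similarity((1, 1, 3), (2, 1, 3)))   # 0
```

Two terms with indices 1-1-3 and 1-1-5 share the first two levels and so score 2, as in the worked example in the text, while a rarer (lower-probability) minimum subsumer yields a higher semantic similarity.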
It is worth mentioning that the subcellular-localization predictions were not based on sequence similarity. The subcellular distribution of proteins within a proteome is useful and important to a global understanding of the molecular mechanisms of a cell. Protein localization can be seen as an indicator of function, and localization data can be used as a means of evaluating protein information inferred from other resources. Furthermore, the subcellular localization of a protein often reveals its activity mechanism. The subcellular-localization information was predicted using SubLoc. The sequence-similarity searches were done using tools such as BLAST, FASTA, and PSI-BLAST. We compared the sequences for within- as well as between-genome sequence similarities. For within-genome comparisons, each protein sequence was compared against the complete set of proteins for the same genome. For between-genome comparisons, a pair of similar proteins was identified using the reciprocal search method, i.e., the two proteins were accepted as a pair only if each was the best hit of the other in the reciprocal searches. To assess the significance of a sequence comparison, an expectation value, or E-value, can be calculated. This value represents the number of different alignments with the observed alignment score or better that are expected to occur in the database search simply by chance. The E-value is a widely accepted measure for assessing a potential biological relationship, as it is an indicator of the probability of finding the match by chance; smaller E-values represent a greater likelihood of an underlying biological relationship. In this study, we use both E-value and sequence identity as parameters to quantify sequence similarity. On the other hand, E-values depend on a number of computational factors, such as the length of the query protein and the size of the search database. These issues prevent the E-value from being a fully reliable indicator of homology, as addressed in Fig.
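The reciprocal search criterion can be sketched as follows; the protein identifiers and best-hit tables are invented for illustration:

```python
# Minimal sketch of the reciprocal best-hit criterion for between-genome
# comparisons: a pair (a, b) is kept only if b is a's best hit in genome B
# AND a is b's best hit back in genome A. The hit tables are invented.
def reciprocal_best_hits(best_ab, best_ba):
    """best_ab: protein in A -> its best hit in B; best_ba: the reverse."""
    return [(a, b) for a, b in best_ab.items() if best_ba.get(b) == a]

best_ab = {"yeast_P1": "worm_Q7", "yeast_P2": "worm_Q9"}
best_ba = {"worm_Q7": "yeast_P1", "worm_Q9": "yeast_P5"}
print(reciprocal_best_hits(best_ab, best_ba))   # [('yeast_P1', 'worm_Q7')]
```

Only yeast_P1/worm_Q7 survive here: worm_Q9's best hit back in the first genome is a different protein, so that pair is rejected.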
The data and results are publicly available at our website. TJ contributed to the data collection, sequence alignments, and the generation and analysis of the results. Both TJ and DX contributed to the formulation, design, and writing of the study. Both authors read and approved the final manuscript."} {"text": "The international FANTOM consortium aims to produce a comprehensive picture of the mammalian transcriptome, based upon an extensive cDNA collection and functional annotation of full-length enriched cDNAs. The previous dataset, FANTOM2, comprised 60,770 full-length enriched cDNAs. Functional annotation revealed that this cDNA dataset contained only about half of the estimated number of mouse protein-coding genes, indicating that a number of cDNAs still remained to be collected and identified. To pursue the complete gene catalog that covers all predicted mouse genes, cloning and sequencing of full-length enriched cDNAs has been continued since FANTOM2. In FANTOM3, 42,031 newly isolated cDNAs were subjected to functional annotation, and the annotation of 4,347 FANTOM2 cDNAs was updated. To accomplish accurate functional annotation, we improved our automated annotation pipeline by introducing new coding-sequence prediction programs, and developed a Web-based annotation interface that simplifies the annotation procedures to reduce manual annotation errors. Automated coding-sequence and function prediction was followed by manual curation and review by expert curators. A total of 102,801 full-length enriched mouse cDNAs were annotated. Of the 102,801 transcripts, 56,722 were functionally annotated as protein coding, providing to our knowledge the greatest current coverage of the mouse proteome by full-length cDNAs. The total number of distinct non-protein-coding transcripts increased to 34,030.
The FANTOM3 annotation system, consisting of automated computational prediction, manual curation, and final expert curation, facilitated the comprehensive characterization of the mouse transcriptome, and could be applied to the transcriptomes of other species. The RIKEN Mouse Gene Encyclopedia project was launched with the aim of cloning and sequencing full-length mouse cDNAs, and an international annotation consortium (FANTOM) was organized to annotate the collected mouse cDNAs. In FANTOM1, the consortium annotated 21,076 cDNAs with the development of a Web-based annotation interface. FANTOM1 and FANTOM2 considerably extended our knowledge of the mouse transcriptome, but compared with the number of protein-coding genes predicted from the mouse genome sequence, the cDNA resource covered only half of all predicted genes. Therefore, cDNA collection from a number of novel cellular and tissue sources was continued, and in this process many novel cDNAs derived from distinct genomic loci were fully sequenced. In FANTOM3, these newly sequenced cDNAs were mapped to the mouse genome and subjected to functional annotation. Given the substantial increase in cDNA sequence information in mouse and other mammalian species since FANTOM2, the new annotation process provided the opportunity to update and improve the previous functional annotation of RIKEN cDNAs from FANTOM1 and FANTOM2. Here we report the development of the new annotation interface and decision pipeline, and the modification of our annotation strategy to accelerate manual annotation. We also provide functional annotation of 102,801 mouse full-length enriched cDNAs, to our knowledge the largest such dataset. The results of this functional annotation were shared among FANTOM3 consortium members for further analyses, such as protein-coding analysis and noncoding RNA (ncRNA) analysis. The Web-based online annotation system from FANTOM2 was likewise implemented for FANTOM3.
This system allowed all curators to annotate transcripts from remote sites around the world through the Internet and resulted in significant acceleration of the manual annotation process. Nevertheless, time remained an issue. Even 10 min spent on manual annotation of each transcript would mean that the total task would consume 15,000 h, and our aim was to complete the task within a matter of weeks. In FANTOM2, curators could enter comments when they encountered problematic cDNAs or ones that were difficult to annotate. However, it was a heavy burden for expert curators, who reviewed and corrected annotations, to read all written comments and correct annotations one by one. For these and other reasons, we introduced a precomputational pipeline in which the annotator could accept the automated decision by ticking a series of boxes. Only where there was some ambiguity, or a better alternative name, was the annotator required to assess additional data and enter alternative decisions. In general, this process reduced the annotation time for unequivocal cases down to 10–20 s. We updated the original annotation rules that were determined during the FANTOM2 meeting. Firstly, coding sequence (CDS) annotation items were expanded in FANTOM3. In FANTOM2, curators annotated the following four items: CDS status, completeness of 5′ and 3′ ends of CDS, maturity of transcripts, and presence of in-frame insertion/deletion errors and stop codons. In FANTOM3, three additional items were introduced: exact positions of in-frame insertion/deletion errors, and flags for selenoproteins and mitochondrial transcripts with unique codon usage. 
This information was used for computational translation to make a complete dataset of protein-coding transcripts, and it allowed us to avoid unwanted frameshifts and stop codons in the middle of a CDS region. Secondly, the set of CDS prediction algorithms, the outcomes of which are displayed at the top level to the annotator, was changed based upon our previous experience. ProCrest (unpublished) and NCBI CDS Predictor (unpublished), which were used in FANTOM2, were phased out because they cannot identify the exact positions of in-frame insertion/deletion errors, although they are able to predict whether these errors exist within CDS regions. Thirdly, we improved our annotation pipelines for assigning transcript descriptions (renamed from "gene names" in FANTOM2), symbols (renamed from "gene symbols"), and synonyms to transcripts. Fourthly, new annotation items to identify problematic clones were added. In the FANTOM3 annotation system, two buttons for these problematic clones, chimeric clones and reverse clones, were introduced to simplify the annotation process. If a cDNA is deemed to be derived from two or more mRNAs or to be a contaminant from Escherichia coli, it is curated as "chimeric clone." If a cDNA has evidence that implies cloning in the reverse direction, for example, having CT-AC splicing patterns rather than GT-AG ones, it is curated as "reverse clone." These problematic entries are then automatically excluded from further curation and analyses. To help curators annotate accurately, the curation interface was improved from that of FANTOM2. Information such as MGI assignment, cDNA status prediction, sequence quality, expressed sequence tag mapping, genome mapping, splicing information, predicted transmembrane regions, and protein motifs was provided on the curation screen. Some information was provided in a simple graphical display to expedite rapid decisions. 
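The reverse-clone rule above (CT-AC splice dinucleotides instead of the canonical GT-AG pattern) reduces to a small check on an intron's terminal dinucleotides. This is a minimal sketch; the function name and the "noncanonical" fallback are illustrative, not part of the FANTOM3 pipeline.

```python
def splice_orientation(donor: str, acceptor: str) -> str:
    """Classify an intron by its terminal dinucleotides, per the rule above:
    GT-AG is the canonical forward pattern; CT-AC (its reverse complement)
    suggests the cDNA was cloned in the reverse direction."""
    pair = (donor.upper(), acceptor.upper())
    if pair == ("GT", "AG"):
        return "forward"
    if pair == ("CT", "AC"):
        return "reverse"  # candidate "reverse clone"
    return "noncanonical"

print(splice_orientation("CT", "AC"))  # reverse
```

A real pipeline would apply this to every intron of a transcript and flag the clone only when the reverse pattern dominates.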
Moreover, additional information such as raw alignments and hyperlinks to public databases could be accessed by clicking corresponding bars in the cDNA summary image section. In the new FANTOM3 interface, annotators were provided with an initial computational annotation that the curators were then required to accept or reject by clicking buttons. To simplify the annotation process when the computational annotation was rejected, several major reject reasons and alternative CDS predictions, CDS statuses, transcript descriptions, and GO terms were provided as a list with checkboxes, and the curators were prompted to select an appropriate one. Curators were also encouraged to add notes on each transcript, based upon their background knowledge. The computational annotation in FANTOM3 was carried out prior to manual annotation, as in FANTOM2. The FANTOM3 annotation pipelines for assigning transcript descriptions and for GO assignments are summarized in the accompanying figures. After manual curation of potential protein-coding transcripts, we next considered annotating potential non-protein-coding transcripts. To reduce human annotation errors, potential non-protein-coding transcripts were classified into several subcategories and were released stepwise depending on their coding potential. The transcripts that completely or partially matched known genes at the DNA level were open to curators first, followed by the transcripts that showed similarity to known genes at the amino acid level. Finally, the transcripts that were merely covered with expressed sequence tags were subjected to manual curation. Out of 11,555 potential non-protein-coding transcripts, 7,343 (63.5%) transcripts were annotated as non-protein-coding. A further 1,893 (16.4%) and 386 (3.3%) transcripts were annotated as immature and truncated forms, respectively. To improve the quality of the functional annotation dataset, a review process was carried out following the manual curation. 
Expert curators were selected from all registered curators based on their performance, and they reviewed the rejected entries. In FANTOM3, computational filtration was intensively performed to lighten the burden for expert curators. Several criteria are discussed below. In eukaryotes, nonsense-mediated mRNA decay is known as an mRNA surveillance mechanism. Flanking adenine-rich sequence at the 3′ end of a transcript suggests the possibility that the cDNA could be produced by internal priming of the oligo-dT primer. Therefore, we extracted the transcripts that had more than ten adenosines in the 20 flanking nucleotides (using the mouse genome sequence), and these transcripts were manually reviewed by expert curators. If transcripts seemed to be produced by internal priming of coding transcripts, they were curated as coding/immature. In FANTOM3, we also developed a genomic element browser by customizing the generic genome browser. In FANTOM3, 42,031 transcripts were newly annotated and the functional annotation of 4,347 FANTOM2 transcripts was updated with the improved annotation system. Combining the results of FANTOM2 and FANTOM3, 102,801 cDNAs were functionally annotated by the international effort. Out of these, 47,761 and 8,961 transcripts were annotated as complete coding and truncated coding, respectively, and 34,030 transcripts were annotated as non-protein-coding. Our FANTOM3 annotation system contributed greatly to the prompt and precise annotation that was accomplished, and this system could be a model for other mammalian transcriptome projects. The curated annotation data are available at http://fantom3.gsc.riken.jp/db and ftp://fantom3.gsc.riken.jp/fantomdb/3.0. We annotated 102,801 sequences derived from RIKEN mouse full-length enriched cDNA libraries. The set of sequences was aligned using BLAT (http://www.soe.ucsc.edu/~kent/exe/linux/blatSuite.zip) with options –minAli = 0.96 –nearTop = 0.005. 
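The internal-priming filter described above (more than ten adenosines among the 20 genomic nucleotides flanking the 3′ end) reduces to a simple count. A minimal sketch, assuming the flank has already been extracted from the genome assembly; the function name and threshold handling are illustrative:

```python
def likely_internal_priming(genomic_flank_3prime: str) -> bool:
    """Flag a transcript whose 3' end may result from internal oligo-dT priming.

    genomic_flank_3prime: the 20 genomic nucleotides immediately downstream
    of the transcript's 3' end (taken from the genome assembly, not the cDNA).
    Returns True when more than ten of those bases are adenosine, which is
    the criterion for manual review described in the text.
    """
    flank = genomic_flank_3prime.upper()[:20]
    return flank.count("A") > 10

# An A-rich genomic flank is flagged for expert review:
print(likely_internal_priming("AAAAAAAAAAAGTAAAAACA"))  # True
print(likely_internal_priming("ACGTACGTACGTACGTACGT"))  # False
```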
Transcript sequences were mapped to the mouse genome (assembly mm5) in several stages. In the first stage, the sequences were aligned to the genome using BLAT version 30. Next, the alignments were post-processed by an algorithm designed to extend transcript-to-genome alignments by using information about exon positions from neighboring alignments. Subsequently, the highest-scoring alignment or alignments, according to a rounded score combining identity and coverage, were retained for each transcript, where identity = number of matches/(number of matches + number of mismatches + number of non-intron gaps), coverage = number of matches/transcript sequence size, and introns are gaps of at least 20 bp in the transcript sequence only. Ties were broken in favor of assembled chromosomes over unassembled genomic sequence. If there were still two highest-scoring alignments for a transcript, both were displayed in the annotation interface. Finally, adjacent alignment blocks were connected if they appeared to belong to the same exon. The criteria for deciding that blocks belonged to the same exon were adopted from the Sim4 program. Assembled full-length cDNA sequences were first masked using RepeatMasker (http://repeatmasker.org) to exclude regions containing known repetitive sequences. FANTOM3 query sequences were searched against mouse non-expressed-sequence-tag mRNA sequences in the MGI database (http://www.informatics.jax.org), against the mouse sequences in dbEST (http://www.ncbi.nih.gov/dbEST), against known ncRNA sequences in RNAdb (http://research.imb.uq.edu.au/rnadb), and against protein motifs in InterPro (http://www.ebi.ac.uk/interpro). 
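The identity and coverage definitions above can be sketched directly. The exact expression inside the paper's round() is garbled in the extracted text, so combining the two by their product (rounded to three decimals) is an assumption here, as are the field names; only the identity/coverage definitions and the assembled-chromosome tie-break come from the text.

```python
def identity(matches: int, mismatches: int, non_intron_gap_bases: int) -> float:
    # identity = matches / (matches + mismatches + non-intron gaps)
    return matches / (matches + mismatches + non_intron_gap_bases)

def coverage(matches: int, transcript_length: int) -> float:
    # coverage = matches / transcript sequence size
    return matches / transcript_length

def score(aln: dict) -> float:
    # Assumed combination: rounded product of identity and coverage.
    return round(identity(aln["matches"], aln["mismatches"], aln["gap_bases"])
                 * coverage(aln["matches"], aln["tlen"]), 3)

def best_alignments(alignments: list) -> list:
    """Keep the highest-scoring alignment(s); ties favour assembled chromosomes."""
    top = max(score(a) for a in alignments)
    tied = [a for a in alignments if score(a) == top]
    if len(tied) > 1 and any(a["assembled"] for a in tied):
        tied = [a for a in tied if a["assembled"]]
    return tied

alns = [
    {"matches": 950, "mismatches": 10, "gap_bases": 0, "tlen": 1000, "assembled": True},
    {"matches": 900, "mismatches": 50, "gap_bases": 10, "tlen": 1000, "assembled": False},
]
print(len(best_alignments(alns)))  # 1
```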
Other data such as similarity search alignments and clone sequences were stored in indexed flat files. The cDNA annotation (curation) interface was implemented as a Web-based application using mod_perl and the gd graphics library on a Linux system running an Apache 2.0 server. All curated annotations and annotation histories were stored in a custom database implemented in a Sybase relational database management system."} {"text": "In the last decade, sequencing projects have led to the development of a number of annotation systems dedicated to the structural and functional annotation of protein-coding genes. These annotation systems manage the annotation of non-protein-coding genes (ncRNAs) in a very crude way, allowing neither the editing of secondary structures nor the clustering of ncRNA genes into families, both of which are crucial for appropriate annotation of these molecules. LeARN is a flexible software package which handles the complete process of ncRNA annotation by integrating the layers of automatic detection and human curation. This software provides the infrastructure to deal properly with ncRNAs in the framework of any annotation project. It fills the gap between existing prediction software, which detects independent ncRNA occurrences, and public ncRNA repositories, which do not offer the flexibility and interactivity required for annotation projects. The software is freely available from the download section of the website. Our knowledge of small non-protein-coding RNAs (ncRNAs) has considerably evolved during the last decade. In 2002, Science magazine selected the discovery of small RNA with a regulatory function as a scientific breakthrough of the year. Since then, many novel ncRNAs have been characterized in organisms such as C. elegans. Detecting novel ncRNAs by experimental RNomics is not an easy task. 
Computational methods for ncRNA prediction can be classified into four approaches: bias composition analysis, minimization of free energy, searching for homologous RNA in the context of conserved family-specific characteristics, and sequence-based homology searches. The first approach analyzes the intrinsic features of the genomic sequence in order to detect ncRNA candidates. These ab-initio methods rely on an existing bias in base composition between ncRNA and the rest of the genome in order to provide a segmentation of the genomic sequence into ncRNA regions and others. Comparative tools such as RNAz, MSARI, ddbRNA and Milpat implement the approaches based on conserved family-specific characteristics. Complementary to the development of detection strategies, many user interfaces have been developed to modify and annotate RNA sequences and structures. The aim of LeARN, the annotation platform presented here, is to manage the complete process of annotation of ncRNA genes. It can incrementally integrate the results of arbitrary detection programs and provides life scientists with user-friendly interfaces allowing both structural and functional annotation. In order to facilitate later exploitation of annotations, LeARN relies on existing standards such as the Rfam database and the RNAML data exchange format, thus providing full interoperability with existing databases and software. The general architecture of LeARN is shared by most annotation platforms; the first layer is the detection and clustering pipeline. When several detection programs report the same candidate, LeARN handles this redundancy by prioritizing the detection software in the main configuration file. A second type of redundancy is caused by overlaps between sequences, and this is critical in the context of ongoing BAC-to-BAC sequencing projects. 
In order to manage this source of redundancy, LeARN can rely on an additional file describing pairwise overlaps to avoid artificial over-prediction of redundant ncRNA genes. The software allows for incremental updates of gene and family annotations, which is an essential feature for ongoing sequencing projects. Incremental updating is made possible by using the RNAML files of release n−1 before starting the greedy analysis of release n. In order to analyze one or several complete genomes, it is often useful to run detection computations in parallel. To accommodate the parallelization and the greedy algorithm previously described, the program offers the possibility to execute all detection programs beforehand. In this case, a command line is generated for each execution of a detection program. Each command line stores the result of its execution in a cache directory using an unambiguous filename based on the program name and version and the MD5 checksum of the analyzed sequence, and can be directly executed in parallel. When the pipeline later requires the execution of a prediction program, it may directly use the cached result instead of running the prediction itself. The pipeline can easily be customized to integrate arbitrary detection programs providing results in GFF2 format. In addition to the definition of site-specific pipelines, this opens the possibility of using LeARN as a light-weight visualisation interface for researchers willing to develop new detection software. LeARN relies on a repository of RNAML documents to store the annotations of molecules and ncRNA families. This technical choice is compatible with the limited amount of data generated to annotate ncRNA (less than 10 Mb of RNAML to describe the annotation of 130 Mb of legume genome sequences). 
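The cache-key scheme described above (program name and version plus the MD5 checksum of the analyzed sequence) can be sketched in a few lines. The exact filename layout and the ".gff" suffix are assumptions here; only the ingredients of the key come from the text.

```python
import hashlib

def cache_filename(program: str, version: str, sequence: str) -> str:
    """Unambiguous cache entry name for one detection run: program name,
    program version, and the MD5 checksum of the analyzed sequence.
    The layout and the .gff suffix (results are GFF2) are illustrative."""
    digest = hashlib.md5(sequence.upper().encode("ascii")).hexdigest()
    return f"{program}_{version}_{digest}.gff"

# The same program run on the same sequence always hits the same cache entry,
# so a previously computed result can be reused instead of re-running detection:
a = cache_filename("infernal", "1.0", "ACGTACGT")
b = cache_filename("infernal", "1.0", "ACGTACGT")
print(a == b)  # True
```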
Relying on RNAML documents (i) provides native interoperability with the visualisation software which uses this standard format; (ii) takes advantage of XSLT processors, which allow both document transformations and efficient searches (via XPath queries) in XML repositories; and (iii) provides users with a light-weight package that does not require any RDBMS skill. The Web interface of LeARN is structured by the different functionalities it offers: scanning the current database and annotating ncRNA molecules or families. The "browse" tab allows lists of ncRNAs and ncRNA families to be displayed. In order to correct the unavoidable discrepancies and errors generated by an automatic process, the LeARN package provides the expert user with an annotation interface for editing both structural and functional annotations as well as merging and splitting families. This requires user authentication. After the first login, the user must create his/her own workspace, initially defined as a copy of the public database. At any time, the 'Status' page allows the user to select the database he/she wants to use: it can be either the browsable public database or the user's own editable private one. Editing rights are made visible by a change in the background colour. The system allows for parallel annotation by several experts, but prevents the concurrent annotation of the same family by different experts. In LeARN, the annotation of a ncRNA molecule is a three-step process. The annotation process for families is similar. The platform was applied to four thermococcales genomes: Pyrococcus abyssi (Pa), Pyrococcus furiosus (Pf), Pyrococcus horikoshii (Ph) and Thermococcus kodakarensis (Tk). For these genomes, the RFAM genome browser gives 11 sRNA families in Pa, 12 sRNA families in Pf, 10 sRNA families in Ph and 10 sRNA families in Tk, for a total of 13 different families. With LeARN, iteratively built with RFAM covariance models, a classification in 16 families was obtained. 
From the original RFAM families, only the snoR9 family disappeared in this classification. We illustrate LeARN usage with two case studies on these thermococcales genomes. We chose the C/D box sRNA family as the first case study. This family is mainly characterized by the presence of four motifs: C (RUGAUGA), D' (CUGA), C' (UGAUGA) and D boxes (CUGA). The region of 9 nucleotides downstream of the D and/or D' boxes generally interacts with the target of the C/D sRNA, mostly forming Watson-Crick interactions. For a more accurate annotation, it is usual to further classify these ncRNA genes into subfamilies with a common target. In LeARN, four families were built automatically from the RFAM covariance model associated with the C/D box sRNA family (ID: snoPyro_CD). The largest snoPyro_CD family contained 74 candidates while the other ones contained only 1 candidate each. From the Family browser, it was possible to select and merge all four families into one (RFL0002), making it possible to analyze all candidates together and to classify them according to the conserved boxes and the sequence similarity at the target interaction site. This was done on the merged family using the Edit family function. Using the incrementally updated alignments provided by LeARN, the clustering of sequences into subfamilies based on their targets was straightforward. At the end of the process, 23 novel families were proposed. We renamed them according to the archaea sRNA database. The second case study concerns H/ACA sRNA. In RFAM, HgcE, HgcF and HgcG have unknown function but were found to be H/ACA sRNA genes, named respectively Pf3, Pf6 and Pf7. To summarize, LeARN provides a way to bridge the gap between existing databases and gene prediction tools. 
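The four conserved C/D box motifs listed above can be located with simple pattern matching (IUPAC R = A or G). A minimal sketch: real classification also constrains the spacing and order of the boxes, which this illustration ignores, and the function name is hypothetical.

```python
import re

# Consensus patterns for the four boxes described in the text.
# Note that the D and D' boxes share the same consensus (CUGA), so a real
# classifier must use position and spacing to tell them apart.
BOXES = {
    "C":       re.compile(r"[AG]UGAUGA"),
    "C_prime": re.compile(r"UGAUGA"),
    "D_prime": re.compile(r"CUGA"),
    "D":       re.compile(r"CUGA"),
}

def find_boxes(rna: str) -> dict:
    """Return the start positions of each box consensus in an RNA sequence."""
    rna = rna.upper().replace("T", "U")  # accept DNA input too
    return {name: [m.start() for m in pat.finditer(rna)]
            for name, pat in BOXES.items()}

hits = find_boxes("GGAUGAUGACCCUGAGG")
print(hits["C"])  # [2] -> C box (AUGAUGA) starts at position 2
```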
It offers the user a working environment for editing the sequence and structure of individual sRNAs, as well as RNA families, by using sRNA-dedicated functionalities for automatic and manual annotation operations. The first case study showed that one of the advantages of LeARN is its management of candidate redundancy. The case of sRNA sR9 is a good example. One of the drawbacks of the automatic process of sRNA identification was the assignment of the sR9 C/D box sRNA to the general snoRNA family instead of the more accurate snoR9 family provided by the covariance model of RFAM. This is certainly a result of the proposed iterative approach, which relies in part on the energy-based heuristic clustering. Despite this wrong assignment and the incomplete sequence for two members of this family, it was possible to group all the sR9 candidates and to extend those for which the H/ACA region was lacking by using the various functionalities of LeARN. In both case studies, it was particularly useful to be able to edit a sequence (extending some regions) in light of the literature, the available knowledge of known H/ACA sRNAs, the candidates of the family, and the available genomic sequences. The availability of automatic or manual alignment of sRNA sequence and structure, together with the graphical representation of the consensus secondary structure, considerably facilitated the improvement of the structural annotation. All the annotations were done and saved in the private environment of the annotator, which offers a convenient working space for personal annotation. After annotation, it is essential to submit these structural annotations to the administrator to replace less accurate ones. One can also imagine that these annotations could be submitted, in the context of a collaborative annotation process, to any RNA database administrator in order to share the annotations with the scientific community. 
For example, novel sequence and structural annotations of C/D box and H/ACA archaea sRNA could allow new, more accurate RFAM covariance models to be generated for future archaea genome annotations. The demo section of the LeARN home page provides access both to the raw results generated by the automatic process applied to the four thermococcale genomes ("demo server"), and to the database after the editing of the C/D box and H/ACA sRNA families ("annotation of C/D box and H/ACA families"). The LeARN package can be used for ncRNA annotation projects for any set of sequences, including complete genomes. It integrates tools and web interfaces covering the three layers of the ncRNA annotation process: a flexible detection and clustering pipeline, a RNAML database and a web interface to manage the expert annotation. The software has been designed to manage the complete ncRNA annotation process in the frame of whole-genome annotation projects and to fill the gap between existing detection software and public ncRNA repositories. Moreover, LeARN is also an extendible and light-weight package, which can be used as an annotation interface either by life scientists wanting to annotate a single ncRNA family or by bioinformaticians who need a simple interface to visually evaluate their results. It can also be used to build training datasets or for any other activity involving ncRNA annotation.
Project name: LeARN
Project home page: 
Operating system: UNIX
Programming language: Perl-OO, XSLT
Other requirements: Infernal, Vienna RNA package, Mfold, Rfam, Clustal, ncbi-blast
License: Free Software license CECILL2
Any restrictions to use by non-academics: None
XML: eXtensible Markup Language
XSLT: eXtensible Stylesheet Language Transformations
CN designed, implemented and tested the software; JG led the design. JG, CG and TS tested the programs and contributed to the preparation of the manuscript. 
All authors have read and approved the final manuscript. LeARN 1.0.1 tarball: tarball with LeARN source code. For installation instructions, see ."} {"text": "The Gene Ontology (GO) is a collaborative effort that provides structured vocabularies for annotating the molecular function, biological role, and cellular location of gene products in a highly systematic way and in a species-neutral manner, with the aim of unifying the representation of gene function across different organisms. Each contributing member of the GO Consortium independently associates GO terms to gene products from the organism(s) they are annotating. Here we introduce the Reference Genome project, which brings together those independent efforts into a unified framework based on the evolutionary relationships between genes in these different organisms. The Reference Genome project has two primary goals: to increase the depth and breadth of annotations for genes in each of the organisms in the project, and to create data sets and tools that enable other genome annotation efforts to infer GO annotations for homologous genes in their organisms. In addition, the project has several important incidental benefits, such as increasing annotation consistency across genome databases, and providing important improvements to the GO's logical structure and biological content. Biological research is increasingly dependent on the availability of well-structured representations of biological data with detailed, accurate descriptions provided by the curators of the data repositories. The Reference Genome project's goal is to provide comprehensive functional annotation for the genomes of human as well as eleven organisms that are important models in biomedical research. 
To achieve this, we have developed an approach that superposes experimentally-based annotations onto the leaves of phylogenetic trees; we then manually annotate the function of the common ancestors, predicated on the assumption that the ancestors possessed the experimentally determined functions that are held in common at these leaves, and that these functions are likely to be conserved in all other descendants of each family. The functional annotation of gene products, both proteins and RNAs, is a major endeavor that requires a judicious mix of manual analysis and computational tools. The manual aspect of this annotation task is carried out by curators, from the Latin curare, "to take care of". A curator in this context is a Ph.D.-trained professional life scientist whose task is to meaningfully integrate published, and in some cases unpublished, biological data into a database. The GO was developed within the community of the Model Organism Databases (MODs), whose goal is to annotate the genomes of organisms having an important impact on biomedical research. The annotations based on experimental data provide a solid, dependable substrate for downstream analyses to infer the functions of related gene products. High-quality manual annotation by experts is an absolute prerequisite for seeding this system and, other than the major MOD projects and large sequence database projects (such as UniProt and Reactome), very few research communities have the resources or trained GO curators to perform this labor-intensive task. Therefore, the functional annotation of non-manually curated genomes typically relies on automated methods that provide the core information for the transfer of annotations from related genes for which experimentally supported annotations are available. The eleven model organisms annotated alongside human are Arabidopsis thaliana, Caenorhabditis elegans, Danio rerio, Dictyostelium discoideum, Drosophila melanogaster, Escherichia coli, Gallus gallus, Mus musculus, Rattus norvegicus, Saccharomyces cerevisiae, and Schizosaccharomyces pombe. 
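The tree-based assumption described above (ancestors are annotated with the functions held in common at experimentally annotated leaves, and those functions are then inferred for uncharacterized descendants) can be sketched as a set intersection. Real Reference Genome curation is manual; this sketch only mirrors the underlying logic, and the gene names and annotation sets are hypothetical illustrations (GO:0006298 is mismatch repair, GO:0030983 is mismatched DNA binding).

```python
def ancestor_terms(leaf_annotations: dict) -> set:
    """GO terms held in common across the experimentally annotated leaves
    of a family; leaves with no experimental data are ignored."""
    annotated = [terms for terms in leaf_annotations.values() if terms]
    if not annotated:
        return set()
    common = set(annotated[0])
    for terms in annotated[1:]:
        common &= set(terms)
    return common

# Hypothetical MSH6 family: the common ancestral function can then be
# inferred for the leaf that has no experimental data yet.
leaves = {
    "human_MSH6": {"GO:0006298", "GO:0030983"},
    "yeast_MSH6": {"GO:0006298"},
    "fly_MSH6":   set(),  # not yet experimentally characterized
}
print(ancestor_terms(leaves))  # {'GO:0006298'}
```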
Collectively those twelve species are referred to as the "GO Reference Genomes". Each model organism has its own advantages for studying different aspects of gene function, ranging from basic metabolic reactions to cellular processes, development, physiology, behavior, and disease. The organisms selected to provide this gold-standard reference set have the following characteristics: they represent a wide range of the phylogenetic spectrum; they are the basis of a significant body of scientific literature; a reasonably sized community of researchers studies the organism; and the organism is an important experimental system for the study of human disease, or for economically important activities such as agriculture. Importantly, all of these organisms are supported by an established database that includes GO curators who have the expertise to annotate gene products in these genomes according to shared, rigorous standards set by the groups participating in the Reference Genome project (see below). The GO Reference Genome project is committed to providing comprehensive GO annotations for the human genome, as well as those of the eleven important model organisms listed above. Although the development of the GO has been a collaborative effort since its inception, each participating group has previously worked independently in assigning GO annotations. Thus, prior to this project, specific protocols for annotation varied greatly between the different databases. Variation in annotation results from different curator decisions as to which data are appropriate to annotate and which GO terms to employ. We expect these reference annotations to have two important applications. First, they will increase the quality of the annotations provided by the GO Consortium, with a focus on providing precise annotations for each gene and the broadest possible coverage of each genome. 
Second, the gold-standard annotation set will greatly accelerate the annotation of new genomes where extensive experimental data on gene function, or the resources and expertise to perform the annotations, are unavailable. There are two different aspects of comprehensive annotation: "breadth" and "depth". Depth refers to the amount of information about each gene that has been captured. For maximal depth, annotations should be as precise as possible; ideally, all experimentally determined information about the gene products from each of these organisms should be curated to the deepest level in the gene ontology graph. Breadth refers to the coverage of the genome, that is, the percentage of genes annotated. For maximal breadth, the annotations would ideally cover every gene product in a genome. From a production standpoint, these dual aspects imply a dependency; that is, we must carry out curation in two passes: first, literature-based annotation to capture all information based on experiments, followed by the inference of annotations to the homologous gene products that have not yet been experimentally characterized. Finally, it is important to distinguish genes for which the function is actually unknown from genes that simply have not yet been annotated. To this end, reviewed proteins for which there is no experimental data and that do not share significant homology with experimentally characterized proteins are annotated to the root term of each ontology: biological process (GO:0008150), molecular function (GO:0003674), and cellular component (GO:0005575). This procedure maximizes both the depth and breadth of annotation across all of the curated genomes. We refer to the annotations as 'comprehensive' rather than 'complete' because it is not always feasible to completely annotate every published paper for every gene with our resources. 
For genes with a large body of literature, the comprehensiveness of annotations is assessed by curators based on a recent review or text-mining applications. One major advantage of annotating several genomes concurrently is the ability to carry out parallel annotations on homologous genes. Annotating several genes in a single step improves annotation efficiency. Moreover, it improves the breadth of annotations by allowing easy access to the known functions of related genes. Finally, concurrent annotation of gene families across different databases promotes annotation consistency. The organisms represented in the GO Reference Genome project span well over 1 billion years of evolutionary divergence. The premise that underpins the comparative genomics approach is that homologous genes descended from a common ancestor often have related functions. This is not, of course, to deny that genes will diverge in function, but it is generally true that at least some aspects of function are conserved. For our purposes, a critical first step is the establishment of a standard approach to determining sets of homologous genes. Ideally, the evolutionary history of each gene in all organisms would be analyzed and stored in a single resource that could be used as the definitive reference for gene family relationships and homologous gene sets. However, generating this resource is a non-trivial problem, both theoretically, as just described, and practically. At present no single resource offers a fully satisfactory solution. Different resources exist that provide different results in terms of specificity and coverage, and they have different strengths and weaknesses (see "Data availability" below). One central confounding problem has been the lack of a "gold standard" protein set that would be used by all databases and homology prediction tools. 
Because the different homology prediction tools do not use the same protein sets as inputs, their results cannot be meaningfully compared. Moreover, the protein sets that are being annotated by the GO Consortium members may, and often do, differ from those used by the different homology prediction programs. The GO Consortium is now providing an index of protein sequence accession identifiers for each organism to groups who compute homology sets (see "Tree-based propagation of annotations to homologous genes" below). We are using trees generated by the PANTHER project (http://www.pantherdb.org/) based on our standardized protein-coding gene sets. The trees also include protein sequences from 34 other species to provide a more complete phylogenetic spectrum. The quality of the trees was assessed by comparing the trees to "ortholog clusters" generated by the OrthoMCL algorithm for the same protein sets. The agreement was very good overall: of the 412 OrthoMCL clusters covering the comprehensively annotated Reference Genome genes, 387 (94%) were consistent with the trees. Most of the disagreements involved a relatively distant evolutionary relationship that was difficult to resolve with certainty. Manual analysis of the trees is part of the curation process to ensure that any suspicious absence or presence of proteins in the trees is supported by the genome sequence and/or the multiple sequence alignments upon which the trees are determined. Having agreed to use standardized protein sequence datasets as inputs, we next considered the existing algorithmic approaches to the determination of homology that would best meet our objectives. We chose the phylogenetic tree-based approach because it is based on an explicit evolutionary model that can be computationally evaluated. 
Moreover, the trees are amenable to intuitive graphical output that facilitates the rapid identification of homology sets by curators. While at present the total number of gene products in any organism is imprecisely known, there are reasonable estimates available from the MODs for the numbers of genes encoding protein products in each genome, ranging from 4,389 to 27,029 in Arabidopsis thaliana. Even by initially concentrating solely on one canonical protein representing every gene in each genome, this strategy still presents a large and formidable target annotation list. The goal of the Reference Genome project is to provide constantly up-to-date annotations for all gene families; however, this work will take time. Nevertheless, it is clear that coordination of the Reference Genome project demands a coherent prioritization of targets for curation. Accordingly, Reference Genome curators are selecting targets using the following principles: genes whose products are highly conserved during evolution, e.g. the gyrase/topoisomerase II gene family conserved from bacteria to human; genes known to be implicated in human disease and their orthologs in other taxa, e.g. the MutS homolog gene family, which includes the gene MSH6, a DNA mismatch repair protein involved in a hereditary form of colorectal cancer in humans; genes whose products are involved in known biochemical and signaling pathways, e.g. the PYGB gene (a phosphorylase) that participates in glycogen degradation; and genes identified from recently published literature as having an important or new scientific impact, e.g.
POU5F1 (POU class 5 homeobox 1 gene), which is important for stem cell function. This promotes the comprehensive annotation of genes of high relevance to current research efforts, as well as the development of the ontology to fully support those annotations. Literature curation is done by the different groups using the same method: curators read the published literature about the gene they are annotating, capturing several key pieces of information: the organism being studied; the gene product to be annotated; the type of experiment performed; the GO term(s) that best describe the gene product's function/process/location; and an identifier as the source of the information (citation). For each gene that is part of a curation target set, curators review existing annotations as well as add new annotations based on more recent information. If there is no literature, then the gene is immediately considered completely annotated with respect to the available experimental data. For genes with little literature, the curator reviews all available papers, but for genes for which hundreds of papers are available this is impractical. In these cases, curators assess the comprehensiveness of curation based upon recent reviews or text-mining applications, and curate key primary publications accordingly. When this is complete, the gene is considered comprehensively annotated based on the information available in the biomedical literature. Genes that are concurrently annotated are periodically selected for annotation consistency checks among the different curation groups. Automated tests include verifying that older annotations lacking traceable evidence are replaced with annotations that adhere to the new standards, and verifying that outlier annotations, that is, those made only in one organism, are valid and not due to annotation errors.
The manual review uses a peer review system in which a curator evaluates the experimentally determined annotations provided by other curators for a selected gene family. The curation consistency review process often identifies problems with the interpretation of particular GO terms. To ensure proper use of these terms in the future, they are flagged within the GO with a comment that a curator must take extra care when using them. For example, certain concepts, such as "development", "differentiation" and "morphogenesis", are used with various, overlapping meanings in the literature. In GO they are given distinct definitions, and we strive to ascertain that all annotations use terms uniformly as defined by the GO. The consistency review also identifies GO annotations that may be incorrect or that lack sufficient evidence. Annotations are propagated by homology using the gene trees described under "Generating sets of homologous genes" above. The homology inference process has two steps: (1) inferring annotations of an ancestral gene, based on the experimental annotations of its modern descendants, and (2) propagating those ancestral annotations to other descendants by inheritance. For the Reference Genome project, both of these steps are documented by an evidence trail that allows GO users to evaluate the inferences that were made. In the first step, a curator annotates an ancestral node in the phylogenetic tree, based on one or more experimentally annotated extant sequences. To document this step, a tree node (with a stable identifier) is associated with both a GO term identifier and evidence for the association. In the second step, this annotation is propagated to all its descendants (by assuming inheritance as the norm), unless the curator explicitly annotates a descendant as having lost the annotation and provides a citation for this statement. To document this step, a modern-day sequence is associated with both a GO term identifier and evidence for the association.
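The two documented steps amount to a tree traversal: carry each ancestral annotation down to the leaves (the modern-day sequences) unless a curated loss blocks it along the way. A minimal sketch of that logic follows; the tree representation, node names, and annotations here are illustrative toys, not the GOC's actual data model.

```python
def propagate(tree, ancestral_annotations, lost):
    """Propagate GO terms annotated on ancestral nodes to all descendant
    leaves, unless a curator has explicitly recorded the annotation as
    lost at some node on the path to that leaf.

    tree: dict mapping node -> list of children (leaves have no entry)
    ancestral_annotations: dict mapping node -> set of GO term ids
    lost: set of (node, go_term) pairs recording curated losses
    """
    inherited = {}

    def walk(node, carried):
        # Pick up any annotations made directly on this node...
        carried = carried | ancestral_annotations.get(node, set())
        # ...and drop any the curator marked as lost here.
        carried = {t for t in carried if (node, t) not in lost}
        children = tree.get(node, [])
        if not children:  # leaf = modern-day sequence
            inherited[node] = carried
        for child in children:
            walk(child, carried)

    all_children = {c for kids in tree.values() for c in kids}
    for root in set(tree) - all_children:
        walk(root, set())
    return inherited
```

For a toy tree in which ancestral node "anc" (annotated with GO:0008217) has children A and B, and the curator has recorded a loss at leaf B2 beneath B, the term is inherited by leaves A and B1 but not by B2.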
The two documented steps allow each homology annotation to be traced through to its ancestral node (exactly what inference was made), and then to the modern-day sequences that provide experimental evidence for the annotation. This is not an automatic process; rather, a curator reviews each inferred annotation with care, since the function of a gene can diverge during evolution, particularly after gene duplication events that may free one of the duplicated copies from selection constraints and allow the evolution of new functionality. The GO Reference Genome project infers functions by homology using a tree-based process that has been previously described, and an illustration of this process is shown in the accompanying figure. Gene products selected for concurrent annotation in the course of the Reference Genome project have improved the breadth and depth of annotation coverage. As of November 2008, we have annotated approximately 4,000 gene products. These genes have a higher percentage of annotations derived from published experimental research. Moreover, the annotation of these genes is significantly more detailed relative to when we started this project. Initially, 34% of the 4,000 genes had annotations supported by experimental data; now 71% do, a 2-fold increase, while a randomly selected sample of the same number of genes has only 52%, a 1.5-fold increase. We might expect the Reference Genome project to yield annotations to more specific terms. Given some specificity metric for a term, we can calculate the average specificity of terms used in annotations for Reference Genome genes, compare this against the average specificity of annotations as a whole, and observe whether there has been an overall increase in specificity. Unfortunately, there is no single perfect measure of specificity. The depth of a term in the graph structure is often a poor proxy, as it is open to ontology structure bias.
In this paper we use the Shannon Information Content (IC) as a proxy for the specificity of a term. The IC of a term reflects the frequency of annotations to that term (or to descendants of that term), with frequently used terms yielding a lower score than infrequently used terms. The IC of a term t is calculated as IC(t) = -log2 p(t), where p(t) is the fraction of all annotations made to t or to its descendants. We can measure the increase in IC on a gene set over time by measuring the average IC of the terms used to annotate the genes in that set before and after Reference Genome curation. Genes can have multiple annotations in each of the three branches of the GO; here we take the maximum IC within each branch. We then calculate the average of this maximum IC for all genes in a set to get a measure of the annotation specificity for that set. We compared this number for two sets of genes: the group of all annotated genes for all 12 Reference Genome species, and the subset corresponding to those genes that have been selected for thorough annotation. We averaged the maximum IC values for both sets of genes before they were selected for annotation by the Reference Genome project (July 2006) and again with the most recent set of annotations (December 2008); the results are shown in the accompanying table. Another measure of the depth and breadth of GO annotations is the range of the ontology graph they cover. The graph coverage of a gene is the size of the set of terms used to annotate the gene, plus all ancestors of those terms. In July 2006, the average graph coverage per Reference Genome gene in a reference species was 34.7, versus an average of 22.9 over all genes in all 12 species. In December 2008 this increased to 64.0 versus 27.0, showing that the increase in coverage for genes selected for the reference set is proportionally higher, 1.84-fold versus 1.18-fold. The collaborative annotation of a group of similar gene products has also proven to be useful for the development of GO itself.
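As a sketch, the IC of a term can be computed from annotation counts summed over the term's subtree. The data structures below are illustrative toys rather than the GO database schema, and log base 2 is assumed (so IC is measured in bits).

```python
import math

def information_content(term, direct_counts, children):
    """Shannon IC of a term: IC(t) = -log2 p(t), where p(t) is the
    fraction of all annotations made to t or to any of its descendants.

    direct_counts: dict mapping term -> number of direct annotations
    children: dict mapping term -> child terms (absent for leaves)
    """
    def subtree(t):
        # Annotations to t itself plus everything annotated below it.
        return direct_counts.get(t, 0) + sum(subtree(c) for c in children.get(t, ()))

    total = sum(direct_counts.values())
    return -math.log2(subtree(term) / total)
```

With 4 annotations in total, a root term covering all of them has IC 0, while a term covering a single annotation has IC -log2(1/4) = 2 bits, matching the intuition that rarely used terms are more specific.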
For example, as a direct consequence of the Reference Genome project, 223 ontology changes or term modifications were made. Examples of requested new terms include "regulation of NAD(P)H oxidase activity", "DNA 5'-adenosine monophosphate hydrolase activity", "neurofilament bundle assembly", and "quinolinate metabolic process". We have also enhanced the ontology by adding synonyms, improving definitions, and correcting inconsistencies. Examples of terms where definitions and inconsistencies have been corrected include "electron transport" (replaced by two terms: "electron transport chain" and "oxidation reduction") and "secretory pathway" (replaced by two terms: "exocytosis" and "vesicle-mediated transport"). GO annotations may be viewed using AmiGO, the GOC browser (http://amigo.geneontology.org/); annotation conventions are documented at http://geneontology.org/GO.annotation.conventions.shtml. The genes that have been targeted by the Reference Genome project have significantly improved annotation specificity as compared to their previous annotations, and the number of genes annotated by inference through homology has also increased. This increased breadth and depth of genome coverage in the annotations is one of the major goals of the project. An additional benefit has been the improvements to the GO itself, which will consequently improve the accuracy of inferences based on these annotations. Genomes that are fully and reliably functionally annotated empower scientific research, as they are essential for the analysis of many high-throughput methodologies and for the automated inferential annotation of other genomes, a major motivation of the Reference Genome project's work.
We encourage users to communicate with the GO Consortium (send e-mail to gohelp@geneontology.org) with questions or suggestions for improvements to better achieve this aim. The aim of the Reference Genome project is to provide a source of comprehensive and reliable GO annotations for twelve key genomes based upon rigorous standards. This endeavor faces many difficult challenges, such as: the determination and provision of reference protein sets for each genome; the establishment of gene families for curation; the application of consistent best practices for annotation; and the development of methodologies for evaluating progress towards our goal. Although this is a laborious effort, steady progress is being made in developing this resource for the research community. This initiative has propelled the GOC into the provision of standardized protein sets for these genomes, which we expect to be of broad utility beyond the Reference Genome project. By engaging curators from across the MODs in joint discussions we are observing improvements in curation consistency and refinement of the GOC best practices guidelines (see http://geneontology.org/GO.refgenome.shtml). Annotations made by the databases participating in the Reference Genome project are available from the GOC website in gene_association file format (http://geneontology.org/GO.current.annotations.shtml). The protein sequence datasets are available to the community as a standardized resource from http://geneontology.org/gp2protein/, and as FASTA sequence files here: ftp://ftp.pantherdb.org/genome/pthr7.0. These sets provide a representative protein sequence for each protein-coding gene in each genome, cross-referenced to UniProt whenever possible, but augmented with RefSeq and Ensembl protein identifiers as well.
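The gene_association (GAF) files mentioned above are tab-delimited text in which comment lines begin with "!". A minimal reader that extracts a gene product identifier, GO term, evidence code, and reference might look like the following; the standard GAF column order is assumed, and the sample row in the usage note is hypothetical data, not a real annotation.

```python
import csv
import io

def read_gene_associations(text):
    """Minimal reader for GO gene_association (GAF) files.

    Full GAF rows have 15 tab-separated columns; this sketch pulls out
    only the four fields discussed in the text: gene product ID, GO term
    ID, evidence code, and the supporting reference.
    """
    for row in csv.reader(io.StringIO(text), delimiter="\t"):
        if not row or row[0].startswith("!"):
            continue  # skip blank lines and '!' comment/header lines
        db, db_object_id = row[0], row[1]
        go_id = row[4]       # column 5: GO ID
        reference = row[5]   # column 6: DB:Reference
        evidence = row[6]    # column 7: evidence code
        yield f"{db}:{db_object_id}", go_id, evidence, reference
```

For example, a line such as `MGI<TAB>MGI:88064<TAB>Adh1<TAB><TAB>GO:0004022<TAB>PMID:1234<TAB>IDA` (identifiers invented for illustration) would yield `("MGI:MGI:88064", "GO:0004022", "IDA", "PMID:1234")`.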
The exact queries used to gather statistics for the annotation improvement reports can be found at: http://geneontology.org/GO.database.schema-with-views.shtml. Access to all GOC software and data is free and without constraints of any kind. An overview of the project, as well as links to all resources described below, can be found on the GOC website.
Individual researchers are struggling to keep up with the accelerating emergence of high-throughput biological data, and to extract information that relates to their specific questions. Integration of accumulated evidence should permit researchers to form fewer - and more accurate - hypotheses for further study through experimentation. Here a method previously used to predict Gene Ontology (GO) terms for Saccharomyces cerevisiae is applied to predict GO terms and phenotypes for 21,603 Mus musculus genes, using a diverse collection of integrated data sources. This combined 'guilt-by-profiling' and 'guilt-by-association' approach optimizes the combination of two inference methodologies. Predictions at all levels of confidence are evaluated by examining genes not used in training, and top predictions are examined manually using available literature and knowledge base resources. We assigned a confidence score to each gene/term combination. The results provided high prediction performance, with nearly every GO term achieving greater than 40% precision at 1% recall. Among the 36 novel predictions for GO terms and 40 for phenotypes that were studied manually, >80% and >40%, respectively, were identified as accurate. We also illustrate that a combination of 'guilt-by-profiling' and 'guilt-by-association' outperforms either approach alone in their application to M. musculus. With the ever-increasing collection of high-throughput experimental techniques, data acquisition at the genomic scale has never occurred more rapidly.
As the raw data continue to amass, each biologist is faced with the difficult challenge of integrating and interpreting the data that are most relevant to each specific research question. Comprehensive annotation systems are thus of paramount importance, as evidenced by the integration of a large number of data types in many model organism databases. Recognizing this problem, curation systems are becoming increasingly reliant on computational approaches to assist in the annotation process. Sequence similarity (both at the nucleotide and peptide levels) has traditionally been the primary source of automated annotation. Particular motifs found within a sequence can be used to infer a gene product's molecular activity, with increasing work being done to identify the domains that facilitate protein interactions. A similar community effort has now been applied to Mus musculus: the MouseFunc project. The 2,938 GO terms currently (as of November 2006) annotated with a number of genes in the range were selected for training and prediction ('[a, b]' indicates the range from a to b, inclusive of a and b). We wished to evaluate separately the performance for terms of different types and levels of generality. To this end, terms were divided into 12 disjoint sets, each representing both a single branch in the GO and a range in current annotation count, that is, the number of genes annotated with the term. The number of terms in each set is given in the accompanying table. Performance is measured by the area under the receiver operating characteristic curve (AUC-ROC) and by precision at specified levels of recall. We use the same general approach here, but include an additional measure: 'mean average precision' (MAP). An ROC curve indicates the relationship between true positives and false positives as the score threshold for calling a prediction is varied. Also used are points along the precision-recall curve (as the threshold varies), with MAP being the mean precision obtained at all distinct recall levels.
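As a sketch, the average precision for one term can be computed by ranking predictions by decreasing score and averaging the precision observed at each true positive; MAP is then the mean of this quantity over terms. This simplified version breaks score ties arbitrarily rather than averaging over permutations of the response variable as the text specifies.

```python
def average_precision(scores, labels):
    """Average precision for a single term.

    scores: predicted confidence per gene
    labels: 1 if the gene truly carries the annotation, else 0
    Returns the mean of the precisions at each distinct recall level,
    i.e. at each true positive encountered in the ranking.
    """
    ranked = sorted(zip(scores, labels), key=lambda pair: -pair[0])
    hits, precisions = 0, []
    for rank, (_, label) in enumerate(ranked, start=1):
        if label:
            hits += 1
            precisions.append(hits / rank)  # precision at this recall level
    return sum(precisions) / len(precisions) if precisions else 0.0
```

For scores [0.9, 0.8, 0.7, 0.6] with true labels [1, 0, 1, 0], the precisions at the two positives are 1/1 and 2/3, giving an average precision of 5/6.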
In the case of a tie among scores, the average precision over all permutations of the response variable (the GO or phenotype term annotations) is taken as the precision for that level of recall. The MAP has been identified as a good alternative to other precision-recall based statistics. Most notably, it has been deemed more useful than the area under the precision-recall curve (AUC-PR) as a measure of comparison between classifiers. For GO term prediction, mean AUC-ROC for the combined classifier exceeds 0.8 for 9 out of 12 categories (with 0.5 AUC-ROC being expected for random predictions). At 1% recall, the precision ranges from 41% to 92% across the 12 categories, with MF and CC terms being easier to predict than BP terms, and with the difficulty of prediction increasing as the existing annotation count for these terms decreases. The performance levels for each category as the threshold varies can be seen in the accompanying figures. Phenotype annotations prove systematically more difficult to classify than GO term annotations. The AUC-ROC for each pooled phenotype category exceeds 0.7, with precision at 1% recall ranging between 10% and 44% (again with the observation that terms with lower annotation counts have lower predictive accuracy). Performance characteristics along the threshold range, together with cross-validation and out-of-bag performance, are depicted in the accompanying figures. To gain intuition from specific examples, we examined some of the most interesting novel predictions within the literature. A gene/term prediction is novel and was considered interesting if it was not currently annotated in the reference database and if there did not exist any current annotation involving the gene and any non-root ancestor of the term. Otherwise, we consider such predictions to be 'refinements' of existing annotations.
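The novelty criterion just described can be sketched as a check of a gene's existing annotations against the predicted term and its non-root ancestors. The parent map and identifiers in the usage note are toy examples, not real GO structure.

```python
def is_novel(gene, term, annotations, parents):
    """A prediction is 'novel' if the gene is not already annotated with
    the term and no existing annotation links the gene to any non-root
    ancestor of the term; otherwise it is a 'refinement'.

    annotations: dict mapping gene -> set of annotated GO terms
    parents: dict mapping each GO term -> its parent terms (roots absent)
    """
    def all_ancestors(t):
        out = set()
        for p in parents.get(t, ()):
            out.add(p)
            out |= all_ancestors(p)
        return out

    # A root has no parents of its own; exclude roots from the check.
    non_root = {a for a in all_ancestors(term) if parents.get(a)}
    existing = annotations.get(gene, set())
    return term not in existing and not (existing & non_root)
```

Using the Adra2 example from the text: a gene already annotated with 'blood pressure regulation' makes a prediction of its child 'baroreceptor feedback regulation of blood pressure' a refinement, not a novel prediction.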
Within each category, the top interesting gene/term combinations are scanned in decreasing order of confidence and - to avoid over-weighting particular genes or terms - a further filtering step is employed to limit each gene and term to a single appearance in the list to be followed up in depth. This filter is described in detail in the Materials and methods (see below). Predicted GO annotations were reviewed by biologists within the MGI group who are experienced with literature curation (DPH and JAB). Three predictions were reviewed for each of the 12 GO categories based on novelty with respect to existing annotations. Each of these 36 predictions was placed in one of the following four classes: class (i), available experimental literature supports the predicted annotation (21 predictions); class (ii), prediction likely to be correct but supported only by indirect evidence in the literature (5 predictions); class (iii), veracity of prediction is unclear (4 predictions); and class (iv), the prediction is incorrect or unlikely to be correct based on current knowledge (6 predictions). We determined that 19 predicted annotations were class (i), that is, they would qualify for annotation by current curation standards but have not yet been annotated. A remaining two had been annotated since our training data were collected, and were therefore correct by definition. Excluding the 4 'unclear' evaluations, this leads to 26/32 ≈ 81% accuracy; these predictions, ratings, and supporting evidence are listed in the additional data files. For example, Adra2, predicted for annotation with 'blood pressure regulation' (GO:0008217), could be annotated to 'baroreceptor feedback regulation of blood pressure' (GO:0001978), which is a child of 'blood pressure regulation'. Of the 40 phenotype predictions examined, 13 were judged likely to be correct (classes [i] and [ii]), 11 were considered 'unclear' (class [iii]), and 16 were found 'unlikely' or 'very unlikely' (class [iv]). Excluding 'unclear' evaluations, the success rate was 13/29 ≈ 45%. For Nfatc2, for example, the absence of an embryonic heart phenotype had been reported.
Training data for the machine learning algorithms were organized generally as described previously, with details of the M. musculus data manipulation given below. The Pearson product-moment correlation coefficient of expression data was computed for each gene-pair (for each of the three data sets). These continuous coefficients were then binned into five groupings, E0.5, E0.6, E0.7, E0.8, and E0.9, each group defined as E_c = {(a, b) : ρ1(a, b) ≥ c}, where a and b are genes and ρ1 is the correlation coefficient function. This resulted in 15 binary attributes describing each gene-pair. The protein-protein interaction data were already provided in gene-pair format, with each positive edge resulting in a positive gene-pair for a single binary matrix column representation. Protein domain pattern data were converted to gene-pair format using the Jaccard similarity coefficient. Let A and B represent the sets of annotated domains belonging to (respectively) genes a and b. The Jaccard similarity coefficient is then defined as ρ2(a, b) = |A ∩ B| / |A ∪ B|. Those gene-pairs (a, b) having ρ2 ≥ 0.9 were assigned membership in a single set. Each domain dataset (Pfam and Interpro) was processed using this method, resulting in two binary variables. The orthology data were treated similarly, resulting in 6 additional binary variables (3 each for Biomart and Inparanoid data). Thus, a total of 26 distinct binary variables describing each gene-pair were used by the functional linkage classifier. For phenotype prediction, eight additional variables were included representing co-annotation of terms within the MF and CC annotation sets, leading to 34 binary variables. We excluded terms within the BP branch to avoid potential circularity, because GO BP terms are often tightly related to phenotypes. The data were then organized into a matrix with genes as rows and the 34 predictive variables as columns. For the random forest base classifiers, binary gene-centric data were represented in binary matrix form.
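The two gene-pair features just described can be sketched directly. The threshold semantics of the E_c bins are assumed here to be "correlation at or above c"; the domain identifiers in the usage note are placeholders.

```python
def expression_bins(rho, thresholds=(0.5, 0.6, 0.7, 0.8, 0.9)):
    """Bin a gene-pair's Pearson correlation rho into the binary
    indicators E0.5..E0.9: E_c is 1 when rho >= c (assumed semantics)."""
    return {f"E{c}": int(rho >= c) for c in thresholds}

def jaccard(domains_a, domains_b):
    """Jaccard similarity of two genes' annotated-domain sets:
    |A ∩ B| / |A ∪ B| (defined as 0 when both sets are empty)."""
    a, b = set(domains_a), set(domains_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0
```

A correlation of 0.75 sets E0.5, E0.6, and E0.7 but not E0.8 or E0.9; two genes sharing one of three distinct domains have Jaccard similarity 1/3, well below the 0.9 cutoff used above.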
When predicting GO terms, the training matrix was composed of the protein domain data, the phenotype data (top level only), the phylogenetic profile data, and the disease data. For phenotype term prediction, the training matrix consisted of the same data used for GO prediction, but without the phenotype data. To generate functional linkage graphs, a single probabilistic decision tree was built as previously described. Random forests are used as previously described, with v/20 variables sampled at a given node, where v is the number of remaining variables to be sampled. For the phenotype predictions, 200 trees/forest and the same v/20 were used as parameters. Each term corresponds to a separate random forest model. The two base classifiers for each GO or phenotype term were combined via a logistic model, as previously described; the α parameter corresponding to each category is shown in the accompanying table. The combined scores go through a final calibration so that they more closely approximate posterior probabilities. The calibration method developed for this study was also applied in the companion paper describing the MouseFunc project. For each term t, we convert gene i's score s_i to a new calibrated score, where L is a free parameter chosen such that the sum of the calibrated scores over all genes matches count(t), the number of annotations for term t. Abbreviations: AUC-PR, area under the precision-recall curve; AUC-ROC, area under the ROC curve; BP, biological process; CC, cellular component; GO, Gene Ontology; MAP, mean average precision; MF, molecular function; MGI, Mouse Genome Informatics; OMIM, Online Mendelian Inheritance in Man; ROC, receiver operator characteristic; SAGE, serial analysis of gene expression. The authors declare that they have no competing interests. FR conceived the study, and MT, WT and FR conceived of the methods. MT and WT performed all code construction and program optimization. FG aided in data pre-processing and clustering and constructed the web interface.
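The calibration step can be illustrated with a bisection search for the free parameter L. The power-law form s → s^L used below is an assumption made only for illustration (the published transform is described in the companion paper); what the sketch shows is how a single free parameter can be fit so that the calibrated scores sum to count(t).

```python
def calibrate(scores, target_count, lo=0.01, hi=100.0, iters=60):
    """Fit a single exponent L so that sum(s ** L) equals target_count,
    then return the calibrated scores.

    Assumes all scores lie in (0, 1), so sum(s ** L) decreases
    monotonically as L grows and bisection applies. The power-law form
    is an illustrative assumption, not the published transform.
    """
    def total(L):
        return sum(s ** L for s in scores)

    for _ in range(iters):
        mid = (lo + hi) / 2
        if total(mid) > target_count:
            lo = mid  # calibrated scores still sum too high: raise L
        else:
            hi = mid
    L = (lo + hi) / 2
    return [s ** L for s in scores]
```

After calibration the scores sum to the term's annotation count, so they behave more like posterior probabilities whose expected number of positives matches the observed one.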
DH performed literature evaluations of predictions with guidance from JB. MT and FR drafted the manuscript. All authors read and approved the manuscript. The following additional data are available with the online version of this paper: an additional data file containing predictions, ratings, and evidence supporting novel GO term predictions, and an additional data file containing phenotype term predictions and supporting evidence.
To address the challenges of information integration and retrieval, the computational genomics community increasingly has come to rely on the methodology of creating annotations of scientific literature using terms from controlled structured vocabularies such as the Gene Ontology (GO). Here we address the question of what such annotations signify and of how they are created by working biologists. Our goal is to promote a better understanding of how the results of experiments are captured in annotations, in the hope that this will lead both to better representations of biological reality through annotation and ontology development and to more informed use of GO resources by experimental scientists. The PubMed literature database contains over 15 million citations, and it is beyond the ability of anyone to comprehend information in such amounts without computational help. One avenue to which bioinformaticians have turned is the discipline of ontology, which allows experimental data to be stored in such a way that they constitute a formal, structured representation of the reality captured by the underlying biological science. An ontology of a given domain represents types and the relations between them, and is designed to support computational reasoning about the instances of these types. From the perspective of the biologist, the development of bio-ontologies has enabled and facilitated the analysis of very large datasets.
This utility comes not from the ontologies per se, but from the use to which they are put during the curation process that results in 'annotations', which is the principal use of an ontology such as the GO. To help in understanding this work, we provide a glossary of the terms that are most important to our discussion. An annotation is the statement of a connection between a type of gene product and the types designated by terms in an ontology such as the GO. This statement is created on the basis of observations of the instances of such types made in experiments and of the inferences drawn from such observations. For present purposes we are interested in the annotations prepared by model organism databases to a type called 'gene', a term which is seen as encompassing all gene-product types. For the purpose of this discussion, we do not need to address the distinction between gene and gene product. An instance is a particular entity in spatio-temporal reality, which instantiates a type. In the cases discussed here, the instances would be actual molecules or cellular components that can be physically identified or isolated, or associated biological processes that can be physically observed. A type is a general kind instantiated by an open-ended totality of instances that share certain qualities and propensities in common. For example, the type nucleus, whose instances are the membrane-bound organelles containing the genetic material present in instances of the type eukaryotic cell. A level of granularity is a collection of instances (and of corresponding types) characterized by the fact that they form units ('grains'), such as molecules, cells, and organisms, in the organization of biological reality.
Successive levels of granularity form a hierarchy by virtue of the fact that grains at smaller scales are parts of grains at successively larger scales. A gene product instance is a molecule generated by the expression of a nucleic acid sequence that plays some role in the biology of an organism. For example, an instance of the Shh gene product would be a molecule of the protein produced by the Shh gene. A molecular function instance is the enduring potential of a gene product instance to perform actions, such as catalysis or binding, on the molecular level of granularity. A molecule of the Adh1 gene product sitting in a test tube has the potential to catalyze the reaction that converts an alcohol into an aldehyde or a ketone. It is assumed that in the correct context, this catalysis event would occur. The potential of this molecule describes its molecular function. A biological process instance (also called an "occurrence") is a change or complex of changes, on the level of granularity of the cell or organism, that is mediated by one or more gene products. For example, the development of an arm in a given embryo would be an instance of the biological process limb development. A cellular component instance is a part of a cell or its extracellular environment where a gene product may be located.
For example, a cellular component instance intrinsic to internal side of plasma membrane is that part of a specific cell that comprises the lipid bilayer of the plasma membrane and the cytoplasmic area adjacent to the internal lipid layer where a gene product would project. For each of the instance terms in the above, there is a corresponding type term defined in the obvious way; thus a molecular function type is a type of molecular function instance, and so on. Curation is the creation of annotations on the basis of the data (for example, data about gene products) contained in experimental reports, primarily as contained in the scientific literature published on the basis of the observation of corresponding instances. An evidence code is a three-letter designation used by curators during the annotation process that describes the type of experimental support linking gene product types with types from the GO Molecular Function, Cellular Component and Biological Process ontologies. For example, the evidence code IDA (Inferred from Direct Assay) is used when an experimenter has devised an assay that measures the execution of a given molecular function and the experimental results show that instances of the gene product serve as agents in such executions. An assay is designed to detect, either directly or indirectly, those occurrences that are the executions of a given molecular function type; thereby the assay identifies instances of that function type. The code IGI (Inferred from Genetic Interaction) is used when an inference is drawn, from genetic experiments using instances of more than one gene product type, to the effect that molecules of one of these types are responsible for the execution of a specified molecular function. Two further evidence codes describing experimental support are IMP (Inferred from Mutant Phenotype) and IPI (Inferred from Physical Interaction). The consortium uses other evidence codes to describe inferences used in annotations that are not supported by direct experimental evidence, but these will not be considered in this discussion.
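The glossary's central object, an annotation linking a gene product to a GO term with an evidence code and a supporting reference, can be modeled minimally as a record. The field names and the set of experimental evidence codes below follow the text; the example identifiers in the usage note are invented for illustration.

```python
from dataclasses import dataclass

# Evidence codes discussed in the text as denoting experimental support.
EXPERIMENTAL_CODES = {"IDA", "IGI", "IMP", "IPI"}

@dataclass(frozen=True)
class Annotation:
    gene_product_id: str  # public database ID for the gene or gene product
    go_id: str            # GO:ID of the term being associated
    evidence_code: str    # e.g. IDA = Inferred from Direct Assay
    reference: str        # citation supporting the association

    def is_experimental(self):
        """True when the annotation rests on direct experimental evidence."""
        return self.evidence_code in EXPERIMENTAL_CODES
```

Such a record makes it easy to filter annotation sets by evidence type, for example keeping only experimentally supported annotations and discarding those inferred without direct experimental evidence.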
Here we give examples of the process of annotation supported by experimental evidence using the IDA and IMP evidence codes. We use these examples to illustrate how an annotation helps us understand the underlying biological methods that were used to support the inferences between the types that the annotation represents. With this knowledge in hand, we can then use this information to generate new inferences or to filter the information for specific needs. A GO annotation represents a link between a gene product type and a molecular function, biological process, or cellular component type. Formally, a GO annotation consists of a row of 15 columns. For the purpose of this discussion, there are 4 primary fields: i) the public database ID for the gene or gene product being annotated; ii) the GO:ID for the ontology term being associated with the gene product; iii) an evidence code; and iv) the reference/citation for the source of the information that supports the particular annotation. Terms from other ontologies (6), including other OBO Foundry ontologies (7) and the mouse anatomical dictionary (8), are used in conjunction with GO terms in the annotations. As a result, the annotation can more accurately describe the biological reality that needs to be captured. Consider a specific molecule m: it is an instance of a molecule type M (represented for example in the UniProt database), and its propensity to act in a certain way is an instance of the molecular function type F (represented by a corresponding GO term). So, a molecule of the gene product type Adh1, alcohol dehydrogenase 1 (class I), has as its function an instance of the molecular function type alcohol dehydrogenase activity. This means that such a molecule has the potential to execute this function in a given context.
The term \u2018activity\u2019, in this sense, is meant as it is used in a biochemical context, and is more appropriately read as meaning \u2018potential activity\u2019. Note that although the same string, \u201calcohol dehydrogenase\u201d, is used both in the gene name and in the molecular function, the string itself refers to different entities: in the former to the molecule type; in the latter to the type of function that molecule has the propensity to execute. This ambiguity is rooted in the tendency to name molecules based on the functions they execute, and it is important to understand this distinction since the name of a molecule and the molecular function to which the molecule is attributed may not necessarily agree, for instance because the molecule may execute multiple functions. In the simplest biological situation, molecules of a given type are associated with a single molecular function type. If we say that instances of a given gene product type have a potential to execute a given function, this does not mean that every instance of this type will in fact execute this function. Thus molecules of the mouse gene product type Zp2 are found in the oocyte and have the propensity to bind molecules of the gene product type Acr during fertilization. The experimental evidence used to test whether a given molecular function type F exists comes in the form of an \u2018assay\u2019 for the execution of that function type in molecules of some specific type M. If instances of F are identified in such an assay, this justifies a corresponding molecular function annotation asserting an association between M and F. As an example, Figure shows an assay for retinol dehydrogenase activity taken from a study by Zhang et al. The type retinol dehydrogenase activity is defined in the molecular function ontology by the reaction: retinol + NAD+ \u2192 retinal + NADH + H+. Instances of gene product molecules annotated to this term have the potential to execute this catalytic activity.
In this experiment, a cell protein extract was incubated with two substrates, all-trans-retinol (open circles) or 9-cis-retinol (filled circles), and the cofactor NAD+ for 10 minutes, and the amount of retinal generated was measured. The graph shows the rate of accumulation of product with respect to the concentration of substrate (retinoid) used. The results show that the reaction defined by the GO molecular function type retinol dehydrogenase activity has indeed been instantiated: the execution of this function has occurred. The observed occurrences of retinol being converted to retinal are evidence for the existence of instances of this molecular function type. In this experiment, the instances of the function type are identified through observation of actual executions. We assert that some molecules in this extract have molecular functions of type retinol dehydrogenase activity because occurrences of executions of instances of this type have been directly measured. Gene product types are also associated with biological processes, through molecular functions whose execution contributes to the occurrence of a biological process of a given type. Inferences about such type-type relations can be made because experiments are designed to test what transpires when specified biological conditions are satisfied in typical circumstances, that is, circumstances in which, as a result of the efforts of the experimenter, disturbing events do not interfere. Experiments are designed to be reproducible and predictive, describing the instances that one would expect to find in biological systems meeting the defined conditions.
If future experiments show that preceding experiments did not describe the intended typical situation, then the conclusions from the preceding experiments are questioned and may be reanalyzed and reinterpreted, or even rejected entirely, and the corresponding annotations then need to be amended accordingly. Annotations in this way sometimes point to errors in the type-type relationships described in the ontology. An example is the recent removal of the type serotonin secretion as an is_a child of neurotransmitter secretion from the GO Biological Process ontology. This modification was made as a result of an annotation from a paper showing that serotonin can be secreted by cells of the immune system where it does not act as a neurotransmitter. Associations between gene products and biological processes, too, can be detected experimentally. When instances of a biological process type P are detected, either by direct observation or by experimental assay, as being associated with instances of a given gene product type M, then this justifies the assertion of that sort of association between M and P, which is called a biological process annotation. GO defines heart development as: \u2018the process whose specific outcome is the progression of the heart over time, from its formation to the mature structure. The heart is a hollow, muscular organ, which, by contracting rhythmically, keeps up the circulation of the blood.\u2019 For those species of organisms where the tools of genetic study can be successfully applied, the association of gene product types with biological process types is usually achieved through the study of the perturbations of biological processes following genetic mutation. Curators use the IMP evidence code for these annotations. In the case of a study on the effects of a mutation of the Shh gene on mouse heart development, an MGI curator has made an annotation linking heart development and the Shh gene using the IMP evidence code (Fig.).
Based on the mutational study reported in Washington-Smoak et al., the annotation links the Shh gene with a molecular function whose execution contributes to an occurrence of the biological process heart development. We know that the biological process heart development exists because we observe it in the normal animal. We know that a molecule of SHH contributes to this process because when we take away all instances of the gene product of the Shh gene in an animal, the process of heart development is disturbed. The annotation thus affirms that a molecule of SHH protein has the potential to execute a molecular function that contributes to an instance of the type heart development in the Biological Process ontology. We also generalize that the execution of the molecular function of a molecule of SHH in a given mouse will in some way contribute to the development of that mouse's heart. However, the results of any phenotypic assay are limited to the resolution of the phenotype itself. In the experiment described above, we have validated the biological process, but cannot make any direct inferences about the nature of the function executed. It is for this and other practical reasons that the molecular function and biological process ontologies were developed independently. In a large majority of cases, annotations linking gene product with cellular component types are made on the basis of a direct observation of an instance of the cellular component in a microscope. For example, fluorescent labeling of the product of the Atp1a1 gene is used to mark the location of instances of such products in preimplantation mouse embryos (Figure), which justifies annotation of the ATP1A1 gene product to the GO cellular component plasma membrane (Fig.). The development of an ontology for a given domain reflects a shared understanding of this domain on the part of domain scientists.
This understanding, for biological systems, is the result of the accumulation of experimental results reflecting that iterative process of hypothesis generation and experimental testing for falsification which is the scientific method. The process of annotation brings new experimental results into relationship with the existing scientific knowledge that is captured in the ontology. There will necessarily be times when new results yield conflicts with the current version of the ontology. One of the strengths of the GO development paradigm is that development of the GO has been a task performed by biologist-curators who are experts in understanding specific experimental systems; as a result, the GO is continually being updated in response to new information. GO curators regularly request that new terms be added to the GO or suggest rearrangements to the GO structure, and the GO has an ontology development pipeline that addresses not only these requests but also submissions coming in from external users. By coordinating the development of the ontology with the creation of annotations rooted in the experimental literature, the validity of the types and relationships in the ontology is continually checked against the real-world instances observed in experiments. GO curators refer to this as annotation-driven ontology development. In addition, the GO community works with scientific experts for specific biological systems to evaluate and update GO representations for the corresponding parts of the ontology. An annotation, although grounded in such observations, is not about particular instances; rather it is about the corresponding types. This is possible because annotations are derived by scientific curators from the published reports of scientific experiments that describe general cases, cases for which we have scientific evidence supporting the conclusion that the instances upon which the experiments are performed are typical instances of the corresponding types.
If such evidence is called into question through further experimentation, then as we saw, the corresponding annotations may need to be revised. The resultant tight coupling between ontology development and curation of experimental literature goes far towards ensuring that ontologies such as the GO reflect the most sophisticated understanding of the relevant biology that is available to scientists. One area of future work would be to find ways to computationally identify inconsistencies in the type-type relations in the ontology based on inconsistencies of annotations to the types. Gene Ontology annotations report connections between gene products and the biological types that are represented in the GO using GO evidence codes. The evidence codes record the process by which these connections are established and reflect either the experimental analysis of actual instances of gene products or inferential reasoning from such analysis. We believe that an understanding of the role of instances in the spatiotemporal reality upon which experiments are performed can provide for a more rigorous analysis of the knowledge that is conveyed by annotations to ontology terms. While each annotation rests ultimately on the observation of instances in the context of a scientific experiment, the annotation itself is not about those instances but about the corresponding types. It is to us obvious that our cumulative biological knowledge should represent how instances relate to one another in reality, and that any development of bio-ontologies and of relationships between such ontologies should take into account information of the sort that is captured in annotations.
While we are still at an early stage in the process of creating truly adequate and algorithmically processable representations of biological reality, we believe that the GO methodology of allowing ontology development and creation of annotations to influence each other mutually represents an evolutionary path forward, in which both annotations and ontology are being enhanced in both quality and reach. The authors declare that they have no competing interests. All authors contributed equally to this effort through discussion, writing, and revision of the manuscript."} {"text": "Systems biology modeling from microarray data requires the most contemporary structural and functional array annotation. However, microarray annotations, especially for non-commercial, non-traditional biomedical model organisms, are often dated. In addition, most microarray analysis tools do not readily accept EST clone names, which are abundantly represented on arrays. Manual re-annotation of microarrays is impracticable, and so we developed a computational re-annotation tool (ArrayIDer) to retrieve the most recent accession mapping files from public databases based on EST clone names or accessions and rapidly generate database accessions for entire microarrays. We utilized the Fred Hutchinson Cancer Research Centre 13K chicken cDNA array \u2013 a widely-used non-commercial chicken microarray \u2013 to demonstrate the principle that ArrayIDer could markedly improve annotation. We structurally re-annotated 55% of the entire array. Moreover, we decreased non-chicken functional annotations by 2 fold. One beneficial consequence of our re-annotation was to identify 290 pseudogenes, of which 66 were previously incorrectly annotated. ArrayIDer allows rapid automated structural re-annotation of entire arrays and provides multiple accession types for use in subsequent functional analysis.
This information is especially valuable for systems biology modeling in the non-traditional biomedical model organisms. Microarrays have become a standard tool for functional genomics, allowing analysis of thousands of mRNA transcripts simultaneously, and they are widely used for a diverse range of species. Although 10 software packages have been developed to map between popular database identifiers, and tools such as EasyGO address functional annotation, most do not readily accept EST clone names. Here we describe ArrayIDer, a user-friendly program that generates a library of public accessions available from the Gene Expression Omnibus (GEO) browser for both genes and proteins. ArrayIDer currently accepts data from any microarray containing EST identifiers compatible with the NCBI UniGene database, and generates a library of gene and protein accessions from the latest updated NCBI UniGene and International Protein Index (IPI) databases. ArrayIDer retrieves identifiers from UniGene and IPI that match the EST input list. All annotations of ESTs to genes (and accompanying proteins) are as assigned by NCBI UniGene. ESTs listed in UniGene are grouped in a UniGene cluster based on their nucleotide overlap. The gene represented by each cluster is determined by the top BlastX hit of the nucleotide sequence. Gene information regarding the EST cluster to gene match is retrieved from the NCBI Homologene database, where known orthologs for genes are mapped through multiple species. The structural annotations retrieved by ArrayIDer are only retrieved from the species-specific UniGene database, which contains pre-assigned structural annotations made according to the methods used at the Homologene database. An online version of ArrayIDer allows rapid identifier searching of EST libraries of several species generated by AgBase. Microarray libraries for multiple species generated with ArrayIDer are available at the AgBase website (\u2192 Array Annotation \u2192 ArrayIDer), and researchers can use the simple interface to search structural annotations for their microarray ESTs or accessions in the species' EST library.
Libraries available online are updated when new versions of the underlying databases are released. Any available library can be extended by users by contacting AgBase directly to request structural annotation for their arrays. Conversely, and especially for those conversant with Perl, ArrayIDer is available for download so that researchers can generate a library for species currently not listed on AgBase without requesting the work be done by AgBase staff. ArrayIDer runs locally via the command line console or by execution in a designated directory. To run locally, ArrayIDer requires: 1) a Perl platform (version 5.8.8 build 8.17 or higher); 2) installation of the Archive::Extract, DBI and Net::FTP Perl modules; 3) a text-formatted input file of cDNA/EST clone names or GenBank nucleotide sequence accessions; and 4) an internet connection. The script downloads and unpacks the required databases directly from the internet. The standalone version reads the input list and searches each entry against the latest version of NCBI UniGene to retrieve initial gene and protein information for comprehensive identifier mapping. An example of the online output can be found in Figure. ArrayIDer allows researchers to rapidly update the structural annotation of their microarray and use this information in downstream gene expression modeling and pathways analysis. To demonstrate the use of ArrayIDer we selected a widely-used non-commercial array, the Fred Hutchinson Cancer Research Centre (FHCRC) 13K chicken cDNA array. ArrayIDer provided a 6.67-fold increase in chicken-specific annotations. Among the chicken structural annotations assigned, 55% (4177) are assigned to a Swiss-Prot/TrEMBL accession, and 45% (3404) are assigned to a predicted \"XP_\" accession; the latter are candidates for further annotation curation.
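The EST-to-gene mapping that ArrayIDer performs can be pictured as an inverted index over UniGene-style clusters. The sketch below uses made-up cluster IDs, accessions and EST names purely for illustration; it is not the tool's actual Perl implementation.

```python
# Hypothetical, minimal illustration of the lookup an ArrayIDer-style tool
# builds: UniGene-style clusters map to a gene and protein accession, and
# each member EST inherits that structural annotation.
unigene_clusters = {
    # cluster_id: (gene_symbol, protein_accession, member_ests) -- all made up
    "Gga.1234": ("GENE_A", "PROT_0001", {"est.clone.a1", "BU000001"}),
    "Gga.5678": ("GENE_B", "PROT_0002", {"est.clone.b2"}),
}

# Invert the clusters: EST identifier -> (cluster, gene, protein)
est_index = {
    est: (cid, gene, prot)
    for cid, (gene, prot, ests) in unigene_clusters.items()
    for est in ests
}

def reannotate(probes):
    """Return a structural annotation for each probe, or None if unmapped."""
    return {p: est_index.get(p) for p in probes}

result = reannotate(["BU000001", "unknown.est"])
print(result["BU000001"])     # ('Gga.1234', 'GENE_A', 'PROT_0001')
print(result["unknown.est"])  # None
```

Unmapped probes fall through as None, mirroring the probes on a real array that no current UniGene cluster covers.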
Identification and curation of these XP_ accessions improves the species' genome annotation. Microarray structural re-annotation results were compared to the annotation currently provided for the FHCRC array. In total, 13,234 probe identifiers were submitted as input for the script. Originally, 1136 array probes were structurally annotated to a chicken gene, with a further 7820 structural annotations to other (non-chicken) species. Initially, we identified 290 transcripts on the FHCRC array with ArrayIDer that mapped to gene elements labelled as pseudogenes. Pseudogenes have been defined as defunct relatives of known genes that are considered non-functional; however, some pseudogenes can be transcribed and play a role in gene regulation and expression. Of these, 66 were previously incorrectly annotated, and corrections were submitted through AgBase. Four months after submission of the changes, we re-analyzed the FHCRC array with ArrayIDer and identified the remaining 224 gene elements labelled as pseudogenes, indicating the 66 genes are corrected and updated in the public database. Moreover, using ArrayIDer, we were able to decrease non-chicken functional annotations associated with the array by 2 fold (from 7309 to 3671). The Gene Ontology (GO) is the de facto standard method for functional annotation of gene products. ArrayIDer allows rapid automated re-annotation of entire arrays and provides the user with multiple accession types for use in functional analysis. Together this information is especially valuable for the non-traditional biomedical model organisms to utilize the wide range of existing tools for systems biology modeling downstream.
We focus on expanding the number of public databases used to assign accessions, including up-to-date, curated functional annotations for both commercial and custom designed microarrays (including specific requests), and incorporating this information into the AgBase database for user-friendly online access. Continual structural and functional re-annotation of microarrays ensures the most up-to-date gene product information for modeling functional genomics datasets. Project name: ArrayIDer. Project home page: AgBase website (\u2192 Array annotation \u2192 ArrayIDer). Operating system(s): platform independent. Programming language: Perl. Other requirements: Perl modules Archive::Extract, DBI and Net::FTP. License: freely available. BVDB developed the pipeline, compiled the tool's script and drafted the manuscript. JHK contributed to the tool's script optimization and performance evaluation. FMM and SCB participated in the pipeline development and helped draft the manuscript. All authors read and approved the final manuscript."} {"text": "Magnaporthe oryzae is the causal agent of blast disease of rice, the most destructive disease of rice worldwide. The genome of this fungal pathogen has been sequenced, and an automated annotation has recently been updated to Version 6. However, a comprehensive manual curation remains to be performed. Gene Ontology (GO) annotation is a valuable means of assigning functional information using a standardized vocabulary. We report an overview of the GO annotation for Version 5 of the M. oryzae genome assembly. A similarity-based GO annotation with manual review was conducted, which was then integrated with a literature-based GO annotation with computational assistance. For the similarity-based GO annotation, a stringent reciprocal best hits method was used to identify similarity between predicted proteins of M. oryzae and GO proteins from multiple organisms with published associations to GO terms. Significant alignment pairs were manually reviewed. Functional assignments were further cross-validated with manually reviewed data, conserved domains, or data determined by wet lab experiments.
Additionally, biological appropriateness of the functional assignments was manually checked. The genome of M. oryzae is constantly being refined and updated as new information is incorporated; for the latest GO annotation of the Version 6 genome, please visit our website. The preliminary GO annotation of the Version 6 genome is placed in a local MySQL database that is publicly queryable via a user-friendly interface, the Adhoc Query System. In total, 6,286 proteins received GO term assignment via the homology-based annotation, including 2,870 hypothetical proteins. Literature-based experimental evidence, such as microarray, MPSS, T-DNA insertion mutation, or gene knockout mutation, resulted in 2,810 proteins being annotated with GO terms. Of these, 1,673 proteins were annotated with new terms developed for the Plant-Associated Microbe Gene Ontology (PAMGO). In addition, 67 experimentally determined secreted proteins were annotated with PAMGO terms. Integration of the two data sets resulted in 7,412 proteins (57%) being annotated with 1,957 distinct and specific GO terms. Unannotated proteins were assigned to the 3 root terms. The Version 5 GO annotation is publicly queryable via the GO site. Our analysis provides comprehensive and robust GO annotations of the M. oryzae genome assemblies that will be solid foundations for further functional interrogation of M. oryzae. Magnaporthe oryzae, the rice blast fungus, infects rice and other agriculturally important cereals, such as wheat, rye and barley.
The pathogen is found throughout the world and each year is estimated to destroy enough rice to feed more than 60 million people. Published papers were searched using key words, including alternative species names for the organism. Relevant published papers were read, and genes or gene products and their functions were identified. Where necessary, gene IDs and sequences at public databases, such as the NCBI protein database, were identified. Based on the functions identified in the paper(s), appropriate GO terms were found using AmiGO, the GO-supported tool for searching and browsing the Gene Ontology database. Evidence codes were assigned following the GO evidence code guide. Data were recorded into the gene association file manually or using custom PERL scripts for large gene sets with the same biological process. Similarity-based annotations were replaced with literature-based annotations, where redundant, using custom PERL scripts. Custom PERL scripts were also used to annotate each protein with GO terms from the three ontologies using the following protocol. Any protein not annotated with a GO term following similarity-based and literature-based GO annotations was annotated with the three root GO terms, GO:0005575 (Cellular Component), GO:0003674 (Molecular Function), and GO:0008150 (Biological Process). Additionally, if any protein was lacking annotation from any of the three GO categories, Cellular Component, Molecular Function, or Biological Process, the protein was annotated with the root GO terms of the missing GO categories. Errors in the gene association file were checked using the script filter-gene-association.pl, which was downloaded from the GO database. The gene association file for Version 5 of the M. oryzae genome sequence was uploaded to the GO database. Many protocols and scripts were created for generating and parsing the data. For example, a protocol and five scripts were developed to replace redundant similarity-based annotation with literature-based annotation.
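The root-term back-fill step of the protocol above can be expressed compactly. This is a sketch only (protein IDs are hypothetical), using the three root terms named in the text rather than the authors' PERL scripts.

```python
# Sketch of the root-term back-fill described in the text: any protein
# lacking annotation in one of the three GO categories receives that
# category's root term. Protein IDs below are hypothetical.
ROOTS = {
    "C": "GO:0005575",  # Cellular Component
    "F": "GO:0003674",  # Molecular Function
    "P": "GO:0008150",  # Biological Process
}

def backfill_roots(annotations):
    """annotations: protein -> set of (aspect, go_id); returns a filled copy."""
    filled = {}
    for protein, terms in annotations.items():
        aspects = {aspect for aspect, _ in terms}
        extra = {(a, ROOTS[a]) for a in ROOTS if a not in aspects}
        filled[protein] = set(terms) | extra
    return filled

anns = {
    "MGG_00001": {("F", "GO:0016491")},  # only a Molecular Function term known
    "MGG_00002": set(),                  # completely unannotated
}
out = backfill_roots(anns)
print(sorted(out["MGG_00002"]))
# [('C', 'GO:0005575'), ('F', 'GO:0003674'), ('P', 'GO:0008150')]
```

A protein with an existing term in a category keeps it and receives root terms only for the categories it lacks.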
Furthermore, a protocol and eight scripts were developed to provide each gene with a GO term from the three ontologies. In addition, a PERL script to record many genes into the gene association file was developed. This script, with slight modification, easily recorded different types of data, such as microarray expression, MPSS, or T-DNA insertion mutation, into the gene association file. These protocols and scripts are available upon request from the corresponding or the first author. The 6,286 predicted proteins were annotated with 1,911 distinct and specific GO terms out of a total of 29,126 assigned terms. In total, 4,881 (78%) of the 6,286 proteins were considered to be significant matches to characterized GO proteins, with an E-value < 10^-20 and a percentage of identity (pid) \u2265 40%. Furthermore, 4,535 (93%) of the 4,881 proteins were annotated based on highly significant similarities with E-values = 0 and pid \u2265 40%. A total of 67 secreted proteins of M. oryzae was experimentally demonstrated to be secreted through cloning into an overexpression vector and expression in M. oryzae transformants. These 67 secreted proteins were annotated with the biological process term GO:0009306 (\"protein secretion\") and the cellular component term GO:0005576 (\"extracellular region\"). The evidence code IDA was assigned to annotations of these 67 proteins since function was determined through direct assay. A total of 128 curated cytochrome P450's of M. oryzae were validated by comparison and analysis of gene location and structure, clustering of genes, and phylogenetic reconstruction. A total of 428 putative transcription factors of M. oryzae were validated by integrated computational analysis of whole genome microarray expression data and matches to InterPro, Pfam, and COG.
In addition, a total of 2,548 conserved domains from NCBI CDD were used as evidence for cross-checking putative functions, but no GO annotation was made based solely on identification of these domains. The evidence code ISS was assigned to annotations of 216 M. oryzae proteins for the following reasons: 1) these proteins have significant similarity to experimentally-characterized homologs over the majority (at least 80%) of the full length sequences; 2) the pairwise alignments of good matches between the characterized proteins and the proteins of M. oryzae were manually reviewed; 3) functional domains were conserved between the M. oryzae proteins and their homologs; and 4) the GO assignments from the characterized match proteins to the M. oryzae proteins were manually determined to be biologically relevant. The remaining 1,343 proteins, with a reciprocal BLASTP best match of E-value > 10^-20 and pid < 40%, were assigned GO terms from their characterized matches, but the evidence codes were identified as IEA (Inferred from Electronic Annotation). In sum, GO terms were assigned to 6,286 proteins of M. oryzae. Among the 6,286 proteins, 2,732 hypothetical proteins, 125 predicted proteins, and 14 unknown proteins were assigned functions. More than 400 research articles were read, and 71 genes with gene knockout mutations and with accession numbers and sequences deposited in public databases such as NCBI were manually annotated using GO terms, including newly developed Plant-Associated Microbe Gene Ontology (PAMGO) terms. Gene products were annotated with GO terms relevant to their biological functions. For example, 6 genes were annotated with GO:0000187 (\"activation of MAPK activity\"), 5 genes with GO:0075053 (\"formation of symbiont penetration peg for entry into host\"), 14 genes with GO:0044409 (\"entry into host\"), 8 genes with GO:0044412 (\"growth or development of symbiont within host\"), and 43 genes with GO:0009405 (\"pathogenesis\").
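The stringent-match thresholds quoted above (at least 80% coverage, E-value below 10^-20, pid of at least 40%) amount to a simple predicate separating ISS candidates from IEA fallbacks. The function below is a sketch of that decision rule, not the authors' actual pipeline.

```python
# Sketch of the stringent-match filter implied by the text (thresholds:
# >= 80% coverage of query and subject, E-value <= 1e-20, pid >= 40%);
# hits passing it are ISS candidates, the rest fall back to IEA.
def classify_hit(qcov, scov, evalue, pid):
    """Return 'ISS-candidate' for stringent reciprocal hits, else 'IEA'."""
    stringent = (qcov >= 0.80 and scov >= 0.80
                 and evalue <= 1e-20 and pid >= 40.0)
    return "ISS-candidate" if stringent else "IEA"

print(classify_hit(0.95, 0.91, 1e-60, 72.0))  # ISS-candidate
print(classify_hit(0.95, 0.50, 1e-60, 72.0))  # IEA (subject coverage too low)
```

In the authors' protocol the predicate only gates manual review; the ISS code is assigned after the alignment itself is inspected.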
The evidence code IMP (Inferred from Mutant Phenotype) was assigned to these annotations since gene-knockout mutants were generated in order to determine the functions of these genes. A total of 210 genes were annotated on the basis of published microarray and massively parallel signature sequencing (MPSS) studies. On the basis of whole genome T-DNA insertion mutation data, 120 genes were annotated. In total, 2,810 proteins were annotated based on experimental data from published peer-reviewed literature. Of these, 1,673 proteins were annotated with terms created by the PAMGO consortium to describe interactions between symbionts and their hosts. Integration of the similarity-based and literature-based annotation resulted in 7,412 proteins being annotated with specific GO terms, covering more than 57% of the inferred proteome. The remaining 5,464 predicted proteins, not having high similarity to GO-annotated proteins, were annotated with the three general GO terms GO:0005575 (Cellular Component), GO:0003674 (Molecular Function), and GO:0008150 (Biological Process). Therefore, our GO annotation covers the entire set of 12,832 proteins predicted in M. oryzae, with each protein being annotated with GO terms from the three GO categories. The GO annotation of Version 5 of the genome sequence of Magnaporthe oryzae is available at the GO Consortium database. Through careful manual inspection of these annotations, we are able to provide a reliable and robust GO annotation for more than half of the predicted gene products. Of the 6,286 proteins receiving computational annotations, only 1,343 did not exceed our stringent match criteria upon manual review and so were assigned the evidence code IEA. It should be noted that annotations with the IEA evidence code are retained in the GO database for only one year, after which the GO Consortium will remove them from a gene association file. To be retained, IEA annotations must be manually reviewed in order to be assigned an upgraded evidence code such as ISS.
Here, we have presented a detailed protocol for integrating the results of similarity-based annotation with a literature-based annotation of the predicted proteome of Version 5 of the genome sequence of the rice blast fungus M. oryzae. Currently, there is no recognized standard to assign the ISS code. We recommend the following criteria for assigning the ISS code: \u2022 The functions of the proteins from which the annotation will be transferred must be experimentally characterized. \u2022 The similarity between the characterized proteins and the proteins under study must be significant. For example, we used \u2265 80% coverage of both query and subject sequences, an E-value \u2264 10^-20, and \u2265 40% percentage of identity (pid) as cutoff criteria in our similarity-based GO annotation. Ideally, orthology should be established by phylogenetic analysis. \u2022 The pairwise alignment between the characterized proteins and the proteins under study should be manually reviewed and cross-validated with characterized or reviewed data from other resources such as functional domains, active sites, and sequence patterns. \u2022 Biological appropriateness of all assigned GO terms should be manually reviewed. The authors declare that they have no competing interests."} {"text": "Accurate annotation of translation initiation sites (TISs) is essential for understanding the translation initiation mechanism. However, the reliability of TIS annotation in widely used databases such as RefSeq is uncertain due to the lack of experimental benchmarks; in particular, RefSeq tends to over-annotate the longest open reading frame (LORF) and to under-annotate ATG start codons.
Based on a homogeneity assumption that gene translation-related signals are uniformly distributed across a genome, we have established a computational method for a large-scale quantitative assessment of the reliability of TIS annotations for any prokaryotic genome. The method consists of modeling a positional weight matrix (PWM) of aligned sequences around predicted TISs in terms of a linear combination of three elementary PWMs, one for true TISs and the two others for false TISs. The three elementary PWMs are obtained using a reference set with highly reliable TIS predictions. A generalized least square estimator determines the weighting of the true TIS in the observed PWM, from which the accuracy of the prediction is derived. The validity of the method and the extent of the limitation of its assumptions are explicitly addressed by testing on experimentally verified TISs with variable accuracy of the reference sets. The method is applied to estimate the accuracy of TIS annotations that are provided in public databases such as RefSeq and ProTISA and by programs such as EasyGene, GeneMarkS, Glimmer 3 and TiCo. It is shown that RefSeq's TIS prediction is significantly less accurate than that of two recent predictors, TiCo and ProTISA. With convincing proofs, we show two general preferential biases in the RefSeq annotation, i.e. over-annotating the longest open reading frame (LORF) and under-annotating ATG start codons. Finally, we have established a new TIS database, SupTISA, based on the best prediction of all the predictors; SupTISA has achieved an average accuracy of 92% over all 532 complete genomes. Large-scale computational evaluation of TIS annotation has been achieved. A new TIS database much better than RefSeq has been constructed, and it provides a valuable resource for further TIS studies. The position of the first nucleotide base pair (bp) in the start codon is called the translation initiation site (TIS). The sequence upstream of the TIS, the start codon itself, and the sequence downstream of the TIS show specific patterns which differ from genome to genome.
The sequence at about 20 bps upstream of the TIS in most prokaryotic genes contains a primarily purine-rich Shine-Dalgarno sequence. Knowledge of the exact TIS is essential for conducting experiments involving the identification of natively purified proteins by N-terminal amino acid sequencing, as well as for heterologous protein production. Generally speaking, there exists no systematic method to computationally evaluate the accuracy of TIS prediction, although several attempts have been made to assess the reliability of TIS annotation. We propose here a computational method to quantitatively estimate the TIS annotation accuracy of a prokaryotic genome; the annotation can be provided by either a program or a database. The method is based on a homogeneity assumption that the sequence patterns represented by a PWM around TISs are homogeneous for a generic subset of genes of a genome. The whole set of TIS predictions is split into two sets. The validity of the method is established with tests on the experimentally verified TIS set EcoGene. The main task of this work is to estimate the weighting of true TISs in a given annotation. Aligned sequences are taken l bps upstream and r bps downstream of start codons (in this paper l = 50 and r = 15) to form a window of width l + r. The PWM for a set of sequences at an aligned position j is denoted by Wj(b), where b = 1 denotes adenine (A), b = 2 denotes cytosine (C), and so forth. Three elementary PWMs will be relevant to our analysis, corresponding to the three types of TISs in the annotation.
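The PWM Wj(b) defined above is simply the per-position base frequency over the aligned windows. A minimal sketch, using a toy window width instead of the paper's l + r = 65:

```python
# Minimal sketch of building a positional weight matrix W_j(b) from
# sequences aligned around predicted TISs. Window extraction (l = 50 bps
# upstream, r = 15 bps downstream in the text) is assumed already done.
BASES = "ACGT"  # b = 1..4 in the text: A, C, G, T

def build_pwm(aligned_windows):
    """aligned_windows: equal-length strings of width l + r.
    Returns one dict per position j, mapping base -> frequency."""
    width = len(aligned_windows[0])
    pwm = []
    for j in range(width):
        counts = {b: 0 for b in BASES}
        for seq in aligned_windows:
            if seq[j] in counts:
                counts[seq[j]] += 1
        total = sum(counts.values()) or 1
        pwm.append({b: counts[b] / total for b in BASES})
    return pwm

windows = ["ACGT", "ACGA", "ATGT"]  # toy width-4 aligned windows
pwm = build_pwm(windows)
print(round(pwm[0]["A"], 2))  # 1.0: every sequence has A at position 0
```

In the real setting each annotation set (true TISs, and the two kinds of false TISs) yields such a matrix, and these are the elementary PWMs used below.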
The first is the true TIS; the two others correspond to false TISs, and each type has a corresponding elementary PWM. The difference between the three types of PWMs is biologically clear. An annotation of finite accuracy will give rise to a PWM which is a linear combination of the above three PWMs, and the mixture weights (the α's) are estimated from the observed PWMs. The three elementary PWMs are obtained from the reference set, which is very important in this evaluation. The reference set needs to be as reliable as possible, and should not be biased towards any database/predictor to be evaluated. We have chosen to use the six most recent TIS databases/predictors, namely RefSeq, ProTISA, EasyGene, GeneMarkS, Glimmer 3 and TiCo. The procedure to obtain the three PWMs from the reference set is as follows: since the true TISs are known, the aligned sequences around the true TISs directly give rise to the true-TIS PWM. Finally, let us discuss the limitation of the homogeneity assumption. The sequence pattern encompasses regulatory signals which are important to the translation of genes. The homogeneity property is based on the idea that the translation mechanism is largely universal across a genome, although there may be several translation mechanisms acting on a genome [3,5,14]. Let ε denote the error term (which depends on the W's). Furthermore, to eliminate redundancy from the data, it is wise to apply a Z-transformation, converting a matrix W of (l + r) × 4 dimensions to a matrix V of (l + r) × 3 dimensions, for j = 1, 2, ..., l + r. Consequently, we rewrite Eq. 3 in the transformed variables. The nucleotide frequencies at different positions in all the PWMs are assumed to be independent. Under this assumption, E(ε') = 0, and together with the homogeneity assumption, Σ' is a 3(l + r) × 3(l + r) covariance matrix calculated on the reference set. Because of Eq. 6, Σ' has a complicated dependence on α, and we need to solve a nonlinear optimization problem.
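The core of the estimator is recovering the mixture weights of the three elementary PWMs from an observed PWM. The paper uses a generalized least squares estimator with an iteratively updated covariance Σ'; the sketch below uses plain ordinary least squares on flattened PWM vectors, purely to convey the idea of the linear-combination model, and the toy vectors are invented for illustration.

```python
# Illustrative ordinary-least-squares version of the mixture model:
# observed = alpha1*W_true + alpha2*W_false1 + alpha3*W_false2.
# The paper's method is generalized least squares with an iterated
# covariance; plain lstsq is shown here only as a simplification.
import numpy as np

def estimate_mixture(observed, elementary):
    """observed: flattened PWM vector; elementary: list of three
    flattened elementary PWM vectors. Returns the estimated alphas."""
    A = np.column_stack(elementary)  # design matrix, one column per PWM
    alphas, *_ = np.linalg.lstsq(A, observed, rcond=None)
    return alphas

# Toy check: mix known components and recover the weights.
w_true = np.array([0.9, 0.05, 0.03, 0.02])
w_f1 = np.array([0.25, 0.25, 0.25, 0.25])
w_f2 = np.array([0.1, 0.4, 0.4, 0.1])
obs = 0.7 * w_true + 0.2 * w_f1 + 0.1 * w_f2
print(np.round(estimate_mixture(obs, [w_true, w_f1, w_f2]), 2))
```

The recovered weights equal the mixing proportions (0.7, 0.2, 0.1) when the elementary vectors are linearly independent; the first weight plays the role of the annotation accuracy α.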
This is done by an iterative procedure: an initial α is used to evaluate Σ', which is substituted into a group of linear optimization equations to calculate a new α. The new α is then used to update Σ', and the calculation repeats until α converges, as explained in detail in the Additional File. Throughout the calculation, we face the question of how reliable the estimates are. The set of experimentally confirmed TISs in EcoGene, denoted EcoGene854, allows us to design a procedure to test the reliability of our method. The procedure goes as follows. First, randomly divide the genes in EcoGene854 into two equal-size sets. Then vary α from 40% to 90% in steps of 10% by replacing 100(1 - α)% of the true TISs with randomly chosen false TISs, and use the aligned sequences with the newly assigned TISs as the annotation to be evaluated; for each α, we repeat the generation of the set. When the reference set is 100% accurate, the estimated α agrees with the designed α to within ± 2.6%. An intriguing question is what happens if the reference set is not 100% accurate. This can be easily checked by carrying out a series of tests with varying accuracy of the reference set. The above designed tests also provide a unique opportunity to test whether a bootstrapping strategy offers any knowledge about the uncertainty of the estimate. We carried out a bootstrapping calculation for the runs with E. coli, and the two uncertainty estimates agree remarkably well, which confirms the validity of the bootstrapping calculation.
RefSeq is the most widely used public database of TISs, and its accuracy is the central concern of this study. We have conducted an overall assessment of the TIS annotation in RefSeq; a total of 532 genomes are assessed. The annotation accuracy varies widely, from 3.3% in A. baumannii ATCC 17978 to 96.8% in P. pentosaceus ATCC 25745, with an average of 80.6 ± 9.9%. About 40% of the genomes have accuracies higher than 85.0%, including genomes from several well studied genera such as Bacillus, Escherichia, Salmonella and Pseudomonas. In contrast, 13.5% of the genomes, most of which are GC-rich, have very suspicious TIS annotations with accuracies lower than 70%. A complete list of estimated accuracies for the 532 genomes is available in the Additional Files.
Below, we examine two annotation preferences that potentially contribute to the RefSeq annotation quality, namely tendencies to over-annotate LORF and to under-annotate the ATG start codon. If an annotation simply adopted the LORF rule (i.e., always taking the 5'-most start codon), then its TIS accuracy would equal the percentage of LORFs among all true TISs (referred to below as the percentage of true LORF). Our method provides a way to estimate this percentage: for a genome for which we can generate a reliable reference set, we can generate an artificial annotation by adopting the LORF rule, and the estimated accuracy of this artificial annotation is the percentage of true LORF. This method is applied to Y. pestis, and the estimated percentage of true LORF is 63.7%. The actual percentage of LORF in the RefSeq annotation for Y. pestis is 92.6%. We then judge that there is about 30% over-annotation of LORF in this genome. This study is carried out for a total of 532 genomes, and the results are shown in the accompanying figure. As reported previously, RefSeq also shows the second preference, the under-annotation of the ATG start codon, for which we have now developed some statistical measures to provide further quantitative evidence. The Escherichia genus is chosen to present our results; the reported observations hold for most of the other genera. We have conducted the calculation within genus, a taxonomic category ranking below family but above species. It is reasonable to expect that the TISs of species from the same genus show little difference in statistics such as start codon usage.
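The LORF rule discussed above can be made concrete: for a gene ending at a known stop codon, the LORF start is the 5'-most in-frame start codon with no in-frame stop between it and the gene's stop. A minimal forward-strand sketch, with simplified 0-based coordinates and a common prokaryotic start-codon set:

```python
# Sketch of the LORF rule: scan upstream, in frame, from a gene's stop
# codon and keep the most upstream start codon seen before hitting an
# in-frame stop. Forward strand only; coordinates are 0-based.
STARTS = {"ATG", "GTG", "TTG"}
STOPS = {"TAA", "TAG", "TGA"}

def lorf_start(genome, stop_pos):
    """stop_pos: index of the first base of the gene's stop codon.
    Returns the index of the 5'-most in-frame start codon, or None."""
    candidate = None
    pos = stop_pos - 3
    while pos >= 0:
        codon = genome[pos:pos + 3]
        if codon in STOPS:
            break  # an in-frame stop bounds the open reading frame
        if codon in STARTS:
            candidate = pos  # keep scanning: we want the most upstream start
        pos -= 3
    return candidate

genome = "ATGAAAGTGAAATAA"  # in-frame starts at 0 (ATG) and 6 (GTG), stop at 12
print(lorf_start(genome, 12))  # 0: the 5'-most start codon
```

An annotation built this way is the "artificial annotation" whose estimated accuracy gives the percentage of true LORF.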
A total of 29 genera containing at least five selected genomes are studied. Since our reference set is constructed from the intersection of all relevant TIS databases/predictors, it is not biased towards any one of them, and hence we can carry out the analysis of accuracy for all of the predictors on the 532 genomes. This subsection is devoted to a discussion of their performances. We chose RefSeq as the standard of comparison for presenting the results. To reduce false positives, genes not annotated by RefSeq and genes shorter than 300 bps were excluded, as implied in [18]. As two of the most popular gene finders, Glimmer and GeneMark have been used to annotate hundreds of genomes. The most recent versions, Glimmer 3 and GeneMarkS, include an RBS model to predict TISs, in the form of a PWM whose parameters are derived by a Gibbs sampler; high performances are reported on two well-studied genomes, E. coli K12 and B. subtilis [16]. EasyGene has only published annotations for 157 genomes. Unlike gene finders, TiCo is a post-processor of an existing annotation; high performance was reported on E. coli K12 and B. subtilis, as well as on GC-rich genomes such as P. aeruginosa PAO1. ProTISA is a recently published database dedicated to TIS annotation in prokaryotic genomes. It is generated by collecting various confirmed TISs and predictions from an upgraded MED-Start, which post-processes the RefSeq annotation [22,23]. The method of evaluation proposed in this paper is based on a fundamentally different principle: the principle of homogeneity, under which the PWM of any subset of a genome is a linear combination of three elementary PWMs. This principle is based on the universal process of gene translation, and it is a macroscopic property of the ensemble of TISs.
This information is supplementary to the properties that are used by TIS predictors, and hence can (and should) be used to provide a complementary way of improving global annotation performance. In other words, we propose to construct a new TIS annotation database by selecting the best TIS predictor's annotation for any given genome; the resulting annotations are organized into a new database (of 532 genomes at present) named SupTISA. Specifically, for each genome, SupTISA selects whichever of RefSeq, ProTISA, EasyGene, GeneMarkS, Glimmer 3 and TiCo has the highest accuracy as the SupTISA annotation, and provides it for download at the SupTISA web address. Translation is a fundamental process for an organism, and the regulatory signals relevant to this process should be relatively uniformly distributed across a genome. A PWM of aligned sequences around the TIS summarizes the statistical information of the signal, and is thus a tool for studying how much, in a given annotation, the true signal has contributed. This is the principle we use to devise, for the first time, an algorithm for large-scale evaluation of TIS prediction accuracy. The tests on experimentally confirmed genes and the assessment of six databases/predictors over 532 genomes give rise to a series of consistent results. Although the actual accuracy results may be subject to a few percent of uncertainty, due to statistical fluctuations of finite sample sizes and possible distortion of the reference sets, the assessments appear to be a valid leading-order measure of the TIS annotations. Such assessment is meaningful, especially when the estimated accuracy is low: typically, some unjustified or over-simplified assumptions were used during the annotation. Our assessment thus provides a tool that helps experimental and computational biologists avoid being misled by an over-simplified annotation.
We have shown that the RefSeq annotations for some genomes are of this nature. Correct annotation is important to both in vivo and in silico studies of translation initiation, as reported by Cang and Wang for P. horikoshii OT3 and several other archaeal genomes. ZSS and GQH conceived the study, designed the applications and drafted the manuscript; ZSS and HQZ co-supervised the development of the work; XBZ and GQH designed and implemented the algorithm; LNJ performed part of the tests. All authors read and approved the final manuscript. Supplementary details of the method: details for deducing Eq. 6 and minimizing the sum of squared errors in Eq. 7. Estimated TIS annotation accuracies of six selected databases/predictors: accuracies of TIS annotation on a total of 532 genomes for RefSeq, Glimmer 3, GeneMarkS, EasyGene, TiCo and ProTISA. Correlation between annotation accuracy and ATG start codon usage: a total of 29 genera were selected; a linear fit was applied if the Pearson correlation was significant at 95% confidence."} {"text": "The expressed sequence tag (EST) methodology is an attractive option for the generation of sequence data for species for which no completely sequenced genome is available. The annotation and comparative analysis of such datasets poses a formidable challenge for research groups that do not have the bioinformatics infrastructure of major genome sequencing centres. Therefore, there is a need for user-friendly tools to facilitate the annotation of non-model species EST datasets with well-defined ontologies that enable meaningful cross-species comparisons.
To address this, we have developed annot8r, a platform for the rapid annotation of EST datasets with GO terms, EC numbers and KEGG pathways. annot8r automatically downloads all files relevant for the annotation process and generates a reference database that stores UniProt entries, their associated Gene Ontology (GO), Enzyme Commission (EC) and Kyoto Encyclopaedia of Genes and Genomes (KEGG) annotation and additional relevant data. For each of GO, EC and KEGG, annot8r extracts a specific sequence subset from the UniProt dataset based on the information stored in the reference database. These three subsets are then formatted for BLAST searches. The user provides the protein or nucleotide sequences to be annotated, and annot8r runs BLAST searches against these three subsets. The BLAST results are parsed and the corresponding annotations retrieved from the reference database. The annotations are saved both as flat files and in a relational PostgreSQL results database to facilitate more advanced searches within the results. annot8r is integrated with the PartiGene suite of EST analysis tools. annot8r is a tool that assigns GO, EC and KEGG annotations to data sets resulting from EST sequencing projects both rapidly and efficiently. The benefits of an underlying relational database, flexibility and the ease of use of the program make it ideally suited for non-model species EST-sequencing projects. Protein sequences from model organisms are generally well annotated. The situation is different for non-model species, where often the core of available sequence data comes from expressed sequence tags (ESTs); to date, almost one thousand of the species represented in dbEST have at best limited annotation. We have developed annot8r, a software tool that facilitates the annotation of new sequences with GO terms, EC numbers and KEGG pathways based on similarity searches against annotated subsets of the EMBL UniProt database. annot8r has been tested on both Linux and Mac OS X (Darwin) platforms.
The software is written in Perl and requires a standard Perl installation (5.8.0 or later) and the BioPerl module. annot8r is started from a terminal window and takes the user step-by-step through (1) the download of relevant files, (2) the extraction of data from these files, (3) the preparation for BLAST searches, (4) running the BLAST searches and (5) the actual annotation. The entire annotation process is fully automated, but the user is encouraged to provide input regarding the stringency of the annotation via BLAST score or expect value based cut-offs. For each annotation term, annot8r records the best hit supporting that particular annotation and the corresponding score and e-value for this hit. In addition, the number of additional hits also supporting this annotation is recorded. Furthermore, the fraction of hits, out of all collected hits for a particular sequence, that support this annotation is calculated. This calculation accounts for terms where the maximum number of sequences in the database for a certain annotation is smaller than the number of hits collected, so that in all cases a fraction of 1.0 means maximum possible support.
The annotation results are stored in comma-separated value text files that can easily be read into spreadsheets, and in a relational PostgreSQL database. A relational database facilitates more advanced queries, for example the identification of annotation terms which are present in one species but not in another, or annotation terms which are present in all species investigated. Detailed examples illustrating this are given in the tutorial part of the user guide.
Removing non-informative entries from the UniProt database and splitting it into three significantly smaller databases specific for GO terms, EC numbers and KEGG pathways before running BLAST searches reduces the time required for the sequence similarity searches by a factor of ~5 compared to a full UniProt search.
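The per-term support statistics described above (best hit, number of supporting hits, fraction of all collected hits) can be sketched as below. This is a simplified illustration, not annot8r's Perl implementation: the hit tuples are invented, and the correction for terms with few database sequences is omitted.

```python
# Sketch of per-annotation support statistics for one query sequence:
# best-scoring hit, number of supporting hits, and the fraction of all
# collected hits that support each term. Hit tuples are illustrative.
from collections import defaultdict

def summarize_support(hits):
    """hits: list of (annotation_term, bit_score) pairs, already
    filtered by the user's score/e-value cutoff."""
    by_term = defaultdict(list)
    for term, score in hits:
        by_term[term].append(score)
    total = len(hits)
    return {
        term: {
            "best_score": max(scores),
            "n_support": len(scores),
            "fraction": len(scores) / total,  # 1.0 = all hits agree
        }
        for term, scores in by_term.items()
    }

hits = [("GO:0016301", 210.0), ("GO:0016301", 180.5), ("GO:0004672", 95.0)]
s = summarize_support(hits)
print(s["GO:0016301"]["n_support"], round(s["GO:0016301"]["fraction"], 2))  # 2 0.67
```

A fraction well below 1.0 flags a term with competing alternatives, which is exactly the situation the flat files and the results database expose to the user.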
On a single-processor 3.6 GHz Intel Pentium workstation, the BLAST searches for a set of 1000 typical EST-derived proteins take ~75 minutes against the annot8r databases, as compared to ~400 minutes for the complete UniProt database. The 'correctness' of annotations based on sequence similarity will depend on factors such as the quality of the annotations in the reference dataset, the specificity of the annotation, whether the sequence belongs to a protein family, and the level of similarity to the reference. This makes estimates of the quality of annotations difficult. To provide the user with some idea of best-practice cut-offs, we have analysed the relation between sequence similarity and annotation quality for EC annotation. EC annotations have four hierarchy levels: the top level describes the general type of the enzyme reaction, and the three sublevels classify the biochemical reaction in ever-greater detail. The UniProt subset containing EC annotations was subjected to a BLAST search against itself. After removing self-hits, the sequences were assigned EC numbers and the annotations sorted according to the underlying BLAST score; the resulting relation is shown in the accompanying figure. Collecting not just the first hit, but a list of top-scoring hits, can give rise to alternative or conflicting annotations. We believe that the best strategy in such cases is to provide the user with all the relevant information necessary to make an informed judgement. Therefore, to assist the user in the assessment of the quality of a particular annotation, annot8r also considers alternative annotations. Based on the e-value or BLAST score cut-off and the number of hits set by the user, annot8r records for each putative annotation the best hit and its respective scores, the number of additional hits which are also in support of this annotation term, and the fraction of hits better than the cut-off supporting each alternative annotation.
This allows the user to consider alternative or conflicting annotations and gives guidance as to the distinctness and accuracy of the annotation. For example, if for one particular sequence two EC numbers have a similar score and share the top three EC levels but display diversity at level four, the prediction of the specific substrate used will require a more in-depth analysis, while the more general reaction is likely to be correct. Other tools are available for the annotation of sequences from non-model organisms with GO terms (for examples see the list provided by the GO consortium). The most time-consuming step of the annotation procedure is similarity searching. Here annot8r follows a unique route: instead of searching the full databases (UniProt or NCBI non-redundant), annot8r uses a pre-screening step to generate subsets of UniProt specific to GO, EC and KEGG annotation. The benefit of this is two-fold. As the databases to be searched against are significantly smaller, search times are reduced; we intend to exploit this gain in speed to set up an annot8r web-server in the future. Also, removing non-informative sequences from UniProt before running the BLAST searches avoids the risk of having only non-informative hits among the top hits. An additional strength of annot8r is the provision of the results in a relational database in addition to flat files. This enables a skilled user to run more complex search queries on the results. To encourage users with little bioinformatics experience to use this feature, we have given detailed examples in the tutorial part of the user guide [see Additional file]. annot8r is an easy to install and easy to use tool that allows high-throughput annotation at low computational cost. It enables the researcher to annotate non-model species sequences with GO, EC and KEGG terms.
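The level-by-level comparison of two candidate EC numbers described above is straightforward to implement, since EC numbers are dot-separated hierarchies. A minimal sketch:

```python
# Sketch of comparing two EC numbers level by level, as in the example
# above where two candidates agree at the first three levels but diverge
# at the fourth (substrate-specific) level.
def ec_shared_levels(ec_a, ec_b):
    """Return how many leading EC hierarchy levels two EC numbers share."""
    shared = 0
    for x, y in zip(ec_a.split("."), ec_b.split(".")):
        if x != y:
            break
        shared += 1
    return shared

print(ec_shared_levels("2.7.11.1", "2.7.11.22"))  # 3
```

A result of 3 means the general reaction type agrees while the specific substrate is in doubt, which is the case the text recommends flagging for in-depth analysis.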
A relational database makes annot8r particularly suited for comparative studies.
Project name: annot8r
Project home page:
Operating system: Linux
Programming language: Perl
Other requirements: BioPerl, CPAN, PostgreSQL, BLAST
License: GNU GPL
Restrictions: none
RS developed the software and drafted the manuscript. MLB initiated the project and assisted with testing and documenting the software. Both authors contributed to writing the manuscript. A zipped tar archive contains the program, the user guide and sample files needed for running the tutorial."} {"text": "Due to the rapid release of new data from genome sequencing projects, the majority of protein sequences in public databases have not been experimentally characterized; rather, sequences are annotated using computational analysis. The level of misannotation and the types of misannotation in large public databases are currently unknown and have not been analyzed in depth. We have investigated the misannotation levels for molecular function in four public protein sequence databases for a model set of 37 enzyme families for which extensive experimental information is available. The manually curated database Swiss-Prot shows the lowest annotation error levels (close to 0% for most families); the two other protein sequence databases (GenBank NR and TrEMBL) and the protein sequences in the KEGG pathways database exhibit similar and surprisingly high levels of misannotation that average 5%–63% across the six superfamilies studied. For 10 of the 37 families examined, the level of misannotation in one or more of these databases is >80%. Examination of the NR database over time shows that misannotation has increased from 1993 to 2005. The types of misannotation that were found fall into several categories, most associated with "overprediction" of molecular function.
These results suggest that misannotation in enzyme superfamilies containing multiple families that catalyze different reactions is a larger problem than has been recognized. Strategies are suggested for addressing some of the systematic problems contributing to these high levels of misannotation. One of the core elements of modern biological scientific investigation is the universal availability of millions of protein sequences from thousands of different organisms, allowing for exciting new investigations into biological questions. These sequences, found in large primary sequence databases such as GenBank NR or UniProt/TrEMBL, in secondary databases such as the valuable pathways database KEGG, or in highly curated databases such as UniProt/Swiss-Prot, are often annotated by computationally predicted protein functions. The scale of the available predicted function information is enormous but the accuracy of these predictions is essentially unknown. We investigate the critical question of the accuracy of functional predictions in these four public databases. We used 37 well-characterized enzyme families as a gold standard for comparing the accuracy of functional annotations in these databases. We find that function prediction error is a serious problem in all but the manually curated database Swiss-Prot. We discuss several approaches for mitigating the consequences of these high levels of misannotation. The frequent addition of new genomes into public sequence databases allows for rapid access to sequences from more than a quarter million named species. Two important earlier papers examined genome annotation error in one and three small genomes, respectively, including E. coli. Concomitant with the growth of sequence data, annotation strategies have become more sophisticated, benefiting especially from the use of multiple orthogonal methods to improve prediction accuracy. Misannotation levels were determined for sequences annotated to the functions of experimentally well-characterized enzyme families and superfamilies used as a "gold standard," allowing us to identify misannotated sequences with confidence. Except for Swiss-Prot, all of the databases examined exhibited much higher levels of misannotation than have previously been suggested. Examination of the NR database revealed both evidence for error propagation from previously misannotated proteins and that levels of misannotation have increased over time. The major types of misannotation that were found were classified and their prevalence determined, allowing us to propose strategies for addressing some of the problems that contribute to them. This is the first study to use a gold standard set of superfamilies and families to examine misannotation in the archival NR and TrEMBL databases. Annotation error in the NR, TrEMBL, KEGG, and Swiss-Prot databases was determined using as a gold standard 37 highly curated and experimentally well-characterized enzyme families from the Structure-Function Linkage Database (SFLD; http://sfld.rbvi.ucsf.edu/). Most of the 37 families investigated displayed consistent levels of misannotation across the NR, TrEMBL and KEGG databases; this consistency held, for instance, for the average percent misannotation in the 4-hydroxyphenylpyruvate dioxygenase family. The accuracy of these results was validated using several orthogonal protocols (see Text S1).
The effect on predicted levels of misannotation due to the use of a relatively stringent similarity threshold (the Trusted Cutoff, TC) in the final step of the analysis protocol was evaluated by repeating the analysis with less stringent thresholds for the NR database. Expecting that larger volumes of sequence data and improved methods for annotation would result in higher-accuracy annotations over time, we investigated whether the levels of misannotation had changed over the period 1993–2005. Using sequences from the NR database, the original sequence submission dates were retrieved and binned into groups based upon their submission dates and misannotation assignments ("correct" or "incorrect") according to our protocol. Surprisingly, we found that for the 37 families investigated in this study, misannotation has increased over this twelve-year period: essentially no misannotated sequences were submitted in 1993, while in 2005 approximately 40% of the sequences submitted to NR were misannotated. To better understand the types of misannotation that were found, each misannotated sequence was labeled with an individual, mutually exclusive evidence code describing the type of annotation error it represented. Four primary classes of misannotation emerged from the protocol used in the analysis. Examples of misannotations from the NR database that were associated with these misannotation codes are provided below.
An example of an SFA misannotation is gi 17987990 (GenBank:NP_540624), annotated to the mandelate racemase function in the enolase superfamily. This sequence did not score against the mandelate racemase family HMM, but it did score against other enolase superfamily HMMs. In particular, it scored above the TC for the fuconate dehydratase family and contained all the necessary functional residues for that function. As such, we predicted that this sequence is misannotated and that it instead catalyzes the fuconate dehydratase reaction. Using gi 17987990 as a query, 11 other sequences in NR score against this sequence with a BLAST E-value of better than or equal to 1×10^-150 and are also annotated as 'mandelate racemase,' likely indicating a case of error propagation. A protein similarity network illustrating the excellent match of this sequence to fuconate dehydratase sequences is provided in the supporting information.
The sequence gi 71915096 (GenBank:AAZ54998) is an example of an MFR misannotation from the enolase superfamily. Although it was annotated in NR as an o-succinylbenzoate synthase (OSBS) and scored against the HMM for that family, the general base required for catalysis of the enzymatic reaction, lysine 166, is substituted in this sequence with a histidine. This sequence also contains a number of additional substitutions in sequence motifs conserved in authentic members of the OSBS family.
The sequence gi 16082480 (GenBank:NP_393564) provides an example of the BTC type of misannotation. This sequence was annotated in NR as galactonate dehydratase. It scored against the galactonate dehydratase family HMM at a bit score of only 126.6, well below the TC for this family, 843.6, and was therefore classified as misannotated. Additionally, the sequence scored well against the gluconate dehydratase family HMM. The gluconate dehydratase family was not one of the 37 families used as a gold standard in this study because insufficient experimental information was available in the SFLD when our analysis was performed. Additional alignment and operon context information is now available to predict that gi 16082480 is indeed a gluconate dehydratase rather than a galactonate dehydratase (see the SFLD).
The detailed results from this study are available in the SFLD (http://sfld.rbvi.ucsf.edu). The misannotation levels determined in this work are substantially higher than those reported in previous studies. Several reasons may account for these high levels.
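The threshold test running through the examples above can be sketched as follows: a sequence annotated to a family is flagged when its bit score against that family's HMM falls below the family's Trusted Cutoff, and other families whose TC the sequence does meet become candidate reassignments. The family names, scores and cutoffs in this sketch are illustrative, not the study's actual tables.

```python
# Sketch of the TC-based misannotation check. Scores and cutoffs below
# are illustrative assumptions, not values from the study's tables.
FAMILY_TC = {"galactonate_dehydratase": 843.6, "mandelate_racemase": 500.0}

def check_annotation(annotated_family, hmm_scores, tc_table=FAMILY_TC):
    """hmm_scores: dict mapping family -> HMM bit score for one sequence.
    Returns (ok, note)."""
    score = hmm_scores.get(annotated_family, float("-inf"))
    if score >= tc_table[annotated_family]:
        return True, "score meets the family TC"
    # Below TC: list any other family whose TC the sequence does meet,
    # as a candidate reassignment (cf. the SFA example above).
    better = [f for f, s in hmm_scores.items()
              if f != annotated_family and f in tc_table and s >= tc_table[f]]
    return False, f"below TC; candidate families: {better}"

scores = {"galactonate_dehydratase": 126.6, "mandelate_racemase": 900.0}
ok, note = check_annotation("galactonate_dehydratase", scores)
print(ok)  # False: 126.6 is well below the TC of 843.6
```

In the study this automated check was always followed by manual inspection of alignments and catalytic residues before a sequence was declared misannotated.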
First, this study differs in methodology from earlier studies that estimated levels of misannotation in specific genome projects: two important earlier studies that predicted misannotation levels did so based on discrepancies between annotations made by different groups for specific genomes. Most of the misannotation we found is associated with "over-annotation," i.e., annotation of sequences at a greater level of functional specificity than the available evidence supports. We suggest that support for manually curated databases, including organismal databases and databases such as Swiss-Prot, could provide high-confidence annotation for a subset of proteins. For large databases annotated largely by automated methods, the misannotation problem could be ameliorated to some extent by the use of evidence codes describing, in a systematic and computer-readable format, the evidence available to support annotation assignments.
The functions analyzed in this investigation were selected from the August 11, 2005 version of the Structure-Function Linkage Database (SFLD). Four public databases were analyzed for misannotation: the NCBI GenBank non-redundant (NR) protein database, TrEMBL, KEGG and Swiss-Prot. The misannotation analysis followed the protocol summarized below. Sequences retrieved by the keyword search were scored in an automated fashion against all of the HMMs in the SFLD using the HMMER program hmmpfam. A highly permissive and inclusive E-value cutoff of 100 was used for this step to gather highly divergent hits and to determine at what scores sequences from related families hit each family HMM. Using hmmalign (HMMER), each sequence was aligned to each HMM it scored against, and discrepancies between the sequence and residues known to be necessary for catalysis were output. The annotation of every sequence retrieved was examined manually. Sequences associated with annotations unrelated to the analysis function, or that were not annotated to an enzymatic function (including sequences annotated only to a gene name), were removed.
If an annotation contained both an enzymatic designation and a designation not associated with its catalytic functionality, only the catalytic designation was analyzed. Annotations that used the terms 'family', '-like', 'similar to', 'related to' and 'homolog' were not included in the final analysis set. Terms like 'family' or 'homolog' do not denote a specific reaction and can be inferred to mean either similarity in function or similarity in sequence, depending on the user's context. As there was no specified context for these terms in the annotations, it was not possible to disambiguate the 'functional similarity' annotations from the 'sequence similarity' annotations; therefore, all such annotations were removed. The descriptors 'hypothetical', 'probable', 'putative', 'potential', 'predicted' and 'likely' are also not well-defined terms. Using the output from the automated HMMER-based analysis, pruned as described, each sequence in the analysis set was analyzed in a four-step process and labeled with appropriate misannotation codes if a misannotation was found. Every sequence that was found by the automated process to be missing one or more functionally important residues was checked manually: the alignment of the sequence to the family HMM alignment was visually inspected to ensure that there was no obvious misalignment or conservative substitution (conservative amino acid substitutions were accepted), and alignments were further checked using the alignment program Muscle. In order to differentiate family members from non-family members, HMM bit-score thresholds were determined for each gold standard family.
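Given bit scores for known true family members and known non-members, the two main thresholds used in this protocol (Trusted Cutoff and Noise Cutoff, defined in the supporting material as the lowest true-member score and the highest non-member score, respectively) reduce to a min and a max. The score lists below are invented for illustration:

```python
# Sketch of the two score thresholds: the Trusted Cutoff (TC) is the
# lowest HMM bit score among true family members, and the Noise Cutoff
# (NC) is the highest score among non-members. Scores are illustrative.
def compute_cutoffs(true_scores, nonmember_scores):
    tc = min(true_scores)       # lowest-scoring true family member
    nc = max(nonmember_scores)  # highest-scoring non-family member
    return tc, nc

true_scores = [843.6, 910.2, 1005.7]
nonmember_scores = [55.1, 126.6, 310.4]
tc, nc = compute_cutoffs(true_scores, nonmember_scores)
print(tc, nc)  # 843.6 310.4
```

A wide gap between NC and TC, as here, makes family membership calls robust; scores falling between the two are the ambiguous zone that the less stringent thresholds probe.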
Sequences in the SFLD assigned to families, and sequences from GO that were marked with the evidence code "Inferred from Direct Assay" (IDA), were scored against all of the SFLD HMMs using the HMMER program hmmpfam with an E-value cutoff of 100. The scores were compiled and the sequences labeled according to whether they were true positives or true negatives for the family against which they scored. The Trusted Cutoff (TC) was defined as the HMM score of the lowest-scoring true family member against the family HMM. For each sequence analyzed in the GenBank NR database, the original submission date of that sequence was retrieved from NR. The sequences were binned by submission year and predicted annotation status. All data plots were produced using the software R v2.6.0.

Figure S1: Three analysis thresholds used in the misannotation analysis. This example for the galactonate dehydratase family (enolase superfamily) illustrates how the three scoring thresholds were defined for each of the 37 families evaluated in this study. The Trusted Cutoff (TC) was defined as the lowest score at which a true family member scores against the family HMM. The Noise Cutoff (NC) threshold was defined as the highest score at which a non-family member scores against the family HMM. The Lenient Cutoff (LC) threshold uses the set of true family sequences to which some false positive sequences have been added so that they represent 5% of the total sequences. Using this artificial set of family sequences, the LC threshold for each family was defined as the lowest score at which one of these non-family sequences scored. (1.00 MB TIF)

Figure S2: Average percent misannotation in the NR database across families in each superfamily using different thresholds.
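The three scoring thresholds described above can be sketched as follows. This is an illustrative reconstruction from the definitions given here, not the authors' code; the `scoring_thresholds` helper and its 5% bookkeeping are assumptions.

```python
import math

def scoring_thresholds(true_scores, false_scores):
    """Derive per-family cutoffs from labeled HMM bit scores.

    true_scores:  scores of known family members against the family HMM
    false_scores: scores of known non-members against the same HMM
    """
    tc = min(true_scores)   # Trusted Cutoff: lowest-scoring true member
    nc = max(false_scores)  # Noise Cutoff: highest-scoring non-member
    # Lenient Cutoff: add the top-scoring non-members until they make up
    # 5% of the combined set, then take the lowest score among those added.
    n_added = math.ceil(0.05 * len(true_scores) / 0.95)
    added = sorted(false_scores, reverse=True)[:n_added]
    lc = min(added)
    return tc, nc, lc
```

For example, with true-member scores [50, 60, 70] and non-member scores [10, 20, 30], the sketch yields TC = 50, NC = 30, and LC = 30.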
The black bar in each plot depicts the average percent misannotation predicted in the analysis over each superfamily at the three scoring thresholds described above. (0.01 MB TIF)

Table S1: Percent misannotation for each family in the NR, TrEMBL, KEGG and Swiss-Prot databases. (0.13 MB DOC)

Text S1: Misannotation analysis controls and tests. (0.06 MB DOC)

Dataset S1: Sequences analyzed in the misannotation analysis and their designations. (1.21 MB XLS)

Video S1: Movie of the annotations from the NR database displayed by year (1993–2005). The movie tracks correctly annotated and misannotated sequences in the test set over the years 1993–2005. The similarity network is arranged by superfamily and colored as in the corresponding figure. (1.29 MB MOV)"} {"text": "As sequencing costs have decreased, whole genome sequencing has become a viable and integral part of biological laboratory research. However, the tools with which genes can be found and functionally characterized have not been readily adapted to be part of the everyday biological sciences toolkit. Most annotation pipelines remain as a service provided by large institutions or come as an unwieldy conglomerate of independent components, each requiring its own setup and maintenance. To address this issue we have created the Genome Reverse Compiler (GRC), an easy-to-use, open-source, automated annotation tool. The GRC is independent of third-party software installs and only requires a Linux operating system. This stands in contrast to most annotation packages, which typically require installation of relational databases, sequence similarity software, and a number of other programming language modules.
We provide details on the methodology used by GRC and evaluate its performance on several groups of prokaryotes using GRC's built-in comparison module. Traditionally, to perform whole genome annotation a user would either set up a pipeline or take advantage of an online service. With GRC the user need only provide the genome he or she wants to annotate and the function resource files to use. The result is high usability and a very minimal learning curve for the intended audience of life science researchers and bioinformaticians. We believe that the GRC fills a valuable niche in allowing users to perform explorative, whole-genome annotation. There has been extensive work in both automated gene finding and functional annotation. The Genome Reverse Compiler is open source software intended for explorative annotation of prokaryotic genomic sequences. Its name and philosophy are based on an analogy with a high-level programming language compiler. In this analogy, the genome is a program in a certain low-level language that humans cannot understand. Given the sequence of any prokaryotic genome, GRC produces its corresponding "high-level program" – its annotation. GRC allows the user to annotate a target genome by simply providing annotated protein sequences, in widely accepted formats, from organisms related to the target. GRC uses a similarity search against these sequences, and sequence information from the genome itself, to find protein coding genes and determine the putative function of their products. We believe an integrated, open source annotation tool such as GRC benefits the life sciences community in several ways. It opens up the realm of electronic annotation to researchers who wish to annotate sequences in-house but who lack the resources to set up an annotation pipeline. Also, submission to an online annotation service may not be realistic for those wishing to annotate a large number of sequences or for sequences that do not meet submission restrictions.
GRC can provide targeted whole genome annotation since it allows users to provide the protein sequence database to be used for annotation; such a mechanism can be especially helpful in situations where users have their own curated database of sequences in addition to publicly available sequences. In whole genome annotation, before an organism's genes can be annotated they must be found within the genomic sequence. In its current form, the GRC focuses on finding ORFs and evaluating whether they will likely be translated into protein. In making this evaluation, one consideration is sequence composition: whether the amino acid composition of the sequence is characteristic of typical coding genes found in the target organism. Some other sources of information to consider are: whether the sequence is conserved across multiple organisms (an indicator it is subject to selective pressure), whether two open reading frames overlap with one another, and the sequence length of an ORF. Once an ORF is determined likely to be a real gene, an annotation procedure may assign some additional information. Typically this information includes the function of the gene product. Currently there is no way to computationally determine function ab initio, that is, to determine the function of a gene solely based on its sequence composition without reference to a similar sequence whose function is already known. Common practice is to assign the function of genes based on sequence similarity comparisons to a database of genes whose functions are known. In many annotation procedures, the database sequence that has the top-scoring, statistically significant alignment with a target gene has its function transferred to that target gene. Because functional information is frequently electronically transplanted from one sequence to another, the degree of separation between the original source of functional information and where it is applied can be great.
This may cause an inappropriate functional assignment and can lead to "error propagation", where erroneous information is repeatedly applied to various sequences through multiple electronic annotations. Traditional biological nomenclature for describing genes and their products has many subtleties, redundancies, and inconsistencies. The distinctions and assumptions necessary for interpreting this information do not promote interoperability among functional genomic databases and are difficult to account for computationally. This problem can be addressed by using a structured, precisely defined system for specifying information about a gene. One such system is the Gene Ontology. GRC determines, in silico, the location and function of protein coding genes as part of an integrated process. The rapid accumulation and widespread availability of genomic information for prokaryotes makes it possible to use information from previous annotations of closely related organisms to annotate a newly sequenced genome. Sequencing costs are already low enough that hundreds of new prokaryotic genomes are being sequenced every year. Moreover, efforts are underway to fill the still existing "phylogenetic gaps" in the databases of prokaryotic sequences. Many popular gene finding algorithms work ab initio, building a sequence model based on the target genomic sequence. In creating or applying such a model it is possible to overly bias results against anomalous sequences, such as viral genes or recently acquired conjugated genes. GRC instead incorporates a gene finding module which uses information from closely related genomes (the GRC BLAST database). In addition to sequence similarity information, this algorithm evaluates the information content of sequences using entropy-density profiles (EDPs) introduced by Zhu et al.
To evaluate whether sequences are likely to be protein coding genes we consider sequence conservation, composition, and overlap in the genome. Conservation is determined by a sequence similarity search using FSA-BLAST against the user-provided annotation database. Composition is evaluated using EDPs. Let p_i be the count of each amino acid in a sequence, where i = 1, ..., 20 is the index of a specific amino acid. For a given sequence of length l, let f_i = p_i / l be the frequency of the ith amino acid. The entropy density for the ith amino acid of a sequence is then defined (following Zhu et al.) as

s_i = (−f_i log f_i) / H,  where H = Σ_j (−f_j log f_j),

giving the EDP feature vector S = (s_1, ..., s_20). Zhu et al. demonstrate that global EDPs representing coding and non-coding sequences for all prokaryotes can act as good centers for their respective groups in the 20-dimensional phase space and, as a result, can be used as initial discriminators to classify a sequence as coding or non-coding. The distance of a sequence to either center, D_c or D_nc, is defined as the Euclidean distance

D_α = sqrt( Σ_i (s_i − s_i^α)² ),

where α represents "c" for coding or "nc" for non-coding. The entropy distance ratio (EDR) is then defined to be EDR = D_c / D_nc. The gene finding procedure for GRC is as follows. All ORFs are generated from a linear scan of the genome; let M represent this set of sequences. In order to minimize the number of unnecessary overlap evaluations, we first determine the most likely start site for each ORF. The start sites are adjusted from the original maximal coordinate to the highest-scoring start site. Each start site is scored according to the average frequency at which its codon occurs and how well it fits the gene model suggested by the highest-scoring compatible alignment (see below). All potential start sites are placed in a priority queue based on score. A first round of overlap evaluation is then performed on M. This process creates a set of likely coding ORFs C as well as a set of ORFs likely to be non-coding L. The likely coding and non-coding sets C and L are used to retrain the respective coding and non-coding global EDPs for the organism.
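The EDP and EDR quantities defined above can be sketched as follows; this is an illustrative reconstruction under the stated formulas, not GRC's actual implementation.

```python
import math

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def edp(protein_seq):
    """Entropy-density profile: the normalized 20-component vector s_i."""
    counts = [protein_seq.count(a) for a in AMINO_ACIDS]
    total = sum(counts) or 1
    freqs = [c / total for c in counts]
    h = [-f * math.log(f) if f > 0 else 0.0 for f in freqs]
    H = sum(h) or 1.0                 # total Shannon entropy of the composition
    return [x / H for x in h]         # s_i = (-f_i log f_i) / H

def edr(protein_seq, coding_center, noncoding_center):
    """Entropy distance ratio D_c / D_nc against the two global EDPs."""
    s = edp(protein_seq)
    dist = lambda c: math.sqrt(sum((si - ci) ** 2 for si, ci in zip(s, c)))
    return dist(coding_center) / dist(noncoding_center)
```

A sequence whose EDP sits much closer to the coding center than the non-coding center yields a small EDR, supporting a coding call.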
Entropy distance ratios are calculated for each sequence using the new global EDPs. All ORFs with poor similarity scores and poor EDR scores are removed from the original set M, creating a refined set M'. Using the new EDR values, a second round of overlap evaluation is performed on M' to determine the final set of protein coding genes. As part of the overlap evaluation process, it may be found that altering the start coordinate of one of the conflicting ORFs will resolve an overlap. Because the highest-scoring start sites are determined before overlaps are evaluated, only the alternative start sites for the low-scoring ORF of an overlapping pair are considered in resolving an overlap. Obviously, if the overlap does not occur on the 5' end of an ORF, there is no point in exploring alternative start sites. Alternative start sites are considered in order of their score as given by the start site priority queue. Because the GRC stores information about multiple pairwise alignments for each ORF, it is possible that certain alignments are compatible with some start sites and not with others. For each candidate start site, GRC checks that it passes the EDR criterion (the α value explained above) and is compatible with the start coordinate. If GO annotations are provided as additional input, GRC's functional assignment becomes more adaptable. By default GRC assigns GO terms associated with the source subject as it does in the regular annotation procedure. However, when using the Gene Ontology with GRC the user also has the option to filter the term assignments based on GO evidence codes, term depth, and GO category. Evidence codes are three-letter codes associated with a Gene Ontology annotation, which specify a source-of-support category for a particular annotation. Although currently the vast majority of evidence codes for prokaryotic annotations are 'IEA', the user may still prefer experimental support: for example,
transferring a function from a 98% identical sequence experimentally determined to be glucokinase may be preferable to transferring the term "hypothetical" from a 99% identical sequence. If the user specifies a minimum GO term depth, terms associated with the source subject that pass the depth restriction are assigned. If none of the GO terms from the source subject meet all the filtering criteria, then GO terms are assigned from another subject that has the highest alignment score among subjects whose GO terms do meet the criteria. A problem encountered in transferring function is deciding which function to use when there are multiple high-scoring alignments. GRC's default practice is to transfer the function of the database sequence whose alignment best fits the ORF sequence. However, because the subject sequence most similar to the target gene is not guaranteed to be well annotated or to be the best candidate for functional transfer, GRC also has the option of generating GO "consensus annotations." Multiple significant alignments, and their associated functions, can represent a net or distributed knowledge about the query sequence. In these cases, if only the top-scoring function is transferred, then the net knowledge is lost. We provide in GRC a feature for capturing this net knowledge by creating GO consensus annotations. Consensus annotations are intended to leverage the information distributed across the GO-DAG from multiple alignments into term assignments which have a high level of evidential support. The assumption behind consensus annotations is that multiple alignments will indicate terms that occur in relative proximity to one another within the GO-DAG, and that this proximity is indicative of either a protein family with similar function or a variation in function specifics for homologous sequences in the database. The goal is to capture the proximity, and subsequent agreement, of a group of terms through these GO term assignments.
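A minimal sketch of the consensus idea, assuming (hypothetically) that support is counted over each hit's terms together with their GO-DAG ancestors; GRC's actual algorithm may differ in its proximity measure and scoring.

```python
from collections import Counter

def consensus_terms(hit_terms, parents, min_support=2):
    """hit_terms: one set of GO terms per significant alignment.
    parents: GO term -> list of parent terms in the GO DAG.
    Keep terms supported (directly or via a descendant) by min_support hits."""
    def with_ancestors(terms):
        seen, stack = set(terms), list(terms)
        while stack:
            for p in parents.get(stack.pop(), []):
                if p not in seen:
                    seen.add(p)
                    stack.append(p)
        return seen

    support = Counter()
    for terms in hit_terms:
        support.update(with_ancestors(terms))   # each hit votes once per term
    return {t for t, n in support.items() if n >= min_support}
```

Two hits annotated with sibling terms thus agree on, and jointly support, their common ancestor, which is exactly the "proximity as agreement" intuition described above.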
Similar algorithms have been developed in GOMIT and CLUG. Additionally, the user is able to specify a minimum percent coverage that the alignment must satisfy, for both the query and the subject, in order to be used for function assignment. These options give a measure of control such that the annotation of an entire genome can be customized to a user's particular interests. The ability to fine-tune GO term assignment in terms of GO evidence codes, depth, and category, the use of consensus annotations, and the extensive information about functional assignment decisions contained in the output together constitute a powerful functional assignment system not found, to our knowledge, in other automated annotation systems. Also implemented in GRC is a module that allows the user to evaluate the performance of the tool with respect to a reference annotation. One part of the module provides a detailed analysis of precision and sensitivity with respect to gene finding. The details provided are meant to act as the engine to drive open-source development of the GRC and allow the user to easily evaluate the impact of his or her changes with respect to real organisms. This module also does automatic evaluation of function assignment; its output allows the review of current annotations based on evidence found in the annotation process. Evaluating the performance of gene finding requires both a reference set of gene coordinates, R, and a defined system of measurement. For the purposes of these metrics, all the coordinates provided in the reference set are assumed correct. We evaluate the correctness of gene calls with respect to the starting set M composed of those ORFs found through a linear scan of the genome. This allows us to frame the gene finding problem for the GRC as one of classification: given the set M, label each ORF in M as either coding (by placing it in the positive set P) or non-coding (by placing it in the negative set N).
This leads to the following evaluation with respect to the reference set: every gene coordinate pair in set P is either a true positive (TP), a false positive (FP), or has no reference (NRP), and every coordinate pair in N is either a true negative (TN) or a false negative (FN). These labels are defined as follows:

• True positive (TP): an ORF in set P that is in the same frame and has the same stop site as a gene in set R.
• False positive (FP): an ORF in set P that occupies the same space as a gene in set R but does not meet the conditions for a TP.
• No reference positive (NRP): an ORF in set P that does not occupy the same space as any gene in set R.
• False negative (FN): an ORF in set N that is in the same frame and has the same stop site as a gene in set R (see note below).
• True negative (TN): an ORF in set N that does not meet the conditions for an FN.

When using the GRC, the user must specify the minimum gene length. This is the minimum nucleotide length for gene finding, which means all putative genes returned by the GRC will be greater than or equal to this number. Genes in R that are shorter than the minimum gene length specified are not counted as false negatives. When measuring the performance of gene finding with respect to a reference, we wish to answer the following:

• How many of the genes in the reference set did we find (assert as being protein coding)?
• Out of the ORFs we asserted as being protein coding, how many were correct?
• And out of those correct, how many also had correct start site coordinates?

We can answer each of these questions with the measurements sensitivity = TP/(TP + FN), precision = TP/(TP + FP + NRP), and start-site accuracy = TPs/TP, where TPs is the number of true positives which have a correct start coordinate. In testing function assignment, we wish to measure the number of genes we assign a correct function to and, because one gene can have multiple functions, the total number of functions correctly assigned.
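The labeling scheme and the two headline measurements above can be sketched as follows; this is an illustrative reconstruction, and the encoding of each ORF as a (start, stop, frame) triple is an assumption.

```python
def evaluate_gene_calls(positives, negatives, reference):
    """positives/negatives: ORFs called coding / non-coding.
    Each ORF and reference gene is a (start, stop, frame) triple."""
    ref_keys = {(stop, frame) for _, stop, frame in reference}

    def overlaps(orf):
        lo, hi = sorted(orf[:2])
        return any(max(lo, min(g[0], g[1])) <= min(hi, max(g[0], g[1]))
                   for g in reference)

    # TP: same stop and frame as a reference gene; NRP: no shared space at all.
    tp  = [o for o in positives if (o[1], o[2]) in ref_keys]
    fp  = [o for o in positives if (o[1], o[2]) not in ref_keys and overlaps(o)]
    nrp = [o for o in positives if (o[1], o[2]) not in ref_keys and not overlaps(o)]
    fn  = [o for o in negatives if (o[1], o[2]) in ref_keys]

    sensitivity = len(tp) / (len(tp) + len(fn)) if (tp or fn) else 0.0
    precision = len(tp) / len(positives) if positives else 0.0  # NRPs count against precision
    return sensitivity, precision
```

Note that the precision denominator is the whole positive set, so NRPs lower precision exactly as discussed in the results below.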
Without a system for formal functional classification, testing function assignment can be difficult. Comparing plain text functional descriptions amounts to measuring the number of common keywords while trying to ensure that they do not merely convey a common biological phenomenon with little meaning, e.g. "protein." To address this problem we use the Gene Ontology, which allows us to devise a more precise system for measuring function assignment performance. This system assumes that there exists a reference annotation that specifies the most specific GO terms detailing the functional characteristics of each gene in the test genome. GOA-formatted files from EMBL's Integr8 project are freely available for this purpose. Let t be the target gene whose functional assignment correctness we wish to determine, and let r be the reference gene whose function we wish to compare t to. There are three conditions which must be met before we can evaluate whether the function assigned to t is correct:

1. t must be a true positive in gene finding with respect to the reference gene r.
2. t must be assigned a GO term as a result of the BLAST search.
3. r must also be assigned a GO term from the same GO category as t.

Assuming these conditions are met, we then assign a label to each GO term that has been assigned to each TP ORF in the result set P (see Figure). Given a result set P and a correct reference annotation, an incompatible assignment is likely incorrect. If, on the other hand, there is a relevant GO term missing in the reference annotation, then there is a chance that the GRC-assigned term might be accounting for this missing information. For the purposes of GRC evaluation (see below), incompatible assignments are considered incorrect. GRC is comprised of multiple components, each of which can be used independently from the annotation pipeline (see Figure); one such component is GRC_ORFS. The algorithms comprising the GRC are implemented in C++ and Perl.
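The ancestor/descendant test used to label term assignments can be sketched as below. The three label names (confirmed, compatible, incompatible) follow the surrounding text; the function itself and its dictionary encoding of the GO DAG are illustrative assumptions.

```python
def label_assignment(assigned, reference, parents):
    """Label a GO term assigned to a target gene against the reference term.
    parents: GO term -> list of parent terms in the GO DAG."""
    def ancestors(term):
        seen, stack = set(), [term]
        while stack:
            for p in parents.get(stack.pop(), []):
                if p not in seen:
                    seen.add(p)
                    stack.append(p)
        return seen

    if assigned == reference:
        return "confirmed"
    if assigned in ancestors(reference) or reference in ancestors(assigned):
        return "compatible"    # related by ancestry in the DAG
    return "incompatible"      # no ancestor/descendant relation
```

Under this test, a term in a different branch of the same category (no shared ancestry path) is labelled incompatible even when both terms are biologically sensible.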
The source code is available to download under the GNU license and comes packaged with precompiled binaries for an Intel x86 Linux machine. Running the software requires only that the user have standard installations of g++, Perl, and Make on a Linux operating system. We provide an additional component that can be used to easily evaluate the performance and decisions made by the GRC. GRC_Compare takes as input the output from GRC_Annotate and a reference annotation for the genome annotated. It provides an evaluation of the gene finding as well as the functional assignment. The GRC is run from the command prompt. Annotating a genome is as simple as specifying the files that contain the genomic sequence and the functionally characterized sequences from one or a number (set) of closely related organisms. We support several major formats from both NCBI and EMBL. Example for running the GRC: GRCv1.0.pl -g Genome.fna -d DatabaseDirectory. Because the GRC can take advantage of multiple sequence alignments in gene finding, determining start site position, and making functional assignments, the user also has the option to specify the number of top BLAST hits to use. The output provided by the GRC increases with the amount of information provided by the user. At the base level GRC provides both a list of putative protein coding genes and a list of ORFs, generated by GRC_ORFS, hypothesized not to be protein coding. These lists provide the following for each ORF:

1. Highest scoring alignment values.
2. Entropy distance ratio for the sequence.
3. Assigned functions and associated confidence values.
4. Gene coordinate information.

This level of output requires only the genomic sequence and FASTA-formatted amino acid sequences for the annotation database. In this case, the functions assigned are merely the plain text descriptions obtained from FASTA headers.
If the user provides additional functional information in the form of GO annotations, these will be combined with the sequence information to provide GO term assignment. We test the performance of the GRC using leave-one-out genome annotation. For a group of related organisms, all with pre-existing annotations, each organism is annotated by the GRC using the sequences and functional descriptions from the rest of the group. Performance information is then generated using GRC_Compare to compare the GRC's annotation to that of the target organism. In gene finding it is common practice to specify a minimum gene length. For each annotation the GRC was set using the following parameters:

• Number of BLAST hits to use per query = 10
• BLAST e-value threshold = .001
• Effective BLAST database size = 2879860 (char.)
• BLAST scoring matrix = BLOSUM62

To provide a frame of reference, we compare the GRC's performance to the popular gene finding program Glimmer v3.02, run with the following parameters:

• Maximum overlap = 50
• Score threshold ≥ 30

In order to test functional assignment, we use Gene Ontology terms. The GO annotations are used as both database functions to be assigned and as reference functions. Currently, there are relatively few well curated GO annotations for multiple closely related organisms. We obtain each organism's GO annotation from EMBL's Integr8 project. The test groups were assembled from closely related organisms: Group 1 from strains of E. coli, Group 2 from members of the genus Pseudomonas, and Group 3 from members of the class Gammaproteobacteria. If the NRPs were not taken as false positives, our precision results would be better than those shown in the figure: for the genome E. coli W3110 the count of NRPs is 240, and for the genome P. syringae pv. tomato strain DC3000 the count of NRPs is 432. One way to evaluate function assignment is to look at the fraction of terms that have functions that could be incorrect (incompatible); the other is to look at the total number of terms correctly assigned.
The depth and distance information in the accompanying table makes it possible to spot trivial assignments: a confirmed or compatible annotation can be trivial in that the term assigned has no functional specificity, e.g. an assignment of the root molecular function term. Term assignments labelled incompatible do not necessarily mean the assignment is incorrect. For instance, in the annotation of Pseudomonas pha1448a, a protein known to be part of tryptophan synthesis (EMBL Accession = Q48QG6) was assigned the term GO:0008652 by GRC. This is a biological process term defined as "amino acid biosynthetic process." Because protein Q48QG6 was already assigned a biological process term for tryptophan metabolic process (GO:0006568) in the reference annotation, and that term was neither an ancestor nor a child of the one assigned by GRC, the assignment was labelled incompatible. Also interesting to note is that the number of compatible annotations increases as the groups become more distantly related. These annotations could be improvements on the current annotation but are also likely to include some incorrect functional assignments. With a carefully selected annotation database the user can annotate a genome of interest in a few hours. The main bottleneck in the annotation procedure of GRC is the sequence similarity comparison. BLAST is known to scale in proportion to the product of the lengths of the query sequence and the database searched. In the accompanying figure and table we report results for Escherichia coli str. K12 substr. W3110 and Pseudomonas syringae pv. tomato str. DC3000. As noted in the introduction, there exist genome annotation services, most notably RAST. The GRC has been used in the Pseudomonas syringae pathovar tomato strain T1 genome project. In GRC we have created a reliable, open-source annotation tool which can be used for explorative annotation to investigate a genome based on the user's interests. By supporting commonly available sequence and annotation formats, we provide a tool that puts very little demand on any user wishing to annotate a prokaryotic genome.
GRC synthesizes information from both sequence composition and sequence similarity to minimize the deficiencies inherent in using just one, and it follows annotation standards from NCBI RefSeq. When predicted computationally, gene calls, start coordinates, and assigned functions should be taken as highly tentative until they have been curated and approved by an expert human curator. Predictions made by GRC are no different. Although GRC achieves high performance values with respect to two test groups, these groups are close phylogenetically. As the relationships between the organisms in the database and the target genome become more distant, so will the applicability of annotations made by GRC. It should also be re-emphasized that the functional performance metrics were generated using reference functions (from GOA files) that were themselves electronically created. Ideally all reference information used to measure the performance of GRC should be experimentally derived. Because the GRC effectively transfers information from one organism to another, mistakes in database annotations can be propagated into a new annotation created by GRC. The confidence values, alignment information, and many of the other values output by GRC are provided so that the user can evaluate whether a gene call or functional assignment merits further investigation. These values do not provide any kind of guarantee that an in silico prediction will be a biological reality. Work on GRC is ongoing; we are currently working on the following aspects. RNA annotation: RNA genes and features are important pieces of information in any prokaryotic genome. The fact that RNAs are usually well conserved in closely related species should make it relatively easy to include them in GRC annotations, although locating precise boundaries may be difficult. Better use of user-provided data: there are two main issues here.
The first is the presence of experimentally derived functional assignments; those should be given preference in functional transfer, and are easily detectable in GO annotated genomes by the evidence code. The second is a user-defined special reference genome. It is often the case that among several closely related genomes there is one that is especially well annotated; for example, among Pseudomonas syringae, strain DC3000 is by far the best annotated. If users provide such information, GRC can be modified to make use of it and thus produce better annotations. Metagenomics annotations: an explorative annotation tool is in theory ideally suited for annotation of metagenomics sequences. In order for GRC to be useful in such a context, a user would have to provide a BLAST database that would cover a wide range of prokaryotic species. This is not a simple task, and therefore we are planning to develop techniques that will allow the generation of a reasonably small approximation of a nonredundant and yet comprehensive set of well-annotated prokaryotic proteins.

Project home page:
Operating systems: Linux
Programming languages: C++ and Perl
Requirements: Linux, g++, Perl, Make
License: GNU General Public License. This license allows the source code to be redistributed and/or modified under the terms of the GNU General Public License as published by the Free Software Foundation. The source code for the application is available at no charge.
Any restrictions to use by non-academics: None

AW contributed the bulk of the writing for this work, is the main programmer for the project, conceived many of the performance measurement techniques, and various other software features. JS conceived the initial GRC project idea, provided funding and guidance for this work, and has contributed to the interpretation of data and the writing of the manuscript.
Both AW and JS have read and approved the final manuscript. Supplementary data: full performance tables for gene finding and GO analysis for each organism."} {"text": "One motivation of systems biology research is to understand gene functions and interactions from functional genomics data such as that derived from microarrays. Up-to-date structural and functional annotations of genes are an essential foundation of systems biology modeling. We propose that the first essential step in any systems biology modeling of functional genomics data, especially for species with recently sequenced genomes, is gene structural and functional re-annotation. To demonstrate the impact of such re-annotation, we structurally and functionally re-annotated a microarray developed, and previously used, as a tool for disease research. We quantified the impact of this re-annotation on the array based on the total numbers of structural and functional annotations, the Gene Annotation Quality (GAQ) score, and canonical pathway coverage. We next quantified the impact of re-annotation on systems biology modeling using a previously published experiment that used this microarray. We show that re-annotation improves the quantity and quality of structural and functional annotations, allows a more comprehensive Gene Ontology based modeling, and improves pathway coverage for both the whole array and a differentially expressed mRNA subset. Our results also demonstrate that re-annotation can result in a different knowledge outcome derived from previously published research findings. We propose that, because of this, re-annotation should be considered an essential first step for deriving value from functional genomics data. Integrating and modeling 'omics' datasets in systems biology facilitates biological understanding at a molecular systems level. Biological systems are studied from global gene, transcript, protein, protein interaction and metabolite levels.
Microarray technology advanced functional genomics by facilitating high-throughput acquisition of large functional genomics datasets. Up-to-date gene product structural and functional annotations are an essential foundation of systems biology modeling. The primary repository for structural annotations of most commercial and custom-made microarrays, and their related studies, is the National Center for Biotechnology Information (NCBI) Gene Expression Omnibus. The number of animal species completely sequenced and published before 2003 was 6 (Drosophila melanogaster, Anopheles gambiae, Mus musculus, Homo sapiens, Caenorhabditis elegans, and Caenorhabditis briggsae). From 2003 to present, 103 animal genomes have been completely sequenced and published, and this trajectory of data generation is expected to increase. Chicken is one animal that exemplifies the rapidly evolving structural and functional annotations. The chicken UniGene clustering database, the major repository for structural annotation of ESTs, was first released in 2003, and was developed up to build 40 used here; this is an average of one update every two months. The GO consortium is the primary repository for functional annotations. GO functional annotations are continually updated and on average there is a new chicken GOA database released monthly; since June 5th 2004 there have been 46 releases.

For species for which genomic sequences are newly available, the rapidity of updates of annotation data and the increase in high-throughput experimental platforms, such as microarrays, are astounding. The challenge of appropriately managing and, especially, interpreting experiment-based datasets will be especially difficult for those species with small research communities, such as ecologically important or agricultural animal species.

All structural mappings, GO term assignments and pathway analysis are available on the AgBase website at http://agbase.msstate.edu/tools/reannotation/. For the differentially expressed mRNAs, 57 were originally identified to play a significant role in the host-pathogen response within a Salmonella enterica Serovar Enteritidis-challenged chicken model. Using ArrayIDer, we mapped 49 ESTs to corresponding chicken genes, an increase of 3.1-fold compared to the original data. Only 5 of the 54 ESTs did not have structural annotations, compared to 13 in the original dataset. The total unique chicken gene annotations were increased by almost 2.7-fold.

The re-annotation not only increased the number of structural annotations, but also greatly increased the number of functional annotations. The total number of GO terms represented by the retrieved proteins increased more than 7.0-fold and the total number of unique GO terms by more than 2.5-fold. To quantify the quality of the functional annotations assigned to our re-annotated data set we calculated the GAQ score, which combines the total number of proteins, the depth of each GO term in the GO tree, and the evidence code assigned to each GO annotation. Although we greatly increased the number of GO annotations, the increase of the total GAQ score from 43,245 to 305,996 is not statistically significant because of the large number of GO annotations assigned the lower-scoring evidence codes 'ND' (No Data) and 'IEA' (Inferred from Electronic Annotation). However, the mean GAQ score was statistically significantly increased by 13% compared to the original (P<0.002), i.e. the GO annotation quality per protein improved after re-annotation. The GO depth score improved more than 6.5-fold, demonstrating an increased level of biological detail for the re-annotated dataset.
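The GAQ calculation described above (per-annotation GO-term depth weighted by evidence code, summed per protein) can be sketched in a few lines. The evidence-code weights below are illustrative placeholders, not the values used by the published GAQ tool:

```python
# Sketch of a GAQ-style annotation quality score. The evidence-code
# weights here are illustrative assumptions, not the published GAQ values.
EVIDENCE_WEIGHT = {
    "EXP": 5, "IDA": 5, "IMP": 4, "ISS": 3,
    "IEA": 1,  # Inferred from Electronic Annotation scores low
    "ND": 0,   # No Data scores lowest
}

def gaq_score(annotations):
    """Score one protein: sum over its annotations of
    (GO term depth) x (evidence-code weight)."""
    return sum(depth * EVIDENCE_WEIGHT.get(code, 0)
               for depth, code in annotations)

def total_and_mean_gaq(proteins):
    """proteins: dict mapping protein id -> list of (depth, code) pairs."""
    scores = [gaq_score(anns) for anns in proteins.values()]
    total = sum(scores)
    return total, (total / len(scores) if scores else 0.0)
```

Under such a scheme, a flood of shallow IEA/ND annotations can inflate the total score far more than the mean score per protein, which mirrors the total-versus-mean pattern reported above.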
The overall GO annotation confidence score improved more than 7.1-fold. In addition, to assess the confidence improvement without the down-weighting caused by IEA evidence code scores in the total GAQ score, we calculated the GO annotation confidence score excluding IEA evidence code scores. The GO annotation confidence score based on annotations that are themselves derived from experimental assays improved more than 10.7-fold, demonstrating that we can be more confident about the assigned annotations in the re-annotated whole microarray dataset. The impact of re-annotation is especially well demonstrated for the 615 proteins that were originally annotated to chicken: the total GO depth score and total GO annotation confidence score increased by more than 10.8- and 9.6-fold, respectively.

We generated GOSlim models for the cellular component [CC], molecular function [MF] and biological process [BP] Gene Ontologies for the microarray and differentially expressed mRNA datasets to visualize the major functional groups represented, using the "GOA and whole proteome" GOSlim set. We used Ingenuity Pathway Analysis (IPA) to retrieve and compare the significant genetic pathways and networks able to be modeled by both the original and re-annotated whole microarray and differentially expressed mRNA datasets. We identified 133 pathways common to both the original and re-annotated whole microarray dataset. Although these pathways are all shared, the pathway coverage was increased 6.9-fold, with a coverage variance of only 49% of the original variance. We identified 35 pathways unique to the original dataset and 37 unique to the re-annotated data. The unique original dataset pathways were identified with 91 genes and a mean coverage of 4.2%; in contrast, the unique re-annotated dataset pathways were identified with 608 genes and a mean coverage of 25.4%.
We identified 34 pathways shared between the original and re-annotated differentially expressed mRNA datasets. Similar to the whole array, the pathway coverage in the re-annotated dataset was increased 6.3-fold, with a coverage variance of 61% of the original variance. Fourteen pathways were unique to the re-annotated differentially expressed mRNA dataset. IPA (www.ingenuity.com) reports pathway significance, but not the proportional coverage of the pathway. The proportion of proteins in a given pathway is given by the "ratio coverage", i.e. the number of genes from the data set that map to the pathway divided by the total number of genes that map to the canonical pathway. Re-annotation resulted in 4.5- and 1.7-fold improvements in mean pathway ratio coverage in the entire and differentially expressed datasets, respectively.

Genome re-annotation usually assumes that the genome annotation is current; in contrast, here we have re-annotated a functional genomics data set itself. However, we are aware that a potential issue confounding functional genomics data re-annotation is annotation error in the genomic databases. Although structural and functional re-annotation of functional genomic datasets should be intuitive, we have not seen such re-annotation commonly reported in the materials and methods sections of the published literature. Here we aimed to provide a quantitative example of the importance of such re-annotation. Previously, this same dataset was structurally re-annotated when we designed an automated method (ArrayIDer) for microarray structural re-annotation; here we used ArrayIDer to further update these structural annotations. In the original publication, only 1131 ESTs were structurally annotated based on chicken genes, while the remainder of the microarray was structurally annotated using orthologs from 249 different species.

The GIFtS algorithm calculates a gene's annotation quality score using a binary vector system (either '0' or '1') representing the presence or absence of data.
In effect, if one gene has 3 annotations and another gene 10 annotations, both will be scored equally. In contrast, the GAQ scoring algorithm assigns a score to each annotation of a particular gene individually, so better-annotated genes score higher. In addition, GAQ uses the 'depth' of a GO term in the GO acyclic graph as a quantitative measure of the level of annotation detail; the GAS, GCI and GIFtS algorithms do not take the GO annotation's level of detail into consideration. Finally, unlike the GAQ algorithm, GAS does not allow direct input of large numbers of gene product accession numbers and, although GIFtS and GCI can do so, both algorithms are limited to human genes and require ortholog searching to be used for any other species.

We used the GAQ score to assess the improvement in functional annotations after re-annotation of the FHCRC Chicken 13K cDNA v2.0 microarray and a differentially expressed mRNA set from a previous study using this microarray. At the time of writing, 97.1% of all chicken GO annotations in the GOA database are "Inferred from Electronic Annotation" (IEA). Because we have more functional annotations ('breadth') in the re-annotated whole FHCRC microarray dataset, the proportion of IEA annotations, together with the higher number of proteins in the re-annotated dataset, causes the 7.08-fold increase of the total GAQ score not to be significant. However, even though the mean GAQ score increased only by 13%, this increase is a marked improvement compared to the original annotation. In addition, the GAQ score down-weights annotations inferred electronically compared to those inferred by experimental assays. Were we to exclude the IEA annotations, the power to model the data would be reduced because fewer proteins would have annotations (i.e. annotation 'breadth' decreases); but the mean GAQ for each gene would increase because the lower-scoring proteins would be excluded.
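The contrast drawn above between presence/absence scoring (GIFtS-style) and per-annotation scoring (GAQ-style) can be made concrete. This toy comparison is ours, not code from either tool:

```python
def binary_presence_score(annotation_lists):
    """GIFtS-style: each data category contributes 1 if any data exists,
    0 otherwise, regardless of how many annotations it holds."""
    return sum(1 for anns in annotation_lists if anns)

def per_annotation_score(annotations):
    """GAQ-style: every annotation contributes individually,
    here weighted only by GO-term depth for simplicity."""
    return sum(depth for depth, _code in annotations)

gene_a = [(4, "IDA")] * 3    # a gene with 3 annotations
gene_b = [(4, "IDA")] * 10   # a gene with 10 annotations

# Presence/absence scoring cannot tell the two genes apart...
assert binary_presence_score([gene_a]) == binary_presence_score([gene_b])
# ...while per-annotation scoring ranks the better-annotated gene higher.
assert per_annotation_score(gene_b) > per_annotation_score(gene_a)
```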
Regardless, IEA is a valid method of annotating to the GO and, so long as the evidence code is kept in mind during modeling, we consider IEA annotations valuable additions, especially in model organisms other than mouse. For this reason we included the GO annotation confidence score calculated without the lowest-scoring, most common IEA annotations, and doing so demonstrates that re-annotation even without IEA is an improvement. We calculated the GO annotation confidence score excluding IEA evidence code scores to measure the improved annotation without the down-weighting caused by IEA evidence code scores.

For the original whole microarray annotation, the ratio of the GO annotation confidence score including and excluding IEA is 6.1 (8037 vs. 1325) before re-annotation and 5.8 (20040 vs. 3460) after. For the differentially expressed mRNA annotation, the corresponding ratio is 9.1 (81 vs. 5) before re-annotation and 1.9 (527 vs. 283) after. Although the general trend is similar before and after re-annotation of the original whole microarray dataset, there is an exception for the differentially expressed mRNA dataset. We believe that this difference is due to the structural re-annotation of mRNA clones to corrected, up-to-date structural annotations with fewer functional annotations assigned.

Originally, clone pat.pk0035.g9.f was structurally annotated as Ribonuclease homolog precursor and functionally annotated in chicken GOA database build 17 with only 4 annotations, all based on IEA evidence. When this gene was re-annotated using chicken GOA database build 46, 66 annotations were assigned, of which only 12 were IEA-based. The total GO annotation confidence score for this particular gene is 226, representing 80% of the total GO annotation confidence score (283) of all 9 genes functionally re-annotated to chicken GOA database build 46.
Re-annotation structurally annotates clone pat.pk0035.g9.f as "Marker protein"; this gene is functionally annotated in chicken GOA database build 46 with 22 annotations, all based on IEA evidence.

GOSlim sets are designed to summarize GO datasets and, although this approach loses detailed functional information, it is suitable for comparing and visualizing the overall effect of functional re-annotation. The whole-microarray GOSlim modeling showed increases in GO annotation for all GOSlim groups in each of the three GO ontologies. Re-annotation results in more unique GO annotations and increased GO depth, and thus more GO annotation detail. Summarizing these GO annotations to relevant, more global GOSlim groups results in a higher GO annotation count for global GO Cellular Component terms such as "cellular_component" or "cell". Similarly for GO Molecular Function, this phenomenon is shown for the GO group "binding", a global GO term that accounts for 74.4% of the chicken gene products in the Molecular Function ontology.

GOSlim modeling of the differentially expressed mRNA dataset showed both increased and decreased GOSlim groups. In some instances this reduction is a direct result of re-annotation. For example, the number of GO annotations summarized to 'transporter activity' decreases, while the more specific GO terms 'protein transporter activity' and 'ion transmembrane transporter activity' have more GO annotations. As expected, re-annotation results in increased GO annotation granularity, or specificity, for the differentially expressed mRNA dataset. Because the higher-order GO terms used in the GOSlim analysis do not describe in detail which canonical pathways and networks are represented by the datasets, we used Ingenuity Pathway Analysis (IPA) to retrieve all significant canonical pathways.
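Pathway-significance testing of the kind IPA performs is a one-sided Fisher's exact test on a 2x2 table (dataset genes in/out of the pathway vs. reference-list genes in/out). A minimal standard-library sketch, with all counts illustrative:

```python
from math import comb

def pathway_enrichment_p(k, n, K, N):
    """One-sided Fisher's exact test via the hypergeometric upper tail:
    probability that at least k of the n dataset genes fall in a pathway
    of K genes, when genes are drawn from a reference list of N genes."""
    return sum(
        comb(K, i) * comb(N - K, n - i)
        for i in range(k, min(n, K) + 1)
    ) / comb(N, n)
```

With a p <= 0.05 cutoff, pathways whose overlap with the dataset is unlikely under random sampling from the reference list are reported as significant; note this test treats genes as independent, the caveat discussed below.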
We clearly increased the canonical pathway coverage and lowered the statistical variance in assigning pathways. In order to calculate the statistical variance using Fisher's exact test, we used the Agilent 44K chicken microarray as a reference list, which is the closest to the FHCRC 13K chicken cDNA microarray. Although not optimal, this approach provides the best evaluative means for assessing the impact of re-annotation on functional genomics data. In addition, IPA uses Fisher's exact test for significance calculations, which assumes gene independence; in biology, however, gene expression cannot always be considered independent, and arguably never is.

At the time of manuscript preparation, the most current publicly available structural and functional annotations were used. We improved the total structural annotations by 10.5-fold, the functional annotations by about 6.3-fold and the pathway coverage by 6.9-fold since the last update of the FHCRC array in 2006. The time period from that update until our data analysis covers 40 months, which represents 20 UniGene updates, 40 GOA chicken database updates and 13 IPA database updates. This continual updating suggests that re-annotation of our annotations would again be necessary in about 4 to 6 months.

Even though it is clear that re-annotation has a significant effect on data quality, the most important question for knowledge generation is whether or not it has an impact on data interpretation. The FHCRC 13K chicken cDNA microarray has been used for cancer research and for studies of Salmonella enterica Serovar Enteritidis infection. The impact of our re-annotation on the interpretation of this Salmonella infection study can be described on three levels. First, the previous annotation allowed only for candidate gene identification and, as stated by Zhou et al. in the paper, one constraint at the time was the lack of annotation allowing in-depth analysis of signal transduction pathways.
Our re-annotation increased the pathway coverage of several major immune response pathways. Second, the re-annotated data allows us to confirm and consolidate suggestions from Zhou et al. For example, CD3epsilon, the cytokine IL-1β, and the chemokine ah294 (CCL5) were identified as differentially expressed key genes involved in the immune response to SE. The re-annotated data not only allowed chicken-specific functional annotation of these genes, but also allowed identification of their related pathways with greater confidence and coverage. These pathways have been described for Salmonella enterica serovar Typhimurium infection in mice, and the re-annotated data supports their involvement in Salmonella enterica serovar Enteritidis (SE) infection in the chicken model.

Third, re-annotation identified additional genes involved in major immune pathways that were not identified in the original work. For example, Zhou et al. identified differential regulation of two ESTs (pat.pk0028.f8.f and pat.pk0032.e7.f), both of which were structurally annotated to the T-cell surface glycoprotein CD28. Re-annotation, however, showed that the EST pat.pk0028.f8.f (GenBank AI980641) is more correctly structurally annotated to protein tyrosine phosphatase type IVA member 1 (PTP4A1) and the EST pat.pk0032.e7.f (GenBank AI980751) is more correctly structurally re-annotated to inducible T-cell co-stimulator (ICOS, or CD278). ICOS is interesting in that it shares structural and functional similarities with CD28, and both are required for naïve CD4+ T-cell activation, yet ICOS contributes more to T-cell survival and proliferation during an immune response.

Similarly, another EST with higher expression in chickens with low SE burden (resistant birds) is pat.pk0024.f7.f, which was originally structurally annotated to P0498A12.26, a protein coding region from the plant Oryza sativa. Not only is this NCBI record now obsolete, but our re-annotation corrected the structural annotation to chicken LOC693257 NK-lysin. NK-lysin is a known anti-microbial peptide, expressed in T and NK cells, with reported activity against Salmonella species; infection with Salmonella enterica serovar Typhimurium resulted in higher NK-lysin mRNA expression.

In summary, although bio- and computational technologies are greatly accelerating functional genomics research, we propose that re-annotation should be the standard first step when analysing functional genomics data. This step is especially valuable for those species in which data and resources are rapidly expanding, including those for which genomic sequence information has only recently become available.

We used the FHCRC Chicken 13K cDNA v.2.0 microarray (http://www.ncbi.nlm.nih.gov/geo/, accession GPL 1836). The table of 15,769 rows was downloaded and filtered for duplicate EST entries, which resulted in 15,227 usable ESTs as described on the GEO website. We used the EST clone IDs for further analysis. The identifiers of the differentially expressed mRNAs were retrieved from the published manuscript by Zhou et al. ArrayIDer retrieves gene and protein information from both NCBI UniGene (http://www.ncbi.nlm.nih.gov/unigene/) and the International Protein Index (http://www.ebi.ac.uk/IPI/IPIhelp.html). ESTs without structural annotations were searched against the EBI InterPro database. We used the GOA chicken database build 17, available from AgBase at the end of 2007, as a functional annotation baseline available at the time the FHCRC 13K chicken cDNA v2.0 microarray was published. We used the GOA chicken database build 46, published on July 30th 2009, as the resource for re-annotations.
To compare the original and re-annotated data, we used the 13K microarray and the differentially expressed mRNA data, and both Gene Ontology (GO) and network-based modeling. We re-annotated the entire microarray to the most recent structural annotation, and used GOSlimViewer, available at AgBase, to group the GO annotations to higher-order terms based on the 'GOA and whole proteome GOSlim' set for comparing the distribution of major biological groups represented in each dataset. In addition, we used the Gene Ontology Annotation Quality (GAQ) score.

We used the Ingenuity Pathway Analysis application (IPA; Ingenuity® Systems, www.ingenuity.com) to identify and visualize significant canonical pathways represented on the whole microarray and the differentially expressed mRNA datasets of the experiment of Zhou et al. IPA uses publicly available databases and literature-curated gene information to calculate statistically significant canonical pathways. IPA uses a Fisher's exact test to calculate a P-value determining the probability of the gene associations in the datasets and pathways. To calculate association and gene significance, we used the Agilent 44K chicken microarray (NCBI GEO accession: GPL4993) provided by IPA as a reference list, since this is the closest chicken microarray to the FHCRC microarray available in IPA. We used p≤0.05 to select pathways with significant gene representation. We compared the original and re-annotated data based on the represented significant canonical pathways and pathway coverage.

This study describes a large-scale manual re-annotation of data samples in the Gene Expression Omnibus (GEO), using variables and values derived from the National Cancer Institute thesaurus. A framework is described for creating an annotation scheme for various diseases that is flexible, comprehensive, and scalable.
The annotation structure is evaluated by measuring coverage and agreement between annotators. There were 12,500 samples annotated with approximately 30 variables, in each of six disease categories: breast cancer, colon cancer, inflammatory bowel disease (IBD), rheumatoid arthritis (RA), systemic lupus erythematosus (SLE), and Type 1 diabetes mellitus (DM). The annotators provided excellent variable coverage, with known values for over 98% of three critical variables: disease state, tissue, and sample type. There was 89% strict inter-annotator agreement and 92% agreement when using semantic and partial similarity measures. We show that it is possible to perform manual re-annotation of a large repository in a reliable manner.

Large repositories of gene expression data are currently available and serve as online resources for researchers, including the Gene Expression Omnibus (GEO), the Center for Information Biology Gene Expression Database (CIBEX), the European Bioinformatics Institute's ArrayExpress and the Stanford Tissue Microarray Database. Many centers have focused on re-annotating biomedical data with the goal of increasing utility for researchers. The promise of fast-paced annotation amid rapid accumulation of data has spurred great interest in the progressive development of automated methods. Annotation inconsistencies remain, however: for example, breast cancer is entered as a value for "disease state", whereas breast tumor is entered as a value for "cell line" in the sample excerpted in the table. The value breast tumor is ambiguous under "cell line" because this axis should specifically refer to breast cancer instead of tumor, given that these cell lines refer to models of neoplastic diseases. Several attempts directed specifically at annotation of gene expression data have been performed and remain the subject of ongoing work.
In particular, GEO datasets (GDS) are being developed to systematically categorize statistically and biologically similar samples that were processed using a similar platform within a single study. It is not surprising, therefore, that re-annotating GEO and other large microarray data repositories is the focus of several groups. In particular, automatic text processing is being used to capture disease states corresponding to a given sample from GDS annotations. In a recently published article in which the objective was to identify disease and control samples within an experiment, the GDS subsets were analyzed using representative text phrases and algorithms for negation and lexical variation.

We therefore describe a large-scale manual re-annotation of data samples in GEO, including variable fields derived from the NCI thesaurus and corresponding values that also utilize primarily controlled terminology. Three sections below specifically: (1) enumerate the iterative process used for developing an annotation structure, (2) describe the annotation tool and the annotators' characteristics, and (3) describe the framework for evaluation.

An iterative process was designed for identifying the variables selected for annotation, as follows:

1. Variable generation - Human experts develop a list of variables for annotation. This procedure is based on guidelines and publications that are related to the disease category. Variables were then trimmed based on consensus among three physicians.

2. Supervised domain annotation - A trained annotator was instructed to start annotating the given variables under physician supervision. Whenever a variable deemed important was identified, it was listed for further deliberation. The process was then repeated, back to number (1) above, until no further variables were identified or the target number of samples for preliminary annotation was reached.

3.
Unsupervised annotation - A trained human annotator then performed unsupervised annotation independently, after receiving a standardized, written instruction protocol. Instructions were specifically developed for each disease category. Two human annotators were assigned to code each data sample. Randomized assignment between annotators was performed by disease category to minimize the occurrence of two coders being assigned to annotate the same disease category (and therefore the same samples) repeatedly.

4. Disagreement and partial agreement identification - After the human annotators finished coding their assigned experiments, the data was compiled and the assigned values were compared to measure agreement. The method used to assess agreement is further described below.

5. Re-annotation - Finally, the samples containing values that were not initially in agreement were re-annotated, and the correct annotation was determined by a majority vote. In the event of a three-way tie, one of the investigators performed a manual review and final adjudication.

To ensure consistency of terminology, the NCI thesaurus was utilized for the disease domains annotated, consistent with prior annotation initiatives. The variable "tissue" was assigned several different values, one of which was "breast". This assignment provided flexibility, allowing for the addition of other tissue types whenever the disease domain changes. There was also sufficient granularity to allow for actual interrogations of the database for future hypothesis generation or validation. A full description of the web-based annotation tool and the quantity of samples annotated over time is given in a separate paper.

There were a total of six annotators, including four senior biology students, one graduate student in the biological sciences, and one physician. As noted previously, each sample had at least two annotators assigning values to variables.
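The adjudication steps above (compare assigned values, resolve disagreements by majority vote, and send ties to manual review) can be sketched as follows; the function name is ours:

```python
from collections import Counter

def adjudicate(values):
    """Resolve one variable's competing annotations by majority vote.
    Returns the winning value, or None when the vote is tied
    (e.g. a three-way tie), signalling manual review and final
    adjudication by an investigator."""
    counts = Counter(values).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None  # tie: escalate to manual review
    return counts[0][0]
```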
The annotation task was to provide phenotypic information for each data sample that was available in GEO for breast and colon cancer, IBD, DM, SLE, and RA. Thus, it was critical to obtain standardized values for most of the annotation variables to ensure that the annotations would be consistent. This entailed a review of data descriptions listed in various sources: the data sets (GDS), series information (GSE) and sample information (GSM). In addition, information was available in supplementary files and in published scientific articles, which are not in GEO. Manual review of all these data sources was necessary to obtain sufficient variable coverage.

Coverage was defined as the percentage of non-'unknown' values that were assigned to a variable. Specifically, it can be represented as: Coverage = X/Y, where X represents the number of variables with values that are not 'unknown' and Y represents the total number of variables that were annotated.

To validate the reliability of the annotation scheme, we computed the percentage of agreement between annotators, defined as the number of variables for which both annotators gave the same value, divided by the total number of variables that were annotated. We calculated percentage agreement for each level of similarity across all disease categories.

A substantial fraction of GEO, including 45 platforms, 2,445 studies, and 58,432 samples, was extracted into the analytical database. Among them, several disease categories are represented, but only 11,511 samples (19.7%) are included in various GDS subsets. Over a period of five weeks, 12,500 samples (21.4%) from a limited set of disease categories were annotated, as shown in the table. In addition, for each disease category, a comprehensive and controlled set of phenotypic variables was provided, also shown in the table. The next goal was to provide adequate coverage for as many of the identified variables as possible.
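The coverage and strict-agreement definitions above reduce to simple ratios; a sketch, with the variable lists purely illustrative:

```python
def coverage(values):
    """Coverage = X/Y: the fraction of annotated variables whose
    assigned value is not 'unknown'."""
    if not values:
        return 0.0
    return sum(1 for v in values if v != "unknown") / len(values)

def strict_agreement(coder1, coder2):
    """Fraction of variables for which both annotators gave exactly
    the same value (exact string matching), i.e. strict agreement."""
    assert len(coder1) == len(coder2)
    if not coder1:
        return 0.0
    return sum(1 for a, b in zip(coder1, coder2) if a == b) / len(coder1)
```

The semantic and partial similarity measures reported above relax the exact-match test in `strict_agreement` to count near-equivalent values as agreeing.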
Variable coverage results are shown in the table. Inter-annotator agreement results are also shown in the table; overall, there was excellent inter-annotator agreement across multiple disease domains.

Repositories for gene expression data such as GEO are expanding very rapidly. This study's re-annotation evaluation was performed on sample quantities that are two orders of magnitude higher than most prior reports. We also described the methodology used for identifying relevant variables in each disease category. This iterative process is efficient and provided a mechanism for identifying relevant variables for domain categories. The technique provides a framework for inducing the structure of a specific domain in an iterative and consultative manner. Excellent inter-annotator agreement confirmed that the annotation variables were robust and easily identifiable.

Finally, we provided a framework for measuring inter-annotator agreement. Apart from strict agreement measured using exact string matching between variable values, we defined and considered two other similarity categories that are known to be especially useful for annotations that rely heavily on free text. We showed an improvement in agreement using these more lenient similarity measures. The degree of improvement was mitigated by the very controlled terminology from the NCI Thesaurus that annotators utilized, and was augmented by the annotation tool. Several studies use semantic similarity as a measurement of agreement in annotation of microarray data.

Phenotypic annotations and data sample information are critically important for translational research. In particular, it is important to have good coverage for vital information specific to the clinical domain, as well as to provide accurate annotations.
We show that it is possible to perform manual re-annotation of a large repository in a reliable and efficient manner.

The authors declare that they have no competing interests. All authors were involved in designing the study and developing the annotation structure. Likewise, all authors were involved in the annotation system design and development of the user interface. After the initial pilot phase, RL, CH and LOM were involved in further variable identification and selection. CH was involved in performing some of the annotation in the pilot phase. RL was involved in supervising the entire annotation process and evaluating annotation quality. All authors contributed to the preparation of this manuscript and read and approved the final version.

Genes and gene products are frequently annotated with Gene Ontology concepts based on the evidence provided in genomics articles. Manually locating and curating information about a genomic entity from the biomedical literature requires vast amounts of human effort. Hence, there is clearly a need for automated computational tools to annotate genes and gene products with Gene Ontology concepts by computationally capturing the related knowledge embedded in textual data. In this article, we present an automated genomic entity annotation system, GEANN, which extracts information about the characteristics of genes and gene products from article abstracts in PubMed, and translates the discovered knowledge into Gene Ontology (GO) concepts, a widely used standardized vocabulary of genomic traits. GEANN utilizes textual "extraction patterns" and a semantic matching framework to locate phrases matching a pattern and produce Gene Ontology annotations for genes and gene products. In our experiments, GEANN reached a precision level of 78% at a recall level of 61%. On a select set of Gene Ontology concepts, GEANN either outperforms or is comparable to two other automated annotation studies.
Use of WordNet for semantic pattern matching improves the precision and recall by 24% and 15%, respectively, and the improvement due to semantic pattern matching becomes more apparent as the Gene Ontology terms become more general. GEANN is useful for two distinct purposes: (i) automating the annotation of genomic entities with Gene Ontology concepts, and (ii) providing existing annotations with additional "evidence articles" from the literature. The use of textual extraction patterns that are constructed based on the existing annotations achieves high precision. The semantic pattern matching framework provides a more flexible pattern matching scheme than "exact matching", with the advantage of locating approximate pattern occurrences with similar semantics. The relatively low recall of our pattern-based approach may be enhanced either by employing a probabilistic annotation framework based on annotation neighbourhoods in textual data, or, alternatively, the statistical enrichment threshold may be lowered for applications that put more value on achieving higher recall. The number of published molecular biology and genomics research articles has been increasing at a fast rate. Advancements in computational methods expediting the prediction of thousands of genes have generated high volumes of biological data. In addition, with the advent of microarray technology, it is now possible to observe the expression profiles of thousands of genes simultaneously. Consequently, the introduction of all these technologies has resulted in remarkable increases in the produced and published data. Currently, biological knowledge recorded in textual documents is not readily available for computerized analysis, and the current practice of manual curation of text documents requires enormous human resources.
Hence, there is a need for automated computational tools to extract useful information from textual data. The computationally extracted knowledge needs to be transformed into a form that can both be analyzed by computers and read by humans. To this end, different fields have developed various ontologies in an effort to define a standard vocabulary for each field. In the context of genomics, Gene Ontology (GO) is proposed as such a vocabulary: it covers molecular function, biological process and cellular component, and contains around 20,000 concepts organized in a hierarchy. An existing GO annotation may cite an evidence article; an evidence article for an annotation usually discusses or refers to a specific gene trait that leads to the corresponding annotation. For a GO concept g, the evidence article set of g contains all the articles that are referenced as evidence articles for the existing annotations of genes with g. Presently, GO annotations are either manually curated from the literature or computationally created. In this work, we focus on information extraction from biomedical publications in terms of GO concept annotations. We present a gene annotation system, called GEANN, that allows for

• automated extraction of knowledge about various traits of genomic entities from unstructured textual data; and

• annotating genes and proteins with appropriate concepts from GO, based on the extracted knowledge.

GEANN utilizes the existing GO concept evidence articles to construct textual extraction patterns for each GO concept. The extraction patterns are flexible in that GEANN employs semantic matching of phrases by utilizing WordNet. The extracted pattern set is further enriched by employing pattern crosswalks, which involve the creation of new patterns by combining patterns with overlapping components into larger patterns.
GEANN then searches PubMed publication abstracts for matches to the patterns of genomic entities, and uses the located matches to annotate genomic entities with GO concepts. In this article, we evaluate GEANN's annotation accuracy over 114 GO concepts, where GEANN has reached 78% precision at 61% recall on average. GEANN is being developed as part of PathCase. In the patterns below, a genomic entity name is marked with a tag placed by the named entity recognizer. Note that, in comparison to 3-tuple regular patterns, side-joined patterns have five tuples, of which the second and fourth (the middle tuples) are sequences of words, while the remaining tuples are bags of words. Next, we give an example of a side-joined pattern created for a GO concept. Example 3: Consider the two patterns P1 and P2 below. Then, the side-joined pattern is: [see Additional file]. Side-joined patterns are helpful in detecting consecutive pattern matches that partially overlap. If there exist two consecutive regular pattern matches, such a matching should be evaluated differently from two separate matches of regular patterns, as it may provide stronger evidence for the existence of a possible GO annotation in the matching region. Therefore, side-joined patterns are abstractions to capture consecutive matches. The second type of extended patterns is constructed based on partial overlaps between the middle and side (right or left) tuples of two patterns. Since middle tuples are constructed from significant terms/phrases, a partial overlap, that is, a subset of a middle tuple, will also be a significant term. A pattern pair P1 = {left1}<mid1>{right1} and P2 = {left2}<mid2>{right2} can be merged into a 4-tuple middle-joined pattern, as illustrated in Figure , in one of three ways:

a. right middle walk: {right1} ∩ <mid2> ≠ ∅ and <mid1> ∩ {left2} = ∅

b. left middle walk: <mid1> ∩ {left2} ≠ ∅ and {right1} ∩ <mid2> = ∅

c.
middle walk: <mid1> ∩ {left2} ≠ ∅ and {right1} ∩ <mid2> ≠ ∅

In comparison to 3-tuple regular patterns, middle-joined patterns have 4 tuples, {left1}<mid1><mid2>{right2}, where <mid1> and <mid2> are word sequences, whereas {left1} and {right2} are bags of words. In case (a), the first middle tuple is the intersection of the {right1} and <mid2> tuples, where the intersection is aligned according to the order of words in <mid2>. Case (b) is handled similarly. As for case (c), the first and second middle tuples are subsets of <mid1> and <mid2>. Middle-joined pattern construction is illustrated in Figure . Example 4 (middle-joined pattern, type (c) middle walk): Consider, for the GO concept positive transcription elongation factor, the two patterns P1 and P2 below, where the window size is three. Then, the resulting middle-joined pattern P3 is: [see Additional file]. Like side-joined patterns, middle-joined patterns capture consecutive pattern matches in textual data. Since we enforce full matching of middle tuple(s) for a valid match, a partial match to the middle tuple of a regular pattern is missed. However, middle-joined patterns accommodate consecutive partial matches since, by definition, their middle tuples are constructed from the intersection of a middle tuple and a side tuple of different patterns. For instance, in Example 3, a partial match to P1 followed by a partial match to P2 can be accommodated by the middle-joined pattern P3; otherwise, such a match would be missed. Pattern scores are used to assign a confidence value to a candidate annotation created as a result of a match to a particular pattern. For scoring patterns, GEANN uses the statistical enrichment scores of significant terms/phrases as the scores of the patterns.
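As a rough illustration, the 3-tuple patterns and the side-joined merge described above can be sketched as follows. The data layout, names, and the rule used for combining the overlapping side tuples are illustrative assumptions, not GEANN's actual implementation:

```python
# Sketch of GEANN-style extraction patterns. All names and the exact
# merge rule for the overlapping side tuples are assumptions.
from dataclasses import dataclass


@dataclass
class Pattern:
    left: set      # bag of words preceding the significant phrase
    middle: tuple  # ordered significant term/phrase (the middle tuple)
    right: set     # bag of words following the significant phrase


def side_join(p1: Pattern, p2: Pattern) -> tuple:
    """Merge two consecutive 3-tuple patterns into a 5-tuple side-joined
    pattern: {left1} <mid1> {right1 and left2 combined} <mid2> {right2}.
    Taking the union of the overlapping side tuples is an assumption."""
    return (p1.left, p1.middle, p1.right | p2.left, p2.middle, p2.right)


p1 = Pattern({"increase"}, ("catalytic", "rate"), {"of"})
p2 = Pattern({"of"}, ("rna", "polymerase", "ii"), {"transcription"})
five_tuple = side_join(p1, p2)
```

The merged pattern keeps both middle tuples intact, which is what lets a single side-joined pattern represent two consecutive regular-pattern matches.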
That is, given a GO concept g, PatternScore(P) of a pattern P whose middle tuple is constructed from a term/phrase t is the statistical enrichment score of t, where E is the evidence article set of g and D is the set of all articles in the database. Similarly, extended patterns are also scored based on the statistical enrichment scores of their middle tuples. However, since extended patterns have two middle tuples, the statistical enrichment is adapted as follows. Given a side-joined or middle-joined pattern ExP with middle-tuple phrases t1 and t2, and a GO concept g with evidence article set E, PatternScore(ExP) is computed from the joint frequencies, where f_E is the frequency of articles that contain both t1 and t2 in E, and f_D is the frequency of articles that contain both t1 and t2 in D. For middle-joined patterns, t1 and t2 are required to be consecutive, while for side-joined patterns there may be up to WindowSize words between t1 and t2, to compensate for the tuple between t1 and t2 in a side-joined pattern. The fact that we design our pattern scoring mechanism entirely on the basis of the enrichment scores of the significant phrases is closely related to the pattern construction phase. Among the elements of a pattern, the middle tuple constitutes its core, since only the middle tuple consists of phrases or terms that are determined based on frequency-based enrichment criteria. The remaining elements of a pattern, on the other hand, are taken directly from the words surrounding the significant phrases in evidence articles, without being subject to any statistical selection process. Hence, middle tuples are the elements that provide the semantic connection between a pattern and the GO concept to which it belongs. Alternatively, we could use the support of the significant phrase in the middle tuple.
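The enrichment-based pattern scoring can be sketched as below. The excerpt does not reproduce the exact formula, so the log-ratio of relative article frequencies used here is only an assumed plausible form, and `enrichment_score` with its arguments is a hypothetical name:

```python
import math


def enrichment_score(freq_in_E, size_E, freq_in_D, size_D):
    """Plausible statistical-enrichment score for a significant phrase:
    log-ratio of its relative article frequency in the evidence set E
    of a GO concept versus the whole article database D. The paper's
    exact formula is not shown in this excerpt; this form is assumed."""
    p_E = freq_in_E / size_E
    p_D = freq_in_D / size_D
    return math.log(p_E / p_D) if p_D > 0 else float("inf")


# A phrase appearing in 12 of 30 evidence articles, but only in
# 200 of the 150,000 articles in the database, is strongly enriched.
score = enrichment_score(12, 30, 200, 150_000)
```

A phrase common to almost all articles in D gets p_E ≈ p_D and hence a score near zero, which matches the motivation given in the text for refining raw support with global support.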
Nevertheless, the enrichment score already utilizes support information, and further refines it by considering the global support, so that patterns whose significant phrases are common to almost all articles in the database have relatively smaller influence. Now that the patterns are obtained, the next step is searching for occurrences of patterns with the goal of predicting new annotations based on pattern matches. Given a pattern P and an article Pr, we have a match for P in Pr if (i) Pr contains the significant phrase in the middle tuple of P, and (ii) the left and right tuples of P are semantically similar to the words surrounding the occurrence of P's middle tuple in Pr. We require an exact occurrence of P's middle tuple in Pr, since the middle tuple is the core of a pattern and the only element of a pattern computed based on a statistical measure. The motivation for looking for semantic similarity rather than an exact one-to-one match for the side tuples is that, for instance, given a pattern P1 = "{increase catalytic rate}{RNA polymerase II}", we want to be able to detect phrases which give the sense that "transcription elongation" is positively affected. Hence, phrases like "stimulates rate of transcription elongation" or "facilitates transcription elongation" also constitute "semantic" matches to the above pattern. Given a pattern Pat to be searched in a set of articles ArticleSet, and the GO concept that Pat belongs to, the matching algorithm returns a set of gene annotation predictions with their confidence scores. For each occurrence of Pat's middle tuple in an article Pr in ArticleSet, the corresponding left and right tuples are extracted from the words surrounding the occurrence in Pr. Then, Pat's left and right tuples are compared for semantic similarity to the left and right tuples just extracted from Pr.
We next describe the implementation of this comparison procedure [see Additional file]. In order to determine the extent of semantic matching between two given sets of words WS1 and WS2, GEANN employs WordNet to check, for each word pair (Wi, Wj) where Wi ∈ WS1 and Wj ∈ WS2, whether the two words have similar meanings. To this end, we have implemented a semantic similarity computation framework based on WordNet. Given a word pair, many semantic similarity measures have been proposed to compute the similarity between Wi and Wj in different contexts. Given a taxonomy T and two nodes (representing words in WordNet) t1 and t2 in T, the most intuitive way to compute the similarity between t1 and t2 is to measure the distance between t1 and t2 in T. If there are multiple paths from t1 to t2, then the shortest path is selected to compute the similarity. For instance, in the taxonomy of Figure , Sim_edge_distance = 1/2 = 0.5 for one pair of nodes, while the similarity between "car" and "bicycle" is Sim_edge_distance = 1/3 = 0.33. The information content of a node t in taxonomy T is computed based on the occurrence probability p(t) of t in T, where p(t) is the ratio of the number of nodes subsumed by t to the total number of nodes in T. A lower occurrence probability for a node t implies a higher information content. The information content IC(t) of node t is quantified as -log p(t), which decreases as t gets more general in the taxonomy. As an example, consider the occurrence probability of the node "automotive" in Figure . Resnik proposes a similarity measure based on information content. To compare two word sets, GEANN builds a weighted bipartite graph G whose nodes ti and tj are the words from WS1 and WS2, and the weight of an edge is the semantic similarity between ti and tj, where ti ∈ WS1 and tj ∈ WS2.
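Before moving on to the matching step, the two taxonomy-based measures above (edge distance and information content) can be sketched on a toy taxonomy. The taxonomy and all names below are illustrative assumptions, not the WordNet data itself; the "car"/"bicycle" value reproduces the 1/3 example from the text:

```python
import math

# Toy "is-a" taxonomy (child -> parent), assumed for illustration only.
PARENT = {"car": "automotive", "truck": "automotive",
          "automotive": "vehicle", "bicycle": "vehicle", "vehicle": None}


def path_to_root(node):
    """The node followed by all of its ancestors, up to the root."""
    path = []
    while node is not None:
        path.append(node)
        node = PARENT[node]
    return path


def edge_distance_sim(a, b):
    """Sim_edge_distance = 1 / (edges on the shortest path), with the
    shortest path routed through the lowest common ancestor."""
    if a == b:
        return 1.0
    pa, pb = path_to_root(a), path_to_root(b)
    for i, node in enumerate(pa):
        if node in pb:
            return 1.0 / (i + pb.index(node))
    return 0.0


def information_content(node):
    """IC(t) = -log p(t), where p(t) is the fraction of taxonomy nodes
    subsumed by t (t itself included)."""
    subsumed = sum(1 for n in PARENT if node in path_to_root(n))
    return -math.log(subsumed / len(PARENT))
```

Here "car" and "truck" share the parent "automotive", giving 1/2 = 0.5, while "car" and "bicycle" only meet at "vehicle", giving 1/3; the root "vehicle" subsumes everything, so its information content is 0.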
The problem of computing the maximum total matching weight on G is to find a subset E' of edges in G such that no two edges in E' share a node, all nodes are incident to an edge in E', and the sum of edge weights in E' is maximum [see Additional file]. The match score of pattern P to an occurrence O of P in an article is then computed as the average of the semantic matching scores for the left and the right tuples of P. That is,

MatchScore(P, O) = (Sim(P.LeftTuple, O.LeftTuple) + Sim(P.RightTuple, O.RightTuple)) / 2

where P.LeftTuple is the left tuple of pattern P, and O.LeftTuple is the word set in O that matches the left tuple of P. Similarly, O.RightTuple is the word set in O that matches P.RightTuple. The semantic similarity score returned from the WordNet evaluation is used as the basis of our confidence in the match between P and O. Thus, each individual pattern match between P and O is assigned a score based on (i) the score of the pattern P, and (ii) the semantic similarity between P and O computed using WordNet (Eq. 1). Having located a text occurrence O that matches the pattern P, and evaluated the match score, the next step is to decide on the genomic entity that will be associated with the match and, hence, annotated with the specific GO concept the pattern belongs to. We next describe the implementation of this step. In locating the corresponding gene for a candidate annotation, there are two main issues to deal with: (i) detecting terms or phrases that are gene or gene product names, and (ii) determining which gene to choose, among possibly many candidates located around the matching region in the text. The first task is a particular version of the problem of developing a named entity tagger, which is an active research area in natural language processing. Since our focus in this study is not on developing a named entity tagger, we utilized an existing biological named entity recognizer, called Abner.
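The maximum-total-weight matching and the averaged match score just described can be sketched as follows. For the short side tuples involved, a brute-force search over assignments suffices (a real implementation would use the Hungarian algorithm); all names here are illustrative:

```python
from itertools import permutations


def max_matching_weight(weights):
    """Maximum total matching weight on a small complete bipartite graph.
    weights[i][j] is the semantic similarity between word i of WS1 and
    word j of WS2. Brute force over injective assignments; only suitable
    for the handful of words in a side tuple."""
    n, m = len(weights), len(weights[0])
    if n > m:  # transpose so every row can be matched to a column
        weights = [[weights[i][j] for i in range(n)] for j in range(m)]
        n, m = m, n
    return max(sum(weights[i][p[i]] for i in range(n))
               for p in permutations(range(m), n))


def match_score(left_sim, right_sim):
    """Average of the left- and right-tuple semantic matching scores,
    per the description above."""
    return (left_sim + right_sim) / 2.0


# Word 0 of WS1 aligns best with word 0 of WS2, word 1 with word 1.
w = [[0.9, 0.1],
     [0.2, 0.8]]
best = max_matching_weight(w)
```

Pairing the rows greedily along the diagonal yields 0.9 + 0.8; no other assignment does better, and that total is what feeds the similarity score for one side tuple.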
Abner is used to tag gene and gene product names in the text. Once the gene names are tagged by the named entity tagger, the next task is to decide on the gene to be annotated. This task is not straightforward, as there may be several gene products/genes around the matched phrase in the abstract. Thus, we need a mechanism to correctly recognize the genomic entity to which the matched occurrence O refers. Our approach is based on a set of heuristics: we first look into the sentence containing the match M, and choose the gene product that comes before the matching phrase in the same sentence. If we cannot find one, we check the part that follows the matching region in the same sentence. If there is no gene name mentioned in the same sentence, we check the previous and the following sentences, respectively. Finally, each predicted annotation is assigned an annotation (confidence) score. The final annotation score of a gene g annotated via a pattern P with occurrence O in the text is a function of both the match score of P to O (Eq. 2) and the distance between the reference to the gene and O in the text, where FDistance is the distance function, t is the distance of the gene reference g to the occurrence O in terms of the number of words between them, and n is the minimum number of sentences that span g, O, and the set of words between g and O. As an example, if g and O are in the same sentence, n = 1; if the reference to g is in the sentence following the one containing O, then n = 2, and so on. Intuitively, the distance function should generate lower scores as the distance t increases. In addition, being located in different sentences should considerably decay the distance function value. Therefore, FDistance should be a monotonically decreasing function as t or n increases.
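Returning to the gene-selection step described above, the sentence-scanning heuristic can be sketched as below. The tokenized data layout, the function name, and the reading of "comes first before the matching phrase" as the nearest preceding tagged gene are all assumptions:

```python
def choose_gene(sentences, match_pos, gene_names):
    """Sketch of the sentence-scanning heuristic: prefer a tagged gene
    before the match in the same sentence (nearest one, an assumed
    reading), then one after it in the same sentence, then the previous
    sentence, then the following one."""
    s, t = match_pos  # (sentence index, token index) of the match
    before = [w for w in sentences[s][:t] if w in gene_names]
    if before:
        return before[-1]
    after = [w for w in sentences[s][t + 1:] if w in gene_names]
    if after:
        return after[0]
    for idx in (s - 1, s + 1):  # previous sentence first, then next
        if 0 <= idx < len(sentences):
            hits = [w for w in sentences[idx] if w in gene_names]
            if hits:
                return hits[0]
    return None


sents = [["BRCA1", "regulates", "DNA", "repair"],
         ["It", "also", "binds", "RAD51"]]
gene = choose_gene(sents, (0, 3), {"BRCA1", "RAD51"})  # match on "repair"
```

With the match on "repair", the preceding "BRCA1" in the same sentence wins; a match in the second sentence would instead pick up "RAD51".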
In this article, we use a heuristic distance function that conforms to the above intuitions concerning t and n. Alternative distance functions are possible as long as they are monotonically decreasing as t or n increases. While designing our particular function, we chose to incorporate n as an exponent of the distance t, since we have observed in several examples that the reliability of annotations decays significantly when a pattern match in one sentence is used to annotate a gene in a different sentence. In contrast, it is our observation that the impact of the distance parameter t is less severe in comparison to n. Thus, t is incorporated so that it affects the value of the function in a linearly inverse-proportional manner. Each GO annotation carries an evidence code which indicates how the annotation was created, i.e., how reliable it is. The least reliable annotations are the ones with evidence code IEA (Inferred from Electronic Annotation), which are computationally created and not curated. Therefore, we exclude such annotations from our training data. In order to evaluate the performance of GEANN, we perform experiments on annotating genes in NCBI's GenBank. Our experiments are based on precision-recall analysis of the predicted annotation set under several circumstances.
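One concrete function consistent with the stated intuitions (n as an exponent of t, inversely linear in t within a sentence) can be sketched as below. The paper's exact formula and its combination with the match score are not reproduced in this excerpt, so both forms here are assumptions:

```python
def f_distance(t, n):
    """Assumed form of the heuristic distance function: the sentence
    span n enters as an exponent of the word distance t, so the value
    is inversely linear in t within one sentence (n = 1) and decays
    sharply across sentence boundaries. Requires t >= 1 and n >= 1."""
    assert t >= 1 and n >= 1
    return 1.0 / (t ** n)


def annotation_score(match_score, t, n):
    """Assumed combination of the match score (Eq. 2) with the
    distance factor; the actual combination rule is not shown here."""
    return match_score * f_distance(t, n)
```

This satisfies both requirements: for fixed n the score falls off linearly in 1/t, while moving the gene reference one sentence away (n from 1 to 2) squares the denominator and thus penalizes far more strongly than adding a few words of distance.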
To this end, for each case, we adopt the k-fold cross-validation scheme as follows. Definition (Precision): Given a GO concept C and the set S of predicted annotations for C, precision for C is the ratio of the number of genes in S that are correctly predicted to the total number of genes in S. Definition (Recall): Given a GO concept C and the set S of predicted annotations for C, recall for C is the ratio of the number of genes in S that are correctly predicted to the number of genes that are known to be annotated with C. Definition (F-Value): The F-value is the harmonic mean of precision and recall, computed as F = (2 × precision × recall)/(precision + recall). Since we perform 10-fold cross-validation, for an accurate analysis we require that each partition has at least three evidence articles to test during the evaluations. Hence, we make sure that each GO concept selected for experimental evaluation has at least 30 evidence articles and genes. Thus, the experimental GO concept set consists of 114 GO concepts [see Additional file]. In order to approximate the word frequencies in the actual PubMed database, we used a larger corpus of 150,000 article abstracts consisting of articles that are referred to in support of an annotation in GO (our training set). This corpus is used only for the calculation of statistical enrichment scores, and consists of articles that GenBank curators list as related reference articles for the genes in the GenBank database. The reference article set for each gene is part of the GenBank database and is publicly available for download. GEANN maps gene name occurrences found in PubMed article abstracts to actual gene records in GenBank. One major problem in this type of study is to determine when entities from two different data sources are really referring to the same object.
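The three evaluation measures can be sketched directly from their definitions; the function name and set-based interface are illustrative:

```python
def precision_recall_f(predicted, known):
    """Precision, recall and F-value as defined above, for one GO
    concept. `predicted` and `known` are sets of gene identifiers:
    the predicted annotation set S and the known annotation set."""
    correct = len(predicted & known)
    precision = correct / len(predicted) if predicted else 0.0
    recall = correct / len(known) if known else 0.0
    denom = precision + recall
    f_value = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f_value


# 2 of 4 predictions are correct; 2 of 3 known annotations are found.
p, r, f = precision_recall_f({"g1", "g2", "g3", "g4"}, {"g1", "g2", "g5"})
```

For the example, precision is 2/4, recall is 2/3, and the harmonic mean works out to 4/7.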
The reconciliation process, also known as the entity disambiguation problem, is a research topic by itself. We use the following rules in our evaluation. A1 (Handling Shared Gene Synonyms): Among the GenBank genes that match the symbol being annotated, if at least one of the matched genes has an annotation involving the particular GO concept, then the annotation prediction is considered a correct prediction (true positive). On the other hand, if none of the genes sharing the gene symbol of the predicted annotation has a record corresponding to the particular GO concept among its GO annotations, then such results are considered incorrect predictions, or false positives. A2 (Annotating via the GO Hierarchy): If one of the matched genes in GenBank is annotated with a descendant of the given GO concept G, then G also annotates the gene, due to the true-path rule of GO, which states that if a child concept annotates a gene, then all its ancestor concepts also annotate that gene. A3 (Using Annotations in GenBank that have no Literature Evidence): If the predicted annotation is included in GenBank, then we consider the prediction a true positive regardless of (a) its evidence code and (b) whether it has a literature reference. We first evaluate the overall performance of GEANN. Predicted annotations are ordered by their annotation scores. First, precision and recall values for individual GO concepts are computed by considering the top-k predictions, where k is increased by one at each step until either all the annotation information available for a GO concept is located, or all the candidates in the predicted set are processed. Next, the precision/recall values from the individual GO concept assessments are combined by taking the average of the precision/recall values at each k for the top-k results. From Figure , we make Observation 1: GEANN yields 78% precision at 48% recall. Note that the accuracy of tagging genes/gene products in the text influences the association of a pattern with a genomic entity.
However, named entity taggers (NETs) also negatively affect the accuracy; in the rest of the paper, we account for this negative effect [for more details, see Additional file]. Next, we evaluate the accuracy of GEANN across the three subontologies of GO, namely biological process, molecular function, and cellular component. Observation 3: In terms of precision, GEANN performs best for concepts from the cellular component (CC) and molecular function (MF) subontologies, where precision is 80%, while the biological process (BP) subontology yields the highest recall (63% at 77% precision). The fact that the MF subontology provides better precision may be due to the fact that biological process concepts refer to biological pathways, and pathways are more general biological abstractions in comparison to the specific functionalities of the enzyme proteins/genes, a number of which are included in each pathway. In this section, we compare our approach to two other studies, namely Raychaudhuri et al. and Izumitani et al. Using 12 GO concepts, Izumitani et al. compare their system to Raychaudhuri et al.'s study. To provide a comparison, our analysis in this experiment is also confined to this set of GO concepts. The following GO concepts could not be cross-validated due to their small annotation set size: Ion homeostasis GO:0006873 (6 annotations) and Membrane fusion GO:0006944 (8 annotations). Furthermore, one of the test concepts (Biogenesis) has since become obsolete. Therefore, here we present comparative results for the remaining nine GO concepts in terms of F-values (Tables: Comparing F-values against Izumitani et al. and Raychaudhuri et al.; Comparing F-values against Izumitani et al. for GO subontologies). Observation 4: GEANN's performance is comparable to or better than Izumitani et al. and Raychaudhuri et al.
In terms of the average F-value over the test GO concept set of size 9, GEANN outperforms both systems, and for six of the nine test concepts GEANN performs best. Next, we experimentally measure the improvement provided directly by the use of WordNet as the semantic similarity infrastructure. For comparison purposes, we developed a baseline by replacing the semantic-similarity pattern matching in GEANN's implementation with a naive syntactic pattern matching method that recognizes only exact matches between a pattern and a textual phrase. All other scoring and pattern construction mechanisms are kept the same for both the baseline system and GEANN, in order to isolate the effect of GEANN's semantic pattern matching infrastructure. We then ran both the baseline approach and GEANN on our experimental set of 114 GO concepts. Observation 5: GEANN with semantic matching outperforms the baseline approach by 24% in terms of recall and 15% in terms of precision. The improvement in accuracy is expected, since not only exact matches to the side tuples of the patterns but also approximate matches can be located and scored, based on well-studied taxonomy similarity measures and the semantic relationships between the concepts of WordNet. Next, we explore whether the semantic pattern matching approach performs better for GO concepts from a specific subontology of GO. Observation 6: The semantic pattern matching approach performs almost equally well for each subontology of GO, and the prediction accuracy improvement is more or less uniformly distributed over the subontologies. This indicates that the semantic pattern matching approach is not specific to a particular set of GO concepts, but is effective throughout the whole GO ontology. Since GO is hierarchically organized, GO concepts closer to the root represent more general biological knowledge than those closer to the leaf levels.
Hence, to see how the semantic pattern matching framework performs at different levels of GO, we next cluster the concepts by the GO level at which they reside, and analyze the cross-validation accuracy as the GO level changes. Observation 7: There is no perfect regularity in the changes in recall/precision improvement as the concepts get more specific. Observation 8: There is a general trend of decreasing precision and recall improvement as the concepts get more specific. The semantic similarity measures rely on the existence of a path between the synsets of the words being compared. Intuitively, as the GO level gets deeper towards the leaves, the concepts get more specific. Hence, the sentences describing such concepts are more likely to include terms that are domain-specific and less likely to be found in WordNet, which narrows the space in which WordNet can be influential. In addition, since WordNet is a general-purpose English word taxonomy, and is not specific to the biomedical domain, its capacity to accommodate terms from the biology domain should not be overestimated. For instance, during our experiments, around 25% of the semantic similarity computations returned a score of zero. As many alternative measures have been proposed in the literature to compute semantic similarity over taxonomies, it is informative to explore the impact of different measures on GEANN's accuracy. The evaluation of alternative measures is by itself the main topic of many research articles. Observation 9: Replacing the IC-based semantic similarity computation with the edge-counting method does not cause dramatic changes in the overall accuracy of GEANN.
Although a couple of GO concepts showed a dramatic change (10%) in either recall or precision, such occurrences were not sufficiently common to influence the overall accuracy significantly. The above observation is reasonable, since the proposed framework is not primarily based on the type or nature of the adopted similarity measure. What is crucial to GEANN's success in using semantic similarity over a traditional syntactic pattern matching system is the adoption of a flexible matching scheme that takes advantage of the semantic relationships between words, which are not usually available to a typical pattern matching system. Hence, the above observation confirms that (a) the adopted similarity measure is only a plug-in tool in the overall framework, (b) no particular measure is at the core of our paradigm, and (c) any of the well-known semantic similarity measures studied in the literature can likely be employed by GEANN. Next, we further examine the small set of GO concepts that were affected by the change of similarity measure. About 10 GO concepts experienced an F-value change greater than 4%; these are the top-10 GO concepts most affected when the semantic similarity measure is replaced by another one. As illustrated by the experimental results, an inherent drawback of pattern-based text mining systems is that their recall performance is frequently low.
In this section, we describe and evaluate two different approaches to obtaining annotation predictions with higher recall: (i) a probabilistic annotation framework, and (ii) adjusting the statistical enrichment threshold value [see Additional file]. Observation 10: At a recall of 61%, which is the maximum recall that GEANN can achieve at its maximum precision level, the probabilistic approach has a precision of 51% while GEANN has a precision of 78%. Observation 11: The probabilistic approach can reach higher recall values (77% at the maximum), significantly higher than what GEANN provides (61% at the maximum). Observation 19: Adjusting the enrichment threshold to lower values results in higher recall than the maximum recall value provided by the probabilistic approach. In this article, we explore a method that automatically infers new GO annotations for genes and gene products from PubMed abstracts. To this end, we developed GEANN, which utilizes existing annotation information to construct textual extraction patterns characterizing an annotation with a specific GO concept. During the annotation stage, GEANN searches for phrases in PubMed abstracts that match the created patterns. Matches are scored and associated with the most appropriate genomic entity, or set of entities, around the matching region. As the final output, GEANN lists the genes that are predicted to be annotated with a given GO concept. In our experiments, GEANN either outperformed or was comparable to earlier automated annotation work. For a much more detailed discussion of future and related work, see Additional file. AC designed the study, drafted the manuscript and carried out the experimental studies. GO participated in its coordination and helped to draft the manuscript. All authors read and approved the final manuscript. Supplementary Material: this file contains detailed discussion of sections that are not included here, or are only briefly mentioned.
More specifically, the supplementary material document contains an illustration of the overall approach, formal algorithm sketches for the procedures described in the main manuscript, some additional experimental results, a comparative discussion of two alternative approaches to obtaining higher recall values, and a detailed discussion of related and future work. Appendix 1 contains the experimental GO concept set, along with overall precision/recall values for each GO concept.

Using SO terms to label the parts of sequence annotations greatly facilitates downstream analyses of their contents, as it ensures that annotations produced by different groups conform to a single standard. This greatly facilitates analyses of annotation contents and characteristics, e.g. comparisons of UTRs, alternative splicing, etc. SO also specifies the relationships between features. This document provides a step-by-step guide to producing an SO-compliant file describing a sequence annotation. We illustrate this using an annotated gene as an example. First we show where the terms needed to describe the gene's features are located in SO, and their relationships to one another. We then show, line by line, how to format the file to construct an SO-compliant annotation of this gene.

Reliable annotation linking oligonucleotide probes to target genes is essential for functional biological analysis of microarray experiments. We used the IMAD, OligoRAP and sigReannot pipelines to update the annotation for the ARK-Genomics Chicken 20 K array as part of a joint EADGENE/SABRE workshop. In this manuscript we compare their annotation strategies and results, analyse the effect of differences in updated annotation on functional analysis for an experiment involving Eimeria-infected chickens, and finally propose guidelines for optimal annotation strategies.
Furthermore, we analyse the effect of differences in updated annotation on functional analysis for an experiment involving Eimeria infected chickens, and finally we propose guidelines for optimal annotation strategies.

IMAD, OligoRAP and sigReannot update both annotation and estimated target specificity. The 3 pipelines can assign oligos to target specificity categories, although with varying degrees of resolution. Target specificity is judged based on the amount and type of oligo versus target-gene alignments (hits), which are determined by filter thresholds that users can adjust based on their experimental conditions. Linking oligos to annotation, on the other hand, is based on rigid rules, which differ between pipelines.

For 52.7% of the oligos from a subset selected for in-depth comparison, all pipelines linked to one or more Ensembl genes, with consensus on 44.0%. In 31.0% of the cases none of the pipelines could assign an Ensembl gene to an oligo, and for the remaining 16.3% the coverage differed between pipelines. Differences in updated annotation were mainly due to different thresholds for hybridisation potential filtering of oligo versus target-gene alignments, and to different policies for expanding annotation using indirect links. The differences in updated annotation packages had a significant effect on GO term enrichment analysis, with consensus on only 67.2% of the enriched terms.

In addition to flexible thresholds to determine target specificity, annotation tools should provide metadata describing the relationships between oligos and the annotation assigned to them. These relationships can then be used to judge the varying degrees of reliability, allowing users to fine-tune the balance between reliability and coverage. This is important, as it can have a significant effect on functional microarray analysis, as exemplified by the lack of consensus on almost one third of the terms found with GO term enrichment analysis based on updated IMAD, OligoRAP or sigReannot annotation.
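The per-oligo agreement figures above (full consensus, no link in any pipeline, differing coverage) amount to a simple categorisation over each pipeline's gene assignments. The sketch below is our own illustration of that logic, not the workshop's actual comparison script:

```python
def consensus_category(assignments: dict) -> str:
    """Classify one oligo by pipeline agreement on Ensembl gene links.

    `assignments` maps pipeline name -> set of Ensembl gene IDs
    (an empty set means the pipeline assigned no gene to the oligo).
    """
    gene_sets = list(assignments.values())
    if all(not s for s in gene_sets):
        return "no link in any pipeline"
    if all(s for s in gene_sets):
        # Every pipeline linked the oligo; check whether to the same genes.
        if all(s == gene_sets[0] for s in gene_sets):
            return "full consensus"
        return "all link, genes differ"
    return "coverage differs"
```

Tallying these categories over all oligos reproduces percentages of the kind reported above (the gene IDs in any real run would be Ensembl accessions).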
High throughput gene expression experiments using microarrays are based on the principle of hybridising strands of nucleotides to form a duplex. For each gene (the target) a microarray contains many copies of one or more short strands (the probes) in small regions on the array called spots. In a microarray experiment, expressed sequences, or sequences derived thereof, are labelled and allowed to hybridise to the probes, making the amount of label at each spot an indicator of the amount of gene expression. Since all spots are processed simultaneously, it is essential that all probes have optimal target specificity under the same experimental conditions. Therefore, optimal microarray design requires 1) a completely sequenced reference genome, 2) complete annotation for this reference genome, to know what parts may be expressed, and 3) complete knowledge about the natural variation amongst the sampled individuals.

Unfortunately, there is currently not a single species for which such complete information is available. Although some reference genomes are now close to completion, the recently published first results of the ENCODE project indicate that our knowledge of what is expressed is vastly underestimated.

Previous re-annotation studies have shown that up to half of the probes for popular microarrays can be problematic, as they suffer from cross hybridisation, from detecting something else than what they were designed for, or both. Other evidence that current probe annotation is often suboptimal comes from microarray reproducibility studies. Although reproducibility of modern arrays using the same array platform and version is usually good to very good, reproducibility between different array versions, even on the same platform, can be very poor.
Summarising, it is important to update the annotation for arrays regularly to improve the reliability of probe-target assignments. Three tools to update oligo annotation for microarrays, IMAD, OligoRAP and sigReannot, are described elsewhere in this issue of BMC Proceedings.

The microarray used is the ARK-Genomics Chicken 20 K array, consisting of 20,460 probes ranging in length from 60 to 75 nucleotides, with the majority of the probes 70 nucleotides long. The probe design was based on, amongst other sources, ab initio in silico gene predictions, miRBase micro RNAs and a small set of contributed sequences. Microarray data from an experiment with Eimeria infected chickens using this array was provided as starting material for the EADGENE/SABRE post-analyses workshop.

IMAD, OligoRAP and sigReannot were used to update annotation as described elsewhere in this issue. Hybridisation filter thresholds were synchronised based on He et al. Different custom array annotation packages based on Ensembl Gene IDs were made using the updated annotation provided by the IMAD, OligoRAP and sigReannot pipelines. These custom annotation packages were made with Bioconductor.

The first step requires the oligo sequences and species of interest as input and aligns the oligos with potential targets. An overview of the data sources used for the alignments is provided in the accompanying figure.

Firstly, IMAD ignores strand information and hence might link to annotation derived from features located on the opposite strand of a hit. SigReannot is strand-aware, but can link to annotation from the opposite strand if no annotation was found on the hit strand. Most array platforms only detect a single strand, and under normal conditions a gene produces RNA from only a single strand.
But there can be exceptions, as in the case of viral reverse transcriptases, some of which can switch templates, resulting in chimeric cDNA molecules.

Secondly, sigReannot uses UTR/intron extension in case no hits were found on Ensembl transcripts, and searches UniGene as a secondary source. IMAD and OligoRAP also use additional sources for probe-target alignments to increase the coverage of annotated probes. In addition to Ensembl transcripts, IMAD aligns probes with UniGene and DFCI gene indices.

The second step is to filter the hits based on the quality of the alignments, which relates to the hybridisation potential of a hit. All three pipelines can filter alignments on the percentage of sequence identity. Low quality hits that do not pass this filter, but which do contain small stretches of uninterrupted matches, might still contribute to signal on a microarray. Therefore, OligoRAP and sigReannot feature a second filter for the minimum size of what is called the longest contiguous stretch or continuous block, respectively. Finally, OligoRAP has a third filter for the maximum total amount of mismatches. When the probes are not all equally long, this filter will produce different results compared to the percentage identity filter.

In contrast to sigReannot and IMAD, OligoRAP applies the filter step not immediately after aligning oligos with targets, but after all annotation is retrieved (after step 4). This allows OligoRAP to check whether two or more short hits were derived from intron-separated exons of the same gene. If such hits are found, they are merged into a larger hit. This is necessary for OligoRAP because it aligns with reference genomes, whereas IMAD and sigReannot only align with transcripts.

Based on the amount and type of hits, oligos can be assigned to target specificity classes (TSCs).
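The three hybridisation potential filters just described (percentage identity, longest contiguous stretch, maximum total mismatches) can be sketched as follows. The `Hit` structure and the default thresholds are illustrative assumptions, not the actual defaults of IMAD, OligoRAP or sigReannot:

```python
from dataclasses import dataclass

def longest_contiguous_match(alignment: str) -> int:
    """Longest run of matching positions ('|') in a pairwise alignment string,
    where ' ' marks a mismatch or gap."""
    return max(len(run) for run in alignment.split(" "))

@dataclass
class Hit:
    identity_pct: float  # percent sequence identity of the oligo-target alignment
    mismatches: int      # total number of mismatched positions
    alignment: str       # match string: '|' = match, ' ' = mismatch/gap

def passes_filters(hit: Hit, min_identity: float = 90.0,
                   min_stretch: int = 15, max_mismatches: int = 5) -> bool:
    """Combine all three filters, OligoRAP-style (illustrative thresholds)."""
    return (hit.identity_pct >= min_identity
            and longest_contiguous_match(hit.alignment) >= min_stretch
            and hit.mismatches <= max_mismatches)
```

Re-running with different `min_identity`, `min_stretch` and `max_mismatches` values corresponds to the more lenient or stricter threshold combinations discussed below.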
An overview of how TSCs overlap or differ between the 3 pipelines is given in the accompanying figure. OligoRAP and sigReannot use comparable TSCs in case there was only one HQ hit (TSC O1 & S1), there were multiple LQ hits (TSC O5 & S5), or there were no hits at all (TSC O6 & S6).

When there was only 1 LQ hit, OligoRAP puts these oligos in a single TSC (O2), but sigReannot differentiates between LQ hits with longest contiguous stretches of 30 nucleotides or more (S3) and with stretches of less than 30 nucleotides (S4). The latter TSC contains gene-specific oligos which are less reliable for detecting lowly expressed genes, because they have the worst signal-to-noise ratio. By providing an extra TSC for these oligos, users can choose to drop them from further analysis, or at least can quickly see that they are less reliable. OligoRAP handles this problem differently, by allowing users to specify multiple combinations of filter thresholds per run. This allows them to analyse, for example, the effect of more lenient or stricter thresholds for HQ and/or LQ hits, and covers all TSCs instead of just the oligos with only one LQ hit. Analysing different combinations of filter thresholds is also possible with IMAD and sigReannot, but this is a bit more work, as it requires a user to run the pipeline with the most lenient hybridisation potential filter thresholds, followed by post-processing of the results to generate results for more stringent thresholds.

In the case of multiple HQ hits, or a mix of HQ and LQ hits, sigReannot and OligoRAP classify them differently. SigReannot differentiates between cases with one HQ hit accompanied by one or more LQ hits (TSC S2) and cases with multiple HQ hits with or without LQ hits (TSC S7). OligoRAP, on the other hand, differentiates between multiple HQ hits (TSC O3) and a mix of HQ and LQ hits (TSC O4). The reason sigReannot differentiates between S2 and S7 while OligoRAP assigns all such oligos to O4 is a difference in annotation retrieval policy (see below).
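The OligoRAP-style classes described above reduce to a small decision over the per-oligo counts of HQ and LQ hits. A minimal sketch (the class semantics follow the text above; the function name is our own):

```python
def oligorap_tsc(hq_hits: int, lq_hits: int) -> str:
    """Assign an oligo to an OligoRAP-style target specificity class (TSC)
    based on its counts of high-quality (HQ) and low-quality (LQ) hits."""
    if hq_hits == 0 and lq_hits == 0:
        return "O6"  # no hits at all
    if hq_hits > 0 and lq_hits > 0:
        return "O4"  # mix of HQ and LQ hits: risk of cross-hybridisation
    if hq_hits == 1:
        return "O1"  # single HQ hit: the ideal, gene-specific case
    if hq_hits > 1:
        return "O3"  # multiple HQ hits: redundancy, gene families, shared domains
    if lq_hits == 1:
        return "O2"  # single LQ hit
    return "O5"      # multiple LQ hits
```

The sigReannot classes S1-S7 would need the longest-contiguous-stretch length as an extra input to split the single-LQ-hit case into S3 and S4.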
TSC O3 is interesting, because in theory these oligos target shared domains or different, highly similar genes. Therefore these oligos could still be informative, as such genes are usually involved in similar biological processes, just like different splice variants derived from the same gene. In practice, however, many of the oligos in TSC O3 have multiple HQ hits due to redundancy, and this is usually the result of assembly and/or annotation problems. Either way, it makes sense to differentiate between these oligos and ones that target a mix of HQ and LQ hits, as the latter can suffer from cross-hybridisation with transcripts from highly dissimilar targets and hence are not informative.

Steps three and four consist of annotation retrieval, directly from the alignments or indirectly from previously fetched annotation, respectively. There are many differences in the annotation features retrieved by the different pipelines, but all pipelines provide links to Ensembl gene IDs, Ensembl transcript IDs, UniGene cluster accessions and GO term IDs derived from Ensembl genes (see Figure).

Finally, the fifth step involves formatting and storing the results in various ways. SigReannot's annotation is provided as a collection of tab-delimited flat files. IMAD, on the other hand, uses a MySQL database to store its results, and OligoRAP's native output format is BioMoby XML.

A subset of 791 oligos was selected from the experimental data provided for the workshop to assess the effect of the different annotation strategies on coverage. These oligos were selected because they showed differential gene expression signals. Hence these probes clearly bind transcripts, and any orphan oligos in the updated annotation produced by sigReannot, OligoRAP and IMAD indicate false negatives due to incomplete data sources, incomplete annotation strategies, or both. The focus of this comparison is on Ensembl gene ID assignments, as all three pipelines provide these and hence they can be easily compared.
In case an oligo was not linked to any Ensembl genes by any of the pipelines, they clearly all agree, but in case two or more pipelines link to Ensembl genes, that does not necessarily mean they link to the same Ensembl genes for the corresponding oligos. Therefore, the consensus between the pipelines in linking oligos to Ensembl genes was determined (see Figure). For 44.0% of the oligos the pipelines reached consensus.

In case the annotation pipelines did not agree on the Ensembl genes linked to an oligo, the reason for this lack of consensus was determined (see Figure).

In the vast majority of the remaining cases where consensus is lacking, the pipelines initially find the same hits, but judge them differently when deciding whether or not to link to Ensembl genes based on these hits. In 88 and 66 cases, oligos are linked to Ensembl genes located on the opposite strand of the hit with IMAD and sigReannot, respectively. The difference is the result of sigReannot not linking to annotation from the opposite strand if there was also annotation on the strand of the hit. Furthermore, sigReannot only takes annotation from the opposite strand into account for HQ hits, and hence ignores such annotation for LQ hits. So sigReannot is a bit more conservative in linking to annotation from the opposite strand.

OligoRAP only links a hit to annotation if there is (near) perfect overlap between the hit and the annotation. For IMAD and sigReannot this does not apply, as they only align the oligos with transcripts. In case a hit extended beyond the borders of a transcript on the genome, IMAD and sigReannot will find a shorter hit covering only the part that overlaps with the transcript. This results in 24 oligos with extra links to Ensembl genes with IMAD and 17 with sigReannot, as compared to OligoRAP.
The difference between IMAD and sigReannot is the result of partial overlap combined with lower thresholds for IMAD, either because of the lack of a contiguous stretch filter in IMAD (2 cases), or because the annotation was derived from the opposite strand and the hit didn't pass sigReannot's HQ hit thresholds (5 cases).

SigReannot's UTR/intron extension feature generates additional links to Ensembl genes for 16 oligos with hits in the vicinity of Ensembl transcripts. IMAD and OligoRAP cannot link to annotation located in the vicinity of a hit on the genome, and this explains their absence.

If a BLAST result contains overlapping HSPs, these were all ignored by IMAD, resulting in 7 oligos where links to Ensembl genes are missing as compared to OligoRAP and sigReannot. Further inspection of these 7 probes revealed that they contained repeats, and IMAD has been adjusted to include hits from overlapping HSPs.

Finally, the "other" leftover category contains 9 rare cases. In one of these, IMAD missed a hit because it uses BLAST.

It must be noted that IMAD and sigReannot normally provide at most a single link to an Ensembl gene per oligo. In case there are multiple hits for an oligo, these pipelines will try to find the best one, and if this fails, not link to Ensembl at all. For this workshop the IMAD and sigReannot teams provided additional data for oligos with multiple hits, so they could all be taken into account and compared; but would users compare standard IMAD and sigReannot data, they might find additional differences due to different prioritisation of hits to find the best one.

In most cases the oligos with multiple hits are non-specific, but further investigation revealed 9 extreme cases of oligos with numerous hits (up to 200) on transcripts representing large gene families or sharing domains, such as genes coding for MHC proteins, olfactory proteins, homeobox proteins, protein kinases and potassium voltage-gated channel proteins.
Although it was clearly not possible to assign a best hit in these cases, linking the oligo to the gene family or shared domain could still be highly informative, despite the lower resolution.

GO term enrichment analysis was chosen as an example to investigate the consequences of differences in updated annotation for functional microarray analysis. For this analysis all probes of the ARK-Genomics 20 K chicken array were taken into account, annotation was updated with IMAD, OligoRAP & sigReannot, and enrichment of GO terms in the lists of significantly up- or down-regulated genes was performed as described by Haisheng et al.

IMAD only differentiates between oligos with 1 hit, multiple hits or no hits at all, and uses a single hybridisation potential filter for sequence identity. This is less advanced than OligoRAP and sigReannot, which differentiate between LQ and HQ hits and introduce a second hybridisation potential filter for short contiguous stretches of matching nucleotides. Despite these differences, basically all three pipelines can divide the oligos into several TSCs, giving users an indication of the target specificity of the oligos. Furthermore, depending on experimental conditions, users can customise the parameters for the hybridisation potential filters.

After the pipelines have aligned oligos with potential targets, they have to decide whether or not to link to certain annotation based on these alignments, and this is where they differ the most. Should the pipelines be very conservative and link only to annotation derived from other sequences that (near) perfectly overlap the alignment of the oligo with the potential target, like OligoRAP does? Or should the annotation strategy be more lenient and include annotation from indirect links, like sigReannot does when it uses UTR/intron extension to link indirectly to Ensembl via UniGene? The question basically boils down to whether to prefer reliability over coverage, or the other way around.
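GO term enrichment of the kind mentioned above is commonly scored with a one-sided hypergeometric test; the sketch below shows that standard computation, and we do not claim it is the exact procedure of Haisheng et al.:

```python
from math import comb

def go_enrichment_pvalue(k: int, n: int, K: int, N: int) -> float:
    """One-sided hypergeometric p-value for GO term enrichment.

    Probability of observing >= k genes carrying a GO term in a list of n
    significantly regulated genes, when K of the N genes on the array
    carry that term.
    """
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(n, K) + 1)) / comb(N, n)
```

Because the test is run per annotation package, differing oligo-to-gene links directly change both the term counts (K) and the regulated-gene counts (k, n), which is how the lack of consensus on enriched terms arises.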
After some discussion during the workshop, the biologists present decided they couldn't choose between optimal coverage and optimal reliability. Instead, they would prefer to have as much annotation as possible, with metadata attached to the annotation indicating the reliability of the link between the oligo and, for example, an Ensembl gene. Similar to the target specificity categories, one can think of a few annotation link reliability categories that would allow the biologists to filter their results in downstream analysis and see the effect of in- or excluding less reliable annotation, in addition to the effect of in- or excluding potentially non-specific oligos. We propose the following categories:

1) Direct sequence-based links: annotation was derived from alignment of the oligo with a target sequence.

2) Indirect links
  a) Sequence-based, and with (near) perfect overlap of the oligo-target alignment with the alignment of the target with the other sequence from which the annotation is derived.
  b) Sequence-based, and with partial overlap of the oligo-target alignment with the alignment of the target with the other sequence from which the annotation is derived.
  c) Sequence-based, and without any overlap of the oligo-target alignment with the alignment of the target with the other sequence from which the annotation is derived.
    i) Oligo-target alignment is located up- or downstream in the vicinity of the gene from which the annotation is derived.
    ii) Oligo-target alignment is located in an intron of the gene from which the annotation is derived.
    iii) Oligo-target alignment is located on the opposite strand of the gene from which the annotation is derived.
  d) Non sequence-based links, for example in the case of expanding annotation using text mining.
  e) Non gene-specific links, for example to a gene family or shared domain.

These categories can be easily expanded where necessary.
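The proposed categories map naturally onto a small data type that downstream filters can use; the type and identifier names below are our own illustrative choices, and the gene IDs are made up:

```python
from enum import Enum

class LinkReliability(Enum):
    """Proposed annotation link reliability categories (labels as in the text)."""
    DIRECT = "1"
    INDIRECT_FULL_OVERLAP = "2a"
    INDIRECT_PARTIAL_OVERLAP = "2b"
    INDIRECT_NEARBY = "2c-i"
    INDIRECT_INTRONIC = "2c-ii"
    INDIRECT_OPPOSITE_STRAND = "2c-iii"
    NON_SEQUENCE_BASED = "2d"
    NON_GENE_SPECIFIC = "2e"

def filter_annotation(links, accepted):
    """Keep only (oligo, gene, category) links whose reliability is accepted."""
    return [link for link in links if link[2] in accepted]

# A strict analysis might accept only the most reliable categories:
strict = {LinkReliability.DIRECT, LinkReliability.INDIRECT_FULL_OVERLAP}
```

Rerunning a downstream analysis with different `accepted` sets is exactly the in-/exclusion experiment the workshop participants asked for.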
For category 2c-ii, one could flag, for example, whether there was other sequence-based evidence that makes the link more reliable. This would be the case if an oligo aligns with an intron and there are ESTs that align with both the gene's exons and the intron, suggesting the gene model was too conservative and intron-retention splice variants do exist.

Approximately four years after the design of the ARK-Genomics 20 K chicken array, almost one third of the probes could no longer, or still not, be linked to high quality annotation in the form of a link to an Ensembl gene with IMAD, OligoRAP or sigReannot. This indicates that keeping annotation as well as target specificity up-to-date is important to make the most of microarray experiments.

IMAD, OligoRAP and sigReannot can assign oligos to target specificity classes (TSCs), although with different levels of resolution. These TSCs are based on the amount of targets each oligo hits, and users can specify thresholds for the hybridisation potential filters used to determine the impact of these hits. Thereby the hybridisation potential filters, combined with the TSCs, give users the flexibility to adjust the target specificity estimates to their experimental conditions. In addition, it allows them to play safe by discarding potentially cross-hybridising probes, or to live on the edge to get higher annotation coverage. In contrast to target specificity, users have no control over the annotation that is fetched based on the hits of the oligos with potential targets. Fetching annotation from indirect relationships between oligos and potential targets can help to boost coverage, but will also result in varying levels of reliability of the updated annotation. Not only do users currently have no control over which annotation is retrieved, they also cannot see the difference between annotation from more reliable direct links and from less reliable indirect links.
Based on the feedback from the EADGENE/SABRE post-analysis workshop, we therefore suggest that annotation link reliability categories be added to indicate the type of relationship between oligos and their annotation. Adding such indicators for the reliability of the annotation will be an important step in the future development of IMAD, OligoRAP and sigReannot, and will allow users to fine-tune the balance between reliability and coverage. This is important, as it can have a significant effect on functional analysis of microarray data, as exemplified by the lack of consensus on almost one third of the terms found with GO term enrichment analysis using updated annotation generated with IMAD, OligoRAP and sigReannot.

Links to supplemental files with annotation as used for the workshop, as well as presentations as given at the workshop, are available from the EADGENE portal:

The authors declare that they have no competing interests.

PBTN generated OligoRAP annotation, PC generated sigReannot annotation and MW generated IMAD annotation. PBTN, PC, MW, DP & CK analysed the generated annotation and compared the different annotation strategies. HN studied the effect of different pipelines and different filter thresholds on GO term over-representation analysis. JAML, MAMG, MW and CK secured funding and managed the project. PBTN drafted the manuscript, which was improved with the help of all other authors. All authors read and approved the final manuscript.

Projections from hippocampal CA1-subiculum (CA1/SB) areas to the prefrontal cortex (PFC), which are involved in memory and learning processes, produce long term synaptic plasticity in PFC neurons. We examined the modifying effects of these projections on nociceptive responses recorded in the prelimbic and cingulate areas of the PFC.

Extracellular unit discharges evoked by mechanical noxious stimulation delivered to the rat tail, and field potentials evoked by a single stimulus pulse delivered to CA1/SB, were recorded in the PFC.
High frequency stimulation (HFS) delivered to CA1/SB, which produced long-term potentiation (LTP) of field potentials, induced long-term enhancement (LTE) of nociceptive responses in 78% of cases, while, conversely, in 22% of cases responses decreased. These neurons were scattered throughout the cingulate and prelimbic areas. The results obtained for field potentials and nociceptive discharges suggest that CA1/SB-PFC pathways can produce heterosynaptic potentiation in PFC neurons. HFS had no effects on Fos expression in the cingulate cortex. Low frequency stimulation (LFS) delivered to the CA1/SB induced long-term depression (LTD) of nociceptive discharges in all cases. After recovery from LTD, HFS delivered to CA1/SB had the opposite effect, inducing LTE of nociceptive responses in the same neuron. A bidirectional type of plasticity was evident in these nociceptive responses, as in the homosynaptic plasticity reported previously. Neurons exhibiting LTD are found mainly in the prelimbic area, in which Fos expression was also shown to be inhibited by LFS. The electrophysiological results closely paralleled those of immunostaining. Our results indicate that CA1/SB-PFC pathways inhibit excitatory pyramidal cell activities in prelimbic areas.

Pressure stimulation (300 g) applied to the rat tail induced nociceptive responses in the cingulate and prelimbic areas of the PFC, which receive direct pathways from CA1/SB. HFS and LFS delivered to the CA1/SB induced long-term plasticity of nociceptive responses. Thus, CA1/SB-PFC projections modulate the nociceptive responses of PFC neurons.

The two segregated central pathways for the sensory-discriminative and affective dimensions of pain have been examined in human brain imaging studies. CA1 pyramidal cells in the hippocampus (HP) receive pain information from peripheral nociceptors.
The projections from the HP to the PFC were established by anatomical and physiological studies. We analyzed the effects of CA1/SB inputs into the prelimbic and cingulate areas of the PFC on nociceptive responses evoked by peripheral mechanical noxious stimulation. HFS/LFS delivered to the CA1/SB induced LTP/LTD-like changes in nociceptive responses recorded in the PFC, suggesting the HP-PFC pathway to be involved in affectional memory in pain processing.

Adult male Wistar rats were used in all experiments. The rats were housed under controlled temperature (25°C) and humidity (40 - 50%) conditions with a 12-h light/dark cycle, and had free access to food and water. Experiments conformed to guidelines issued by the National Institutes of Health for Laboratory Animals, and all procedures were approved by the Animal Experimental Committee of Tokyo Women's Medical University. Efforts were made to minimize the number of animals used and their suffering. All rats were anesthetized with a single injection of urethane and mounted in a stereotaxic instrument. Body temperature was maintained at 37 - 38°C using a chemical thermo-mat.

Recording electrodes were placed in the cingulate or prelimbic areas of the PFC. Mechanical stimulation was delivered with a stimulator equipped with a probe with a circular contact area and a 1 mm diameter tip. Mechanical stimuli were delivered every 90 s at constant force with a feedback system. Stimulus intensities used in this experiment were 300 - 500 gf, with a 0.1 s rising (and decreasing) time to maximum force and a 2 s hold time. The stimulus condition applied to the tail evoked c-fiber activities in peripheral nerves.

In these experiments, a monopolar stainless steel stimulus electrode was lowered into the dorsal portion of the CA1/SB area. At the end of each experiment, the animals were perfused with normal saline and 4% paraformaldehyde.
After overnight post-fixation, the brains were sectioned (50 μm) and stained with hematoxylin-eosin solution for examination of the recording and stimulus sites under light microscopy. We counted the number of Fos-positive cells per mm² in the cingulate and prelimbic areas, in 15 slices for each animal.

Significant differences in discharges evoked by mechanical stimuli were assessed with the nonparametric paired test (Wilcoxon) to compare pre- and post-stimulation values. Data are expressed as means ± standard errors (S.E.). Results for the numbers of Fos-positive cells were statistically analyzed with the Mann-Whitney test (untreated group versus HFS/LFS group). A probability level of < 0.05 was considered significant.

Noxious mechanical stimulation delivered to peripheral tissue elicited unit discharges, and the duration of these responses reportedly reflects stimulus intensity.

LFS delivered to the CA1/SB induced LTD of nociceptive responses. The numbers of Fos-positive cells were 20.4 ± 1.0 (right side) in the cingulate area, and 19.7 ± 0.9 (left side) and 21.9 ± 1.3 (right side) in the prelimbic area.

Nociceptive information from peripheral tissue mainly projected to the superficial layers of the cingulate and prelimbic areas. NMDA-mediated plasticity in the prefrontal cortex was recently reviewed. LFS delivered to HP/SB induced LTD in nociceptive responses to peripheral noxious stimuli. LFS produces LTD mediated by the glutamate receptors NMDA and mGluR. LFS significantly decreased Fos expression in cells in the ipsilateral prelimbic area. The histological data were consistent with the areas in which LTD was recorded in the electrophysiological experiments. LFS delivered to HP/SB may inhibit excitation of prelimbic pyramidal cells via HP-PFC pathways.
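The paired Wilcoxon comparison of pre- versus post-stimulation discharge values mentioned above rests on the signed-rank statistic, which can be sketched generically as follows (this is a textbook implementation, not the authors' analysis code):

```python
def wilcoxon_w(pre, post):
    """Wilcoxon signed-rank statistic W for paired samples.

    Zero differences are dropped and tied |differences| receive mean ranks,
    following the usual signed-rank convention. Smaller W indicates a
    stronger pre/post shift.
    """
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        # group indices whose |difference| is tied
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        mean_rank = (i + j) / 2 + 1  # mean of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    w_pos = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_neg = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return min(w_pos, w_neg)
```

In practice the statistic would be compared against the signed-rank null distribution (or a normal approximation for larger samples) to obtain the p-value used for the < 0.05 criterion.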
Thus, LFS decreased Fos expression and induced LTD of nociceptive responses.

The PFC is the center of the affectional dimension of pain and involves memories of fear, which are formed by pain experiences. Strong sensory information from peripheral nerves, such as the effects of amputation-induced LTP on synapses receiving information from sensory nerves, may be the cause of phantom pain.

All three authors participated in the preparation of this manuscript and approved the final manuscript. The individual contributions of the three authors are as follows. H Nakamura carried out the electrophysiological studies and statistical analysis. Y Katayama carried out the immunostaining and statistical analysis of Fos expression. Y Kawakami conceived the study and coordinated all experiments.

Both natural and sexual selection are thought to influence genetic diversity, but the study of the relative importance of these two factors on ecologically-relevant traits has traditionally focused on species with conventional sex roles, with male-male competition and female-based mate choice. With its high variability and significance in both immune function and olfactory-mediated mate choice, the major histocompatibility complex (MHC/MH) is an ideal system in which to evaluate the relative contributions of these two selective forces to genetic diversity. Intrasexual competition and mate choice are both reversed in sex-role reversed species, and sex-related differences in the detection and use of MH-odor cues are expected to influence the intensity of sexual selection in such species. The seahorse, Hippocampus abdominalis, has an exceptionally highly developed form of male parental care, with female-female competition and male mate choice.
Here, we demonstrate that the sex-role reversed seahorse has a single MH class II beta-chain gene, and that the diversity of the seahorse MHIIβ locus and its pattern of variation are comparable to those detected in species with conventional sex roles. Despite the presence of only a single gene copy, intralocus MHIIβ allelic diversity in this species exceeds that observed in species with multiple copies of this locus. The MHIIβ locus of the seahorse exhibits a novel expression domain in the male brood pouch.

The high variation found at the seahorse MHIIβ gene indicates that sex-role reversed species are capable of maintaining the high MHC diversity typical of most vertebrates. Whether such species have evolved the capacity to use MH-odor cues during mate choice is presently being investigated using mate choice experiments. If this possibility can be rejected, such systems would offer an exceptional opportunity to study the effects of natural selection in isolation, providing powerful comparative models for understanding the relative importance of selective factors in shaping patterns of genetic variation.

The impact of natural and sexual selection on genetic diversity has been intensively studied in both natural and captive-bred populations. The hypervariable major histocompatibility complex (MHC/MH) has proven to be a powerful model in which to investigate the importance of natural and sexual selection in shaping genetic diversity. The investigation of MHC genes in a diversity of vertebrates indicates that these loci are more diverse than any other gene family.

Despite consistently high levels of variation, there are major differences in the genomic organization of MHC genes in different vertebrate groups. While these loci are physically linked in mammals, class I and II genes are unlinked in bony fishes (class Actinopterygii).
While previous studies on teleosts have shown that both natural and sexual selection structure MH allelic diversity in species with conventional female-based mate choice, the teleost family Syngnathidae (seahorses and pipefish) is a well-suited model system to study questions concerning the relationship between sex roles and MH diversity. Both conventional and sex-role reversed species exist in the family, and sex-role reversal has evolved several times independently in this group. Studies of the potbellied seahorse, Hippocampus abdominalis, have found evidence of female-female competition and male mate choice, suggesting that natural populations of this species are sex-role reversed.

Here, we characterize MH variation in wild-caught and captive-bred individuals of sex-role reversed populations of the potbellied seahorse, a species with a highly developed form of male parental care. Genome sequencing and transcriptome screening confirm the existence of a single, highly variable copy of the MHIIβ locus in this species, with a pattern of variation identical to that detected in species with conventional sex roles. This pattern of genetic variation has been influenced by a combination of intralocus recombination and positive selection on sites believed to be important for peptide binding. MHIIβ is expressed in brood pouch tissues of male seahorses, suggesting that these molecules may be functionally active during male pregnancy. Our results indicate that sex-role reversed taxa such as the seahorse are capable of maintaining the high MHC diversity typical of vertebrate species with conventional sex roles. The identity of the seahorse sequence is supported by similarity to MHIIβ sequences of other fishes (Hippocampus kuda: e-value = 0.0, Hippocampus sp.: e-value = 2e-100, Monopterus albus: e-value = 2e-35, Archoplites interruptus: e-value = 1e-33, Tetraodon nigroviridis: e-value = 1e-33).
The structure of MHIIβ in the seahorse is similar to that in other vertebrates, with 6 exons separated by 5 introns of varying length; additional sequence features are located in introns 2 and 4. Some tests yielded nominally significant values, but none remained significant after correcting for multiple comparisons. Sequencing of the highly variable peptide binding region of the seahorse MHIIβ locus identified a total of 17 alleles, which include 25 polymorphic nucleotide sites and a total of 17 amino acid differences; alleles were also detected in the captive-bred population. The nucleotide diversity π of the seahorse MHII β1-domain is 0.034. The dataset used for subsequent analyses contains 270 bp of exon 2, after omitting exon-spanning codons at the 5' and 3' ends of the exon.

Only 2 of the 25 nucleotide substitutions detected in exon 2 of the seahorse are synonymous, leading to a dN/dS ratio of 3.7. A network without the recombinant alleles is qualitatively similar to the full network, but the placement of Hiab-DAB-E2*09 shifts in the pruned dataset, reflecting its high level of divergence from the central haplotypes.

The seahorse shows intralocus allelic diversity comparable to that of salmonids (H. abdominalis: 17 alleles in 101 individuals; Oncorhynchus gilae gilae: 5/142; O. tshawytscha: 12/144; Salmo trutta: 24/180; O. mykiss: 88/423), but exhibits fewer polymorphic sites than found in salmonids. H. abdominalis and salmonids show comparable nucleotide diversities in the PBR-containing β1-domain of exon 2 (π = 0.054). Sticklebacks (Gasterosteus aculeatus), an important model system for the study of teleost MH evolution, are thought to carry at least 4 copies of MHIIβ, and a study of Poecilia reticulata, a species with at least 2 MHIIβ loci, recovered 18 exon 2 alleles in 56 individuals. Species with multiple loci, such as Poecilia formosa and Perca fluviatilis, provide further points of comparison. As interlocus gene conversion is thought to contribute to the diversity of gene families, one might expect such multi-copy species to exceed the single-locus diversity of MHIIβ observed here.
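The dN/dS logic behind the 3.7 ratio reported above can be illustrated with a toy calculation. The sketch below is not the paper's method (the authors used Mega v.4.0.2): the codon table is a tiny subset of the standard genetic code, and the synonymous/nonsynonymous site counts are hypothetical placeholders chosen only to show how the per-site normalization works; only the 23/2 substitution counts come from the text.

```python
# Toy dN/dS sketch. CODE_SUBSET is a small subset of the standard genetic
# code, and the site counts passed to dn_ds() are illustrative assumptions.
CODE_SUBSET = {"GAA": "E", "GAG": "E", "GCA": "A", "GCG": "A"}

def substitution_type(codon_a, codon_b, code=CODE_SUBSET):
    """Classify a single observed codon substitution."""
    return "synonymous" if code[codon_a] == code[codon_b] else "nonsynonymous"

def dn_ds(n_nonsyn, n_syn, nonsyn_sites, syn_sites):
    """Ratio of per-site nonsynonymous to per-site synonymous rates."""
    return (n_nonsyn / nonsyn_sites) / (n_syn / syn_sites)

# 23 nonsynonymous and 2 synonymous changes, as in the seahorse exon 2 data;
# a roughly 3:1 split of the 270 bp into nonsynonymous/synonymous sites is
# a hypothetical assumption, not the paper's estimate.
ratio = dn_ds(23, 2, nonsyn_sites=202.5, syn_sites=67.5)
```

A ratio well above 1, as here, is the signature of positive selection discussed in the text.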
Comparable patterns have been reported in Poecilia formosa and Perca fluviatilis; these findings for Poecilia spp. and perch illustrate, by comparison, the unusually high intralocus variation of the seahorse gene.

We provide the first data on the pattern of MH diversity in the seahorse (H. abdominalis), a species with an exceptionally well-developed form of paternal care and male mate choice. The sex-role reversed H. abdominalis exhibits levels of MHIIβ diversity similar to those detected in species with conventional sex roles. This species has a single functional MH class II beta-chain gene that is expressed in the male brood pouch, suggesting that this gene may be immunologically active in these tissues. The pattern of MHIIβ genetic diversity in the seahorse has been influenced by positive selection and recombination, and intralocus genetic diversity in this species exceeds that present in species carrying multiple copies of this gene. Mating experiments are currently being used to determine whether MH-odor cues are used in mate choice decisions in H. abdominalis, data which should help to shed light on the relative roles of natural and sexual selection in generating the high levels of MHIIβ diversity found in the seahorse.

Whole genomic DNA was extracted from muscle tissue of a single individual following a standard protocol. Sequences were aligned in BioEdit v.7.0.9.1. To amplify MHIIβ, we used long-range PCR under the following conditions: 1× ThermoPol reaction buffer (NEB), 1.2 μM dNTPs, 0.9 μM of each primer, 1.5 U of a 1:20 Pfu DNA polymerase (Promega) and Taq DNA Polymerase (NEB) mixture, and approx. 60 ng DNA per 30 μL reaction.
PCR running conditions involved an initial denaturation at 92°C for 5 min, followed by 35 cycles of 92°C for 30 sec, 58°C for 30 sec and 68°C for 0.5-4 min (depending on product length), with a final extension at 68°C for 5-15 min.

As the initial primer set provided only a fragment of the MHIIβ locus, genome walking was used to complete the sequence using a protocol modified from the Universal GenomeWalker Kit (Clontech). One μg of high-quality genomic DNA was digested separately with 10 U of the enzymes EcoRV (NEB), PvuII (NEB), StuI (NEB), DraI (NEB), AluI (Promega), HincII (NEB) and Cac8I (NEB) according to the manufacturer's recommendations. Purification of digested DNA and adaptor ligation followed the Clontech protocol. Genome walking was performed using a nested PCR approach with 1× ThermoPol reaction buffer, 1 μM dNTPs, 0.4 μM AP1 primer, 0.4 μM gene-specific primer 1, 1 U Taq DNA polymerase (NEB) and 1 μL of the DNA-adaptor library in a 20 μL reaction volume for the first round PCR. The nested PCR was performed using the same protocol, but with the AP2 primer and a nested gene-specific primer along with 1 μL of a 1:50 dilution of the initial PCR product. Cycling conditions were identical in both PCRs, with 2 min at 92°C, then 30 cycles of 30 sec at 92°C, 30 sec at 57°/60°/63°C and 3 min at 68°C.

PCR products were purified for sequencing using either a MultiScreen PCR filter plate (Millipore), gel-purification with the Wizard SV Gel and PCR Clean-Up System (Promega), or via cloning with a Topo TA Cloning Kit (Invitrogen) following the manufacturers' recommendations. 10-20 positive colonies per plate were picked into 25 μL of ddH2O, directly PCR-amplified and sequenced. Cloned products were compared to direct sequences generated with several different primer combinations, in order to identify allelic phase and to identify any cloning-mediated PCR artifacts.
Purified PCR products were prepared for sequencing by adding 1 μL Big Dye v3.1 Terminator Cycle Sequencing mixture (Applied Biosystems) and 1 μL primer to 2-8 μL of purified product in a 10 μL volume. Cycling conditions were 30 cycles of 10 sec at 96°C, 5 sec at 50°C and 4 min at 60°C. Ethanol-purified products were sequenced on an ABI 3730 automated sequencer (Applied Biosystems).

To determine whether MHIIβ sequences obtained from genomic DNA represent functional alleles, we amplified and sequenced a partial MHIIβ cDNA sequence (exons 2-5) from liver, muscle and pouch tissue of a reproductively mature non-pregnant male seahorse. RNA was extracted using TRIZOL® Reagent (Invitrogen) according to the manufacturer's recommendations. One μg of purified RNA was digested with 9 μL of DNase I (Promega) and reverse-transcribed into cDNA with 1 μL ImProm II Reverse Transcriptase (Promega) using 2 μL of a 500 μg/μL solution of a dT-adaptor primer (TAGGAATTCTCGAGCGGCCGCTTTTTTTTTTTT) in a 25 μL volume. The program for the RT-PCR followed the manufacturer's recommendations (Promega). 3 μL of a 1:2 dilution of Millipore-purified cDNA was used as template in a PCR reaction with MHIIb-E1F2 and MHIIb-E6R under the standard PCR conditions outlined above. Genomic DNA and cDNA sequencing indicate that H. abdominalis possesses a single functional MHIIβ gene (see below).

To further explore this pattern, we screened cDNA libraries of seahorse pouch and reference tissues from pregnant and non-pregnant individuals for the presence of MH genes using 454 sequencing. Briefly, both normalized and unnormalized cDNA libraries prepared from purified total RNA derived from the pouch tissues of a single pregnant and non-pregnant seahorse, together with a pool of normalized reference tissues from the pregnant individual, were individually MID-tagged with a unique sequence identifier.
MID-tagged libraries were sequenced using GS FLX Titanium Chemistry (Roche), following the manufacturer's recommendations. A full plate of 454 sequencing yielded a total of 850 K filtered reads (average read length 230 bp), 92% of which could be assembled into 38 K contigs. The full results of this transcriptome screen will be described in detail elsewhere.

In order to investigate the hypervariable PBR of MHIIβ, complete exon 2 sequences were amplified in an additional 100 individuals as part of a larger study investigating MH-based mate choice preferences in the seahorse. Seahorses are listed under Appendix II of the Convention on International Trade in Endangered Species (CITES), and the majority of the samples included here thus originate from a captive-bred population derived from individuals collected from several sex-role reversed Tasmanian populations. The seahorses in this captive-bred population are held in large communal breeding tanks with 50 males and 50 females per tank, allowing free mate choice. This population is genetically diverse, and an individual-based assignment test indicates the existence of a single Tasmanian population of captive-bred and wild-caught individuals (Structure: Pr(K = 1) = 1; see Additional file).

We obtained exon 2 sequences from 47 F1 individuals from 5 families (n = 8-13 per family) to investigate whether MH alleles segregate in a Mendelian fashion. This approach demonstrates the mode of inheritance of these loci and provides a means to evaluate the reliability of sequence profiles generated for this fragment of the MHIIβ gene, through parent-offspring comparisons.

Sequence data were assembled using Sequencing Analysis 5.2 (Applied Biosystems), and sequences were aligned with Muscle v.4.0. Alleles were named Hiab-DAB-E2*01-17, following standard terminology.
DnaSP v.4.90.1 was used to calculate diversity statistics, and dN and dS were calculated using Mega v.4.0.2. Recombination in the seahorse exon 2 dataset was tested using the default settings of RECCO v.0.93. RECCO is based on a minimal cost solution, in which the relative cost of obtaining a sequence in an alignment from the other sequences by mutation and recombination is evaluated.

AB participated in the design of the study, carried out the laboratory work and data analysis and wrote the manuscript. ABW conceived the study, supervised the laboratory work and data analysis and helped to draft the manuscript. Both authors read and approved the final manuscript. Figure S1: Genetic structure plot.

The relationship between weight loss and mortality has important clinical and public health significance but has proved to be complex. Evidence is mixed and particularly limited on the association between weight loss in mid-life and premature death (i.e. before 65 years of age), a small albeit important segment of total mortality. We aimed to study the association between midlife weight change and mortality accounting for health and lifestyle characteristics, and also considering potential bias due to preexisting chronic diseases and smoking status. We used the longitudinal, population-based 'the 1946 British' birth cohort study. In 2750 men and women, mortality from age 53 through 65 years was analyzed according to categories of measured 10-year weight change between 43 and 53 years. Cox's hazard ratios (HR) were progressively adjusted for socio-demographic, lifestyle and health characteristics. Nearly 20% of participants lost weight and over 50% gained 5 kg or more in midlife. There were 164 deaths. Compared to those who gained between 2 and 5 kg, those who lost 5 kg or more had an increased risk of premature death independently of midlife physical activity, socio-economic circumstances and educational attainment.
This association was unaltered when the highest weight loss group (lost more than 15 kg) (p = 0.04) and early deaths were excluded (p<0.001), but was no longer significant after adjustment for cardiovascular risk factors and health status. The inverse association between weight loss in midlife and higher risk of premature death may be explained by vascular risk factors and ill health. In consideration of the burden of premature death, closer monitoring of weight loss in mid-life is warranted.

Premature death (i.e. before 65 years of age) is a small segment of total mortality but of great clinical and public health importance. We used data from a British birth cohort study to test the hypothesis that weight loss over a 10-year period in midlife (43-53 years) would be associated with an increased risk of all-cause premature death between 53 and 65 years. We also investigated whether health conditions, smoking, and body size at age 43 explained any observed associations. Participants provided written informed consent and the Multicentre Research Ethics Committee (MREC) approved the study. Bona fide researchers can apply to access the NSHD data via a standard application procedure (http://www.nshd.mrc.ac.uk/data.aspx). The NSHD is an ongoing birth cohort study of a socially-stratified sample of 5 362 newborns born in England, Scotland and Wales in one week in March 1946, and followed up 24 times. Height and weight were measured at all study waves, weight to the nearest 0.1 kg, with participants in light clothing and no shoes (self-reported at 20 and 26 years). In 1989 and 1999 trained nurses used standardized protocols at home visits to assess health and lifestyle characteristics. Cohort members were flagged for death on the National Health Service (NHS) Central Register. The start of mortality follow-up was taken as 1999, when cohort members were aged 53 years, and ended in March 2011, just before the study members' 65th birthday.
Of the 5 362 cohort members, 469 had died before age 53 years, 640 had withdrawn from the study, 580 lived abroad, and 638 were untraced or non-responders. Of the remaining 3 035, we excluded those who were not weighed and measured at both 43 and 53 years (n = 274) and those who did not have mortality data available (n = 11), leaving 2 750 for analysis. A further 182 were excluded from complete-case analyses due to missing covariate information.

The exposure variable was 10-year weight change, calculated by subtracting weight at age 43 from weight at age 53 years, and grouped into categories. We classified educational attainment achieved by the age of 26 years according to the Burnham scale. A priori power calculations (at 90% power and 5% significance) confirmed that a hazard ratio (HR) of 1.03 per kg of weight change could be detected, assuming a linear relationship. Weight change predicted mortality similarly in men and women, so results are presented for men and women together, adjusted for sex.

We compared participants' characteristics across weight change categories using ANOVA and χ2 tests. We stratified by obesity status (BMI>30 kg/m2) at 43 years and by smoking status (yes/no) at the beginning of the weight loss period (43 years) to formally test whether obesity and smoking status modified the association of weight change with mortality. We also conducted separate analyses by subgroup where evidence of an interaction was observed. Finally, because weight change is likely related to body size, in additional models we investigated the relationship between BMI at 43 years and mortality controlling for sex and weight change. A similar model was fitted replacing BMI at 43 years with BMI at 53 years.
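The weight-change grouping described above can be sketched as a simple binning function. The source text names the reference group (gained 2 to <5 kg), the 5 kg loss/gain thresholds and the >15 kg extreme-loss group, but does not enumerate every cut-point, so the exact boundaries and labels below are illustrative assumptions rather than the study's definition.

```python
def weight_change_category(delta_kg):
    """Bin a 10-year weight change (kg, weight at 53 minus weight at 43).
    Boundaries beyond those documented in the text are assumptions."""
    if delta_kg <= -15:
        return "lost 15 kg or more"
    if delta_kg <= -5:
        return "lost 5 to <15 kg"
    if delta_kg < 2:
        return "lost <5 kg or stable"
    if delta_kg < 5:
        return "gained 2 to <5 kg (reference)"
    return "gained 5 kg or more"
```

With such a function, each participant's measured weight pair maps to exactly one exposure category before model fitting.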
We conducted the statistical analyses using STATA software, version 12.0. Survival across weight change categories was first assessed by graphical inspection of the Kaplan-Meier plot, and a formal test using Schoenfeld residuals showed no violation of the proportionality assumption (P = 0.41). To assess the relationship between weight change and mortality we performed two separate analyses. First, we included weight change as a continuous variable and used restricted cubic splines with three knots to assess the shape of the association with mortality rates. The association with mortality was assessed using the Cox proportional hazards model adjusting for sex and weight at 43 years. The p-value for the overall significance of the association was obtained by comparing the spline models with the null model using a Wald test. Second, the relationship between weight change and mortality was modeled using the Cox model with weight change in categories (above), initially on the maximum sample with weight change available. The unadjusted model was then refitted on a reduced sample with complete data on all covariates (n = 2 568) to assess potential bias introduced by missing data. Adjustment was then made for socio-demographic, lifestyle and health characteristics in stages to assess confounding. To account for the possibility that existing underlying disease, which results in mortality, may have caused the weight loss, sensitivity analyses excluding extreme weight loss and early deaths were also conducted.

Similar percentages of men and women experienced weight loss; 11% lost more than 2 kg and 48% gained more than 5 kg between 43 and 53 years. Participants grouped by weight change categories differed across a number of factors. The analysis using spline models suggested that the association between weight change and mortality rates was not linear.
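The survival assessment above begins from a Kaplan-Meier curve. A minimal product-limit estimator, sketched here in plain Python purely for illustration (the paper's analyses used STATA 12), shows the mechanics: the curve drops at each observed death, and censored subjects leave the risk set without a drop.

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.
    times: follow-up times; events: 1 = death observed, 0 = censored.
    Returns (time, survival probability) pairs at each death time."""
    data = sorted(zip(times, events))
    survival, curve = 1.0, []
    for t in sorted({ti for ti, e in data if e == 1}):
        at_risk = sum(1 for ti, _ in data if ti >= t)          # still followed
        deaths = sum(1 for ti, e in data if ti == t and e == 1)
        survival *= 1.0 - deaths / at_risk                      # KM step
        curve.append((t, survival))
    return curve
```

For five subjects followed 1-5 years with one censoring at year 3, the estimate steps down at years 1, 2, 4 and 5, with the censored exit affecting only the size of the risk set.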
In sensitivity analyses, when participants who lost more than 15 kg were excluded (n = 20), the HR for the highest weight loss category (lost 5 to <15 kg) compared with the reference group was only modestly attenuated in a model adjusted for sex and body weight at age 43 years when compared with the main analysis. Similarly, when early deaths (before age 55 years) (n = 12) were excluded, the HR for the high weight loss group was significant through to model 3. Also consistent with the main analysis, the HR was considerably attenuated and not significant in the fully adjusted model.

In stratified analyses, the increased mortality rate in the highest weight loss group (compared to the reference) was stronger and remained significant after full adjustment (model 4) in non-smokers but not in smokers at age 43 years. There was no evidence of an interaction between weight change and obesity status (BMI>30 kg/m2) at 43 years (p = 0.50). Finally, we found that participants who were obese at 43 years had a higher mortality risk compared to those who were in the normal weight category at the same age. This association was independent of their weight change between 43 and 53 years and was hardly attenuated by adjustment for all other covariates. Conversely, those who were obese at 53 years did not have a significantly higher mortality risk compared to those who were normal weight (n = 902) in similar models.

In this prospective birth cohort study we found that, compared to modest weight gain (less than 5 kg), weight loss of more than 5 kg in mid-life was associated with a higher risk of mortality before old age.
The association was strongly attenuated when cardiovascular risk factors and health status were taken into account. The obesity epidemic is spreading worldwide. Our finding that health state strongly attenuates the association between weight loss of more than 5 kg and mortality is consistent with a previous study also conducted in a UK population-based sample. Our study has limitations; in particular, mortality risks were higher amongst those who were not included in our analysis compared to those who were. Overall, our findings may reflect illness-related weight loss. In the present study the direct association between weight loss and risk of premature death was largely explained by modifiable risk factors before, and treatable clinical conditions after, weight loss occurred, and was independent of obesity at 43 years. Together this may suggest that weight loss in midlife, as well as obesity, warrants monitoring to improve prevention and tailor treatment.

Data sharing is challenging but important for healthcare research. Methods for privacy-preserving data dissemination based on the rigorous differential privacy standard have been developed, but they did not consider the characteristics of biomedical data and make full use of the available information. This often results in too much noise in the final outputs. We hypothesized that this situation can be alleviated by leveraging a small portion of open-consented data to improve utility without sacrificing privacy. We developed a hybrid privacy-preserving differentially private support vector machine (SVM) model that uses public data and private data together. Our model leverages the RBF kernel and can handle nonlinearly separable cases. Experiments showed that this approach outperforms two baselines: (1) SVMs that only use public data, and (2) differentially private SVMs that are built from private data. Our method demonstrated very close performance metrics compared to nonprivate SVMs trained on the private data.
Data sharing is important for accelerating scientific discoveries, especially when there are not enough local samples to test a hypothesis. However, shared biomedical data must not compromise the privacy of the individuals it describes, and differential privacy provides a rigorous standard for this protection, parameterized by a privacy budget ε. A common mechanism to achieve differential privacy is the Laplace mechanism. Existing private kernel methods approximate the original infinite feature space Ω of a translation-invariant kernel g(Δ) with a finite feature space and then add noise to the weight parameters in the primal form based on the new space, which degrades the approximation accuracy of Ω. Another problem is that the utility bounds use the same regularization parameter value to compare the private and nonprivate classifiers; they take no consideration of the change in regularization parameter incurred by privacy constraints. Chaudhuri et al. have also studied differentially private classification.

The Laplace distribution with mean zero and magnitude b has density f(x) = (1/2b)exp(−|x|/b) and variance 2b2. The magnitude b of the noise depends on the concept of sensitivity, which is defined as follows. Let f denote a numeric function; the sensitivity of f is the maximal L1-norm distance between the outputs of f over two datasets D and D′ which differ in only one tuple. Formally, Δf = max over such neighbouring D, D′ of ||f(D) − f(D′)||1. With the concept of sensitivity, the noise follows a zero-mean Laplace distribution with magnitude b = Δf/ε: to fulfill ε-differential privacy for a numeric function f over D, it is sufficient to publish f(D) + X, where X is drawn from Lap(Δf/ε).

Consider training data D = {(xi, yi) | i ∈ Z+, 1 ≤ i ≤ n}, where xi ∈ Rd denotes the training input points, yi ∈ {1, −1} are the training class labels, and n is the size of the training data. Here, d is the dimension of the input data, and "+1" and "−1" are class labels. An SVM maximizes the geometric margin between two classes of data and minimizes the error from misclassified data points.
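The Laplace mechanism described above takes only a few lines to sketch. The inverse-CDF sampler and the scale b = Δf/ε follow the definitions in the text; the count query and ε value in the example are illustrative choices, not the paper's experiment.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample from Laplace(0, scale) via the inverse-CDF transform."""
    u = rng.random() - 0.5                  # uniform on [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon, rng):
    """epsilon-differentially private count query.
    Adding or removing one record changes a count by at most 1,
    so the sensitivity is 1 and the noise scale is 1/epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    sensitivity = 1.0
    return true_count + laplace_noise(sensitivity / epsilon, rng)
```

With ε = 1, the published count typically deviates from the true count by only a few units, while any single record's presence is provably obscured.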
The primal form of a soft-margin SVM can be written as

min over w of (1/2)||w||2 + C Σi=1..n max(0, 1 − yi fw(xi)),

where w is the normal vector to the hyperplane separating the two classes of data, C is a regularization parameter that weighs smoothness against errors, and fw(xi) = ⟨φ(xi), w⟩, where φ(x) : Rd → RF is a function mapping a training data point from its input space Rd to a new F-dimensional feature space RF (F may be infinite). Sometimes we map the training data from their input space to another high-dimensional feature space in order to classify nonlinearly separable data. When F is large or infinite, the inner products in the feature space RF may be computed efficiently by an explicit representation of the kernel function k(x, y) = ⟨φ(x), φ(y)⟩. For example, k(x, y) = xTy is the linear kernel function for a linear SVM, and k(x, y) = exp(−||x − y||2/σ2) is an RBF kernel function, which is translation invariant. SVM is one of the most popular supervised binary classification methods, taking a sample and a predetermined kernel function as input and outputting a predicted class label for this sample. In this paper, we use an RBF kernel function, but our method can be applied to any translation-invariant kernel SVM.

With the hinge loss l(z) = max(0, 1 − z), we can obtain a dual form SVM written as

max over α of Σi αi − (1/2) Σi Σj αi αj yi yj k(xi, xj), subject to 0 ≤ αi ≤ C,

where αi ∈ α, i ∈ [1, n], is a per-sample parameter and wj ∈ w, j ∈ [1, F], is a per-feature weight parameter. The weight vector w can be converted from the sample weight vector α via w = Σi=1..n yi αi xi in the linear SVM.

In this section, we first give a framework overview and then the technical details of our hybrid SVM method. We assume that all original data from the different data sets follow some unknown joint multivariate distribution and that all data tuples are samples from this distribution.
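The hinge loss and the two kernels defined above are easy to state concretely. The snippet below is a direct transcription of those formulas for illustration, not the paper's implementation (which was in MATLAB).

```python
import math

def hinge_loss(y, score):
    """l(y, f(x)) = max(0, 1 - y*f(x)); zero once the margin exceeds 1."""
    return max(0.0, 1.0 - y * score)

def linear_kernel(x, y):
    """k(x, y) = x^T y."""
    return sum(a * b for a, b in zip(x, y))

def rbf_kernel(x, y, sigma):
    """k(x, y) = exp(-||x - y||^2 / sigma^2); translation invariant."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq_dist / sigma ** 2)
```

Note that the RBF kernel equals 1 when x = y and decays with squared distance, which is what makes it sensitive to nonlinear structure in the data.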
Second, with \u03c1, we transform the private data from the original sample space to the new 2D-dimensional feature space via the mapping function \u03b1 in the dual space with the transformed private data and w in the primal space via the linear relationship between \u03b1 and w in the linear SVM. Finally, draw \u03bc from Lap(\u03bb)D2 where \u03c1. Then users can transform their test data to the new 2D-dimensional feature space with \u03c1 and classify the transformed data with \u03c1 has no privacy risk because it is retrieved directly from public data. More details about hybrid SVM will be given in the successive subsections.z^(x) in . Then wePrivacy Properties. We present the following theorem showing the privacy property of \u03f5-differential privacy.w over a pair of neighbouring datasets is \u03bb in step 4 is set to \u03f5-differential privacy which completes the proof.For step 1, no private data is used, and hence step 1 does not impact the privacy guarantee. Due to Corollary 15 in and the H induced by an infinite dimensional feature mapping with a random z. The random finite-dimensional D i.i.d. vectors \u03c11,\u2026, \u03c1D from the Fourier transform of a positive-definite translation-invariant kernel function k, such as the RBF kernel function. Then we can obtain an approximation form z(x)Tz(y) of k using the real-valued mapping function z(x) : Rd \u2192 RD defined by the following equation:b1,\u2026, bD are i.i.d. samples drawn from a uniform distribution U. z(x) : Rd \u2192 RD maps the data from its original d-dimensional input space to the new D-dimensional feature space. Their approach is based on the fact that the kernel function of a continuous positive-definite translation-invariant kernel is the Fourier transform of a nonnegative measure. The uniform convergence property of the approximation form z(x)Tz(y) to the kernel function k has also been proved in [k refers to the RBF kernel function.Rahimi and Recht approximroved in . 
In our problem setting, a small amount of public data is available, and only the vectors ρ1,…, ρD are needed to construct the random finite-dimensional feature map z(x). We therefore compute ρ1,…, ρD with an optimization over ρ ∈ RD×d that minimizes the discrepancy between the approximation z(x)Tz(y) and the true kernel value k(x, y) on the public data. Thus, we can obtain a more accurate approximation form z(x)Tz(y) of the kernel function k by deploying the public data to compute ρ than by randomly sampling ρ from the Fourier transform of the kernel function k. To guarantee privacy for the private data used in training, we employ a differentially private linear SVM approach to compute w after transforming all private data to the new 2D-dimensional feature space using the mapping defined by ρ1,…, ρD. With the vectors ρ1,…, ρD approximating the RBF kernel function, we thereby convert the RBF kernel SVM in the d-dimensional input space into a linear SVM in the new 2D-dimensional feature space.

In this section, we experimentally evaluate our hybrid SVM and compare it with one state-of-the-art method, called private SVM, and one baseline method. We evaluate the utility of the trained SVM classifier using the AUC metric. Hybrid SVM and private SVM are implemented in MATLAB R2010b, and all experiments were performed on a PC with a 3.2 GHz CPU and 8 GB RAM.

Datasets. We used two open source datasets from the Integrated Public Use Microdata Series: the US and Brazil census datasets, with 370,000 and 190,000 records collected in the US and Brazil, respectively. One motivation for using these public datasets is that they bear similar attributes to some medical records but are publicly available for testing and comparisons. From each dataset, we selected 40,000 records, with 10,000 records serving as the public data pool.
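The random-feature construction underlying this conversion can be sketched directly. The code below implements the plain Rahimi-Recht sampler for k(x, y) = exp(−||x−y||²/σ²), whose Fourier transform is a Gaussian with per-dimension standard deviation √2/σ; it does not reproduce the paper's public-data optimization of ρ, and uses the D-dimensional cosine-with-phase form rather than the 2D-dimensional map of the hybrid method.

```python
import math
import random

def sample_rff(d, D, sigma, rng):
    """Draw D frequency vectors and phases for the RBF kernel
    k(x, y) = exp(-||x - y||^2 / sigma^2). The spectral density is
    Gaussian with per-dimension standard deviation sqrt(2)/sigma."""
    rho = [[rng.gauss(0.0, math.sqrt(2.0) / sigma) for _ in range(d)]
           for _ in range(D)]
    phase = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(D)]
    return rho, phase

def z(x, rho, phase):
    """Random Fourier feature map with E[z(x)^T z(y)] = k(x, y)."""
    D = len(rho)
    return [math.sqrt(2.0 / D)
            * math.cos(sum(r * xi for r, xi in zip(row, x)) + p)
            for row, p in zip(rho, phase)]
```

The dot product of z(x) and z(y) approximates the exact kernel value with error shrinking like 1/√D, so an SVM that is linear in these features behaves like an RBF-kernel SVM on the original inputs.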
There were 13 attributes in both datasets, namely: age, gender, marital status, education, disability, nationality, working hours per week, number of years residing in the current location, ownership of dwelling, family size, number of children, number of automobiles, and annual income. Among these attributes, marital status is the only categorical attribute containing more than 2 values, that is, single, married, and divorced/widowed. Because SVMs do not handle categorical features by default, we transformed marital status into two binary attributes, is single and is married. With this transformation, our two datasets had 14 dimensions. For each dataset, we randomly extracted a subset of the original data as a public data pool, from which public data is sampled uniformly, and used the remaining 30,000 tuples as the private data.

Comparison. We experimentally compared the performance of our hybrid SVM against two approaches, namely, the public data baseline and private SVM. The public data baseline is a nonprivate SVM trained on the public data only.

Metrics. We used the other attributes to predict the value of annual income by converting annual income into a binary attribute: values higher than a predefined threshold were mapped to 1, and otherwise to −1. Here, we set the predefined threshold to the median value of annual income. Classification accuracy was measured by the AUC (the area under an ROC curve). We varied the privacy budget ε, the dataset dimensionality, and the data cardinality. To vary the data cardinality, we randomly generated subsets of records in the training set, with the sampling rate varying from 0.1 to 1. For the various data dimensionalities (5, 8, 11, and 14), we selected three attribute subsets in the US and Brazil datasets for classification. The first five dimensions include: age, gender, education, family size, and annual income. The second eight dimensions contain the previous five attributes, and additionally nativity, ownership of dwelling, and number of automobiles.
The third eleven dimensions consist of all the attributes in the second eight dimensions plus is single, is married, and number of children. As the figures show, private SVM samples ρ randomly from the Fourier transform of the RBF kernel. In contrast, hybrid SVM computes ρ via the public data. This helps improve the accuracy of ρ and leads to less variance. On the other hand, hybrid SVM takes more time to compute ρ with the public data, since a nonlinear optimization equation needs to be solved. As with the other private SVM methods, our hybrid SVM is intended for off-line use, and hence the time is generally acceptable even for 14-dimensional datasets. In conclusion, we proposed and developed an RBF kernel SVM that uses a small amount of public data and a large amount of private data to preserve differential privacy with improved utility. In this algorithm, we use public data to compute the parameters in an approximation form of the RBF kernel function and then train private classifiers with a linear SVM after converting all private data into a new feature space defined by the approximation form. A limitation of our approach is that we used the L-BFGS method, which is computationally expensive."} {"text": "This work presents results in the field of advanced substrate solutions aimed at achieving high-crystalline-quality group-III nitride based heterostructures for high-frequency and power devices or for sensor applications. With that objective, Low Temperature Co-fired Ceramics has been used as a non-crystalline substrate. Structures like these have never been developed before, and for economic reasons they will represent a groundbreaking material in these fields of electronics. In this sense, the report presents the characterization, through various techniques, of three series of specimens where GaN was deposited on this ceramic composite, using different buffer layers and a singular metal-organic chemical vapor deposition related technique for low-temperature deposition.
Other single-crystalline ceramic-based templates were also utilized as substrate materials, for comparison purposes. Group-III nitrides (III-N), especially those related to GaN and AlN, have been strategic semiconducting materials in Power Electronics (PE) for the last 30 years, due to their outstanding temperature stability and dielectric strength combined with a wide bandgap [2]. However, to date, large volumes of these materials have not been obtained, and, therefore, bulk III-N substrates with extensive surfaces are not yet available. Traditionally, the chosen templates for depositions of nitrides are wafers of sapphire, silicon and silicon carbide. These substrates, however, imply disadvantages related to either a high dielectric constant, high leakage currents or high economic costs. Among manifold advantages, the use of low temperature co-fired ceramics (LTCC) as an alternative glass-ceramic substrate enables tailoring of crucial properties, such as permittivity or coefficient of thermal expansion (CTE), thus enhancing the system's figures of merit. It can be argued, though, that this material is not fit for PE applications due to its low thermal conductivity, 2–5 W/mK [3], but this issue can be overcome by placing metallic via arrays under the high-power device [4], which is feasible in the LTCC technology. It should also be possible to lower the production prices by using composite materials made of ceramic fillers embedded into a glass matrix during a sinter process as a base, which have been demonstrated to be suitable for high-frequency circuits [5]. Nevertheless, two main handicaps impede the implementation of III-N growth technologies on LTCC-based substrates under the usually required conditions: (i) the inherent, relatively high roughness and porosity of this support lead to low homogeneities in III-N structure and composition; (ii) the commonly used III-N growth temperatures (over 700 °C) may promote ceramic damage and metal contamination.
These are the main reasons why the potential of III-N-on-LTCC has still not been fully exploited to obtain the atomically long-range periodic structures that are needed for PE active devices, such as High Electron Mobility Transistors (HEMTs), although the possibilities of a LTCC hybrid technology for passive circuit components in this field have already been explored for over a decade [10]. In principle, the production of LTCC materials is not cheaper than that of Si itself, but the fact that LTCC can be processed layer by layer while in the green state eases the fabrication, by using masks and drills, of inner channels for the allocation of a metallic network acting as electrical connectors, passive circuits and thermal drains. Since this architecture of contacts can be placed prior to the production of the active layers of the devices, this leads to a further reduction of both costs and size, as well as to an improvement of the performance of the whole assembly. In this work, we show the results of the first stages of novel proposals to overcome the challenges in III-N/LTCC production and, therefore, the first steps in the production of a revolutionary material for power and high-frequency electronics. In this way, worthy qualities of GaN epilayers were reached by: (i) new recipes for improving surface flatness, chemical affinity and CTE matching between LTCC and III-Ns; (ii) the use of intermediate preparation or buffer layers; (iii) the choice of techniques for the synthesis of III-N crystal layers at temperatures much lower than conventional ones. The progressive improvement of the GaN quality, up to the achievement of the best quality so far obtained, to the best of our knowledge, for a III-N layer grown on a LTCC (or similar ceramic) substrate, is reported. Different aspects of the fabricated materials are discussed in this section: surface roughness, layer architecture, topology, chemical composition, and crystallographic structure.
A comprehensive view of the studied sets of samples is presented in Table 1. Some materials utilized as bulk substrates for the nitride-based heterostructures in this work are of ceramic nature; therefore, surface roughness and porosity are, as previously mentioned, very important issues regarding the achievement of a good crystalline structure for the top layers. If the substrate roughness is not low enough, the first depositions of Ga and N atoms will most probably be inhomogeneous, promoting regions of different crystallographic orientations. Therefore, following the fabrication of the LTCC substrates through a sintering process [11], a well-established polishing/lapping method [12] was applied to the LTCC substrates in samples CT3 and CT4 (not in samples CT1 or CT2) before proceeding to further element deposition. A 20-minute lapping was applied using B4C as abrasive material, followed by a 90-minute polishing step with 1 µm-size diamond particles. By means of this methodology, the achieved LTCC surface roughness (taking into account the pores) had a root mean square deviation (RMS) [13] of Rq = (120 ± 5) nm. The LTCC surface is closed after firing, so the surface roughness is determined by the powder fraction of the ceramic filler contained in the (amorphous) glass. Moreover, the polishing opens the intrinsic pores of the material, which are always present. The size and density of these pores depend on the powder composition of the tape and the sintering conditions. Though the LTCC surfaces achieved in this way are smoother than their non-treated counterparts, the substrate topography still led to the formation of pits in the material surface, because occasionally larger pores are opened by the detachment of filler particles, promoting a rough surface along the overgrown GaN at these positions.
Besides, the LTCC roughness limits the deposition rate and the crystallinity of the nitride, which prevents growing a thicker GaN layer as a roughness-relieving solution to this problem. As a first step in the characterization of the materials, the amount and density of pits and surface features of the GaN were observed by optical microscopy [14]. In addition, other techniques, such as Atomic Force and Electron Microscopies, were used at the University of Cádiz (UCA) in order to obtain a more detailed view of these pits, specifically regarding their depths. Atomic Force Microscopy (AFM) images were obtained using NT-SS-I SuperSharp tips from Next-Tip SL and were afterwards processed with the Gwyddion software to extract the root mean square surface roughness (Sq) [13]. It has to be taken into account that the values in this table indicate pore-to-pore measurements, using 1 × 1 µm2 areas. Note that if those areas included the surface pits, which could be the case when carrying out the calculations using AFM maps for 5 × 5 or 10 × 10 µm2 areas, Sq values would increase enormously for the samples using LTCC, while they would not change significantly for the samples using non-porous substrates. For example, when studying 10 × 10 µm2 AFM maps, Sq values for the GaN surface in samples CT4, ST1, SC1 and SC2 turn out to be 672, 30.0, 5.0 and 7.4 nm, respectively. This is a clear indication that the porosity in the GaN layer is mainly produced by the porosity in the substrate.
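The Sq values discussed above are root-mean-square roughness figures computed over selected AFM map areas. A minimal sketch of such a computation on a synthetic height map follows (illustrative values only; real Sq extraction, e.g. in Gwyddion, typically also involves plane levelling of the raw AFM data):

```python
import numpy as np

def rms_roughness(height_map):
    """Root-mean-square surface roughness Sq of an AFM height map:
    the standard deviation of the heights about the mean plane."""
    z = np.asarray(height_map, dtype=float)
    return np.sqrt(np.mean((z - z.mean()) ** 2))

# A flat surface has Sq = 0; adding a deep pit raises Sq sharply, which
# mirrors the pore-to-pore vs. pit-including measurements discussed above.
flat = np.zeros((64, 64))
pitted = flat.copy()
pitted[30:34, 30:34] = -500.0   # a 500 nm-deep pit (hypothetical value)
sq_flat = rms_roughness(flat)
sq_pitted = rms_roughness(pitted)
```

This illustrates why Sq over a 10 × 10 µm2 map that includes pits can be orders of magnitude larger than a pore-to-pore measurement over a 1 × 1 µm2 area of the same sample.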
In any case, a clear evolution is observed regarding the decrease of the surface roughness of the GaN structures along the successive sets of samples and growth processes; it is remarkable that, when the growth process is modified, the roughness values in samples using LTCC become close to those in heterostructures using other substrates with different growth processes. More generally, AFM was used to quantify the roughness of the GaN surfaces; representative areas of such surfaces (between pores) are shown in the AFM figures. Although all heterostructures present a GaN top layer, the layer thicknesses and stacking sequence differ significantly from one specimen to another, also within the same series. The thicknesses are summarized in Table 2. At first glance, it is clear that the sample using the unpolished CT 700 substrate and Al2O3 as an intermediate layer (CT1) has the most irregular surface formation, since this Al2O3 is achieved as a conglomerate of non-intentional conic-like features, on top of which the AlN/GaN sequence was grown. Surprisingly enough, the layers on top of the Al2O3 were continuous and quite homogeneous along the whole surface. Such BF-DCTEM images explain the look of the optical micrographs discussed above. The SiO2 intermediate layer (sample CT2) generates a more uniform support for the AlN/GaN formation. This is clearly because the sol-gel silica deposition used, with a great capacity for filling cavities and surface planarization, smoothens the surface and suppresses the propagation of the LTCC roughness to the upper layers. However, the SiO2 placement approach was not applied to substrate materials in the other series since, despite that structural advantage, its CTE value is not adjusted to that of the LTCC, as happens in the case of the Al2O3. Therefore, the use of an alumina interlayer was considered of more interest in the following series. As previously mentioned, the heterostructures fabricated using RF-plasma (CT1-3 and ST1) make use of an AlN buffer layer, in order to obtain a better adjustment between the gallium nitride compound and the support below. Note that AlN was considered the best material for that purpose, due to a high chemical affinity to both the alumina underneath and the gallium nitride above, a basal lattice parameter more similar to that of GaN than other templates, and the promotion of two-dimensional GaN nucleation; in fact, this material has been employed for more than 30 years as the most suitable buffer layer for the later growth of GaN [16]. The intensity in BF-DCTEM micrographs of an electron-transparent material decreases at regions of higher atomic density of the structure in the direction of the interacting electron beam; this explains the dark, thin GaN layer shown in the corresponding figure [17]. This structure is closer to a single crystal than those in the previous series, which are pure polycrystals. Clearly, the random orientation of those domains stems from the early stages of the growth. The roughness of this surface is still high enough to prevent a proper matching between the III-N structures during the coalescence process, and a multitude of grains remain differently oriented even along the whole growth process, as the SEM micrographs show. X-ray diffraction (XRD) was utilized to study the details of the mosaic structures on the different substrate materials.
As the corresponding figure shows, the Al2O3 is not a totally compact film, but presents columnar shapes with embedded elongated voids, as a result of the anodic treatment applied to the aluminum precursor of these layers. Spectroscopic ellipsometry (SE) measurements carried out with the 200 µm size probe yielded the GaN layer thickness, with a roughness layer of (23.5 ± 0.4) nm and a thickness non-uniformity of only 2.6%. As in the case of CT4, the GaN layer thickness obtained by SE is in good agreement with measurements from TEM images. TEM working in scanning mode (STEM) allows the application of a variety of chemically sensitive techniques with a high spatial resolution, such as High Angle Annular Dark Field (HAADF) imaging and Energy Dispersive X-Ray (EDX) spectroscopy. HAADF images were obtained using a 1 nm STEM probe and an 8 cm camera length, which corresponds to a 20 mrad inner collecting angle for the HAADF detector; EDX spectra were obtained in standard-less mode using the same STEM probe. For the interpretation of these data, it has to be taken into account that the intensity in HAADF images increases with the atomic number of the imaged material and with the local specimen thickness. Accordingly, the dark features observed within the Al2O3 layers, which correspond to locally thinner specimen regions, allow confirming the presence of the elongated voids mentioned previously. EDX spectra confirmed the expected composition of the intermediate layers (the SiO2 interlayer in SiCer and the SiO2 sol-gel in sample CT2), though the corresponding spectra are not shown here. Similarly, these types of spectra were taken for the thick top layer in specimens from the last series, revealing that it was also GaN. As a visual example, an EDX linescan for sample CT4 is presented, illustrating such affirmations. Again in this case, the presence of both Ga and Al signals at the GaN/Al2O3 interface may be due to surface roughness. These affirmations are also supported by compositional quantitative results.
Punctual EDX spectra corroborate these results. In the long term, the achievements in this work are directed towards the deposition of monocrystalline GaN acting as the active material in transistors for PE and high-frequency devices. Therefore, it is imperative to characterize the crystallographic structure of the achieved nitride in each case. One first approach to this characterization is the use of XRD: the diffractograms [20] support the EDX results about the composition of the top layer. A HRTEM micrograph of the AlN/GaN bilayer in sample CT1, representative of this structure in the samples using RF-plasma for the III-N growth, shows these layers on top of the Al2O3. The thin SiO2 interlayer, in turn, allows taking a Selected Area Electron Diffraction (SAED) pattern including all the materials at this interface; despite the presence of the SiO2, no peaks related to this intermediate layer appear in the diffractogram, and the reflections associated with the silicon {200} and the GaN {0002} planes, among others, can be observed. To summarize, GaN heterostructures have been fabricated by a low-temperature MOCVD technique using three sets of substrates and three different approaches regarding the growth temperature, its duration and the type of plasma source. Along this work, an evolution in the thickness and crystalline quality of the GaN-on-LTCC has been perceived. In those materials with an unpolished LTCC substrate, nano-polycrystalline layers of 15 nm of GaN were formed on top of the AlN buffer. When a polished substrate was used with the same plasma source as in the previous case (RF-plasma), the GaN layer, with an abrupt separation from the AlN underneath, improved in terms of crystallinity, roughness and thickness (up to 70 nm).
Remarkably, the improved GaN that was fabricated using a DC-plasma source presents a mosaic microstructure, with layer thicknesses larger than 500 nm, formed by single-crystalline columnar grains randomly misoriented (tilted) among each other, but aligned in the polar direction, or close to it, with respect to the substrate surface. When first deposited, the GaN arranges predominantly in a cubic fashion, but tends to form hexagonal structures afterwards. Results on the general structure, composition, topology and crystallinity have been reported. To the best of our knowledge, this work presents the highest crystalline quality obtained to date for GaN grown on top of porous LTCC materials. Three sets of specimens have been studied in this work, each one related to a different type of ceramic or glass-ceramic substrate that was used for the deposition of GaN on top of intermediate and buffer layers. These substrates are: (a) HERATAPE® CT 700 from Heraeus (CT 700), an LTCC substrate [25]; (b) Sitall, a glass-ceramic bulk material based on the Al2O3-SiO2-MgO-TiO2-CeO2-La2O3 system [26]; and (c) SiCer, a composite formed by a thin Si single-crystalline wafer bonded to a CTE-adapted LTCC green body during sintering [27]. These substrates were either bought from commercial suppliers or fabricated at the Technical University of Ilmenau (TU Ilmenau). The studied specimens are collected in Table 1. The alumina interlayers were produced in the form of nanoporous Al2O3 at the facilities of the Technical University of Sofia by voltastatic oxidation (40 V) in a 0.3 M oxalic acid solution at 15 °C. The samples were gradually immersed by their Al-face in the electrolyte at a rate of 5 to 10 µm/s to obtain a complete transformation of the previously sputtered aluminium layer into a nanoporous aluminium oxide [28]. As a source of direct current, a power supply (40 V/5 A) manufactured by Voltcraft® Germany was utilized. The immersion velocity is defined by the substrate roughness and the thickness of the initial aluminum layer. Also, in sample CT2, a SiO2 layer produced by sol-gel technology, fabricated by the researchers at TU Ilmenau, is used instead of alumina [29]. In turn, the 2 nm-thick SiO2 intermediate layer utilized for the growth of GaN on SiCer forms by Si ambient oxidation during the chamber-to-chamber specimen transportation process. Nitride semiconducting layers were deposited at Lakehead University by a plasma-assisted, low-temperature MOCVD related technique, using a custom-built reactor with three chambers: a loading lock, a UHV chamber and a residual gas analyzer chamber, along with a plasma source for nitrogen species. Regarding the GaN growth, it has to be taken into account that different approaches were applied for the fabrication of this compound: RF nitrogen plasma at 550 °C (samples CT1 and CT2) and at 540 °C (specimens CT3 and ST1) was used with varying pressures and “Nitrogen to Metal-Organic” precursor ratios in order to obtain a 2D growth of GaN. In these cases, a first step is applied for creating an AlN buffer layer, with growth parameters kept in ways in which 3D growth is achieved. On the other hand, for samples CT4, ST2, SC1 and SC2, a DC plasma at 550 °C, applied for a period three times longer than the one used in the second series, was utilized to grow GaN directly (without AlN). More details on the LTCC substrate composition and MOCVD parameters for the three series are given elsewhere [30]. Therefore, though samples are classified in different groups attending to the substrate, these three different growth processes also have to be taken into account when comparing the results regarding the achieved GaN layers. Following the fabrication of these materials, topographic, structural and compositional characterizations were carried out through the use of techniques related to XRD, AFM, Optical Microscopy, SEM, TEM and STEM. The equipment employed to apply these techniques consisted, respectively, of a Bruker D8 Advance X-Ray Diffractometer, an AFM Bruker Multimode Nanoscope IIIa operating in tapping mode, a DSX510 Olympus digital microscope, a Field-Effect ZEISS GeminiSEM 500 SEM, a FEI Tecnai F30 TEM (operated at 300 kV), and FEG-2010 Jeol and FEI TALOS STEM microscopes (both working at 200 kV). In order to carry out (S)TEM characterization, samples were first prepared in cross-section (XTEM) disposition at UCA, thinned down to electron transparency using traditional grinding-polishing methods, and ion-milled with Ar+ ions using a Gatan model 691 Precision Ion Polishing System. Specific details on the experimental setup for the different (S)TEM related techniques are indicated in further sections of this work. SE was applied at UCA to investigate the topography of macroscopic areas of GaN and intermediate layers. SE measurements were performed in the spectral range between 450 and 950 nm with an automatic rotating analyzer J. A. Woollam V-WASE (variable angle) ellipsometer equipped with an automatic retarder. All data generated or analyzed during this study are included in this published article (and its Supplementary Information files)."} {"text": "These associations were not explained by the effects of maternal depressive symptoms after pregnancy, which both added to and partially mediated the prenatal effects. Maternal depressive symptoms throughout pregnancy are associated with increased ADHD symptomatology in young children. Maternal depressive symptoms after pregnancy add to, but only partially mediate, the prenatal effects.
Preventive interventions suited for the pregnancy period may benefit both maternal and offspring mental health. Maternal depressive symptoms during pregnancy have been associated with child behavioural symptoms of attention-deficit/hyperactivity disorder (ADHD) in early childhood. However, it remains unclear if depressive symptoms throughout pregnancy are more harmful to the child than depressive symptoms only during certain times, and if maternal depressive symptoms after pregnancy add to or mediate any prenatal effects. 1,779 mother-child dyads participated in the Prediction and Prevention of Pre-eclampsia and Intrauterine Growth Restriction (PREDO) study. Mothers filled in the Center of Epidemiological Studies Depression Scale biweekly from 12+0–13+6 to 38+0–39+6 weeks+days of gestation or delivery, and the Beck Depression Inventory-II and the Conners' Hyperactivity Index at the child's age of 3 to 6 years. Maternal depressive symptoms were highly stable throughout pregnancy, and children of mothers with consistently high depressive symptoms showed higher average levels of ADHD symptoms (mean difference = 0.46 SD units, 95% Confidence Interval [CI] 0.36, 0.56). Attention-deficit/hyperactivity disorder (ADHD) is characterized by a persistent pattern of inattention, impulsivity, and hyperactivity. It is one of the most prevalent neurodevelopmental disorders in children, with prevalence rates varying from 5.9 to 7.1%. In the recent decade, the prevalence rates of ADHD have shown a nearly 30% increase. Yet, although extensive research on the effects of maternal depression on offspring outcomes has started to emerge [15], the existing studies are limited for a number of reasons. First, they measured depressive symptoms “during the past seven days or last two weeks” at only one or two time points during pregnancy, not covering the entire pregnancy [20].
Hence, we tested, in a large sample of pregnant Finnish women, if depressive symptoms, measured biweekly from gestational week 12 onwards until term or delivery, were associated with ADHD symptoms in their 3- to 6-year-old children. The biweekly assessments allowed us to address gestation-week- and trimester-specific effects, and maternal re-ratings of depressive symptoms at the time of rating the 3- to 6-year-old child allowed us to address if any effects were specific to the prenatal stage. Our study also tested if maternal depressive symptoms after pregnancy added to or mediated any of the prenatal effects. Finally, we tested if maternal pre-pregnancy obesity, hypertensive pregnancy disorders, and gestational diabetes, or maternal ADHD symptoms accounted for any observed effects. We have previously demonstrated in this cohort associations between maternal depressive symptoms and child internalizing, externalizing, and total problems, including DSM-IV-oriented ADHD problems. The Prediction and Prevention of Pre-eclampsia and Intrauterine Growth Restriction (PREDO) study comprises altogether 4,777 mothers and their singleton offspring born alive in Finland between 2006 and 2010. In 2011–2012 we invited 4,586 mother-child dyads to a follow-up (55 women had declined participation in a follow-up, and for 100 women, addresses were not traceable), and 2,667 (58.2%) participated. Of them, 2,312 (68.0% of those with data on depressive symptoms during pregnancy) had pregnancy as well as follow-up data available at the child's age of 1.9 to 6.3 years (50.6% boys). The children analysed here were 3.0 to 6.3 years old (SD of age = 0.5 years; 51.5% boys).
Since the CHI is validated for children who are 3 years and older, we excluded the younger children (n = 2,274 remained). Compared to the women who were invited but did not participate in the follow-up, the women who participated and whose children in the follow-up were 3 years and older were older at delivery, had more often a tertiary education, were less often single, were less often multiparous, smoked less often throughout pregnancy, and reported less often a history of a depression diagnosis. Child ADHD symptoms were rated by the mothers with the Conners' Hyperactivity Index (CHI), whose items are rated from “not at all” (0) to “very much” (3); a sum-score of the items was used. Depressive symptoms were reported biweekly up to 14 times throughout pregnancy, starting from 12+0–13+6 to 38+0–39+6 weeks+days of gestation or delivery, using the Center for Epidemiological Studies Depression Scale (CES-D). In the follow-up, depressive symptoms were reported using the Beck Depression Inventory-II (BDI-II). Both depression scales have good psychometric properties [30]. Maternal pre-pregnancy obesity (body mass index ≥ 30 kg/m2), gestational diabetes (yes vs. no) and hypertensive pregnancy disorders were extracted from the MBR and/or from medical records independently verified by a clinical jury. We first examined maternal depressive symptom profiles during pregnancy with a latent profile analysis. We compared solutions with two to eight clusters, and identified the most optimal one by using the Akaike Information Criterion, the sample-size-adjusted Bayesian Information Criterion, and the Vuong-Lo-Mendell-Rubin Likelihood Ratio Test and Lo-Mendell-Rubin Adjusted Likelihood Ratio Tests. We then tested if the child ADHD symptom scores, treated as a continuous outcome variable, and the proportion of children with clinically significant ADHD symptoms, treated as a dichotomous variable using an ADHD symptom score of 10 or above as a clinical cutoff, differed between the profiles. We also examined if the associations between maternal depressive symptoms during pregnancy and child ADHD symptoms were gestation-week- or trimester-specific.
In these tests, we used linear regression analysis when we treated child ADHD symptoms as continuous and logistic regression analysis when we dichotomized child ADHD symptom scores at the clinical cutoff. Further, in these analyses maternal depressive symptom scores were square-root transformed to improve linear model fitting. In all of the above analyses we first made adjustments for child's sex and age at follow-up (model 1). Thereafter, we additionally adjusted for maternal age at childbirth, parity, family structure, education level, type 1 diabetes, chronic hypertension, history of physician-diagnosed depression, antidepressant and other psychotropic medication use, alcohol use and smoking during pregnancy, and gestation length and weight at birth adjusted for sex and gestation length (model 2); for maternal pre-pregnancy obesity, gestational diabetes, gestational hypertension, and pre-eclampsia (model 3); for maternal ADHD problems (model 4); and finally, for all of the above and maternal depressive symptoms at follow-up parallel to rating the child (model 5). We also tested if maternal depressive symptoms after pregnancy added to the prenatal effects with an interaction term of maternal trimester-weighted mean depressive symptoms during pregnancy * maternal depressive symptoms after pregnancy, added to the linear (continuous ADHD symptom scores) and logistic (clinically significant ADHD symptoms) regression models. In addition, we tested if maternal depressive symptoms after pregnancy mediated the effects of maternal trimester-weighted mean depressive symptoms during pregnancy using the PROCESS macro for mediation in SPSS 24 with 5,000 bootstrapping re-samples and bias-corrected CIs [33]. Finally, we ran a series of sensitivity analyses. Characteristics of the study participants are summarized in the descriptive table. Maternal depressive symptoms were highly stable throughout pregnancy (p-values < 0.001). For the two latent profile groups, the percentage of women with data on all 14 measurement points compared to the ones with at least one missing value was not significantly different.
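The mediation test described above (an indirect effect of prenatal depressive symptoms on child ADHD symptoms through postnatal symptoms, with bootstrapped confidence intervals) can be sketched as follows. This is a simplified percentile-bootstrap illustration on simulated data, not the PROCESS macro itself, which uses bias-corrected intervals; all variable names and the simulated effect sizes are hypothetical.

```python
import numpy as np

def ols_slope(X, y):
    """Least-squares slope coefficients for y ~ X (intercept included)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta[1:]

def bootstrap_indirect(x, m, y, n_boot=5000, seed=0):
    """Indirect effect a*b of x on y through mediator m, with a percentile
    bootstrap CI (a: x -> m; b: m -> y controlling for x)."""
    rng = np.random.default_rng(seed)
    n, est = len(x), []
    for _ in range(n_boot):
        i = rng.integers(0, n, n)                              # resample rows
        a = ols_slope(x[i, None], m[i])[0]                     # x -> m
        b = ols_slope(np.column_stack([m[i], x[i]]), y[i])[0]  # m -> y | x
        est.append(a * b)
    lo, hi = np.percentile(est, [2.5, 97.5])
    return float(np.mean(est)), (float(lo), float(hi))

# Simulated data with a true indirect effect a*b = 0.5 * 0.6 = 0.3.
rng = np.random.default_rng(1)
x = rng.normal(size=500)                         # "prenatal symptoms"
m = 0.5 * x + rng.normal(scale=0.5, size=500)    # "postnatal symptoms"
y = 0.6 * m + 0.2 * x + rng.normal(scale=0.5, size=500)  # "child ADHD score"
effect, ci = bootstrap_indirect(x, m, y, n_boot=1000)
```

If the bootstrap CI excludes zero, the indirect (mediated) path is considered significant, while the direct path coefficient on x indicates how much of the prenatal effect remains unmediated.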
Further, higher maternal biweekly, trimester-specific and trimester-weighted mean depressive symptoms during pregnancy were associated with higher child ADHD symptoms. Maternal depressive symptoms after pregnancy added to these prenatal effects (p-values for interactions = 0.03 for the depressive symptoms during pregnancy * depressive symptoms after pregnancy interaction on both the continuous and the clinically significant child ADHD symptom scores). Across all adjustment models, child ADHD symptom scores and the proportion of children with clinically significant symptoms were the highest if the mother reported depressive symptoms above the clinical cutoff both during and after pregnancy, with up to 2.8-times higher odds for clinically significant ADHD symptoms. Because maternal depressive symptoms were highly stable throughout pregnancy, it was not surprising that we found no gestation-week- or trimester-specific associations between maternal depressive symptoms during pregnancy and child ADHD symptoms. None of these associations were accounted for by a number of perinatal, maternal and neonatal characteristics, and a series of sensitivity analyses demonstrated that the associations did not vary by maternal pre-pregnancy obesity, hypertensive pregnancy disorders, or gestational diabetes, child's sex, maternal history of physician-diagnosed depression, or maternal ADHD problems. Our study also showed that higher levels of maternal depressive symptoms after pregnancy were associated with higher child ADHD symptom scores. These higher levels of depressive symptoms after pregnancy only partially accounted for the prenatal effects, as maternal depressive symptoms during pregnancy also had a significant direct effect on the child's ADHD symptoms when adjusting for the symptoms after pregnancy. They did, however, add to the prenatal effects, such that child ADHD symptom scores and the proportion and odds of children with clinically significant ADHD symptoms were the highest among those women with clinically significant depressive symptoms both during and after pregnancy.
Together, maternal depressive symptoms during and after pregnancy accounted for 11% of the variation in the child's ADHD symptoms. Our findings correspond with the Developmental Origins of Health and Disease (DOHaD) framework, suggesting that prenatal exposure to environmental adversity may carry enduring effects on brain developmental sequelae, including risk for ADHD symptomatology [13,34]. An obvious study limitation is that we are not able to specify the underlying brain structural or functional, nor biological or behavioural, mechanisms. Existing literature suggests that higher maternal depressive symptoms and/or salivary cortisol levels during pregnancy are linked with altered offspring brain structure and functional connectivity. Further study limitations relate to child ADHD symptoms being reported by the mother only. However, Leis et al. (2014) found that the effect of maternal prenatal depression on child hyperactivity was significant whether the child was rated by the mother or the teacher. Furthermore, we measured ADHD symptoms dimensionally, and did not use diagnostic criteria, rendering generalizations to ADHD disorder tentative. Since maternal depressive symptoms after pregnancy were self-rated at the time of rating the child's behaviour, and maternal depression and child behaviour may influence each other, we cannot rule out shared method variance. Sample attrition, which was not independent of maternal characteristics, also limits the external validity of our findings. Our findings show that maternal depressive symptoms during and after pregnancy are associated with child ADHD symptomatology and suggest that early pregnancy screening and preventive interventions focusing on maternal depressive symptoms may benefit not only maternal, but also offspring wellbeing.
Preventive interventions suited for pregnancy are urgently needed, as a recent meta-analysis demonstrated null to very small benefits of existing techniques in decreasing maternal distress during pregnancy."} {"text": "Nowadays there is increasing interest in identifying, and using, metabolites that can be employed as biomarkers for diagnosing, treating and monitoring diseases. Saliva and NMR have been widely used for this purpose as they are fast and inexpensive methods. This case-control study aimed to find biomarkers that could be related to glioblastoma (GBL) and periodontal disease (PD), and studied a possible association between GBL and periodontal status. The participants numbered 130, of whom 10 were diagnosed with GBL and were assigned to the cases group, while the remaining 120 did not present any pathology and were assigned to the control group. On one hand, significantly increased (p < 0.05) metabolites were found in the GBL group: leucine, valine, isoleucine, propionate, alanine, acetate, ethanolamine and sucrose. Moreover, a good tendency towards separation between the two groups was observed on the scatterplot of the NMR data. On the other hand, the distribution of the groups according to periodontal status was very similar and we did not find any association between GBL and periodontal status. Subsequently, the sample as a whole was divided into three groups by periodontal status in order to identify biomarkers for PD. Group 1 was composed of periodontally healthy individuals, group 2 had gingivitis or early periodontitis and group 3 had moderate to advanced periodontitis. On comparing periodontal status, a significant increase (p < 0.05) in certain metabolites was observed. 
These findings, along with previous reports, suggest that the following could be used as biomarkers of PD: caproate, isocaproate+butyrate, isovalerate, isopropanol+methanol, 4-aminobutyrate, choline, sucrose, sucrose-glucose-lysine, lactate-proline, lactate and proline. The scatter plot showed a good tendency towards separation between groups 1 and 3. NMR spectroscopy analysis provides information on both the structure and the composition of low-molecular-mass metabolites in biological fluids and is a rapid and low-cost technique for exploring pathological metabolic processes. The major advantages of NMR spectroscopy include its unbiased metabolite detection, quantitative nature, and high reproducibility. NMR-based metabolomics could be appropriate as a cost-effective solution for high-throughput analysis. Glioblastoma (GBL) (2016 WHO classification of CNS tumors) is one of the most lethal primary malignant tumors of the central nervous system, as the mean survival is under 15 months and the five-year survival rate is under 10%. The risk factors for GBL are unknown, although constant exposure to ionizing radiation or chemical agents can increase the risk of its development. GBL is mainly diagnosed at advanced ages, with a mean age on diagnosis of 64 years. Chronic periodontitis (CP) is an inflammatory disorder characterized by the progressive and irreversible destruction of the tissues surrounding the tooth. 
It affects approximately 50% of the adult population and its incidence and severity increase with age, reaching a prevalence of 70% among over-65-year-olds in the USA. The aim of this study was to use 1H NMR to identify whether saliva contained greater concentrations of any particular metabolites which could serve as biomarkers for diagnosing and monitoring GBL and PD, and to study a possible association between the two diseases. This case-control study was approved by the Ethics Committee of the Hospital Clínico Universitario of Valencia, Spain, in accordance with the Declaration of Helsinki of 1964 and subsequent amendments by the World Medical Association. The case group comprised hospitalized patients with brain tumors awaiting surgery in the neurosurgery unit of this hospital. Following the operation, the suspected diagnosis was confirmed. The control group was composed of patients from the University of Valencia dental clinic. Oral examination and saliva sampling were performed in both groups. All the participants of legal age (18 years or over) were given an informed consent document with all the information concerning the study, to be signed voluntarily before taking part. For the control group, those who had taken antibiotics in the past six months, had fewer than eight teeth (excluding third molars), were pregnant or, in general, presented any condition that could lead to error, such as cardiovascular diseases, diabetes mellitus, rheumatoid arthritis, chronic obstructive pulmonary disease, pneumonia, chronic kidney disease, metabolic syndrome, obesity and Alzheimer's disease, were excluded from the study. Saliva samples were obtained in the early morning to avoid the introduction of exogenous agents into the oral samples. The participants had not ingested any food, chewed gum, brushed their teeth or used any oral hygiene product in the two hours before the sample was taken, and had not smoked for at least one hour before. 
To collect the saliva samples, we used the “draining method”: the participants were seated comfortably for a few minutes in a resting position with their heads tilted slightly forward, in a quiet environment to avoid non-test stimuli. The slightly parted lips allowed the saliva to fall into a wide-necked sterile container. The liquid collected was then transferred with a pipette to a sterile 1.5 mL Eppendorf tube and was frozen immediately at -80°C until the NMR measurements were made [20]. The data acquisition and processing were conducted as previously described by Galbis-Estrada et al. Chemometrics statistical analyses were performed using in-house MATLAB scripts and the PLS Toolbox 6.7. Metabolite levels were computed from the raw (untransformed) data and expressed as mean ± SD (standard deviation). Student's t-test was used to determine the statistical significance of differences between the means of the case and control groups, and ANOVA was used to estimate the differences between the three categories of periodontal status. A chi-squared test was used to compare proportions. The significance level was p < 0.05. Principal component analysis (PCA) and projection to latent structures for discriminant analysis (PLS-DA) were applied to the NMR spectral datasets. Results were cross-validated using the leave-one-out method to evaluate the accuracy of each classification model. Ten participants met the requirements for assignment to the case group, comprising 1 male (10%) and 9 females (90%), with an average age of 54.7 years (range 26-78). In contrast, the control group comprised 120 participants, of whom 49 were males (40.8%) and 71 were females (59.2%); the average age was 51.8 years (range 19-81). The saliva metabolites were identified and quantified using the Human Metabolome Database (http://www.hmdb.ca) and the Chenomx spectral database contained in Chenomx NMR Suite 8.1 software. 
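The statistical protocol above (per-metabolite significance tests plus a leave-one-out cross-validated classifier) can be sketched as follows. This is an illustrative reconstruction, not the authors' MATLAB/PLS Toolbox code: the data are simulated, and a nearest-centroid classifier stands in for PLS-DA purely to show the leave-one-out protocol.

```python
# Sketch of the group-comparison and cross-validation workflow described
# above. All metabolite data are randomly generated for demonstration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_cases, n_controls, n_metabolites = 10, 120, 68

controls = rng.normal(1.0, 0.2, size=(n_controls, n_metabolites))
cases = rng.normal(1.0, 0.2, size=(n_cases, n_metabolites))
cases[:, :8] += 0.5   # pretend the first 8 metabolites are elevated in GBL

# Per-metabolite two-sample t-test; p < 0.05 flags candidate biomarkers
_, p = stats.ttest_ind(cases, controls, axis=0)
candidates = np.flatnonzero(p < 0.05)

# Leave-one-out cross-validation of a nearest-centroid stand-in classifier
X = np.vstack([controls, cases])
y = np.array([0] * n_controls + [1] * n_cases)
correct = 0
for i in range(len(y)):
    mask = np.ones(len(y), dtype=bool)
    mask[i] = False                     # hold out sample i
    c0 = X[mask & (y == 0)].mean(axis=0)
    c1 = X[mask & (y == 1)].mean(axis=0)
    pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
    correct += int(pred == y[i])
accuracy = correct / len(y)
```

With the simulated effect size used here, all eight shifted metabolites are flagged and the cross-validated accuracy is high; real saliva spectra are far noisier.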
The databases compare metabolites together with their respective concentrations based on a known reference signal; in this study we used TSP at 32 mM. A sample 1H NMR spectrum of saliva from a participant is shown in Fig. . In total, 39 subjects were placed in group 1 (periodontally healthy), 59 in group 2 (gingivitis/early periodontitis) and 32 in group 3 (moderate/advanced periodontitis). A total of 68 metabolites were assigned, quantified and included in an ANOVA test. Numerous metabolites have been proposed for diagnosing, monitoring and treating inflammatory diseases. Identification and quantification levels of N-acetyl aspartate (NAA), choline, glutamate, glutamine, lactate, alanine, glucose, inositol, creatinine and lipids have been useful in previous studies to separate glial tumors by type and grade, and to determine the choice of therapy and treatment efficacy in evaluating the progression or remission of GBL. Short-chain fatty acids such as butyrate, caproate, isocaproate, propionate, isovalerate and lactate play an important role in periodontal disorders. They are end-products of bacterial metabolism and have been strongly linked to deep periodontal pockets, loss of insertion, bleeding and inflammation. The acids prevent cell division, making repair difficult and favoring junctional epithelium degeneration processes, which in turn allows the entry of pathogens and the formation of periodontal pockets. These bacterial metabolites stimulate an inflammatory response and the liberation of cytokines. At cell level, they inhibit leukocyte apoptosis and cell proliferation in the gingival epithelium and endothelium, preventing their repair. 
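The reference-signal quantification described above follows a standard relation: a metabolite's concentration is obtained from the ratio of its peak integral to that of the TSP reference, scaled by the number of protons contributing to each resonance. A minimal sketch with hypothetical integrals (only the 32 mM TSP concentration comes from the text):

```python
# Reference-based NMR quantification: concentration follows from the peak
# integral relative to the TSP reference, corrected for proton counts.
# Peak integrals below are hypothetical values for illustration.

TSP_MM = 32.0      # reference concentration stated in the text (mM)
TSP_PROTONS = 9    # TSP's trimethylsilyl group has 9 equivalent protons

def concentration(integral, n_protons, ref_integral,
                  ref_mm=TSP_MM, ref_protons=TSP_PROTONS):
    """Metabolite concentration (mM) from its peak integral."""
    return ref_mm * (integral / ref_integral) * (ref_protons / n_protons)

# Hypothetical: a lactate CH3 doublet (3 protons) integrating to 1/40 of TSP
lactate_mm = concentration(integral=0.025, n_protons=3, ref_integral=1.0)
# → 32 * 0.025 * (9/3) = 2.4 mM
```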
As has been noted, using 1H NMR or LC-MS to identify metabolites related to PD by collecting and analyzing saliva samples seems to be a good and useful method for following up different diseases, since it is an easy, fast and non-invasive option, and its results are similar to those obtained from serum analysis or tissue cell cultures. There is a lack of studies screening saliva to find biomarkers of GBL. In this study we used 1H NMR, but other complementary analytical platforms applied to the same sample, such as mass spectrometry, would be necessary to conclude whether saliva could be used in the screening of GBL. The present study found significant differences in some metabolites in the saliva of GBL and PD patients, but no association between GBL and periodontal status was found. Sucrose and propionate in GBL patients, and caproate, isocaproate-butyrate, isovalerate, lactate+proline and proline in PD patients, were very significantly increased."} {"text": "Tunneling conductance among nanoparticle arrays is extremely sensitive to the spacing of nanoparticles and might be applied to fabricate ultra-sensitive sensors. Such sensors are of paramount significance for various applications, such as automotive systems and consumer electronics. Here, we present a sensitive pressure sensor which is composed of a piezoresistive strain transducer fabricated from closely spaced nanoparticle films deposited on a flexible membrane. Moreover, our sensor, with such an unprecedented response capability, can be operated as a barometric altimeter with an altitude resolution of about 1 m. The outstanding behaviors of our devices make nanoparticle arrays suitable for use as actuation materials for pressure measurement. 
Benefiting from this unique quantum transport mechanism, the thermal noise of the sensor decreases significantly, providing the opportunity for our devices to serve as high-performance pressure sensors with an ultrahigh resolution as fine as about 0.5 Pa and a high sensitivity of 0.13 kPa⁻¹. Designing reliable piezoresistive pressure sensors based on percolative nanoparticle (NP) arrays remains a challenge. Here, the authors propose a percolative NP array sensor deposited on a flexible membrane with ultra-high sensitivity and resolution, tuned by modifying the thickness of the membrane. Electromechanical pressure sensors consist of two essential components: a membrane and a transducer element, which converts the applied pressure to an electrical signal change. Most recently, a wide variety of materials and nanostructures such as two-dimensional layers [3], nanotubes [6], nanofibers [8], nanoparticles (NPs) [10], and even composite conductive rubbers [11] have been explored for use in these devices. Piezoresistive sensing is the most frequently used transduction mechanism in these pressure sensors [12], owing to advantages such as direct-current input, high yield, simple structure and manufacturing process, low cost, scalability, and easy signal collection [14]. These piezoresistive sensing elements undergo a change in their internal resistance when they are stressed, which breaks the ohmic contact or forms new defects in the materials. Generally, these piezoresistive sensing elements can hardly distinguish external pressure changes lower than 100 Pa, since the piezoresistive mechanism does not work if the external pressure change is tiny [16]. There is tremendous interest in developing MEMS-integrated pressure sensors that allow for atmospheric applications with a very high resolution of sub-10 Pa. 
With such a resolution, an altitude difference of about 1 m can be distinguished by barometric measurement. Precision pressure sensors are essential to many micro-electro-mechanical systems (MEMS) devices, with applications in various areas. Currently, the development of smart systems and wearable devices has drawn tremendous attention toward high-resolution MEMS-integrated pressure sensors working stably at atmospheric pressure. Recently, percolative NP arrays have been used as piezoresistive transducers of ultrasensitive mechanical sensors, such as strain sensors [18], humidity sensors [19], as well as force and mass sensors [20]. In closely spaced NP arrays, the spacing of adjacent NPs is so small that electron transport between NPs is dominated by tunneling or hopping [22]. A large number of percolative paths exist in the disordered NP arrays. Since quantum tunneling or hopping is extremely sensitive to the inter-particle spacing, the percolative paths can be broken or regenerated by a tiny change in the geometry of the NP arrays. As a result, the conductance of percolative NP arrays is sensitively related to the deformation of the substrate on which the NPs are deposited [23]. It is reasonable to assume that this mechanism is applicable to a piezoresistive pressure sensor made by fabricating percolative NP arrays on flexible membranes as transducer elements [28]. The resulting device is characterized by an extremely high resolution of about 0.5 Pa. Working as a barometric altitude sensor, it demonstrates the ability to distinguish an altitude difference of about 1 m. 
While the majority of today's piezoresistive pressure-sensing devices use doped silicon transducers, which undergo a change in their carrier mobility when stressed, our devices offer an alternative with potentially higher pressure resolution in terms of higher sensitivity, reduced thermal disturbance, and decreased power consumption owing to a larger resistance of about 10 MΩ. In this paper, we realize a new configuration of piezoresistive pressure sensor fabricated from percolation-based conductive nanostructures. Differing from current piezoresistive pressure gauges, these devices transduce the external pressure on the elastic membrane, on which the NPs are deposited, into a change of the tunneling conductance across the NP percolating networks [29]. A quarter cross-sectional view of our sensor is shown in Fig. . The NPs form a discontinuous film in a disordered manner on a highly deformable membrane such as polyethylene terephthalate (PET) with prepatterned interdigital electrodes (IDEs). These can be considered as percolation pathways that conduct an electric current distinguishable from the leakage current when a fixed voltage is applied [24]. The strain-sensing mechanism of this structure comes from the deformation-dependent percolation morphology over the IDEs. By applying an external pressure, a small deformation of the PET membrane induces a change in the inter-particle spacing, enabling more or fewer conductive percolation pathways and thus leading to a change in the electron conductance, as shown in Fig. . Similar to typical configurations of pressure sensors, the architecture of our pressure sensor comprises a strain gauge fabricated directly on the surface of the membrane and hermetically encapsulated over a vacuum or gas-filled reference cavity [31]. The coverage of the NP assembly is analogous to the particle filling fraction used in the percolation model, both of which increase with the deposition time. 
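The percolation picture above (essentially no conduction below a coverage threshold, then a rapid onset of conducting pathways) can be illustrated with a toy site-percolation model. This is an assumption-laden cartoon (a square lattice with nearest-neighbor connectivity), not the authors' device model:

```python
# Toy site percolation: fill a square grid at random and test whether the
# occupied sites form a left-to-right spanning cluster (a "conducting path")
# using a breadth-first search.
import random
from collections import deque

def percolates(grid):
    """True if occupied sites connect the left edge to the right edge."""
    n = len(grid)
    queue = deque((i, 0) for i in range(n) if grid[i][0])
    seen = set(queue)
    while queue:
        i, j = queue.popleft()
        if j == n - 1:
            return True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < n and grid[a][b] and (a, b) not in seen:
                seen.add((a, b))
                queue.append((a, b))
    return False

def spanning_fraction(coverage, n=30, trials=50, seed=1):
    """Fraction of random grids at a given coverage that conduct."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        grid = [[rng.random() < coverage for _ in range(n)] for _ in range(n)]
        hits += percolates(grid)
    return hits / trials

# Well below the 2D site-percolation threshold (~0.593) almost no grid
# conducts; well above it, almost every grid does.
low, high = spanning_fraction(0.45), spanning_fraction(0.75)
```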
In our device, the electrodes cover an area ranging from several square millimeters to several tens of square millimeters, resulting in a huge aspect ratio of the inter-electrode gaps. This morphology leads to a rapid increase of the conductance after the NP coverage reaches the percolation threshold, which is determined by the electrode separation, due to the formation of a large number of conductive percolation pathways (i.e., closely spaced NP chains across the electrodes). Furthermore, due to the quantum tunneling nature of electron transport, the development of the conductance during NP deposition depends not only on the geometric filling pattern of the NPs but also on the distribution of inter-particle gaps along the conductive percolation pathways, which also changes as the deposited mass increases. As a result, a finely gradual change in the slope of the conductance evolution curve is reported in Fig. . The flexible strain-sensing element (Fig. ) was fabricated. To analyze the micro-morphological characteristics of the NPs, images from scanning transmission electron microscopy (STEM) using a high-angle annular dark field (HAADF) detector (Fig. ) were analyzed. Generally, NPs of various metals can be used as the piezoresistive sensing medium. In the present research, palladium (Pd) NPs were used in preference because they coalesce less [33]; the high mobility and easy coalescence behavior of gold NPs leads to large instability when they are used to constitute percolative conducting NP arrays. High-resolution transmission electron microscopy (HR-TEM) characterizations demonstrated that a PdOx layer about 0.5 nm in thickness forms on the NP surface. We measured ΔG/G0 versus ΔP, where ΔG = G − G0, in which G and G0 denote the conductance with and without an applied differential pressure ΔP (with reference to atmospheric pressure), respectively. We first discuss the situation of a sensor with a 0.05-mm-thick PET membrane. 
Over the whole applied pressure range, our sensor showed a steady response to static pressure, and the conductance under each pressure was constant. The slope of the response curve, (ΔG/G0)/ΔP, could be used to characterize the sensitivity S of the pressure sensor [32]. For smaller differential pressures (lower than 60 Pa), there is an approximately linear relationship between the response and the applied pressure, with a pressure sensitivity value S = 0.13 kPa⁻¹. Above 60 Pa, the sensitivity dropped to 0.049 kPa⁻¹. A drop in sensitivity at higher pressures has been widely observed in recently reported pressure sensors [37]. In our sensor devices, the decrease in sensitivity might be attributed to a transition in the deformation behavior of the PET membrane. A home-made system was used to test the sensing performance of the sensing elements, as shown in Supplementary Note . This could be proved by the changes of relative conductance under different strains [38], as shown in Fig. . For a thin-film assembly of NPs on a flexible membrane, a compressive strain may induce a decrease in the mean distance between adjacent NPs, resulting in an increase in the conductance of the NP array, as shown in Fig. . Returning to the pressure sensor device, the strain generated on the PET membrane under pressure is not so simple. Since the edge of the PET membrane supported on the cavity is constrained, inhomogeneous deformation is generated on the membrane under applied pressure. We find that when a pressure is applied to the PET membrane, a compressive strain is generated at the center of the membrane while a tensile strain is exerted on the surrounding area. For convenience of analysis, we pay close attention to the cross-section across the center of the circular membrane, as depicted in the schematic diagram of Fig. . The inhomogeneous deformation of the membrane under pressure induces an inhomogeneous distribution of the inter-particle distance changes in the NP arrays. 
The conductance measured across the electrodes is an integration of the electron transport over all the conductive percolation pathways, which contain various inter-particle distances characterized by a complex function of position and pressure. Therefore, the response of the conductance to pressure is no longer a simple exponential function of pressure. At smaller applied pressures, the compressive strain dominates the main area of the membrane, so that the whole NP array undergoes a conductance enhancement, and an approximately linear dependence between conductance and pressure is observed. With increasing applied pressure, a transition from compressive to tensile strain can be observed as the position changes from the center to the edge of the membrane (see Fig. ). The conductance fluctuation had a standard deviation σ of 0.0024%, indicating that the sensor could resolve pressure changes as small as 1.5 Pa without difficulty. This high stability and repeatability were also demonstrated in a compression test on the NP-coated PET-membrane-based strain-sensing element, wherein the conductance response characteristic did not show any evident changes after repeated compression for at least 500 cycles. The root-mean-square (RMS) noise at different applied pressures was calculated from the fluctuations in ΔG/G0, as shown in Fig. 2f [39]. It can be seen that up to 60 Pa applied differential pressure, the RMS noise is always lower than 0.005% and remains fairly constant with pressure, making the noise-limited pressure resolution of the sensor as small as 0.38 Pa. This relative change in conductance was well above the conductance fluctuation levels, so the reversible decrease and increase in G upon loading could be clearly distinguished from the random electrical noise. This indicates that our sensor has the ability to reliably detect pressure variations as low as 0.5 Pa. We now look at the resolution of the pressure measurement of the sensor. 
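The quoted resolution figure is consistent with the usual assumption that the noise-limited resolution is the RMS relative-conductance noise divided by the sensitivity:

```python
# Consistency check of the numbers quoted above (assumed relationship:
# noise-limited resolution ≈ RMS noise in ΔG/G0 divided by sensitivity S).
sensitivity = 0.13e-3   # S = 0.13 kPa⁻¹, expressed in Pa⁻¹
rms_noise = 0.005e-2    # 0.005% RMS fluctuation in ΔG/G0

resolution_pa = rms_noise / sensitivity
# ≈ 0.38 Pa, matching the noise-limited resolution stated in the text
```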
Generally, the random electrical noise in a piezoresistive sensor, which is dominated by thermal and flicker noise, sets the fundamental lower limit of its piezoresistive transducer resolution [5]. In Fig. , a comparison with pressure sensors based on graphene [43], PtSe2, and GaAs [44] is shown. It is clear that the sensitivity of our devices is among the highest category. More remarkably, our sensors show an excellent resolution which is nearly three orders of magnitude better than that of most of the others [44]. It is known that the ability to detect subtle pressure variations in the regime from 1 Pa to 1 kPa is crucial for many modern applications. The ultrahigh resolution realized in this paper is a significant improvement in current sensing capabilities. A report has shown that modifying the mechanical and geometrical properties of the flexible substrates could change the measuring range of sensors [45]; this is true when the applied pressure is small. We also investigated how the thickness of the PET membrane influenced the effective pressure regimes and sensitivity. The pressure-response curves measured for sensors having three different PET membrane thicknesses are compared in Fig. . The sensitivity is [...] kPa⁻¹ for the 0.1 mm PET membrane, and for the 0.25 mm PET membrane the sensitivity is 0.0042 kPa⁻¹. However, at higher pressures, the sensitivity of the pressure sensor with the thinner PET membrane dropped noticeably due to the expansion of the tensile strain regions, as discussed above. Thinner films tend to be subject to significant visco-elastic creep or even plastic deformation at higher applied pressure; when the deformation of the membrane becomes plastic, significant hysteresis emerges. The tunneling conductance between adjacent NPs depends exponentially on their spacing, G ∝ exp(−βl), where l is the spacing of the adjacent NPs and β is a size- and temperature-dependent electron coupling term [52]. The exponential relationship means that NP arrays respond to a tiny pressure-induced deformation of the actuation membrane with atomic-scale sensitivity. 
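To see why the exponential dependence G ∝ exp(−βl) gives atomic-scale sensitivity, consider an order-of-magnitude estimate; the value of β below is an assumed illustrative figure, not one fitted to these devices:

```python
# Numerical illustration of G ∝ exp(−βl): for tunneling, a picometre-scale
# change in inter-particle spacing already gives a measurable conductance
# change. β here is an assumed, order-of-magnitude value.
import math

beta = 20.0     # assumed electron coupling term, nm⁻¹ (roughly a ~1 eV barrier)
dl = -0.001     # 1 pm reduction in spacing, expressed in nm

relative_change = math.exp(-beta * dl) - 1.0   # ΔG/G0
# ≈ +2% conductance increase for a mere 1 pm decrease in spacing
```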
Furthermore, the electrical potential energy built up between adjacent NPs due to electron charging can sufficiently suppress the random transport of electrons having energy less than kBT (kB is the Boltzmann constant and T the temperature), which contribute to the lower-energy portion of the statistical distribution of electron energy, although no Coulomb blockade was observable at room temperature. As a result, thermal noise may decrease significantly, which enables an increased sensing resolution. The high sensitivity and ultrahigh resolution realized by our sensor can be attributed to the nature of the current transport, which is dominated by electron tunneling or hopping across the inter-electrode gaps. Pd NPs were generated from a home-made magnetron plasma gas aggregation cluster source in an argon stream at a pressure of about 80 Pa and extracted into a high-vacuum deposition chamber with a differential pumping system. Some deposition parameters are displayed in Supplementary Table . The fabrication of the sensing elements is shown in Supplementary Note . The piezoresistive pressure sensor was connected to a pressure controller for applying different pressures (see Supplementary Note ). The commercial software ANSYS 19 was chosen to perform FEA (see Supplementary Note ), from which the strain ε could be calculated (more details are given in Supplementary Note ). The actuation layer was removed from the sensor. Strains were generated from the deformations of the actuation layer, which was subjected to a micrometer step by step (see Supplementary Fig. )."} {"text": "Gs, which so far was only predicted theoretically, is inversely proportional to such interactions. As model systems, we use HeLa and HaCaT tissue cultures with water and with an aqueous DMSO solution. The measurements are done using a Centrifugal Adhesion Balance (CAB) set to effective zero gravity. 
As expected, the addition of DMSO to water reduces Gs. This reduction in Gs is usually higher for HaCaT than for HeLa cells, which agrees with the common usage of DMSO in dermal medicine. We also varied the rigidities of the tissues. The tissue rigidity is not expected to relate to Gs, and indeed our results did not show a correlation between these two physical properties. The pharmaceutical industry uses various solvents to increase drug penetrability into tissues. The choice of solvent affects the efficacy of a drug. In this paper, we provide an unprecedented means of relating a solvent to a tissue quantitatively. We show that solvents induce reorientation of the tissue surface molecules in a way that favors interaction and, therefore, penetrability of a solvent into a tissue. We provide, for the first time, a number for this tendency through a new physical property termed the Interfacial Modulus (Gs). Water is a commonly used medium for transdermal drug delivery because tissue hydration appears to increase transdermal delivery of both hydrophilic and lipophilic permeants [5]. However, it is not a universal medium because it does not increase percutaneous drug absorption in all cases [6]. Thus, in many cases, there is a desire to increase the efficacy of drug delivery [7]. One long-standing approach to increasing drug penetrability into the skin has been to use penetration enhancers that interact with skin constituents to promote drug flux. The most commonly used enhancers are water, azone, pyrrolidones, fatty acids, alcohols, and sulfoxides, of which dimethyl sulfoxide (DMSO) is the most prevalent example [10]. 
Sulfoxides in general, and DMSO in particular, have been proven to be better at transdermal drug delivery than water [13]. Drug delivery via transdermal, intramuscular and subcutaneous injections is often performed instead of oral administration, especially for administering therapeutic peptides or proteins. DMSO is a common drug solvent for in vitro and in vivo applications, primarily due to its enhanced solubility for pharmaceutical reagents, since many drugs are not soluble in hydrophilic solvents [14]. It is also used as an anti-freeze agent (cryo-preservation) [15]. In addition to being an effective solvent for small molecules, DMSO is also capable of dissolving macromolecules such as peptides and proteins [2] and facilitating drug diffusion across cell membranes comprised of lipid bilayers. Chemically, it is a powerful aprotic solvent which hydrogen-bonds with itself rather than with water; it is colorless, odorless and hygroscopic, and is often used in many areas of pharmaceutical sciences as a “universal solvent” [3]. Moreover, it is cost-effective to synthesize, is stable at room-temperature conditions and has a workable cytotoxicity of up to 10% for most biomedical purposes. In this paper we quantify DMSO's tissue penetrative capacity when mixed with water. This penetrative capacity of the solvent influences its wetting behavior on the substrates and affects drop pinning. For instance, the HaCaT substrate represents a cell line from adult human skin, and since skin acts as a barrier to penetrating molecules, chemical permeability enhancers such as DMSO are added to deliver active molecules into or via the skin. The stratum corneum is the main barrier to penetration of exogenous substances through the skin [10]. Yet, evaluating the degree of enhancement of such agents currently lacks a quantitative method. In this paper we show a method for quantifying penetration enhancers, and demonstrate it on DMSO and water. 
We study the interaction of the surface molecules of HeLa and HaCaT tissue cultures with water, and how the addition of DMSO to water can influence this interaction. HaCaT substrates are a cell line from adult human skin and HeLa substrates are a cervical tumor cell line. The rigidities of the HeLa and HaCaT cell substrates are chosen such that they mimic their respective in vivo conditions, while water and DMSO are chosen for their pharmaceutical applications. Both solvents are known not to interfere with various drugs and can efficiently interact with and penetrate tissues along with other active agents [13]. To quantify the way the addition of DMSO to water enhances the tissue penetrative capacity of drugs [13], we use the concept of the interfacial modulus (Gs), as explained below. Though a methodic choice of solvents is important for drug delivery purposes, and there are studies that target the various parameters of the problem, the solvent penetration efficacy is still not quantifiable. For example, it is known that the lipids of the topmost layer of the skin, the stratum corneum, are the main barrier to penetration of exogenous substances. The approach adopted here [19] assumes surface deformation. This approach is particularly sensitive to the normal force that acts on the drop, and therefore requires measurements at effective zero gravity. However, it allows the determination of the interfacial modulus according to an equation in which f∥ is the lateral force required to slide the drop along the surface, γLV is the liquid-vapor surface tension, θ is the contact angle that the drop adopted when it was resting on the surface before the onset of motion, and θR and θA are the drop receding and advancing contact angles, respectively, as shown in Fig. . Gs represents the tendency of the surface to resist interacting with the liquid. It can, therefore, serve as a measure of a solvent's affinity to a certain substrate [17]. The determination of Gs is done through measurements of drop retention forces at effective zero gravity, i.e. at zero normal force. The force considered in Eq. . 
The normal component, γLV sinθ, creates a ridge at the triple line which, although it does not affect the macroscopic contact angle, lowers the rate of liquid wetting the surface, makes solids exhibit contact angle hysteresis, can increase the drop retention force, and produces a time effect [23] at the solid-liquid interface. This approach considers outermost surface layer deformation, which is proportional to the Laplace pressure inside the drop and occurs in the normal direction. To study the time effect, we measure the lateral adhesion of a drop after allowing it to rest undisturbed for a fixed waiting time (tstill). We use the Centrifugal Adhesion Balance (CAB) [26] so that the normal components of the gravitational and centrifugal forces cancel each other, namely a state of zero normal force (i.e. effective zero gravity), while their lateral components are gradually increased. The CAB manipulates the normal and lateral forces according to equations in which f⊥ and f∥ are the normal and lateral forces acting on the drop, respectively, m is the drop's mass, ω is the CAB angular velocity, R is the drop's distance from the CAB's center of rotation, g is the gravitational acceleration, and α is the tilt angle with respect to the horizon. For the experiments done in this study, f⊥ = 0 was maintained and f∥ was gradually increased. The substrates used were HeLa and HaCaT tissue culture cells. After reaching 100% confluency, cell samples were crosslinked and preserved in 4% formaldehyde for subsequent experiments. The liquid solutions used were deionized (DI) water (conductivity ≤ 0.7 × 10⁻⁶ Ω⁻¹ cm⁻¹) and a 10% DMSO-90% water solution, chosen for their biological significance. The cultured human cell lines HaCaT and HeLa were obtained from the American Type Culture Collection (ATCC). 
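The zero-normal-force operating point can be sketched numerically. The force decomposition below is an assumption consistent with the description (gravitational and centrifugal accelerations resolved normal and lateral to a plate tilted by α); the published CAB equations may differ in sign convention, and all numerical values are hypothetical:

```python
# Sketch of the CAB force balance at effective zero gravity (assumed
# decomposition, not the published equations; values are hypothetical).
import math

def forces(m, omega, R, alpha, g=9.81):
    """Normal and lateral forces on the drop for tilt angle alpha (rad)."""
    a_c = omega**2 * R                                    # centrifugal accel.
    f_normal = m * (a_c * math.sin(alpha) - g * math.cos(alpha))
    f_lateral = m * (a_c * math.cos(alpha) + g * math.sin(alpha))
    return f_normal, f_lateral

def zero_gravity_tilt(omega, R, g=9.81):
    """Tilt angle at which the normal components cancel (f_normal = 0)."""
    return math.atan2(g, omega**2 * R)

m, R = 5e-6, 0.1        # hypothetical 5 µL water drop on a 10 cm arm
omega = 10.0            # rad/s
alpha = zero_gravity_tilt(omega, R)
f_n, f_l = forces(m, omega, R, alpha)
# f_n vanishes by construction, while f_l keeps growing as ω is ramped up
```

Under this decomposition, holding f⊥ = 0 while spinning up ties the tilt to the speed (tanα = g/ω²R), and the lateral force then grows as m·√((ω²R)² + g²).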
The cells were seeded on silicone plates mimicking physiological tissue stiffness, with rigidities of 2 kPa, 8 kPa and 64 kPa (manufactured by MuWells Inc.), and maintained in DMEM supplemented with 10% fetal bovine serum at 37 °C and 5% CO2. The cell cultures used are cleaned and dried by a DI water rinse, followed by a 70% (30% DI water) ethanol rinse and, finally, a 90% (10% DI water) ethanol rinse. This step is crucial since biological tissues are naturally wet: both water and DMSO aqueous solutions would wet them completely, and contact angles couldn't be measured. Therefore, to measure Gs, there is a need to have the tissue in a dry form, so that its outer side is more hydrophobic and its more hydrophilic functional groups are buried inside the layer. One way to achieve this is to expose the tissue to air over a long time; a quicker way is the DI water and ethanol rinse explained above. Extra care was taken to prevent contamination of the tissue samples or alteration of their surface properties due to human error. However, it is important to note that, even in a controlled environment, the cell distribution on the substrate cannot be controlled and has a random nature. The Gs values of each substrate-liquid pair are measured from values taken at the moment the drop starts to slide. Figure  depicts the timing of a measurement: first the drop rests on the substrate for a waiting time tstill, during which the drop is motionless and the CAB is still. After that, the CAB's arm starts rotating, the sliding force on the drop increases, and the drop remains motionless while zero effective gravity is maintained. The time that elapses from the moment the CAB's arm started rotating until the moment the drop started to slide is noted as tactive. Thus, the total time that the drop rests on the surface prior to sliding, trest, is trest = tstill + tactive.
Figure  shows the lateral force f∥ required to slide a drop as a function of drop resting time (trest). As can be seen in Fig. , the common feature of the force, f∥, required to slide water and aqueous DMSO on HeLa tissue cultures is its dependence on trest. With the increase in resting time, the retention force increases and reaches, or clearly approaches, a plateau. This dependence of retention force on resting time, or time effect, is observed for the HaCaT-water and HaCaT-10% DMSO systems as well. The interactions of water and DMSO solution with HeLa and HaCaT tissue cultures of varying rigidities were investigated. While it was known that DMSO increases the affinity of the solvent to the tissue, such knowledge was not quantitative as of yet. Here, we quantify this property of DMSO by measuring the interfacial modulus via measurements at effective zero gravity. The CAB study also shows that the drop's lateral retention force increases with the time the drop rests on the surface (drop resting time) and eventually reaches, or clearly approaches, a plateau. This time effect was observed for both HeLa and HaCaT tissue cultures, and is yet another feature of the interfacial modulus, which describes the resistance of a solid (or tissue) surface to interact with a contacting liquid. The higher the interaction, the more significant is the solid surface molecular re-orientation that gives rise to the interfacial modulus. The interfacial modulus is not expected to be a function of the tissue's rigidity, and indeed we found no correlation between these properties. This study demonstrates how to quantify the interactions of any tissue with any solvent or tissue penetration enhancer."} {"text": "Smartphone-based technologies for medical imaging purposes are limited, especially when the measurement of physiological information of the tissues is involved.
Herein, a smartphone-based near-infrared (NIR) imaging device was developed to measure physiological changes in tissues across a wide area and without contact. A custom attachment containing multiple multi-wavelength LED light sources, a source driver, and optical filters and lenses was clipped onto a smartphone that served as the detector during data acquisition. The ability of the device to measure physiological changes was validated via occlusion studies on control subjects. Noise removal techniques using singular value decomposition algorithms effectively removed surface noise and distinctly differentiated the physiological changes in response to occlusion. In the long term, the developed smartphone-based NIR imaging device with capabilities to capture physiological changes will be a great low-cost alternative for clinicians and eventually for patients with chronic ulcers and bed sores, and/or in pre-screening for potential ulcers in diabetic subjects. Chronic wounds, also termed ulcers, are wounds with a full thickness in depth and a slow healing tendency. Chronic wounds are a silent epidemic that affects a large fraction of the world; in developed countries, ~1-2% of the population will experience a chronic wound during their lifetime. The clinical gold-standard assessment of wounds during their periodic treatment employs visual inspection of chronic wounds to assess wound healing status. Visual clinical assessment of the wound occurs by its color, degree of epithelialization, and size reduction across weeks of treatment. It is a non-objective approach with no systematic or digitized tracking of healing status. Oxygen is a vital factor that is required to enhance wound healing.
Determi… Subclinical wound assessment tools include histological detection (to characterize infection) and Doppler ultrasound (to measu…). More recently, various non-invasive optical imaging techniques have been developed to measure oxygenation in and around wounds. These include hyperspectral imaging (HSI), multispectral imaging (MSI), diffuse reflectance spectroscopy (DRS), and near-infrared spectroscopy (NIRS) [5]. HSI a… Translating the above imaging technologies to wound care management in low-resource settings is further challenging due to limited resources/income and the affordability of such expensive imaging approaches. Herein, with a global focus in mind, a low-cost smartphone-based near-infrared optical imaging technology was developed and its feasibility tested to obtain physiological information from the wound site apart from visible clinical changes. Physiological changes manifest prior to a visual reduction in wound size, allowing potential detection of serious complications early on. Near-infrared optical imaging is an emerging non-invasive and non-ionizing technology that can map hemodynamic changes at the site of interest even up to a few centimeters deep. The technology uses near-infrared (NIR) light between 650 and 1000 nm, which is minimally absorbed and preferentially scattered, allowing deep tissue imaging. Multi-wavelength NIR images map the spatial and temporal distribution of the optical properties. NIR optical imaging technology has been used in various applications such as cancer diagnostics, functional brain mapping and, more recently, wound imaging.
In the area of wound imaging, both spectroscopic point-based NIR imaging and area… Recently, researchers have developed smartphone-based apps for 2D and 3D wound image analysis [14], to t… In the area of medical imaging, there are a few smartphone-based imaging technologies that have been developed to cater to various applications [21,22,23]. A near-infrared smartphone-based imaging system, or SPOT device (see ), was deve… An overview of the data acquisition and analysis steps is given in . The device was fastened to a table side for stable imaging and maintained approximately 3" above the imaging surface. The ambient light was lowered during imaging studies in order to minimize background noise. The smartphone's camera was maintained in auto-focus mode, and the high dynamic range (HDR) mode of acquisition was selected during imaging studies. The sampling rate (in video mode) was chosen to be 60 fps and images of 1920 × 1080 (5M) pixels were captured. The custom-attachment device had a manual switch to control the LED driver with a 5 s delay. The delay allowed the user to start the camera and the camera to auto-focus and stabilize. The LED driver controlled the white light LED and the multi-wavelength LEDs, which multiplex at a 2.5 Hz frequency to emit each wavelength independently. In parallel, the camera continuously captured the diffuse reflectance images at 60 fps during a 4 s cycle. The cycle was repeated three times for each case, with each cycle preceded by a white light flash at the same 2.5 Hz frequency used during multiplexing. The white light was also programmed to flash at the end of the three cycles, thus acting as an indicator that data acquisition was complete. The LED driver was programmed to stop automatically after the three cycles, whereas the smartphone camera was stopped manually. In the future, the LED driver and camera controls will be automated and synchronized via an app for ease of imaging.
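The frame bookkeeping implied by these acquisition settings can be checked with a few lines of arithmetic (our calculation, based only on the figures quoted above; the slot accounting is an assumption about how the 2.5 Hz multiplexing maps onto the 60 fps stream).

```python
FPS = 60           # camera frame rate (frames/s)
MUX_HZ = 2.5       # LED multiplexing frequency (Hz)
CYCLE_S = 4        # duration of one acquisition cycle (s)

frames_per_slot = FPS / MUX_HZ      # frames captured while one LED is on
frames_per_cycle = FPS * CYCLE_S    # total frames recorded per 4 s cycle
slots_per_cycle = MUX_HZ * CYCLE_S  # LED on-slots available per cycle

print(frames_per_slot)   # 24.0
print(frames_per_cycle)  # 240
print(slots_per_cycle)   # 10.0
```

So each wavelength contributes about two dozen frames per slot, which is what makes the per-channel averaging described next statistically meaningful.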
The diffuse reflectance signals acquired at a frame rate of 60 fps were uploaded from the camera, and further image analysis was carried out on an external computer (desktop or laptop). The 60 fps video file included the diffuse reflectance signals from all four wavelengths. The diffuse reflectance signals (or intensity) of the red color channel of each frame were averaged and plotted across the number of frames. Singular value decomposition (SVD) is a widely used image processing technique that is applied to medical imaging data to reduce the image dimensionality. It extracts relevant details by reducing the dimensionality of the data via a simple implementation. SVD is so closely related to principal component analysis (PCA) that the techniques can often be used interchangeably. However, SVD is a more general method and a robust approach to understanding changes of basis. SVD was used as an approximation for a matrix X of a given full rank, where X (M × N) is decomposed into orthogonal matrices U (M × M) and V (N × N) and a diagonal matrix S (M × N), as shown in Equation (1). The diagonal matrix represents the significance of each eigenvalue (by an assigned weight), organized with the most significant eigenvalues (EVs) in descending order. Herein, SVD was applied to the diffuse reflectance data at each wavelength, and the orthogonal U and V matrices were used to form a reduced low-rank matrix (represented as matrix A for the Kth EV), given by Equation (2). A low-rank image implies that the majority of the information is stored within a few EVs and can be represented by a small set of these dominant components. Each EV… The resulting reconstructed images of diffuse reflectance signals were cropped to only the field of interest. The data were normalized and coregistered onto the white light image for anatomical representation.
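Equations (1) and (2) correspond to the standard (truncated) singular value decomposition, which can be sketched with NumPy as follows; the function and variable names are ours, and the EV range 3:15 mirrors the ranges used in the occlusion analysis.

```python
import numpy as np

def svd_reconstruct(X, k_lo, k_hi):
    """Low-rank reconstruction from eigen components k_lo..k_hi-1 (0-based),
    i.e. A = sum over the selected k of s_k * u_k * v_k^T (cf. Equation (2))."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)  # Equation (1): X = U S V^T
    return U[:, k_lo:k_hi] @ np.diag(s[k_lo:k_hi]) @ Vt[k_lo:k_hi, :]

# Sanity check on synthetic data: keeping every component recovers X exactly.
rng = np.random.default_rng(0)
X = rng.random((64, 48))
full = svd_reconstruct(X, 0, min(X.shape))
print(np.allclose(full, X))  # True

# Dropping the first few dominant components (e.g. keeping EVs 3:15, as in the
# surface-noise removal used in this study) suppresses the slowly varying
# surface/background contribution while retaining finer structure.
denoised = svd_reconstruct(X, 3, 15)
```

Because the singular values are sorted in descending order, discarding the leading components is what removes the dominant surface signal rather than the subsurface detail.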
Coregistration was achieved by optimizing intensity-based algorithms with an initial step size of 0.02 over 300 iterations (using built-in coregistration functions in MATLAB). A preliminary analysis of the most significant EVs and their role in image reconstruction is described in . Venous occlusion studies are a standard validation technique widely employed to demonstrate the feasibility of physiology-measuring imaging technologies, and have been widely used by various researchers for imaging studies in the past [30,31,32]. In this Institutional Review Board (IRB)-approved study, four healthy control subjects over 18 years of age were recruited and imaged in the lab using the smartphone-based imaging device. Initially, the subject was seated in a relaxed position with the arm on a bench top and a pressure cuff at the bicep for venous occlusion. A fiducial marker was placed on the subject within the field of view (for coregistration purposes). The diffuse reflectance signal was acquired under rest conditions, and after 45 s of occlusion at 160 mm Hg. The arm cuff pressure was released rapidly after acquiring the second image, and the last image was acquired within 2 s of cuff release. A schematic of the study and the time stamps at which diffuse reflectance images were acquired is shown in . A quantitative analysis was performed to determine whether the difference in the diffuse reflectance data across the three time stamps, using various EVs, was significant. Initially, a region of interest (ROI) was selected in similar regions (from the wrist below) in subjects 1, 2, and 4. The ROI remained constant across the time stamps in each of the above subjects. In each ROI, 17 boxes of 30 × 30 pixels were selected, as shown in . Singular value decomposition (SVD) was applied to images acquired at rest across all four subjects.
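The 17-box ROI quantification can be sketched as follows. The grid placement of the boxes, the ROI size, and all names are illustrative assumptions, since the actual box positions are not given in this excerpt.

```python
import numpy as np

def box_means(image, corners, size=30):
    """Mean diffuse-reflectance intensity inside each size x size box,
    where corners holds the (row, col) top-left coordinate of every box."""
    return [float(image[r:r + size, c:c + size].mean()) for r, c in corners]

# Illustrative: 17 boxes laid out on a grid inside a 300 x 300 ROI.
corners = [(30 * (i // 5), 30 * (i % 5)) for i in range(17)]
roi = np.full((300, 300), 0.5)  # stand-in for a reconstructed 690 nm image
means = box_means(roi, corners)
print(len(means))  # 17
```

Holding the same box coordinates across the rest, occlusion, and release images is what makes the per-box means comparable across the three time stamps.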
A sample plot of the intensity at each EV is given in . Reconstructed images of diffuse reflectance at a given wavelength (here only 690 nm data were used during the preliminary assessment of the device) were compared across the three time stamps. Similar results were observed across the four subjects, where the optical images varied across the time stamps when 3:15, 4:15, and/or 5:15 EVs were used, and further removal of EVs diminished that difference. The results of the quantitative analysis are shown in . Upon further removing the EVs, the qualitative pseudo-color plots from rest did not appear to depict physiological signals; hence, quantitative analysis was not carried out across these EV ranges. While subjects 1, 2, and 4's wrists were imaged, subject 3's dorsal hand was imaged during occlusion (to determine whether the differences in diffuse reflectance across the time stamps were significant at a different location on the hand). Comparing the optical images across rest, occlusion, and release in the qualitative pseudo-color plots, it was consistently observed that the diffuse reflected signal was reduced after 45 s of occlusion when compared to rest. Similarly, a significant increase was observed upon immediate release after the 45 s occlusion across all four subjects. Typically, 690 nm predominantly signifies deoxy-hemoglobin concentration changes. Upon occlusion, deoxygenated hemoglobin tends to increase in the occluded region, causing increased absorption (or decreased diffuse reflectance) of the 690 nm NIR light. Upon immediate release, there is possibly a rapid decrease in deoxygenated hemoglobin as oxygen-rich blood flows through the tissue, causing a reduction in its absorption (or an increase in diffuse reflectance). An NIR-based SPOT (smartphone-based oxygenation tool) device was developed to image physiological changes in in-vivo tissues without contact.
The custom attachment that contained all the relevant imaging components was used along with the smartphone's camera, collectively the SPOT device, to acquire multi-wavelength diffuse reflectance signals. The ability of the SPOT device to observe physiological changes was validated via occlusion studies. Upon employing SVD-based noise removal algorithms, subsurface information pertaining to physiological changes was observed distinctly when using EVs 3:15, 4:15, and/or 5:15. The differences in the diffuse reflected signal in response to occlusion were similar across all the imaged subjects. Future work will include extensive quantitative studies to show the percentage change in optical signals and its consistency during repeatability studies within a subject, across subjects, and at different locations of the hand. Additionally, the multi-wavelength NIR images will be used along with the modified Beer-Lambert law to obtain changes in oxy- and deoxy-hemoglobin concentrations, along with oxygen saturation maps (hence the device is termed a smartphone-based oxygenation tool, or SPOT). The SPOT device is currently being modified to synchronize source and detector operations and to automate image acquisition via custom-developed application software. In the current study, data corresponding to each wavelength were manually extracted, causing delays in the overall data processing. Our ongoing studies are attempting to automate the data extraction process using machine learning algorithms, such that the required attributes are extracted and frames are appropriately labeled automatically. Assessing wounds from a subclinical physiological perspective is a novel addition to smartphone-based technologies that augments wound care management, with the potential to predict serious complications early on, or to periodically monitor wound status in chronic cases.
A smartphone-based imaging technology with capabilities to capture physiological changes (as a tissue oxygenation measuring tool) will be a great low-cost alternative for clinicians and eventually for patients with chronic ulcers, bed sores, and/or in pre-screening for potential ulcers in diabetic subjects. Both authors are co-inventors on the patent related to the smartphone-based imaging technology and methodology described in this manuscript. The patent is currently filed by Florida International University."} {"text": "Esophageal diverticula and esophageal fibrovascular polyps are uncommon clinical entities. While an asymptomatic presentation is possible, symptoms, when present, may differ in their gastrointestinal or respiratory characteristics. Additionally, these findings typically occur in different segments of the esophagus, with polyps occurring most frequently in the cervical esophagus and the midesophagus being the predominant location of pathologic diverticula. We report the case of a 55-year-old patient who presented with a two-year history of progressive dysphagia secondary to a large proximal to midesophageal mass. Workup included esophagography, computed tomography, and endoscopy with ultrasound, and was initially consistent with a diagnosis of a large esophageal fibrovascular polyp. Upon operative exploration, the mass was found to be a midesophageal diverticulum associated with a leading lipoma. The patient was successfully treated with transthoracic stapled diverticulectomy. At postoperative follow-up, the patient was tolerating oral intake with no symptoms of dysphagia. Esophageal diverticula are typically found in the midesophagus and are thought to arise from radial traction secondary to mediastinal inflammation.
Esophageal fibrovascular polyps may result in tracheobronchial compression, and esophagography typically identifies a mobile intraluminal mass. Esophageal fibrovascular polyps and diverticula are rare, and a high index of suspicion is important in the evaluation of these entities. Unlike distal esophageal diverticula, which are often associated with gastroesophageal reflux disease, midesophageal diverticula are classically associated with chronic mediastinal inflammation and result from traction forces on the esophageal wall. However, … We report the case of a 55-year-old woman whose workup supported the diagnosis of a large FVP. On exploration, the lesion was found to be a midesophageal diverticulum traveling in the submuscular plane. The lesion was successfully managed with transthoracic diverticulectomy and buttressed closure. This work has been reported in line with the SCARE criteria. A 55-year-old healthy woman was referred to our institution with a two-year history of progressive dysphagia to solids. She rep… Physical exam and laboratory testing were unremarkable. Esophagography demonstrated a filling defect in the upper thoracic esophagus. Computed tomography (CT) demonstrated an 8 cm mass. Endoscopic ultrasound (EUS) demonstrated a pedunculated mass with a submucosal origin beginning 20 cm from the incisors on the right side of the neck. The les… The exploration began via a right cervical approach. The recurrent laryngeal nerve was identified and the cervical esophagus was mobilized. The mass was palpable on the posterior esophageal wall at the thoracic inlet. Upon a short myotomy, no stalk was identified and the mass could not be delivered to the neck. The cervical incision was closed and a right thoracotomy was performed. The mass was seen extending from the level of the azygos vein to the thoracic inlet. The esophageal muscular layer was intact.
Following myotomy, the soft mass, which was densely adhered to the mucosa, was visualized and dissected from the underlying mucosa. It became evident that the mass maintained its attachment to a portion of the mucosa. Complete mobilization revealed the mass to be a lipoma at the tip of a large midesophageal diverticulum traveling in a submucosal plane. Repeat endoscopy demonstrated an ostium in the esophageal wall opening into a blind-ending pouch. The diverticulum was fully mobilized and resected using a stapler. Mucosal… The patient was diagnosed with a large midesophageal diverticulum with a lead point lipoma. The patient's postoperative course was uncomplicated. A postoperative esophagogram demonstrated no esophageal leak or obstruction. Pathology demonstrated a 7.5 cm diverticulum with a 4.5 cm lipoma, without malignancy. At follow-up on the nineteenth postoperative day, the patient was tolerating a diet without dysphagia. Fibrovascular polyps classically present as pedunculated masses arising immediately distal to the cricopharyngeus [7,9]. Res… Midesophageal diverticula are found near the carina and classically have been attributed to radial traction from mediastinal inflammatory processes [2]. When … This case is unique in its presentation in that the workup supported a diagnosis of FVP. The intramural tract of the diverticulum mimicked the pedunculated stalk of a FVP. The diverticular ostium was collapsed and not identified on endoscopy until after full operative mobilization of the diverticulum. Unlike most reported cases of midesophageal diverticula, this one was neither associated with mediastinal inflammation nor with an underlying motility disorder. The presence of a leading lipoma is also uncharacteristic.
These findings highlight the importance of maintaining a high index of suspicion when evaluating a suspected FVP or midesophageal diverticulum, as well as astute use and interpretation of diagnostic imaging modalities in patient evaluation. The authors have no conflicts of interest to disclose. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. MD Anderson Cancer Center Institutional Review Board – This investigation is exempt from ethical approval at our institution. Consent obtained. BS and GW contributed to conceptualization, study design, and manuscript drafting and editing. KM, EC, and RV contributed to data collection, data analysis, and manuscript drafting and editing. NA. Erin M. Corsini. Boris Sepesi. Not commissioned, externally peer-reviewed."} {"text": "Information on temporal variations in stock reproductive potential (SRP) is essential in fisheries management. Despite this relevance, fundamental understanding of egg production variability remains largely unclear due to difficulties in tracking the underlying complex fluctuations in the early oocyte recruitment that determines fecundity. We applied advanced oocyte packing density theory to get in-depth, quantitative insights across oocyte stages and seasons, selecting the commercially valuable European hake (Merluccius merluccius) as a case study. Our work evidenced sophisticated seasonal oocyte recruitment dynamics and patterns, mostly driven by a low-cost predefinition of fecundity as a function of fish body size, likely influenced also by environmental cues. Fecundity seems to be defined at a much earlier stage of oocyte development than previously thought, implying a quasi-determinate – rather than indeterminate – fecundity type in hake. These results imply a major change in the conceptual approach to reproductive strategies in teleosts. These findings not only question the current binary classification of fecundity as either determinate or indeterminate, but also suggest that current practices regarding potential fecundity estimation in fishes should be complemented with studies on primary oocyte dynamics. Accordingly, the methodology and approach adopted in this study may be profitably applied for unravelling some of the complexities associated with oocyte recruitment and thereby SRP variability. 
In fish populations, recruitment results from a myriad of interactions, beginning with factors that determine the level of egg production [3] which, in turn, can be traced back to fundamental processes influencing oogenesis [5]. Related parameters – such as fecundity and length of spawning season – are therefore key in defining stock reproductive potential (SRP) [6] and, consequently, need to be taken into consideration in population dynamics studies [7]. However, properly quantifying the formation of the smallest oocytes (primary oocytes), and thereby better understanding why fecundity varies, is a highly complex issue requiring advanced methodology. An added complication in such studies is that, in order to track the fate of the sex cells, the ovarian samples need to be collected over a sufficiently long time-scale to cover the different parts of the reproductive cycle. Furthermore, the impact of environmental stressors and cues on oogenesis can change markedly throughout the year. In modern natural resource management, maintaining a population's reproductive potential above a certain minimum threshold value is one of the most important practical components of sustainability plans for populations and ecosystem services [8]. The temporal relation between oocyte recruitment from primary to secondary growth and the spawning season defines the fecundity type, which ranges from clearly determinate to indeterminate [9]. In species with determinate fecundity, oocyte recruitment is completed before the onset of the spawning season.
This means that potential annual fecundity can, in principle, be estimated by the standing stock of prespawning, secondary growth oocytes since, at the beginning of this phase of development, those oocytes which will subsequently be released during the spawning season are stored, to be later matured in lots (cohorts). In contrast, indeterminate species are capable of recruiting oocytes to secondary growth throughout the spawning season. Thus, direct estimation of potential annual fecundity is not possible because the total number of oocytes produced per season is not fixed prior to spawning. Instead, annual fecundity is estimated by multiplying typical batch fecundity (the number of eggs spawned in a single spawning event) by the number of batches released [10]. Therefore, the appropriate method for estimating egg production depends on the fecundity type of the species in question. However, despite its importance in the reproductive biology of fisheries, the fecundity type – determinate or indeterminate – remains unknown for many species [11]. In this regard it seems that, rather than simply obtaining "snapshots" of secondary growth oocyte development, dedicated studies of oogenesis are needed [12]. Moreover, the question as to whether the fecundity type is genetically predefined, or modulated by habitat and environmental characteristics as an ecophenotypic response, is mostly unresolved [13]; this calls into doubt the rigid labelling of species as one type or the other, which is traditionally the case in most marine laboratories. In fact, several studies have already linked fecundity type to geographic distribution [14], spawning season and the energy allocation strategy during reproduction [15], suggesting varying degrees of plasticity. The course of oogenesis goes through three main steps: proliferation of oogonia in the lamellar germinal epithelium, followed by development of primary oocytes and then secondary oocytes (which ultimately includes ovulation) [11].
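The contrast between the two estimation schemes can be made explicit in a couple of lines (a sketch with illustrative numbers, not data from this study; the function names are ours).

```python
def annual_fecundity_determinate(prespawning_standing_stock):
    """Determinate type: the prespawning standing stock of secondary-growth
    oocytes is itself the potential annual fecundity."""
    return prespawning_standing_stock

def annual_fecundity_indeterminate(batch_fecundity, n_batches):
    """Indeterminate type: no fixed prespawning stock exists, so annual
    fecundity = typical batch fecundity x number of batches released."""
    return batch_fecundity * n_batches

print(annual_fecundity_determinate(1_800_000))     # 1800000
print(annual_fecundity_indeterminate(300_000, 6))  # 1800000
```

The two approaches only agree when the batch count and typical batch size are both well estimated, which is precisely why misclassifying the fecundity type biases egg-production estimates.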
In response to all of these uncertainties regarding basic and applied aspects of identifying fecundity patterns, theoretical and methodological advances have been made over the last decade or so, leading to far more detailed and accurate fecundity studies, in several cases using less labour-intensive methods [21]. The advanced Oocyte Packing Density (OPD) theory [16], which builds on the digital auto-diametric method [17] with facets of stereology and volume-based theory, provides a far better understanding of oocyte recruitment, as even the smallest oocytes in different stages can be reliably quantified independently of fecundity type [20]. Although the use of OPD theory is certainly gaining momentum, it is rightly pointed out that its formulations require special attention by new experimenters if they are to be executed correctly. Such efforts should, however, be balanced against the fact that existing studies on indeterminate species remain scarce, typically providing a limited view of this highly complex oocyte recruitment process and focusing, for example, mostly on the spawning season, even though teleost reproductive cycles are well known to show circannual periodicity. So, the underlying seasonal oocyte production has, thus far, rarely been adequately addressed quantitatively [18]. Amongst the long list of external environmental variables possibly involved, photoperiod has been described as one of the most common cues triggering gonad development in temperate fish species, while factors such as temperature, food availability and physiological status undoubtedly act as drivers, regulating the rate of oocyte recruitment and development [22].
However, in indeterminate species with a protracted spawning season, it quickly becomes extremely complicated to ascertain and isolate the varied roles of these external cues and drivers, due to the asynchronies in oocyte development patterns [23] and the fact that such lengthy experiments are difficult to run under realistic conditions in the laboratory. All things considered, there is a clear need to improve our understanding of the underlying regulation of oocyte recruitment and fecundity pattern, especially in species with an apparently indeterminate fecundity. This requires not only extensive field sampling covering the full reproductive cycle and access to detailed environmental information, but also advanced laboratory routines to track stage-specific oocyte development in reliable, quantitative terms. In this article, we address the fundamental problems of such research and the associated methodologies, in the course of a complex research effort of this kind on the European hake (Merluccius merluccius) from the Galician shelf. This species has a highly complicated reproductive strategy but, in general terms, it is among the most data-rich species in the world due to its prominent role in the fishing industry as one of the main commercial target species of the French, Spanish and Portuguese fishing fleets in the North Atlantic. A protracted spawning season has been documented, including several spawning peaks or seasons within the year [24]. Generally, the species is considered a batch spawner, with asynchronous oocyte development and continuous indeterminate fecundity [18]. For the southern stock, a high degree of variability in several reproductive traits has been reported [25]; however, certain fundamental aspects of its reproduction are still barely touched upon [26].
It seems important, therefore, to gain a better understanding of the process of recruitment of the oocytes to be developed during the spawning season, as well as of the factors underlying changes in egg production, both of which are linked to stock productivity, as shown e.g. for herring [27,28]. Altogether, the specific aims of the research described in the present article were (i) to quantify the seasonal dynamics of the whole range of oocyte stages in European hake, applying OPD theory, with special emphasis on early oocyte recruitment, which is widely recognized as a "black box" in the teleost literature; (ii) to re-examine the traditional classification of hake as a species with an indeterminate fecundity pattern; and, finally, (iii) to identify potential links between oocyte recruitment and environmental cues, while bearing in mind that multiple cues might be involved. The appearance and formation, in the cytoplasm, of the so-called circumnuclear ring (CNR), which, being rich in organelles and RNA, is homologous with the Balbiani body, served as a central criterion for staging these oocytes. The primary oocytes manifested markedly different patterns as a function of ovarian growth or, more specifically, of the ovarian phase defined by the presence of the most advanced oocyte stage (MAO): their numbers started off with a sharp increase when the MAO approached secondary growth at earlier ovarian phases, but then steadily declined towards spawning. Related patterns were observed for the subsequent secondary growth oocytes, i.e. in the stages defined by the presence of cortical alveoli (CAO), vitellogenesis (EVO and VTO), then migrating nucleus (MNO) and, finally, hydrating oocytes (HYO), which showed a pattern similar to PVO4a and PVO4b oocytes (Figs. ).
The absolute number of oocytes in different stages in the whole ovary, found by multiplying OPDi with the corresponding ovary weight and adjusted for TL, i.e. relative TL-based NOi, was recognized as an effective way forward to pinpoint oocyte production, since it removes individual body size differences: PVO2 increased noticeably in number (peaking at ≈2500 oocytes cm−3), while PVO1 and PVO3 increased only modestly. Meanwhile, the corresponding values for the subsequent stages were low compared to the same values for PVO stages 1 to 3 (≈1300 oocytes cm−3). The general decline in number continued as the PVOs developed and thereby grew in cell size. The seasonal variation of PVO4c and CAO abundance was roughly similar, with generally higher numbers during winter and spring, more so in the case of PVO4c (cf. ≈180 and 130 oocytes cm−3 before and after the summer solstice). Numbers of vitellogenic oocytes were ≈30% lower than for CAO. For both vitellogenic stages, the lowest numbers were found in July (≈25 oocytes cm−3), but there were also three concurrent, but diminishing, peaks throughout the year. This pattern of oocyte abundance appears to be linked with “pulses” of higher oceanic temperatures and subsequently more concentrated spawning activity. Data on postovulatory follicles (POFs) presented three peaks (or possibly four), the first, in January, being the most pronounced, followed by a second smaller peak and a third which was intermediate in size. Finally, levels of atresia (late alpha (Lα) and β stages) showed similar patterns, most noticeably the Lα stage, although the peaks were delayed compared to the VTO peaks; during the third spawning peak, coinciding with the warmest waters and medium or late vitellogenesis (VTO), there was a marked increase (by ≈1750 oocytes cm−3). 
The PVO4b pattern begins to resemble those of the PVO4c to VTO stages. Before addressing this topic, it should be noted that data on PVO stages 1, 2 and 3 were pooled, for simplicity, as fluctuations during each of these stages were very similar, with only PVO2 showing a certain degree of deviation. During the developing and spawning capable ovarian phases, larger females generally had the highest numbers of oocytes. To further address fecundity dynamics in European hake, the total number of egg batches a representative individual (within each 10-cm category) could possibly produce over the three spawning seasons in question (SS1 to 3) was estimated theoretically. In the case of 60–70 cm females, this number was 66, provided all oocytes from PVO4b onwards ended up as eggs. Interestingly, the total amount of PVO1-3, which do not contribute to annual fecundity, also increased significantly with female size, and SS1 showed significantly lower values. Assuming the standing stock from PVO4b onwards reflects the potential fecundity, a 60–70 cm female, for example, would then produce ≈1.8 million oocytes, spawned in 6 egg batches over 0.8 months in SS1; ≈1.5 million oocytes in 8 egg batches over 1.1 months in SS2; and 1.3 million oocytes in 8 egg batches over 1.1 months in SS3. Here we have shown that oocytes are recruited in hake much earlier than previously thought22. This result challenges the classic definitions of determinate and indeterminate tactics31 and hence our understanding of fish reproductive strategies. In our opinion, this finding is not exclusive to hake and very likely applies to many, if not most, of the oviparous and pelagophil species. We found that egg production is mostly dependent on female size, and thereby on female energetics, i.e. their feeding capacity13. However, the spawning dynamics differ not only by female size, but also among seasons. 
The methodology and approach taken in this study appear well suited to throw light on the complexities associated with fish reproductive productivity. Oocyte recruitment is influenced by a complex interaction of temporal events and maternal features which, in the case of marine teleosts, have generally received little attention from researchers, and a number of assumptions have therefore been made regarding their reproductive strategies. State-of-the-art laboratory techniques were used to estimate stage-specific oocyte packing density (OPDi). This enabled us to make a highly precise exploration of the temporal dynamics of oocyte recruitment, accounting also for the formation of the very smallest primary oocytes, three classes of atresia, post-ovulatory follicles (POFs) as well as the level of other tissue components, including blood capillaries. All in all, the present methods, securely founded on earlier works arising from the introduction of the OPD theory nearly a decade ago16, should provide realistic approximations of the seasonal, numerical oocyte production (NO), which is particularly relevant in species with complex reproductive strategies. The results of a previous methodological study on early oocyte (>120 µm) recruitment in European hake18 cannot be directly compared because a much more detailed staging key for PVOs was used in this study, covering the full yearly cycle instead of a part of it. However, in-built validation tests showed a close correspondence between estimated and observed values, the latter having been calculated from whole-mounts29. The results indicate that each of these two vitellogenic cohorts will subsequently develop into single batches, thus supporting the use of (NOEVTO + NOVTO)/2 as a proxy for batch fecundity. A remaining question concerns the appropriateness of the current oocyte staging scheme, originally applied to cod32, a determinate spawner with a group-synchronous oocyte development. 
Here the dynamics of these oocyte stages are evaluated in a teleost displaying an asynchronous oocyte development. Based on our results, sexually immature specimens may be characterized by the presence of PVO3 and PVO4a as the most advanced oocytes (MAO) and small amounts of blood, while ovaries showing oocytes in PVO4b and 4c stages as MAO correspond either to regenerating specimens or specimens at the onset of sexual maturation, i.e. in puberty. This view is supported by previously published results on cod and herring32. Consequently, we argue that the oocyte stage PVO4b, or the final formation of the circumnuclear ring (CNR) represented by PVO4c, could be incorporated in future studies as an early marker of sexual maturation schedules in European hake, and likely in many other teleosts as well. Note, however, that distinguishing regenerating individuals from those in puberty requires a careful histological check of the presence/absence of actual spawning markers (e.g. POFs)33. Oocyte development is a continuous process, which, logically, makes it difficult to always confidently classify stages. In contrast, and although the annual mean abundance was similar to that of PVO4a, the PVO4b dynamics were much more – although not completely – related to spawning activity. Furthermore, the PVO4b abundance appeared highest in the first half of the year (after the winter solstice). A second important drop in oocyte numbers was found from PVO4a-b to PVO4c, while from PVO4c to VTO this reduction was more modest and occurred more steadily. As distinct from the postulated long-term reservoir of PVO1-3, these oocytes were obviously being recruited for subsequent spawning, since the abundance from PVO4c onwards showed three peaks. Hence, the remaining issue is to elucidate the role of the intermediate or transitional primary stages, PVO4a and PVO4b, which probably have a mixed role as a reservoir and “recruitment spot”. 
Importantly, their abundance was low in immatures, in contrast to PVO1-3 (note here our above comment that “immatures” might be split between true “immatures” and those in early puberty), and equally higher in developing and spawning capable females but, in the case of PVO4b, decreased significantly in regressing and regenerating phases, although far from reaching the zero values observed for vitellogenic oocytes. Hence, our suggestion is that PVO4a should be considered a medium-term reservoir for the current spawning season or for the following one(s); they may or may not be recruited, depending on external and internal factors (see next section). In stark contrast, PVO4b seems closely associated with oocyte recruitment within the current period. Having said that, a precise timing of when and how oocyte recruitment to PVO4b ends was not established in this study, although we did see clear peaks in PVO4b in spawning capable females. In any case, the PVO4b stage should be considered as a pool of cells that can, potentially, be spawned during the up-coming season, in line with the expected realized fecundity and thereby the duration of spawning, as discussed later. The fate of the observed surplus of PVO4a-b at the end of the spawning season remains speculative: these cells might be either broken down through apoptosis or atretic processes34, or revitalized as part of next spawning season’s production (see below). The major reduction in oocyte numbers between PVO1-3 and PVO4a probably means that there is an actual “reservoir” of the smallest primary oocytes. In other words, this high number of PVO1-3 implies that not all of them will be recruited to be spawned during the current season. Moreover, their temporal, numerical (NO) trend, being opposite to the one seen for vitellogenic oocytes, reinforces the idea that they are not specifically dedicated to the up-coming spawning season(s) but to future ones. 
We found that this PVO1-3 reservoir is generally built up during the second half of the year, but also that this reservoir is actively utilized in the early transformation from immature to developing ovarian phases. However, the fact that this reservoir is so large complicates any evaluation of direct dynamic links to subsequent oocyte stages. Despite seeing a three-fold decrease in NO values from PVO3 to PVO4a, these two stages followed a similar seasonal pattern independent of spawning activity. The statistical findings of dos Santos Schmidt et al.5 clarify that the level of fecundity in herring seems predefined very early on in oogenesis, apparently in response to much earlier feeding opportunities. In our staging system, the PVO4 stages are specifically classified based on the development of the circumnuclear ring (CNR) in the cytoplasm, but only the PVO4b stage is characterized by a well-formed and evident CNR. For herring, the appearance of the CNR is likely triggered around the winter solstice, considered to be “the first decision window”, while the spring equinox in this species may operate as “the second decision window”28. In fact, photoperiod has been linked to initiation of early oocyte recruitment or development, at least in some high-latitude teleosts where day length, and thereby the strength of this photic signal as an environmental cue, changes markedly throughout the year35. Here we could not establish any robust relationship between day length and oocyte recruitment, but we did observe a concurrence between the increase in PVO4a and b abundance and the increase in daylight hours after the winter solstice. Consequently, their observed lower numbers during the second half of the year might simply be explained by the subsequent transfer to later developmental stages. However, hake is considered an income breeder36, i.e. the acquired energy is immediately used for reproductive investment, meaning there is no need for related, extensive storage37. 
In other words, environmental factors triggering oocyte production (at least vitellogenesis) must necessarily be related to instantaneous energy demands. So, taken together, it seems reasonable to believe that photoperiod may play a potential role in triggering the mobilization of oocytes from “reservoir to recruitment”, while the rate at which oocytes develop may be regulated by other environmental factors, e.g. upwelling events attracting foraging fish (the main hake prey)38, and the temperature regime experienced40. It seems clear, therefore, that the oocytes to be spawned in each spawning season of European hake are “labelled” at some point during the PVO4b stage, but, of course, not all these cells will necessarily make it to the end of the reproductive cycle, due to varying incidences of apoptosis or atresia. In this line, the statistical findings of dos Santos Schmidt et al.4 support this view. However, it is also affirmed that the preceding primary growth (PG) phase is primarily gonadotropin-independent42, and the production of CAO, the “bridge” between primary growth (PG) and secondary growth (SG), might possibly be related to other hormonal pathways as well43. Our results indicate that a spawning-related dynamic is already in place at the PVO4b stage, i.e. well before gonadotropin-dependent regulation. This is a striking finding because it implies that oocyte recruitment takes place much earlier than traditionally thought. Given the similarity of oogenesis among fish species30, this result has broad implications for fish reproduction research. Whether this implies that the present focus in the fish fecundity literature on SG rather than PG processes should be reconsidered is a matter of discussion, but given that oocyte recruitment is principally determined during PG, as evident in European hake, the standard definition of determinate and indeterminate fecundity should be, at the very least, revisited. 
This is because both concepts today relate to oocyte recruitment patterns during SG, or more specifically, during the spawning season as such. Although a series of gaps exist in our knowledge regarding oogenesis in teleosts, it has been firmly established that oocyte development is stimulated by gonadotropins44. The observed three peaks or seasons involved different levels of spawning activity, which seems to happen concurrently with upwelling events; egg release during upwelling likely promotes successful reproduction in these types of ecosystems45. Also, a protracted spawning season is tightly coupled with an income breeding strategy and indeterminate fecundity46; in spawning hake, this is evidenced by intense feeding47 and secondary growth oocyte dynamics48, respectively. However, our findings challenge today’s thinking: assuming the standing stock of oocytes at PVO4b-VTO stages represents the potential fecundity, the estimations undertaken here give a highly plausible realized fecundity, as would be the case in a determinate fecundity species. Moreover, while we would expect such figures to change between spawning seasons in an indeterminate/income breeder species in accordance with food availability, they actually change with fish size, suggesting strong maternal effects44. Note here that production of the smallest oocytes, logically, requires little energy. The southern stock of European hake shows a very protracted spawning season, virtually covering the whole year, but with several peaks, as documented here and earlier49. Intense atretic activity was observed both in this and earlier studies on hake50, but only after the spawning season ceases, with the possible exception of summertime, when atresia coincided with the spawning peak. This picture has been linked to an indeterminate strategy30, or a “mopping-up” process51. 
Altogether, this supports the impression that down-regulation of fecundity in hake occurs as a consequence of the developmental rate of each cohort (defining the batch fecundity and spawning frequency), which, hypothetically, could be mediated by environmental factors, and which results in the presence of remnants of underdeveloped oocytes to be removed by apoptosis or atresia once the spawning is over. Down-regulation of the more advanced oocytes is often seen as a process of matching the available energy52. This also applies to the genus Merluccius, which displays large phenotypic plasticity in traits such as fecundity and spawning length across stocks inhabiting waters located at different latitudes, including off Galicia56. Our results clearly confirm the existence of three spawning seasons, but spawning frequency could not be directly estimated and was therefore set as constant along the year and with female size. Obviously, this constrains our results on spawning duration by season and size class, given by potential/batch fecundity ratios, and is thereby not specifically attuned to the environmental influences that could certainly affect spawning rhythm. Our results clarify that hake egg productivity changes with female size, but not substantially along the year. However, batch fecundity and length of spawning differed significantly by female size and, so far much less documented, also among seasons. These variations are crucial to our understanding of the reproductive strategy, including the realized egg production, of the stock. Temporal and regional variability in reproductive traits related to environmental conditions has been described for several fish species24. If different females take part in different spawning seasons, it would imply the presence of three spawning components within the same stock. The other option would be that each female takes part in each spawning season, i.e. 
they present three individual spawning periods, but this would lead to a considerably elevated energetic effort and the longest reproductive season reported for the genus Merluccius. Some of our results support this latter idea: (i) PVO1-3 increase along the year as if they were to be used later, and (ii) the amount of PVO4b-c in regressing and regenerating (RT) females is too high for them to be considered remnants. Although the levels of (follicular cell) apoptosis or previtellogenic atresia were not assessed in this study, they are likely to be important processes34, at least for PVO1-3, where many of the oocytes “disappeared” on the way to the next stage, PVO4a, when contrasting immature and developing ovaries. The PVO4b-c oocytes could serve as surplus for the next spawning period occurring in a matter of weeks, which is supported by the observed dynamics of these PVO stages between RT and SC (spawning capable) females. More specifically, the RT-PVO4b level compares with the SC-PVO4c level, as does the RT-PVO4c level with the SC-VTO level. However, this way of thinking would be based on the idea that hake, living in a regime such as the Galician shelf ecosystem, should be able to spawn over a total of 7–8 months with an annual production of close to 20 million eggs for a 75-cm female. This possibility should not be entirely excluded, e.g. North Sea cod of the same size produced ≈4 million eggs in one single season, but this population of cod then waits a full year for the next season57. In any case, another clear option is a much shorter spawning duration for each female hake; this results in a more realistic individual fecundity and reproductive effort, but implies, as stated, the existence of different spawning components, i.e. winter-spring, summer and autumn spawners. 
Statutory ichthyoplankton surveys since 1999 show that species inhabiting the area in question are either winter or summer spawners, with the exceptions of the mesopelagic fish Maurolicus muelleri, which has a completely different life strategy, and the present object of study58. Also, the presence of three spawning seasons in southern European hake is reasonably well documented through this monitoring program. The existence of stable spawning components suggests some genetic differentiation, since a kind of reproductive isolation must occur. However, this is not a simple issue to address, as shown in genetic analyses conducted on, for example, spring and autumn spawning stocks of herring on both sides of the North Atlantic59, where genetic differences between components were certainly identified, but also associated with high gene flow. So, the alternative option of a single component, where each female spawns once a year for a short period, but at different times each year (which would result in three observed seasons), should be considered as biologically unrealistic. Nevertheless, the potential existence of several spawning components within the same stock, or even a single component with several spawning seasons, has profound implications for fisheries assessment and management. Overfishing of one of the spawning components may occur if not managed properly. On the other hand, the stock-recruitment relationship is a cornerstone in fisheries assessment, and estimation of recruitment is dependent on the spawning dynamics. Thus, further research must be performed to support sustainable exploitation. From a more speculative viewpoint, the findings in this article raise an important, but as yet unresolved, question about the actual duration of spawning or spawning period(s) of a single female within the same year. This has so far been assumed to be 2–3 months, based on de novo oocyte recruitment during spawning. 
However, the concept behind this thinking becomes blurred by our findings that the level of fecundity is set much earlier, meaning that hake shows a much more determinate, rather than an indeterminate, fecundity type at the “base line”, which suggests that the definition needs revising. Further to this, early oocyte recruitment patterns clearly differed throughout the year. Our work also points to a complex picture of environmental cues involved. Despite the many issues still unresolved, the fecundity pattern outlined here may well occur in a series of other teleosts. Thus, we question the well-established conceptions of primary and secondary growth oocytes being separate categories rather than a continuum, and of determinate versus indeterminate fecundity strategies being limited to the spawning season. Hopefully, these results will stimulate new discussions within the scientific community to address teleost reproductive biology, particularly fecundity estimation, in a way that differs from that done today. This may have direct consequences in terms of fisheries advice and management with regard to estimation of SRP and SSB, and thereby potentially help improve sustainability. We found that, for European hake, oocyte recruitment – with direct consequences for the resulting egg production – occurs much earlier than previously thought, i.e. already during the gonadotropin-independent stage, and that the standing stock of PVO4b-VTO stages reflects reasonably well the potential fecundity of this species. If we adhere to the normal definition of a determinate and an indeterminate pattern, the European hake clearly falls into the latter category, as we found clear evidence of de novo oocyte recruitment during spawning. European hake (Merluccius merluccius) females (N = 2961) were collected monthly from commercial gillnet catches from the Galician shelf in the vicinity of the port of Laxe, Spain, from December 2011 to November 2012; total length (TL) was also registered. 
Assuming that the ovaries of European hake show a homogeneous oocyte distribution62, a cross section of the central part of each ovary was dehydrated, embedded in paraffin, and histological sections of 4 µm were cut and stained with haematoxylin and eosin. The ovarian phase (MAT) was identified microscopically, based on the most advanced oocyte stage (MAO), on whether postovulatory follicles (POF) were present (spawning marker) or not, as well as on the type and amount of atresia, adapted from Hunter and Macewicz9. Previtellogenic oocytes were staged specifically based on the location and shape of the circumnuclear ring (CNR), a cytoplasmic structure rich in organelles and RNA. Secondary growth (SG) stages were, following standard procedures, divided into cortical alveoli (CAO), and early (EVTO) and late vitellogenic oocytes (VTO), using zona radiata (chorion) appearance and the degree of accumulation and distribution of yolk granules as criteria, and, ultimately, migratory nucleus (MNO) and hydrated oocytes (HYO)24. Atresia was categorized into three types; POFs were also registered24. Note: for clarity, we refer to stages for oocyte development, and phases for ovarian development, the latter being based on MAO, as described above. Histological sections were screened in order to classify the stages of oocyte development present. Oocyte size and shape were measured using a slightly modified version of the Elliptical oocytes project (https://sils.fnwi.uva.nl/bcb/objectj/examples/oocytes/), while grid counting was performed using the Weibel Grid Cell project (https://sils.fnwi.uva.nl/bcb/objectj/examples//Weibel/MD/weibel.html). For each ovary, ten micrographs of histological sections (fields) were taken at 10 × magnification, using a digital camera (DFC490 of 8.8 Mpx) mounted on a light microscope (Leica DM500B) with a resolution of 2.33 px/µm. 
To ensure no overlaps, these photos were taken from every second field, starting from the ovarian wall and moving across the section, aided by the motorized Multistep module of the Leica Application Suite. Subsequent image analysis (see below) was carried out using the software ImageJ. The volume fraction of each oocyte stage (Vvi) was estimated according to Delesse’s principle64. The Vvi was computed on grid-overlaid histological sections as the ratio between grid points hitting stagei oocytes (or any of the other mentioned elements) and the total points hitting the sectioned tissue. A pilot study was performed in order to establish a compromise between accuracy and time consumption: (i) to avoid underestimation of Vvi of the earliest oocyte stages (PVOs), a first trial was performed on two females to find the most appropriate grid type, using two different set-ups, i.e. Grid A (240 points and 77.1 µm probe line length) and Grid B (370 points and 65.7 µm probe line length) to estimate Vvi, analyzing 10 fields per specimen in each case; (ii) a second trial was performed to define the number of counting fields per sample required to estimate Vvi reliably, aiming at a deviation from the normalized mean below ±0.05, studying ovaries from four females using the selected grid type and ≤10 fields. These methodological issues have already been touched upon by other authors65, but given that, in this study, we focused on much smaller particles (down to ≈15 µm in oocyte diameter when embedded in paraffin), verification was needed to ensure a good level of precision and accuracy. Regarding trial (i), significant differences in PVO volume fraction appeared between Grid A and B (P < 0.01). Thus, we opted for the most conservative option, i.e. the denser grid (Grid B). 
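The point-count step described above follows Delesse's principle: a component's volume fraction equals the fraction of grid points hitting it on random sections. A minimal Python sketch of that ratio (the stage labels and counts below are invented for illustration, not data from this study):

```python
# Delesse's principle: Vv_i = grid points hitting component i
# divided by total grid points hitting sectioned tissue.
# All counts below are illustrative, not data from the study.

def volume_fractions(grid_hits):
    """Return Vv_i for each component from pooled grid-point counts."""
    total = sum(grid_hits.values())
    if total == 0:
        raise ValueError("no grid points hit tissue")
    return {stage: hits / total for stage, hits in grid_hits.items()}

# Hypothetical counts pooled over the fields retained for one ovary:
hits = {"PVO1-3": 120, "PVO4a": 40, "PVO4b": 30,
        "CAO": 25, "VTO": 60, "other tissue": 95}
vv = volume_fractions(hits)
```

By construction the fractions sum to one over everything the grid counted, which is why non-oocyte elements (e.g. blood capillaries, empty follicles) must be tallied as well.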
Regarding the number of fields in trial (ii), the estimated deviation from the normalized mean for every oocyte stage in all specimens analysed stabilized at ±0.05 when seven fields were counted (disregarding artefacts, missing oocytes and empty space; see Supplementary Table). The individual oocyte shape factor (kij) was calculated as the ratio between the oocyte L and S axes, i.e. kij = Lij/Sij, and the mean stageij-specific oocyte shape factor was thereafter calculated for each female. Only oocytes sectioned through the nucleus were considered for measurements. The number of stagei oocytes to be measured in order to estimate the mean stagei-specific oocyte shape factor with a deviation from the normalized mean below ±0.05 was defined in a trial test on two females, where up to 15 stagei oocytes were measured. The individual oocyte diameter was taken as ODij = (Lij + Sij)/2. The stagei-specific mean volume-based oocyte diameter was then estimated as ODvi = [Σ(ODij)3/ni](1/3)16. Due to oocyte shrinkage during histological processing, a correction factor was applied to turn oocyte diameters into their initial, stabilized formaldehyde-fixed dimensions, as measured under laboratory conditions in whole mounts. We first tested the correction factor developed for resin-embedded ovarian tissue of European hake66. However, oocyte shrinkage varies with the embedding medium, being higher in paraffin than resin65. Since paraffin was used in this study, a second correction factor, from a similar study on Thunnus alalunga using paraffin instead, was also taken into account65. Hence, we applied both correction factors in order to choose the one that gave the closest fit to the whole-mount recordings. 
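Assuming the individual diameter is taken as the mean of the long and short axes, the shape-factor and volume-based mean diameter computations described above reduce to a few lines. A sketch with invented axis measurements:

```python
# k_ij = L/S and ODv_i = [sum(OD_ij^3)/n_i]^(1/3): the cubic mean weights
# large oocytes more heavily than the arithmetic mean diameter does.
# Axis measurements (µm) below are invented for illustration.

def shape_and_volume_diameter(axes):
    """Return (mean shape factor, volume-based mean diameter) for one stage."""
    ks = [long / short for long, short in axes]
    ods = [(long + short) / 2.0 for long, short in axes]  # OD = (L + S)/2
    k_mean = sum(ks) / len(ks)
    odv = (sum(od ** 3 for od in ods) / len(ods)) ** (1.0 / 3.0)
    return k_mean, odv

axes_um = [(62.0, 55.0), (70.0, 61.0), (58.0, 52.0)]  # (L, S) per oocyte
k_mean, odv = shape_and_volume_diameter(axes_um)
```

Because ODv cubes the diameters before averaging, it always sits at or above the plain arithmetic mean diameter, which matters when oocyte numbers are later derived from mean oocyte volume.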
Due to the high degree of shrinkage of hydrated oocytes (HYO) during histological processing, and their resulting highly irregular shape, it was considered unfeasible to obtain reliable measurements of their oocyte size; thus ODi and OPDi were not estimated for this type of oocyte. For each specimen (j), short (S) and long (L) axes were measured on 10 oocytes of every oocyte stage (i) in histological slides. The number of oocytes per gram of ovary, known as the oocyte packing density (OPD), was estimated for the oocytes present in the ovary by applying the refined formula of Korta et al.18, where:
OPDij: stagei-specific oocyte packing density by female (j)
Vvij: volume fraction of stagei oocytes by female (j)
ρo: specific gravity of the ovary
kij: mean shape factor of stagei oocytes by female (j)
cODvij: mean stagei volume-based oocyte diameter by female (j), corrected for shrinkage, cf.65.
The specific gravity of the ovary was obtained from Kurita and Kjesbu16, i.e. being set at 1.061 and 1.072 for ovaries showing PVO/MNO and CA/EVTO/VTO as the most advanced oocyte stage, respectively. The total number of stagei oocytes (NOij) in each ovary (j) was calculated from OPDij and the formalin-fixed gonad weight (GWfj) as NOij = OPDij × GWfj. Shrinkage correction. To further clarify which shrinkage correction factor was the best one, we compared recent whole-mount estimates of batch fecundity (BF) for the same stock29 with our three estimations of the number of medium-late vitellogenic oocytes (VTO), the first being uncorrected for shrinkage (NOi) and the second and third being corrected for shrinkage using the two correction factors described above. The number of VTOs was assumed to reflect BF, as has been previously reported24. We found that mean cNOVTO values were closest to BF values at any total length of the female. Approaches for estimating NOi. As formalin-fixed ovary weight (GWf) scales with body size, the relative number of oocytes, i.e. TL-based NOi, was estimated as NOi/TL3. 
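The refined OPD formula itself is given by Korta et al. and is not reproduced here, but the downstream bookkeeping stated in the text is simple arithmetic: absolute numbers follow from OPD and gonad weight, and TL standardization divides by TL cubed. A sketch with invented values:

```python
# NO_ij = OPD_ij (oocytes per gram of ovary) * formalin-fixed gonad weight GWf_j;
# relative TL-based NO_i = NO_i / TL^3 removes individual body-size differences.
# All numeric values are illustrative, not data from the study.

def absolute_no(opd_per_gram, gonad_weight_g):
    """NO_ij = OPD_ij * GWf_j (total stage-i oocytes in ovary j)."""
    return opd_per_gram * gonad_weight_g

def relative_no_tl(no, total_length_cm):
    """TL-based relative number of oocytes, NO_i / TL^3 (oocytes cm^-3)."""
    return no / total_length_cm ** 3

no_vto = absolute_no(opd_per_gram=3000.0, gonad_weight_g=55.0)  # 165000.0 oocytes
rel_vto = relative_no_tl(no_vto, total_length_cm=65.0)          # oocytes cm^-3
```

Standardizing by TL³ rather than by body weight is what allows females of different sizes to be compared on one seasonal axis, as done in the relative NOi figures.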
To further validate this second approach, and concentrating on those specimens with EW values, the relative number of oocytes was also calculated as EW-based NOi = NOi/EW. The relationship between TL-based and EW-based NOi was extremely tight (n = 2555, P < 0.001), and tests on the GW vs. TL relationships demonstrated unequal slopes among MAT phases (F = 28.44, P < 0.001). Temporal trends. The temporal dynamics of oocyte production was analysed in three ways. First, an overall temporal analysis was conducted by estimating NOi values by month and oocyte stage, including other ovarian elements as well. These elements are known to be indicators of ovarian development. The noted variations were compared with trends in environmental variables, as well as with spawning fraction (SF) (see below). Second, a seasonal analysis was performed by estimating NOi values by ovarian phase (MAT) and oocyte stage within the respective spawning seasons identified. Information on duration and magnitude of spawning peaks was based on monthly SF, estimated as the proportion of the number of actively spawning females to the total number of sexually mature females by month, i.e. an average SF at the population level44. Note here that SF was estimated from the complete database of 2961 female specimens. This analysis resulted in the definition of three spawning seasons with different levels of spawning activity, and the corresponding spawning dynamics. Third, to examine the influence of body size on oocyte recruitment dynamics, the analysis was performed using the absolute numbers of NOi while considering four TL ranges. For each spawning season, the average NOi by oocyte stage and body size range was computed, pooling females in DV, SC and AS reproductive phases (see Table). NOEVTO and NOVTO (i.e. 
oocytes in various parts of vitellogenesis) showed comparable values in a given ovary, irrespective of the spawning season and female size. As we assumed NOVTO as a proxy for batch fecundity, this implied that the joint category of vitellogenic oocytes corresponded, in principle, to two single batches, and a mean batch fecundity was therefore estimated for each spawning season and female size range; BF = (NOEVTO + NOVTO)/2. We used this value to estimate the number of batches potentially existing in each oocyte stage and, from this, the average accumulated number of batches produced by a female in each season and size range. However, given that our sampling covered the whole spawning season, and given the strong spawning asynchrony among females, we assumed that the typical female analysed was in the middle of its spawning period; in consequence, each female had already released half of the batches. Monthly upwelling indexes (www.indicedeafloramiento.ieo.es) were computed using data on sea surface level pressure from the weather research and forecasting (WRF) operational model of Meteogalicia for the region, between December 2011 and November 2012. Day length (number of daylight hours) at 43°30′N 9°30′W was obtained from the Astronomical Applications Department of the U.S. Naval Observatory Data Service (http://aa.usno.navy.mil/index.php). The sea water temperature at the same geographical position at depth 50–350 m (monthly mean) was acquired from the IBIRYS Regional High Resolution Reanalysis67 located at Copernicus Marine Environment Monitoring Services (http://www.copernicus.eu/) (product identifier: IBI_REANALYSIS_PHYS_005_002).
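The batch accounting described above can be sketched as follows. The stage names and counts are illustrative only, and the interpretation of "accumulated batches" (summing stage-wise batch equivalents, then halving under the mid-season assumption) is our reading of the text, not the study's code.

```python
# Sketch of the batch-fecundity bookkeeping described above.
# All example counts are hypothetical, not values from the study.

def mean_batch_fecundity(no_evto, no_vto):
    """BF = (NO_EVTO + NO_VTO) / 2, assuming each stage holds one batch."""
    return (no_evto + no_vto) / 2

def accumulated_batches(no_by_stage, bf):
    """Batch equivalents present across stages, halved because the typical
    female is assumed to be mid-way through its spawning period."""
    total = sum(no / bf for no in no_by_stage.values())
    return total / 2

no_by_stage = {"CA": 90_000, "EVTO": 30_000, "VTO": 26_000}  # hypothetical
bf = mean_batch_fecundity(no_by_stage["EVTO"], no_by_stage["VTO"])
print(bf, accumulated_batches(no_by_stage, bf))
```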
The above three selected abiotic factors were specifically used to explore impacts of environmental cues/drivers on European hake oocyte recruitment. Monthly upwelling indexes were provided by the Spanish Institute of Oceanography. Differences in volume fraction (Vvi) between grid types were analyzed using a paired t-test. The relationship between TL-based and EW-based NOi was tested by a linear regression. ANCOVA was applied to contrast slope and intercept values for total length (TL) regressed on formalin-fixed gonad weight (GWf), split by ovarian phase (MAT). Relationships between potential fecundity and TL among spawning seasons were assessed by General Linear Models. All tests were executed according to standard routines, as indicated in the various R scripts. All statistical analyses were performed using R version 3.2.3 (https://www.r-project.org/), with P < 0.05 considered a significant result. No specific permissions for sampling were required for this study, as all the individuals sampled were obtained from commercial fishing following the local laws. Fish were purchased once the fishers processed them and analysed on board or right after landing. No protected species were sampled. No authorization or ethics board approval was required to conduct the study.Supplementary information"} {"text": "Macrophage activation syndrome (MAS) is a rare, potentially life-threatening condition triggered by infections or flares in rheumatologic and neoplastic diseases. The mainstay of treatment includes high-dose corticosteroids, intravenous immunoglobulins and immunosuppressive drugs although, more recently, a more targeted approach, based on the use of selective cytokine inhibitors, has been reported. We present the case of a two-year-old boy with a 1-month history of high-grade fever associated with limping gait, cervical lymphadenopathy and skin rash. Laboratory tests showed elevation of inflammatory markers and ferritin.
By exclusion criteria, systemic onset Juvenile Idiopathic Arthritis (sJIA) was diagnosed and steroid therapy started. A couple of weeks later, fever relapsed and laboratory tests were consistent with MAS. He was promptly treated with high-dose intravenous methylprednisolone pulses and oral cyclosporin A. One day later, he developed an acute myocarditis and a systemic capillary leak syndrome needing intensive care. Intravenous immunoglobulin and the subcutaneous IL-1 receptor antagonist Anakinra were added. On day 4, after an episode of cardiac arrest, veno-arterial extracorporeal membrane oxygenation (VA-ECMO) was started. Considering the severe refractory clinical picture, we tried high-dose intravenous Anakinra (HDIV-ANA). This treatment brought immediate benefit: serial echocardiography showed progressive resolution of myocarditis, VA-ECMO was gradually decreased and definitively weaned off in 6 days, and MAS laboratory markers improved. Our case underscores the importance of an early aggressive treatment in refractory life-threatening sJIA-related MAS and adds evidence on the safety and efficacy of HDIV-ANA, particularly in acute myocarditis needing VA-ECMO support. Macrophage activation syndrome (MAS) is a rare, potentially life-threatening complication of some rheumatologic diseases, such as systemic onset Juvenile Idiopathic Arthritis (sJIA), Kawasaki Disease (KD) and Systemic Lupus Erythematosus (SLE), or a condition triggered by viral and/or bacterial infections in predisposed individuals, or associated with neoplastic disease.
We describe the case of a child with sJIA complicated by severe MAS, Systemic Capillary Leak Syndrome (SCLS) and acute myocarditis, leading to distributive and cardiogenic shock and cardiac arrest needing cardiopulmonary resuscitation and veno-arterial Extracorporeal Membrane Oxygenation (VA-ECMO), who was successfully treated with high dose intravenous ANA (HDIV-ANA). A previously healthy two-year-old boy presented with a 1-month history of fever associated with limping gait, cervical lymphadenopathy and evanescent skin rash. He was evaluated at a regional hospital, where laboratory tests showed: WBC 25,990/mm3 (N 18,740/mm3); CRP 65 mg/L; ESR 68 mm/h; ferritin 1,259 ug/L; triglycerides 1.5 mmol/L; AST 61 U/L; ALT 45 U/L. Echocardiography was normal. Bone marrow aspiration was negative for blasts. A short course of oral prednisone (1 mg/kg/day) was started with benefit on fever. However, upon steroid tapering, fever and limping reappeared. MRI showed synovial membrane hypertrophy and effusion in both hips. Therefore, according to the ILAR criteria, sJIA was diagnosed. On admission, physical examination revealed unremitting high-grade fever, erythematous skin rash on face and limbs and mild hepato-splenomegaly. No arthritis was detected. Laboratory tests showed: Hb 8.5 g/dl; PLT 44,000/mm3; FDP 1,522 ug/L; fibrinogen 1.6 g/L; CRP 100 mg/L; AST 57 U/L; ALT 52 U/L; ferritin 2,200 ug/L; triglycerides 2.86 mmol/L. Suspecting an incipient MAS, high-dose IV MDPN and oral Cyclosporin A were started. MAS is a life-threatening condition, most commonly reported as a complication of sJIA and triggered by infections in up to one-third of the patients. In our case, the intravenous ANA administration was a forced choice, partly due to the prominent generalized edema related to the SCLS and partly to the severe risk of bleeding, related both to MAS and ECMO. We now recognize that this choice has been successful.
Along with the unexpected favorable result, no major adverse events were noticed, except for a transient neutropenia, as already reported. Based on our experience, HDIV-ANA is a safe and effective treatment for refractory life-threatening sJIA-related MAS, even when complicated by acute fulminant myocarditis. Based on this positive experience, this therapeutic approach may also be considered in the current COVID-19 pandemic emergency, where myocardial injury has been recently reported and where recent evidence showed an interleukin-1β-driven MAS-like complication, triggered by the SARS-CoV-2 virus, as a predictor of bad outcome. The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s. AM, GM, and FZ decided the personalized treatment, reviewed the literature about Anakinra use in MAS, and received final approval by the hospital pharmacy and general management department for its use. AM collected all data, conceptualized, and wrote the manuscript. FZ supervised the study team and critically reviewed the manuscript for important intellectual aspects of the work. All authors approved the final manuscript as submitted and agreed to be accountable for all aspects of the work. All authors took care of the patient during hospitalization, and reviewed and approved the final version of the manuscript. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} {"text": "Diffuse astrocytomas are the most aggressive and lethal glial tumors of the central nervous system (CNS). Their high cellular heterogeneity and the presence of specific barriers, i.e., the blood–brain barrier (BBB) and the tumor barrier, make these cancers poorly responsive to all kinds of currently available therapies.
Standard therapeutic approaches developed to prevent astrocytoma progression, such as chemotherapy and radiotherapy, do not improve the average survival of patients. However, the recent identification of key genetic alterations and molecular signatures specific for astrocytomas has allowed the advent of novel targeted therapies, potentially more efficient and characterized by fewer side effects. Among others, peptides have emerged as promising therapeutic agents, due to their numerous advantages when compared to standard chemotherapeutics. They can be employed as (i) pharmacologically active agents, which promote the reduction of tumor growth; or (ii) carriers, either to facilitate the translocation of drugs through brain, tumor, and cellular barriers, or to target tumor-specific receptors. Since several pathways are normally altered in malignant gliomas, better outcomes may result from combining multi-target strategies rather than targeting a single effector. In the last years, several preclinical studies with different types of peptides moved in this direction, providing promising results in murine models of disease and opening new perspectives for peptide applications in the treatment of high-grade brain tumors. Astrocytomas are the most common primary glial tumors of the central nervous system (CNS). In 2016, the World Health Organization (WHO) revised the 2007 CNS tumors classification, discerning astrocytomas into circumscribed (WHO grade I) and diffuse (WHO grades II–IV) subtypes based on their histology and molecular parameters. Based on isocitrate dehydrogenase (IDH1 and IDH2) gene mutations, this last group of tumors can be subclassified into IDH-mutant and IDH-wildtype categories, of which the first type is associated with better prognosis. The first subtype includes benign astrocytic tumors, such as pilocytic astrocytomas, which are usually treatable with complete surgical resection.
Conversely, the second group includes those gliomas that are more difficult to treat because of their heterogeneity and invasive growth. These include diffuse astrocytoma (grade II), anaplastic astrocytoma (AA; grade III), and glioblastoma (GBM; grade IV). AA and GBM are two high-grade tumors. They are characterized by poor prognosis, with a median survival of about 2–3 years and 12–15 months, respectively. Furthermore, they present with neurodegeneration, invasiveness, cytological pleomorphism, and increased mitotic activity. GBM also exhibits microvascular proliferation, necrosis, or both. The current treatment for AA and GBM consists of maximal surgical tumor resection, followed by chemotherapy with Temozolomide (TMZ) and focal radiotherapy. Furthermore, inpatient exercise rehabilitation programs after tumor resection were reported to play an important role in significantly improving neurocognitive and motor functions, which contributes to enhancing the patients' quality of life. In view of all these considerations, the development of novel therapeutic approaches, capable of overcoming both the BBB and the tumor barrier, and able to inhibit tumor growth by selectively targeting glioma-initiating cells (GICs), emerges as an urgent need. In the last few years, several studies have been performed to realize a genetic and lineage classification of GICs in order to design targeted and personalized therapies. In this regard, the use of peptides for the treatment of a variety of diseases, including brain tumors, has been rapidly expanding. Some of these have already moved into Phase I/II clinical trials for the treatment of high-grade gliomas, showing promising results not only in terms of safety and tolerability, but also for their ability to reduce the tumor mass. In this review, we provide an outline of the properties of different types of peptidic agents.
Furthermore, we explore potential molecular targets for the treatment of high-grade astrocytomas using anti-tumor peptide therapeutics, as well as peptide carriers able either to deliver anti-cancer molecules through cell and tissue barriers or to target tumor-specific receptors. Peptides are a novel class of compounds that can be used for the treatment of a wide range of pathological conditions, such as infections, diabetes, cardiovascular diseases, neurodegenerative disorders, and cancer. They are low molecular weight molecules, usually 10–50 amino acids long. The advantages of peptides are several-fold and include ease of synthesis, high specificity and activity, and low production cost. Furthermore, in the case of cancer, peptides are less immunogenic than the recombinant monoclonal antibodies already used to target tumor cells with anti-cancer drugs. In vivo, peptides have poor stability due to their susceptibility to degradation by serum proteases. This short half-life prevents the development of drug resistance and increases their safety, though it may also reduce their efficacy. This constraint can be prevented by chemical modifications of the peptide sequence, for instance by integration of D-amino acids, cyclization, use of unnatural amino acids that are uncleavable by endogenous proteases, and blocking the access to the N- and C-terminal fragments. In the case of brain tumor treatments, peptides can be employed in several ways, depending on their specific properties. According to their mechanism of action, they can be classified into two major groups. The first class includes peptides that are characterized by a direct mode of action. Some of these molecules have an intrinsic anti-cancer activity, so that they can be used as therapeutics, promoting the reduction of tumor growth by a variety of mechanisms.
For instance, anti-microbial peptides (AMPs) are short cationic and hydrophobic molecules that exert a cytotoxic action by targeting the negative charges of cancer cell membranes and inducing cell death through apoptosis or necrosis. In the second group are peptide vaccines, which are characterized by an indirect mode of action. Cancer vaccines can be considered as active immunotherapies because they are based on the administration of tumor-associated antigens, which are specifically expressed in cancer tissues. The aim is to stimulate the immune system to react against the tumor. Advancements in the comprehension of molecular, cellular and genetic anomalies of various tumors have enabled the development of the “Atlas of Genetics and Cytogenetics in Oncology and Haematology”, a database listing several genes and proteins that are relevant for different types of cancer, including diffuse glioma. Some of these have been investigated as potential targets for small molecules or peptide vaccines, while others were specifically addressed with peptidic agents. Because most of the pathways that are altered in astrocytomas are primarily located within GICs, the main goal for the future is to develop strategies that specifically target this cellular subpopulation. Although several treatments conceived on this principle are still in the preclinical phase, there are solid promises for clinically relevant success. Here below, we report some detailed examples of glioma targets that can be specifically addressed by peptide-based drugs. In high-grade astrocytomas, such as GBM, an increased expression of the C-X-C chemokine receptor type 4 (CXCR4) has been reported. This receptor can transduce growth signals in response to its protein ligand CXCL12. Clinically, CXCR4 expression levels in GBM correlate with increased tumor grade, aggressiveness and, consequentially, with a poor prognosis.
These observations indicate that the CXCR4-CXCL12 pathway mediates survival and self-renewal in GICs with high selectivity, emerging as an attractive target for glioma-directed therapies. The CXCR4 antagonist most widely tested in clinical trials to reduce the growth of GBM is the small molecule Plerixafor. Epidermal growth factor (EGF) receptor (EGFR) is a transmembrane receptor tyrosine kinase involved in a wide variety of cellular processes and cancers, including GBM. EGFR is activated by a set of ligands, including EGF and transforming growth factor-alpha (TGF-α), which trigger its dimerization. The latter stimulates the autophosphorylation of the EGFR intracellular tyrosine kinase domain, leading to activation of numerous downstream signaling pathways, such as the phosphatidylinositol-3 kinase (PI3K)/AKT/rapamycin-sensitive mTOR-complex (mTOR) cascade. This signal transduction pathway is involved in various cellular functions, including cell cycle progression, differentiation, migration, and survival. It has been described that approximately half of the patients affected by GBM overexpress EGFR, and 20–30% of them express the mutant truncated variant EGFRvIII. The latter is constitutively active and promotes persistent intracellular signaling, with consequent tumor growth, survival, invasion, and angiogenesis. Because the large size and negative charges of siRNA hinder its cellular translocation, siRNAs have been delivered using a carrier derived from the trans-activator of transcription (TAT) protein from HIV-1. This system, named TAT-DRBD (TAT fused to a double-stranded RNA-binding domain), was used to deliver EGFR and AKT siRNAs into an intracranial GBM mouse model to induce synthetic lethal RNA interference responses.
These findings reveal that a combinatorial therapy, simultaneously targeting neo-angiogenesis and glioma cells, might represent an efficacious approach, since not only would it hinder oxygen and nutrient supply, but it would also kill tumor cells. Mounting evidence attributes GIC proliferation and resistance to deregulated pathways, such as Hedgehog (HH) signaling, which may be an effective target to improve GBM therapies. This signaling begins with the binding of HH ligands, more frequently sonic hedgehog (SHH), to the transmembrane receptor Patched, which initiates an intracellular signaling cascade that results in the activation of the family of Gli transcription factors. The upregulation of the HH/Gli1 pathway is associated with worse prognosis in GBM patients, because it is implicated in the regulation of cellular proliferation, survival, invasion, and angiogenesis. One study employed paclitaxel (PTX)-loaded nanoparticles (CK-NP-PTX) coated with a previously tested tumor-targeting peptide (TTP), named CK peptide. This was composed of a VEGFR-2-targeting peptide (K237, isolated from phage display libraries) bound, via a GYG linker, to a SHH-targeting peptide. This drug delivery system was designed with the aim of increasing the therapeutic efficacy of glioma treatments, by simultaneously targeting the chemotherapeutic PTX to VM channels, tumor neovasculature, and GICs. To test the system, an intracranial glioma mouse model was injected intravenously with CK-NP-PTX. Researchers observed a selective accumulation of the compound around the vasculature as well as in the tumor parenchyma. This distribution determined a strong destruction of VM channels, a significant apoptosis of glioma cells and an increase in median survival time in treated mice when compared to controls. The MAPK kinase (MEK)/extracellular signal-regulated kinase (ERK) signaling pathway has been identified as a commonly dysregulated pathway in several cancers, including GBM.
This cascade starts with the binding of a ligand to a transmembrane receptor tyrosine kinase and culminates in the MEK-mediated phosphorylation of the final MAPK, ERK. The latter translocates to the nucleus, where it activates numerous transcription factors involved in the regulation of a large variety of processes, including cell proliferation, differentiation, migration, and apoptosis. Integrins are a big family of cell adhesion transmembrane receptors composed of two associated α and β subunits, which directly bind several components of the ECM, providing the adhesion required by tumor cells for their motility and invasion. Although integrins do not act as oncogenes, they cooperate with them or with receptor tyrosine kinases to increase tumorigenesis. Some integrins, such as αvβ3, are overexpressed in GBM, both on the surface of tumor cells as well as on angiogenic vessels. They contribute to angiogenesis and correlate with worse prognosis. Cilengitide, an antagonist of αvβ3 and αvβ5 integrins, prevents their interaction with ECM ligands. However, the promising results obtained in pre-clinical studies and in earlier clinical trials were disconfirmed by the CENTRIC phase III trial, which produced no survival benefit in treated patients when compared to controls. Li and colleagues co-delivered the MEK1/2 inhibitor PD0325901 with the αvβ3 integrin antagonist RGD peptide. The co-delivery of PD0325901 and the RGD peptide allowed synergetic effects, with inhibition of the ERK pathway, which is overactivated in GBM cells, disruption of angiogenic signals in GBM tissue, and inhibition of cell migration. Moreover, the RGD peptide permitted delivery of the MEK1/2 inhibitor to tumor cells and to the vasculature by integrin-targeted delivery, preventing off-target effects on healthy tissues. MDGI (mammary-derived growth inhibitor) is a fatty acid-binding protein whose role in tumorigenesis is rather controversial and seems to vary in a cancer type-dependent manner.
It can be linked to tumor-suppressor properties, e.g., in breast cancer, but also to tumor-promoting functions in other cancer types. To explore MDGI as a target, the group of Pirjo Laakkonen identified a novel synthetic tumor homing peptide, named CooP (ACGLSGLGVA), which specifically targets invasive tumor cells and the vasculature by binding to MDGI. They subsequently investigated the potential of CooP-targeted therapy to treat high-grade brain tumors. Thus, they developed a peptide-drug conjugate, named CooP-CPP-Cbl, in which the CooP peptide was covalently conjugated with the chemotherapeutic agent chlorambucil (Cbl) and a CPP derived from the N-terminal part of the tumor suppressor protein p14ARF (MVRRFLVTLRIRRACGPPRVRV-NH2), to permit cellular internalization. As mentioned in the previous paragraphs, the most promising strategy to treat high-grade astrocytomas seems to be the simultaneous targeting of neovasculature/VM channels as well as glioma cells or GICs. Yet, an additional aspect that should be considered attentively is the poor penetration of most drugs into the tumor mass, due to the presence of a pathologic barrier constituted by the ECM. This event is one of the causes of GBM recurrence. Among ECM components, tenascin-C (TN-C) is a hexameric glycoprotein mainly expressed by neural and endothelial cells during embryogenesis. It is downregulated in adult healthy brains, but it becomes overexpressed in about 90% of GBMs. Tumor cells are the main source of TN-C release, and the intensity of its expression correlates with glioma grade and outcomes. TN-C is able to bind other ECM proteins as well as integrin receptors, thus influencing a number of cellular processes, such as cell migration, angiogenesis, and proliferation. Neuropilin-1 (NRP-1) is a multifunctional non-tyrosine kinase co-receptor expressed in many tissues, which binds a number of factors, including VEGF-A, Hedgehogs, TGFβ and EGF.
It was shown to be highly expressed in GBM cells and neo-vasculature, where it regulates glioma growth, progression, and recurrence. The expression of NRP-1 correlates with glioma grade and poor patient prognosis. In a recent study, Kang and collaborators developed a synergistic nanosystem constituted by PTX-containing nanoparticles coated with the synthetic Ft peptide (Ft-NP-PTX), which combines two sequences, coupled via a cysteine: a TN-C-binding peptide and tLyp-1 (CGNKRTR), to simultaneously target TN-C and NRP-1, respectively. This system was designed to specifically circumvent the ECM barrier by targeting the glioma-related matrix component TN-C, and to concurrently achieve deep penetration into the glioma parenchyma, mediated by the over-expression of NRP-1 in glioma cells and vasculature. Over the last years, considerable progress has been made with regard to the development of pharmacological treatments for high-grade astrocytomas. Nonetheless, the prognosis of these types of cancer has not significantly improved, and they remain the most aggressive and lethal tumors of the CNS. Even after total or subtotal surgical eradication of the tumor, followed by radiotherapy and concomitant adjuvant chemotherapy, the median survival does not exceed 3 years. There are many reasons for such a failure, including the critical brain localization and the lack of defined margins, which might hinder the total resection of the tumor mass. This event may contribute to its recurrence. Such disappointing results can also be explained by the structural complexity of malignant gliomas. In fact, it is well known that they are characterized by a great cellular heterogeneity and, within the tumor tissue, different sub-populations of poorly differentiated cells co-exist. Among these, GICs are the cells mainly responsible for tumor invasiveness and recurrence.
These tumors are also supported by a complex network of blood vessels, including standard vasculature, which is constituted by endothelial cells, and VM channels, formed by glioma cells that are able to act as both endothelial and tumor cells. Finally, the complexity of high-grade gliomas can also be ascribed to the presence of an anomalous ECM barrier that surrounds the tumor. The latter, together with the BBB, hampers the access of drug molecules to the tumor parenchyma. All these aspects make these types of cancer resistant to any kind of therapy. Furthermore, most chemotherapeutic agents are not selective for cancer cells, but also damage healthy tissues, thus leading to diffuse adverse effects. In recent years, a significant number of key signaling molecules have been identified in many pathways specifically altered in malignant gliomas, allowing the advent of more effective targeted and personalized therapies. In this framework, peptides have emerged as a novel class of etiology-based anti-cancer therapeutics that can be used as (i) pharmacologically active agents or (ii) carriers, either to facilitate the translocation of chemotherapeutics through brain, tumor, and cellular barriers, or to target tumor-specific molecular markers. This approach should make the therapy more specific, i.e., more effective and characterized by fewer side effects. Another aspect to consider carefully is that single-target approaches have not resulted in improved prognosis for patients affected by malignant gliomas. Better outcomes may result from combining multi-target strategies. For instance, peptide-based therapies can be designed to simultaneously target multiple effectors, either on the same pathway or involved in different mechanisms and tumor compartments.
To achieve this goal, it is necessary to target diverse markers on (i) ECM components, to enable the achievement of effective drug concentrations in the tumor mass; (ii) standard vasculature and VM channels, to remove the nutrient and oxygen supply to the tumor; and (iii) glioma cells/GICs, to kill those cells that become resistant to the hypoxic microenvironment and are responsible for metastasis formation. The successful elaboration of these approaches may enable, in the future, the development of effective personalized molecular therapies for high-grade astrocytomas."} {"text": "Wolbachia strains are one of three endosymbionts associated with the insect vector of “Candidatus Liberibacter asiaticus,” Diaphorina citri Kuwayama (Hemiptera: Liviidae). We report three near-complete genome sequences of samples of Wolbachia from D. citri (wDi), with sizes of 1,518,595, 1,542,468, and 1,538,523 bp. Wolbachia is present in many insect species and can manipulate host reproduction via cytoplasmic incompatibility, male killing, and induction of parthenogenesis or feminization. Diaphorina citri Kuwayama (Hemiptera: Liviidae), the vector of the pathogen “Candidatus Liberibacter asiaticus,” associated with citrus greening disease, harbors three intracellular symbionts, including a strain of Wolbachia, wDi. Currently, understanding of the functional relationship of Wolbachia sp. strain wDi with D. citri and “Ca. Liberibacter asiaticus” is limited due to the unavailability of a genome assembly with no gaps. Here, we report three wDi genome sequences obtained utilizing both long- and short-read sequencing methods.
Wolbachia isolates were recovered from D. citri from an established laboratory culture collected in Polk County. Individual psyllids were placed on sterile diet rings for 2 days prior to Wolbachia extraction. The psyllids were surface sterilized and immersed in 1.0 ml of Schneider’s Drosophila (S2) medium. Next, individual psyllids were homogenized and centrifuged at 100 × g for 5 min. The supernatants were collected and centrifuged at 400 × g for 5 min. The pellets were resuspended in 1.0 ml of S2 medium. The samples were centrifuged at 100 × g for 5 min; then, the supernatants were placed in new tubes and centrifuged at 4,000 × g for 5 min. The pellets were resuspended in 1.0 ml of S2 medium. After isolation, an individual wDi sample was inoculated into Drosophila S2 cells and maintained in S2 medium containing 10% heat-inactivated fetal bovine serum, 50 units of penicillin, and 50 μg streptomycin sulfate per ml (S2 complete medium), using the method described by Dobson et al. To detect the genetic variation of the bacteria in the host, a protocol adopted from Rasgon et al. was used. The circular consensus sequences (CCS) generated from the PacBio raw reads were subjected to quality assessment and adaptor trimming using seqtk (https://github.com/lh3/seqtk) and Filtlong (https://github.com/rrwick/Filtlong). De novo assembly was performed separately using Canu v1.9. Genome completeness assessment with BUSCO v4, using the bacteria_odb10 database, yielded 84.7% for wDi, compared with 83.1% for wMel (from Drosophila melanogaster) and 86.3% for Wolbachia sp. strain wPip (from Culex pipiens). In addition, prophage prediction was done with PHASTER (https://phaster.ca/) (Table 1).
The three genome sequences of wDi have been deposited in GenBank under the accession numbers CP051266, CP051265, and CP051264, corresponding to BioSample accession numbers SAMN14560310, SAMN14560311, and SAMN14560312, respectively, under the BioProject accession number PRJNA603775."} {"text": "Escherichia coli is mostly a commensal of birds and mammals, including humans, where it can act as an opportunistic pathogen. It is also found in water and sediments. We investigated the phylogeny, genetic diversification, and habitat-association of 1,294 isolates representative of the phylogenetic diversity of more than 5,000 isolates from the Australian continent. Since many previous studies focused on clinical isolates, we investigated mostly other isolates, originating from humans, poultry, wild animals and water. These strains represent the species' genetic diversity and reveal widespread associations between phylogroups and isolation sources. The analysis of strains from the same sequence types revealed very rapid change of gene repertoires in the very early stages of divergence, driven by the acquisition of many different types of mobile genetic elements (MGEs). These elements also lead to rapid variations in genome size, even if few of their genes rise to high frequency in the species. Variations in genome size are associated with phylogroup and isolation sources, but the latter determine the number of MGEs, a marker of recent transfer, suggesting that gene flow reinforces the association of certain genetic backgrounds with specific habitats. After a while, the divergence of gene repertoires becomes linear with phylogenetic distance, presumably reflecting the continuous turnover of mobile elements and the occasional acquisition of adaptive genes. Surprisingly, the phylogroups with the smallest genomes have the highest rates of gene repertoire diversification and fewer but more diverse mobile genetic elements.
This suggests that smaller genomes are associated with higher, not lower, turnover of genetic information. Many of these genomes are from freshwater isolates and have peculiar traits, including a specific capsule, suggesting adaptation to this environment. Altogether, these data contribute to explaining why epidemiological clones tend to emerge from specific phylogenetic groups in the presence of pervasive horizontal gene transfer across the species.

Previous large-scale studies on the evolution of E. coli focused on clinical isolates, emphasizing virulence and antibiotic resistance in medically important lineages. Yet, most E. coli strains are either human commensals or not associated with humans at all. Here, we analyzed a large collection of non-clinical isolates of the species to assess the mechanisms of gene repertoire diversification in the light of isolation sources and phylogeny. We show that gene repertoires evolve so rapidly, by the high turnover of mobile genetic elements, that epidemiologically indistinguishable strains can be phenotypically extremely heterogeneous, illustrating the velocity of bacterial adaptation and the importance of accounting for the information on the whole genome at the epidemiological scale. Phylogeny and habitat shape the genetic diversification of E. coli to similar extents. Surprisingly, freshwater strains seem specifically adapted to this environment, breaking the paradigm that E. coli environmental isolates are systematically fecal contaminations. As a consequence, the evolution of this species is also shaped by environmental habitats, and it may diversify by acquiring genes and mobile elements from environmental bacteria (and not just from gut bacteria). This may facilitate the acquisition of virulence factors and antibiotic resistance in the strains that become pathogenic.

Escherichia coli is a commensal of the gut microbiota of mammals and birds (primary habitat).
papX, the P fimbriae, yersiniabactin, colibactin and multiple type 5a protein secretion systems (P < 10^-3), while being rare in strains isolated from freshwater and wild birds' feces, as previously shown.

A. Number of gene families according to their occurrence in genomes: singletons (in green), accessory (in grey) and persistent (in gold) families. B. Average and minimal sequence identity observed in each of the 2,486 persistent gene families. The observed average sequence identity is 98.3% across families of persistent genes. The average minimal value observed across persistent gene families is 95.5%.

S3 Fig
A. Graphical representation of the different steps of the phylogenetic tree-building process from the persistent genome. Among persistent gene families, there are families that are core, and the remaining families have missing genes. B. Number of persistent gene families according to their number of missing genes in the Australian dataset. Only 12% of families are core, i.e., present in all genomes (in red). C. Violin plot of the number of missing genes per genome in the Australian dataset. On average, the number of missing genes is around 8 per genome. It can reach up to 93 in a single genome, but this represents less than 4% of persistent families.

S4 Fig
A. Distance tree of 1,294 Australian E. coli and 86 outgroup genomes built from the matrix of Mash distances computed between all pairs of genomes using BioNJ. The number of genomes in each species (or clade) was indicated. The different phylogroups of E. coli were displayed: A (in blue), B1 (in green), E (in purple), D (in yellow), F (in orange), G (in brown) and B2 (in red). B. Boxplot of the Mash distances computed between all pairs of genomes belonging to the same species (or clade). In both cases, the maximal Mash distance was lower than 0.05. For the E. coli species, the median was around 0.027 and the maximal value was 0.04. C. Phylogenetic tree of 100 Australian E. coli genomes representative of the diversity of the dataset and 86 outgroup genomes, built from the persistent genome of the genus with IQ-TREE under the GTR+F+I+G4 model. We made 1,000 ultra-fast bootstraps to assess the robustness of the topology of the tree. We found that all bootstrap supports were higher than 95%. D. We rooted the species phylogenetic tree from the genus phylogenetic tree. The resulting rooted species tree was reported, and for simplicity, the main phylogenetic groups were collapsed.

S5 Fig
i.e., persistent (in gold), accessory (in grey) and singleton (in green). The average was represented by a black dot. The pairwise Wilcoxon Rank Sum test with Bonferroni correction was applied to all comparisons (P<0.001: ***). B. Same analysis as in A, but distinguishing the genomic location of the genes of each set: inside contigs or at the edge of contigs. The average gene size for each case was reported in the table. C. Percentage of genes located inside contigs (dark color) or at the edge of contigs (light color) in the 3 sets. The last column corresponds to the fraction of the 3 sets located at the edge of contigs. D. Heatmap of the observed/expected (O/E) ratios of genes located inside or at the edges of contigs in the 3 sets. The ratio (O/E) was reported for all comparisons with a color code ranging from blue (under-representation) to red (over-representation). The level of significance of each Fisher's exact test was also indicated (P<0.001: ***). It was performed on each 2×2 contingency table. E. Fraction of singletons with no hit (in light grey), with a small domain (in grey) or fully included (in black) in larger accessory or persistent gene families (P < 10^-4); the average number of persistent genes is also significantly higher than in the rarefied RefSeq dataset. Singletons represent 43% and 35% of the rarefied Australian and RefSeq pan-genomes, respectively. A.
Boxplots of gene size (bp) in the three categories of gene families. F. Violin plots.

S6 Fig
Here, the GRR were computed excluding singletons in all genomes. Due to the large number of comparisons (points), we divided the plot area into regular hexagons. Color intensity is proportional to the number of cases (count) in each hexagon. The linear fit and the spline fit were reported for the whole or the intra-ST (in blue) comparisons. There was a significant negative correlation between GRR and the patristic distance (P < 10^-4). The summary of the linear fit was: Y = 90.722391 − 76.2919X, R2 = 0.50, P < 10^-4. Hence, with or without singletons, the results were similar.

S7 Fig
A. Violin plots of the nucleotide diversity per site in the 3 datasets computed from the multiple alignments of 112 core gene families. The pairwise Wilcoxon Rank Sum test with Bonferroni correction was applied to all comparisons (P < 10^-3: ***). D. Average number of persistent (in gold), accessory (in grey) and singleton (in green) gene families in the rarefied pan-genomes of each dataset.

S8 Fig
A. Violin plots of the nucleotide diversity per site (left), the Mash (center) and the patristic distances (right) computed with/between genomes belonging to the same phylogroup (intra-phylogroup), to different phylogroups (inter-phylogroup), or all together (P < 10^-4). B. Boxplots of the nucleotide diversity (left), the Mash (center) and the patristic distances (right) computed with/between genomes in each phylogroup. The pairwise Wilcoxon Rank Sum test with Bonferroni correction was applied to all comparisons; only the non-significant (ns: P >= 0.05) comparisons were indicated, all others were highly significant (P < 10^-4). C. Density of the patristic distances between all pairs of genomes of the same phylogroup (intra-phylogroup). The dashed vertical line corresponds to the median of each distribution. (A-B-C) In all cases, similar results were obtained with rarefied datasets.
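The GRR statistic regressed against patristic distance above (S6 Fig) can be sketched in a few lines. This is a minimal illustration assuming the common definition of GRR (the percentage of gene families shared by two genomes, relative to the number of families in the smaller repertoire); the gene-family names below are invented toy data, not from the study.

```python
# Minimal sketch of gene repertoire relatedness (GRR), assuming the usual
# definition: 100 * (shared gene families) / (families in the smaller genome).
# Family names are invented for illustration.

def grr(families_a, families_b):
    """Gene repertoire relatedness between two sets of gene families."""
    shared = len(families_a & families_b)
    return 100.0 * shared / min(len(families_a), len(families_b))

genome_a = {"dnaA", "gyrB", "recA", "lacZ", "traA"}
genome_b = {"dnaA", "gyrB", "recA", "intA"}

# 3 shared families out of 4 in the smaller genome -> GRR of 75%.
print(grr(genome_a, genome_b))  # 75.0
```

In the study, values like these are computed for all genome pairs (optionally excluding singletons) and regressed against patristic distance, giving the negative slope reported in the caption.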
In all cases, intra- and inter-phylogroup distributions were significantly different.

S9 Fig
(R2 > 0.88, P < 10^-4). The rarefaction curve of the pan-genome of the full dataset was also reported. B. Rarefaction curves of the pan-genomes of each phylogroup and of the full dataset (All). C. Rarefaction curves of the gene families associated to MGEs in each phylogroup and in the full dataset (All). D. Rarefaction curves of the pan-genomes of each isolation source. In each case, (i) we used 1,000 permutations (genome orderings) and then averaged the results, and (ii) the pan-genomes remained open.

S10 Fig
A. Average GRR (%) computed between pairs of genomes belonging to the same phylogroup (intra-phylogroup) and to different phylogroups (inter-phylogroup). The color code used was displayed in the insert (top right). B. Correlation between the different distances and indexes, i.e., GRR, Manhattan, Jaccard, Mash and patristic, computed between pairs of genomes belonging to the same phylogroup (intra-phylogroup) with the whole dataset or excluding singletons (woS). Spearman's rank correlation rho matrix. Positive correlations were displayed in red and negative correlations in blue. Color intensity and the size of the circle were proportional to the correlation coefficients. The p-value of each correlation was highly significant (P < 10^-4). We found similar results with rarefied datasets, i.e., considering only 50 randomly selected genomes in each phylogroup. We also found higher correlation coefficients using all the comparisons (intra- and inter-phylogroup).

S11 Fig
B. Boxplot and histogram of the size of the detected regions in complete and draft genomes. These distributions were significantly different (P < 10^-4).
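The rarefaction procedure described for the panels above (recording the cumulative pan-genome size over 1,000 random genome orderings and averaging) can be sketched as follows. The gene-family sets are invented toy data; only the procedure itself is taken from the caption.

```python
# Sketch of a pan-genome rarefaction curve: for many random orderings of the
# genomes, record the cumulative number of distinct gene families as genomes
# are added one by one, then average over orderings. The toy families below
# are invented; the study used 1,000 permutations on real genomes.
import random

def rarefaction_curve(genomes, n_permutations=1000, seed=0):
    rng = random.Random(seed)
    totals = [0.0] * len(genomes)
    for _ in range(n_permutations):
        order = genomes[:]
        rng.shuffle(order)
        seen = set()
        for i, families in enumerate(order):
            seen |= families          # pan-genome grows as genomes are added
            totals[i] += len(seen)
    return [t / n_permutations for t in totals]

toy = [{"a", "b"}, {"b", "c"}, {"a", "d", "e"}]
curve = rarefaction_curve(toy)
print(curve)  # non-decreasing averages, ending at the full pan-genome size (5)
```

An "open" pan-genome, as reported in the caption, corresponds to a curve that keeps rising as more genomes are added rather than reaching a plateau.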
On average the regions were almost 4 times larger in the complete genomes than in draft genomes, and few regions (644) in draft genomes had a typical size of known dsDNA phages (around 44 kb). (A-B) showed that prophage elements were less well assembled and were probably divided into several small contigs. The large regions (>60 kb) in complete genomes corresponded to tandem elements (consecutive on the genomic sequence). Thus, the number of detected regions did not correspond to the number of prophages either in the complete genomes (due to tandem elements) or in the draft genomes (the elements being fragmented). C. Strong association between the cumulative size of the detected regions (X) and the number of detected regions (Y). Linear regression (dashed red line) and statistics were reported. D. Boxplot of the predicted number of prophage elements in both the complete and the draft genomes, using the linear equation shown in (C) from the cumulative size of the regions detected by VirSorter. These distributions were significantly different. On average, there were 6.0 prophages in complete genomes and 4.25 in draft genomes. The medians of the two datasets were closer, probably reflecting the assembly problem related to the presence of prophages in tandem, combined with the fact that they are often genetically close (most of them are lambdoids). In each panel, the red arrow corresponds to the median and the blue arrow to the average of each distribution. A. Boxplot of the number of regions detected as prophage-related by VirSorter in the 370 complete RefSeq GenBank genomes and in the 1,294 draft Australian genomes. These distributions were significantly different; on average, the number of regions detected was significantly higher in draft than in complete genomes.

S12 Fig
On average the contigs were almost 10 times larger in the complete genomes than in draft genomes (81 kb vs. 8.9 kb).
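Panels C-D above predict prophage counts from the cumulative size of VirSorter regions via a linear equation fitted on complete genomes. A minimal sketch of that idea, with invented toy numbers (the real fit was made on the 370 complete genomes, and the coefficients below are not those of the study):

```python
# Sketch of panels C-D: fit a line relating the cumulative size of
# prophage-like regions (x, in kb) to the number of elements (y) in complete
# genomes, then use it to predict counts in fragmented draft genomes.
# All data points here are invented for illustration.

def fit_line(xs, ys):
    """Ordinary least squares fit of y = a + b*x; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return my - b * mx, b

# Toy "complete genome" observations: cumulative region size (kb) vs. count.
sizes = [44.0, 88.0, 132.0, 176.0]
counts = [1.0, 2.0, 3.0, 4.0]
a, b = fit_line(sizes, counts)

# Predict the prophage count of a draft genome with 264 kb of detected regions.
predicted = a + b * 264.0
print(round(predicted, 2))  # 6.0 on this exactly linear toy data
```

The point of the approach is that cumulative size is more robust than region counts when prophages are fragmented across contigs or merged into tandem regions.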
We identified 2,347, 562 and 53 contigs larger than 20, 50 and 100 kb, respectively. (A-B) showed that plasmid elements were poorly assembled and probably divided into several small contigs. C. Boxplot of the fraction of the proteome encoding plasmid elements per genome in complete and draft genomes. These distributions were similar, with an average of 3.2% in both. A. Boxplot of the number of contigs classified as plasmid by PlaScope in the 370 complete RefSeq GenBank (Complete) genomes and in the 1,294 draft Australian genomes (Draft). All the extrachromosomal replicons of the complete genomes were perfectly identified as plasmid elements by PlaScope. Hence, results based on the extrachromosomal replicons or on the contigs detected as plasmid by PlaScope were identical (Complete*). The average number of contigs was eight times larger in draft genomes than in complete genomes (15.4 vs 1.9) and reached up to 124 contigs. B. Boxplot and histogram of the size of the contigs detected as plasmid in complete and draft genomes. These distributions were significantly different (P < 10^-4).

S13 Fig
Three types of MGEs were detected: prophage (left column), plasmid (middle column) and IS elements (right column). A. Histogram and boxplot of genomic features of each type of MGE, i.e., the cumulative size of the elements per genome (kb), the total number (#) of genes encoded by the elements per genome, and the fraction of the genome encoding these elements. For each case, the dashed line corresponds to the smoothed curve, the red arrow to the median and the blue arrow to the average of each distribution. B. Histogram and boxplot of the number of conjugation systems per genome. C. Number of conjugative systems (MPF) and isolated relaxases (MOB) detected in our dataset.
The different MPF types were indicated, as well as their genomic location, i.e., on a contig classified as plasmid or as chromosome by PlaScope.

S14 Fig
A. Association between the genome size (i.e., # of genes per genome) and the total number of genes associated to MGE elements. B. Histogram and boxplot of the genome size (in grey) and of the genome size without MGEs (in red), i.e., after removing all the genes encoding MGE elements. These distributions were significantly different. C. Same representation as in (A), but distinguishing the different types of MGEs, i.e., prophage, plasmid and IS elements. (A-C) We found a strong correlation in each case. Linear regression (dashed red line) and statistics were reported. Similar results were obtained with the genome size (Mb). D. Number of singletons (in green) and accessory gene families encoding MGEs. The fraction of the pan-genome encoding such elements was reported in each case (%).

S15 Fig
Number of accessory gene families associated to prophage and plasmid present in one to seven phylogroups (A), or in one to seven sources (B). The Z-score obtained for the observed number with respect to the expected distribution was reported.

S16 Fig
Nodes are phylogroups and edges the O/E ratio of the number of pairs of MGE genes (from the same gene family) acquired in the terminal branches of the tree. Only significant O/E values (and edges) are plotted (|Z-score| > 1.96). Under-represented values are in dashed blue and over-represented values in red.

S17 Fig
A. Heatmap of the average genome size of strains from different sources in each phylogroup. The deviation from the overall intra-phylogroup mean was reported for all comparisons with a color code ranging from blue (below average) to red (above average).
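S14 Fig above reports a strong linear association between genome size and the number of MGE-associated genes. A small self-contained sketch of how such a correlation coefficient is computed (Pearson's r here, on invented toy values rather than the study's data):

```python
# Sketch of the correlation analysis in S14 Fig: how strongly the number of
# MGE-associated genes tracks total genome size. The paired values are
# invented; the study computed this on real E. coli genomes.
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

genome_sizes = [4200, 4500, 4800, 5100, 5400]  # genes per genome (toy)
mge_genes = [150, 260, 380, 490, 610]          # MGE-associated genes (toy)
print(round(pearson_r(genome_sizes, mge_genes), 3))  # close to 1: strong association
```

The related analysis in S10 Fig uses Spearman's rho instead, which is the same computation applied to the ranks of the values and is therefore robust to non-linear but monotonic relationships.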
The level of significance of each ANOM test was indicated. It was performed within each phylogroup (each line). (B-C-D) Same representation as in (A), but in relation to the average number of genes associated to MGEs (B), to prophage (C), or to plasmid elements (D).

S18 Fig
A. Violin plots of the number of ARGs in genomes encoding an integron-integrase (int1+) or not (int1-). The level of significance of the Wilcoxon test was indicated (P < 10^-3). B. Heatmap of the proportion of int1+ genomes in each phylogroup and source. A cross marks the absence of data. C. Same as in (B), but we merged sources related to human activity (with), or not directly associated to humans (without). The level of significance of each ANOM for proportions test was indicated. Here, we compared response proportions for the X levels to the overall response proportion from the contingency table. This method uses the normal approximation to the binomial; therefore, in some cases sample sizes were too small to be tested. D. Heatmap of the average number of ARGs per genome in each phylogroup and source. E. Heatmap of the average number of ARGs when we merged sources related (with) or not (without) to human activity. The level of significance of each non-parametric ANOM test (ANOM with Transformed Ranks) was indicated. The deviation from the overall mean was reported for all comparisons with a color code ranging from blue (below average) to red (above average). The color code used was displayed at the top of each panel.

S19 Fig
(A-B) Heatmap of the average number of VFs per strain from different sources in each phylogroup. The deviation from the overall mean or from the intra-phylogroup mean was reported for all comparisons with a color code ranging from blue (below average) to red (above average). The level of significance of each ANOM test was indicated. It was performed within each phylogroup. C.
Heatmap of the average number of Colicins per genome in each phylogroup and source. D. Same representation as in (B), but in relation to the average number of Colicins per genome.

S20 Fig
A. Heatmap of the average number of capsule systems per genome in each phylogroup and source. B. The deviation from the intra-phylogroup mean was reported for all comparisons with a color code ranging from blue (below average) to red (above average). The level of significance of each ANOM test was indicated. It was performed within each phylogroup (each line). C. Prevalence (%) of each capsule group across phylogroups and sources.

15 Apr 2020

* Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out. *

Dear Dr Touchon,

Thank you very much for submitting your Research Article entitled 'Phylogenetic background and habitat drive the genetic diversification of Escherichia coli' to PLOS Genetics. Your manuscript was fully evaluated at the editorial level and by independent peer reviewers. The reviewers appreciated the attention to an important topic but identified some aspects of the manuscript that should be improved.

We therefore ask you to modify the manuscript according to the review recommendations before we can consider your manuscript for acceptance. Your revisions should address the specific points made by each reviewer.

In addition we ask that you:

1) Provide a detailed list of your responses to the review comments and a description of the changes you have made in the manuscript.
2) Upload a Striking Image with a corresponding caption to accompany your manuscript if one is available (either a new image or an existing one from within your manuscript). If this image is judged to be suitable, it may be featured on our website. Images should ideally be high resolution, eye-catching, single panel square images. For examples, please browse our archive. If your image is from someone other than yourself, please ensure that the artist has read and agreed to the terms and conditions of the Creative Commons Attribution License. Note: we cannot publish copyrighted images.

We hope to receive your revised manuscript within the next 30 days. If you anticipate any delay in its return, we would ask you to let us know the expected resubmission date by email to plosgenetics@plos.org.

If present, accompanying reviewer attachments should be included with this email; please notify the journal office if any appear to be missing. They will also be available for download from the link below. You can use this link to log into the system when you are ready to submit a revised version, having first consulted our Submission Checklist.

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

Please be aware that our data availability policy requires that all numerical data underlying graphs or summary statistics are included with the submission, and you will need to provide this upon resubmission if not already present. In addition, we do not permit the inclusion of phrases such as "data not shown" or "unpublished results" in manuscripts.
All points should be backed up by data provided with the submission.

Please be aware that PLOS has incorporated Similarity Check, powered by iThenticate, into its journal-wide submission system in order to screen submitted content for originality before publication. Each PLOS journal undertakes screening on a proportion of submitted articles. You will be contacted if needed following the screening process.

To resubmit, you will need to go to the link below and 'Revise Submission' in the 'Submissions Needing Revision' folder.

[LINK]

Please let us know if you have any questions while making these revisions.

Yours sincerely,
Xavier Didelot
Associate Editor
PLOS Genetics

Kirsten Bomblies
Section Editor: Evolution
PLOS Genetics

Reviewer's Responses to Questions

Comments to the Authors:
Please note here if the review is uploaded as an attachment.

Reviewer #1: The authors present a fantastic analysis of genetic diversity and pan-genome dynamics in E. coli. I thoroughly enjoyed reading this paper and have no doubt it will make a major contribution to the E. coli community and the burgeoning scientific discourse on bacterial pan-genomes. I have a few comments the authors may wish to consider:

Line 86: "In consequence" at the beginning of the sentence does not read well.

Line 120: What are essentially core genes are given a new term, "persistent". Shouldn't they be termed "core" to avoid confusion and align with common terminology?

Line 157-198: Apologies, as I see how much work this analysis has been. But this GRR finding is leaving me with a lot of questions. Is this differences in functional gene classes? Is it deviation in sequence of similar genes as a result of drift? I feel I would like more info on the GRR diversification and description of the results.

Line 254: The suggestion of B2 not being basal to the tree. Isn't this counter to the widely accepted pattern of evolution proposed by some of the authors of this paper, that B2 is the ancestral E.
coli and A/B1 emerged from ancestral pathogens?

Line 263: Not "probably more frequent". Absolutely more frequent.

Line 310-11: I am slightly confused at what level the prevalence of MGEs is being described and analysed, and that no MGE is found in the core genome. Is this only true across the entire species dataset? Are there no MGEs which are core in a given lineage/phylogroup? Similarly, in the following lines, why would one expect MGEs fixed across the species given the ecology of lineages is so varied? I am a bit confused by this section.

Line 417: The potential number of possible sources is absolutely a function of phylogroup, is it not?

Line 636: I can only assume the 454 genomes represents the fact some genomes were sequenced as legacy projects some time ago. But to fend off criticism I would provide a very clear and honest assessment of how, why and when these were sequenced.

Reviewer #2: Dear Professor Didelot,

First of all I apologise for the delay in my review. Disruptions and subsequent preparations linked to the COVID-19 pandemic have got in the way of providing you with a quick review. I have now read with attention the manuscript submitted to you by Marie Touchon et al. entitled "Phylogenetic background and habitat drive the genetic diversification of Escherichia coli".

E. coli can be found in all endothermic animals and is consequently associated with their environment. Currently, there is a strong bias towards clinical samples in E. coli studies. Here, authors use an impressive sampling of >5,000 isolates to sequence and characterize a subset of representative >1,200 E. coli from mostly non-clinical isolates originating from humans, poultry, wild animals and water sampled from the Australian continent (no plants). Using comparisons of pangenomic diversity and gene repertoires, authors show that (1) there are large variations in gene content in E.
coli lineages, even within STs, (2) gene flow explains the association of certain lineages with specific ecological niches and habitats, something that is not new but had never been looked at in such a way, (3) smaller genomes are associated with a higher (and not lower) turnover of genetic information, and (4) an interesting proportion of small genomes were freshwater isolates, which is very interesting and provides new evidence suggesting that E. coli can be "naturalized" and adapt/replicate in non-host environments.

The results from this study are not ground-breakingly novel. It's been more than a decade that we (including many studies from authors of this manuscript) find evidence for phylogroup-specific ecologies and physiologies in E. coli, but it is the first study of this scale, which makes it seminal and authoritative.

Overall, I enjoyed this manuscript a lot, and will be happy to recommend it for publication in PLOS Genetics after a few clarifications are made:

Methods:

- I think I need some clarification on the phylogenetics section, because I am a bit confused as to why the authors chose to do things the way they did. I would understand that aligning with E-INS-i in MAFFT is not recommended for >200 sequences, but is it why authors used this odd strategy of back-translating concatenated protein-to-protein alignments instead of performing the alignment on gene-by-gene concatenates? I am also missing details as to how the back-translation was performed: which codon usage did you pick? How is that affecting recombination/HGT/diversity analyses you are doing afterwards? Or did you come back to the original nucleotide sequence? At the very least, a clarification of the rationale here would be more than welcome. Similarly, another amber flag (to not say red) for me was to read that authors manually added gaps "-" in the alignment.
Please explain a bit better.

- I couldn't access the sequences on ENA accession PRJEB34791 nor on the SRA using the individual accessions from the S1 dataset. Please make sure to make things available when the paper is published, or for eventual future reviews.

- I would have liked to see a bit more detail on the sources of isolation presented in the S1 Dataset. Which birds and non-human mammals were actually sampled? Were they captive or wild?

- It would be very good to provide more information on the actual animal source for each isolate, rather than just being vague and mentioning "bird faecal" for example. I suspect there is a collection of various Australian animals there, but at the moment we cannot tell.

Results:

- I would have liked to see more genetics in this analysis: you have characterised the pangenome of a rather unique dataset in one of the most iconic bacterial species ever studied, and yet, I am left hungry for gene names, and knowing which functions you find to be potentially involved in what. For instance, what functions would be phylogroup-specific in healthy and environmental E. coli? Are they predicted metabolic functions like we expect?

Discussion:

- I find the generalism of phylogroup B1 fascinating, so I enjoyed the freshwater angle at the end of this study. Instead of an open sentence at the end of the manuscript, would authors care to hazard a more exhaustive interpretation on the evolution of phylogroup B1? A majority of B1 isolates are also found in ruminants and on plants, which could maybe have an obvious link with freshwater composition too. The deadliest outbreak strain of E. coli (O104:H4) was also a B1 isolate. Is B1 only the most dramatic example (i.e.
for which we can find a possible explanation) of the impact of ecology on phylogroup evolution?

- Linked to my comment above for the Results section, I would have liked to see a bit more on the possible genetic functions involved in the "naturalization" in the freshwater niche (to paraphrase Mike Sadowsky). Are these functions also found in other bacteria that are found in natural environments? Can this study contribute to a better idea of what E. coli is actually associated with in freshwater (free-living vs. surviving in protozoa vs. in sediments)? If I recall well, this is still quite an unclear point.

**********

Have all data underlying the figures and results presented in the manuscript been provided?

Large-scale datasets should be made available via a public repository as described in the PLOS Genetics data availability policy, and numerical data that underlies graphs or summary statistics should be provided in spreadsheet form as supporting information.

Reviewer #1: Yes

Reviewer #2: No: I couldn't access the sequences on ENA accession PRJEB34791 nor on the SRA using the individual accessions from the S1 dataset.

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

In the meantime, please log into Editorial Manager and click on the Fetch/Validate link next to the ORCID field. This will take you to the ORCID site and allow you to create a new iD or authenticate a pre-existing iD in Editorial Manager.

If you have a press-related query, or would like to know about one way to make your underlying data available, please see the end of this email. If your institution or institutions have a press office, please notify them about your upcoming article at this point, to enable them to help maximise its impact.
Inform journal staff as soon as possible if you are preparing a press release for your article and need a publication date.

Thank you again for supporting open-access publishing; we are looking forward to publishing your work in PLOS Genetics!

Yours sincerely,
Xavier Didelot
Associate Editor
PLOS Genetics

Kirsten Bomblies
Section Editor: Evolution
PLOS Genetics

www.plosgenetics.org
Twitter: @PLOSGenetics

----------------------------------------------------
Comments from the reviewers (if applicable):
----------------------------------------------------

Data Deposition

If you have submitted a Research Article or Front Matter that has associated data that are not suitable for deposition in a subject-specific public repository (such as GenBank or ArrayExpress), one way to make that data available is to deposit it in the Dryad Digital Repository. As you may recall, we ask all authors to agree to make data available; this is one way to achieve that. A full list of recommended repositories can be found on our website.

The following link will take you to the Dryad record for your article, so you won't have to re-enter its bibliographic information, and can upload your files directly: http://datadryad.org/submit?journalID=pgenetics&manu=PGENETICS-D-20-00247R1

More information about depositing data in Dryad is available at http://www.datadryad.org/depositing. If you experience any difficulties in submitting your data, please contact help@datadryad.org for support.

Additionally, please be aware that our data availability policy requires that all numerical data underlying display items are included with the submission, and you will need to provide this before we can formally accept your manuscript, if not already present.

----------------------------------------------------
Press Queries

If you or your institution will be preparing press materials for this manuscript, or if you need to know your paper's publication date for media purposes, please inform the journal staff as soon as possible so that your submission can be scheduled accordingly. Your manuscript will remain under a strict press embargo until the publication date and time. This means an early version of your manuscript will not be published ahead of your final version. PLOS Genetics may also choose to issue a press release for your article. If there's anything the journal should know or you'd like more information, please get in touch via plosgenetics@plos.org.

4 Jun 2020

PGENETICS-D-20-00247R1
Phylogenetic background and habitat drive the genetic diversification of Escherichia coli

Dear Dr Touchon,

We are pleased to inform you that your manuscript entitled "Phylogenetic background and habitat drive the genetic diversification of Escherichia coli" has been formally accepted for publication in PLOS Genetics! Your manuscript is now with our production department and you will be notified of the publication date in due course.

The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors.
Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript. Soon after your final files are uploaded, unless you have opted out or your manuscript is a front-matter piece, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.Thank you again for supporting PLOS Genetics and open-access publishing. We are looking forward to publishing your work! With kind regards,Matt LylesPLOS GeneticsOn behalf of:The PLOS Genetics TeamCarlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdomplosgenetics@plos.org | +44 (0) 1223-442823plosgenetics.org | Twitter: @PLOSGenetics"} {"text": "Metastases and cancer recurrence are the main causes of cancer death. Circulating Tumor Cells (CTCs) and disseminated tumor cells are the drivers of cancer cell dissemination. The assessment of CTCs\u2019 clinical role in early metastasis prediction, diagnosis, and treatment requires more information about their biology, their roles in cancer dormancy, and immune evasion as well as in therapy resistance. Indeed, CTC functional and biochemical phenotypes have been only partially characterized using murine metastasis models and liquid biopsy in human patients. CTC detection, characterization, and enumeration represent a promising tool for tailoring the management of each patient with cancer. The comprehensive understanding of CTCs will provide more opportunities to determine their clinical utility. This review provides much-needed insights into this dynamic field of translational cancer research. For metastasis formation, cancer cells must leave the primary tumor and disseminate. 
To this aim, epithelial tumor cells might go through epithelial-to-mesenchymal transition (EMT), lose their polarity and cell-cell/matrix adhesion, secrete specific enzymes to digest the extracellular matrix (ECM), and gain migratory properties ,2 Figura. Then, During the last two decades, CTC detection, characterization, and enumeration opened a promising avenue to better understand the biology of metastatic cancer at the exact moment of metastasis initiation. Their use as a real-time \u201cliquid biopsy\u201d might help to predict metastasis formation and to develop novel anticancer therapies . HoweverThere is substantial evidence that many cancers are driven by cancer stem cells (CSCs) or tumor-initiating cells that are called metastasis-initiating cells (MICs) at metastatic sites. CSCs are a cancer cell population with stem cell features, such as self-renewal and differentiation into multiple cell types . Stem ceIt is not clear whether CTCs are representative of the whole tumor cell population or, similarly to CSCs, are a rare subpopulation . MoreoveGenetic analyses have revealed similar mutation profiles in primary and metastatic tumors and also in the corresponding CTCs . NeverthThe hypothesis that single CTCs must undergo EMT for metastasis initiation a has beeOCT4, NANOG, SOX2 binding sites in CTC clusters enhances their stemness features and the expression of cell\u2013cell junction components, thus promoting collective migration and invasion [Collective cancer cell invasion might be explained by two mechanisms. First, during embryo development and wound-healing, EMT leads to collective migration of neural crest cells and cellinvasion , K8, K14, and also of E-cadherin and P-cadherin (adhesion molecules) in disseminating cell clusters ,39,40. TIt has been reported that the detection of rare CTC clusters indicates a higher metastatic potential compared with high numbers of single CTCs . 
This poPolyclonal metastases might originate from the accumulation of multiple single CTCs or by direct seeding of a CTC cluster at a single site . SeveralOnce in circulation, CTCs are stopped at branch points in vessels due to low shear stress and size limitations (CTCs up to 20 \u03bcm in diameter versus capillaries of ~3\u20137 \u03bcm) . Then, CAlthough lymphatic vessels can uptake only particles of ~5\u2013300 nm in diameter , melanomAggressive tumors release thousands of cancer cells in the circulation each day; however, the metastasis rate is very low and it iMoreover, depending on their origin (late-stage primary tumor or pre-invasive lesion), DTCs may give rise to metastatic lesions or enter dormancy, respectively . Cancer AXL signaling that is associated with quiescence of disseminated cells [Production by bone stromal osteocytes of the ligands for tyrosine kinase receptors, such as growth arrest-specific protein 6 (GAS6), bone morphogenetic protein 4 (BMP7), and transforming growth factor-\u03b22 (TGF\u03b22), support DTC dormancy a. GAS6 aed cells . TGF\u03b22 ied cells . BMP7 sied cells . Regulated cells . T cell-ed cells to its receptor mediates Extracellular signal-Regulated Kinase (ERK) activation and cell proliferation that is then reduced by binding of the ECM integrin \u03b1v\u03b23 to vitronectin, resulting in uPA decrease Figure c. Loss oThe immune system should eliminate or at least control CTC activity . The intDuring metastasis formation, the overall natural killer (NK) cell count is increased. CTCs can modulate NK cell anti-tumor activity through production of inhibitory cytokines that block the NK cell immunoglobulin receptors (KIRs) Figure a. Toll-lThe major histocompatibility complex (MHC) is a set of cell surface proteins that helps to recognize viral and tumor-associated antigens for initiation of the immune cell response . NK cellThe high shear forces in lung capillaries induce micro-particle production by CTCs. 
Following their uptake and ingestion, macrophages are activated and contribute to metastatic lesion initiation and development . In physThe characterization of individual white blood cells (WBCs) in CTC clusters of breast tumor-bearing mice has shown that 85.5\u201391.7% of WBCs express neutrophil markers . These cCTCs downregulate MHC-I expression on CD8 T cells, a receptor that is crucial for initiation of the adaptive cytotoxic T lymphocyte response . ProgramFas Cell Surface Death Receptor (FAS or CD95) belongs to the TNF transmembrane receptor superfamily. After binding to its ligand Fas Cell Surface Death Receptor Ligand (FASL), FAS can activate the extrinsic apoptosis pathway through recruitment of caspases 8 and 10, and consequently mediates cell death. In resistant tumor cells, FAS also induces non-apoptotic signaling pathways linked to tumor growth, survival, and migration . FASL onCTCs modulate the immune system also by increasing platelet number and activity Figure d. PlatelThe major mechanisms of chemotherapy resistance in tumor cells and CSCs have alsMoreover, EMT and dormancy mediate the upregulation of ECM remodeling markers and the downregulation of signaling pathways that induce tumor cell mitosis ,95,96. TFinally, hypoxia stress signals ensure CTC survival through activation of autophagy to provide the nutrients required for cell metabolism . Immune To conclude, as CTCs contribute to promoting resistance to conventional anticancer treatments, investigating and targeting these mechanisms represents a potential therapeutic approach. Currently, CTCs are considered a promising bio-source for cancer detection. Many technologies have been and are being developed for CTC capture and isolation based on their biophysical and biological features Figure . The chaBiological feature-based CTC enrichment is usually performed using immunoaffinity approaches based on CTC trapping or removal of background blood cells a Table . 
\u00ae system is an immunomagnetic device based on EpCAM-positive CTC enrichment that offers semi-automated CTC capturing, staining, and image analysis, and thus can overcome the problems of CTC stability during sample shipment and storage [Immunomagnetic devices and microfluidic chips are two examples of positive enrichment technologies in which monoclonal antibodies (mAbs) targeting CTC surface antigens are coupled to magnetic beads or micro-posts/surfaces, respectively a1. For i storage ,101. Aff storage . In this storage ,104; how storage . Moreove storage . This pr storage . Aptamer storage . However, cancer cells undergoing EMT may lose EpCAM expression . To overCTC affinity selection (and thus CTC purity) is influenced by the surface mAb concentration, and low antigen level leads to mixed monolayers and recovery reduction. Moreover, some antibodies target only specific cancer types or lack cancer-specificity because their targets are expressed also by other cells . These lCentrifugation or red blood cell (RBC) lysis as pre-processing steps can lead to CTC loss. This problem can be addressed by using another immunomagnetic enrichment technology that captures CTCs from whole diluted blood samples pre-labeled with magnetic particles by bead-bound CTC absorption using magnetic rods . To overcome the issues of CTC low number in blood samples and of CTC stability during sample shipment and storage, leukapheresis and cytopheresis are an excellent option for increasing the chance of ex vivo CTC capture and detection of rare CTCs. During cytopheresis, blood is passed through a machine that retains CTCs, while WBCs and peripheral blood stem cells go back to the patient\u2019s vein . StandarPositive enrichment has other shortcomings. Indeed, some CTCs are not captured by the difficult-to-remove antibodies, and the capture accuracy is reduced by the heterogeneous expression of cell surface biomarkers due to EMT . 
MoreoveOverall, while immunoaffinity-based CTC enrichment presents valuable advantages, it also has some limitations. In label-dependent methods, antibody binding to surface markers causes the activation of intracellular signaling pathways , and theCTC enrichment methods based on biophysical properties, also called \u201clabel-free\u201d approaches, can isolate CTCs from blood by exploiting their specific density, size, deformability, and electric charges b Table . \u00ae use density gradient centrifugation gradient system that leads to higher CTC enrichment compared with the standard Ficoll-Paque\u00ae method [Density-based CTC enrichment technologies, such as Ficoll-Paquefugation b1 in whi\u00ae method .\u00ae, ScreenCell\u00ae, and CellSieve\u2122) and 3D membrane microfilters , to isolate single CTCs or CTC clusters based on their size and/or deformability is exploited by the DEPArray system for selection and isolation of single CTCs, independently of their features, followed by massive parallel sequencing to clearly reveal differences among CTCs . In thisCTCs can be isolated with high purity using hydrodynamic microfluidic devices based on inertial forces that force blood cells to migrate across flow stream lines to equilibrium positions b4. For iZeinali et al. developed the inertial microfluidic Labyrinth device that can isolate single CTCs and CTC clusters from blood samples of patients with NSCLC using a high-throughput, biomarker-independent, and size-based isolation method . The aut\u00ae FX or the Vortex method, however these methods still require further validation [Other inertial focusing methods have been developed that can capture viable cells, for example ClearCelllidation ,132. \u00ae system is the only US Food and Drug Administration (FDA) approved CTC detection system, and newly developed technologies are routinely compared with this system [CTC enrichment methodologies have their own advantages and limitations , and sevs system . 
Unfortus system . \u00ae [One alternative approach is to combine the advantages of biophysical- and biological-based enrichment methods, such as geometrically enhanced differential immunocapture , cell enrichment and extraction , and RosetteSep\u2122 (density centrifugation and immunoaffinity selection). For instance, the GEDI chip can increase (2\u2013400-fold) CTC counts compared with CellSearch\u00ae . A micro\u00ae ,103. CTC\u00ae . Besides CTC enrichment and enumeration, the detailed molecular and functional characterization of isolated CTCs can be performed using high-throughput single-cell analytical methods . MoreoveWhile technologies are focused on CTCs in peripheral blood, before and during dissemination, DTCs that successfully land at distant organs, such as bone marrow and lymph nodes, can form recurrent tumors and metastases . After sTumor tissue biopsy and fine-needle aspiration are invasive procedures that allow sampling only part of the tumor mass. Therefore, they are not representative of the whole disease. Moreover, these procedures may lead to tumor cell seeding around the sampling area and increase the risk of local dissemination . Hence, At diagnosis, CTC number is correlated with PFS and OS . In earlIn patients with breast cancer, the correlation between tumor size, CTC counts, and survival revealed that CTC number is closely associated with the primary tumor size, the number of metastases, and PFS reduction . ConversBalakrishnan et al. studied the clinical implications of CTC clusters, and showed that in vitro cluster formation from CTCs of patients with advanced-stage lung and breast cancer correlates with shorter OS. They also found that patients who were sensitive and resistant to chemotherapy exhibited loose and tight clusters, respectively. Moreover, tight clusters were clearly correlated with poor patient survival . ConversCTCs may provide useful information for tumor staging at diagnosis. 
For instance, in gastric cancer, the percentage of patients with \u22654 CTCs increases with the tumor stage and reaches 95.24% at stage IV . MoreoveDetermining the tumor cell phenotypic status and its correlation with CTCs can help treatment decision-making . In patiAnalysis of PD-L1 expression in CTCs from patients with metastatic lung cancer before and after immunotherapy showed that the overall response to anti-PD-1 immunotherapy was higher in patients with >1.32 CTCs/mL and in those with >50% of PD-L1-positive CTCs . IncreasFinally, metastatic CSC subsets that express specific markers, such as CD44v6 in colorectal cancer , CD133 aEGFR-T790M drug-resistance mutation in 92% of them. This was correlated with PFS reduction during treatment [The genetic analysis of mutations in CTCs could improve cancer management. Analysis of PIK3CA mutations in single CTCs from 20 patients with metastatic breast cancer showed that six harbored a PIK3CA mutation that may help to select the best treatment . Epidermreatment . B-Raf proto-oncogene, serine/threonine kinase (BRAF) and KIT proto-oncogene, receptor tyrosine kinase (KIT) mutations. Comparison of the mutation profile in CTCs and in the resected primary melanoma showed some heterogeneity [anaplastic lymphoma kinase (ALK) gene with echinoderm microtubule-associated protein-like 4 (EML4) and the production of oncogenic EML4\u2013ALK fusion transcripts [EML4\u2013ALK analysis in CTCs from patients with NSCLC indicated that EML4\u2013ALK+ CTCs are associated with resistance to crizotinib (ALK inhibitor), and thus are a promising candidate for monitoring treatment efficacy and for the early detection of drug resistance [Single CTC mutation analysis highlighted the presence of mutations in genes encoding the therapeutic target or signaling proteins that are involved in resistance to targeted therapy . The effogeneity , particuogeneity . Severalogeneity . A smallnscripts . 
EML4\u2013ALsistance .In conclusion, CTC enumeration and mutation profiles give information on the prognosis and help to stratify patients in clinics.Circulating tumor DNA (ctDNA) is actively released from living tumor cells into the blood stream and can be used as a surrogate biopsy biomarker. Its quantification and genomic analysis are useful for the management of patients with tumors in the era of precision oncology . Like CTKRAS mutations in patients with NSCLC [EGFR mutations in patients with NSCLC [On the other hand, it has been reported that ctDNA is much more sensitive than CTCs for the detection of th NSCLC . Also, cth NSCLC ,192. In th NSCLC . Moreoveth NSCLC . In concth NSCLC .The higher metastatic potential of CTC clusters compared with single CTCs is associated with poor prognosis . CTC cluCTC cluster survival in the bloodstream and extravasation are supported by platelets . DisruptIn solid primary tumors, autophagy induces the endocytosis of TRAIL receptors, resulting in resistance to TRAIL-based therapy . HoweverBesides metastases in distant organs, the leaky-prone neo-vasculature of the primary tumor and tumor-draining lymph nodes promote CTCs\u2019 re-entry in the tumor of origin after circulation . Tumor-dAs CTCs express CD47, an anti-phagocytic receptor and the post-metastatic niche initiates and takes form upon CTC arrival. As each cancer exhibits a proclivity to metastasize in specific organs, the CTC niche type in distant sites should be characterized . Understanding CTC biological properties, how they migrate (single cells or clusters) and escape the anti-tumor immunity, as well as their pre- and post-metastatic niche will offer opportunities to improve cancer management.CTC enrichment and detection technologies have improved in the last decades. For routine use, it would be better to develop technologies that simultaneously enrich and detect CTCs by downstream molecular characterization at the single-cell level. 
Moreover, CTC detection might be enhanced in the vessels and lymphatic network close to the tumor . The implementation of a tumor-specific signature for CTC identification improves personalized medicine through disease monitoring to guide treatment decisions, and even for metastatic cancer therapy. Targeting CTCs in the blood circulation might represent a promising therapeutic strategy; however, CTC biology needs to be fully understood. Moreover, insights into the roles of EMT, stemness, clustering, and immune escape in CTCs help to better understand the metastatic cascade and consequently to guide research on future anti-cancer agents."} {"text": "Metastatic castration-resistant prostate cancer (mCRPC) is the most aggressive and deadly form of prostate cancer. As a bone-predominant metastatic disease, liquid biopsy-based biomarkers have advantages in monitoring cancer dynamics. Previous studies have demonstrated the associations between circulating tumor cells (CTCs) and mCRPC outcomes, but little is known about the prognostic value of CTC-clusters. In this study, we investigated the associations of CTCs and CTC-clusters with mCRPC prognosis, individually and jointly, using longitudinal samples. We confirmed the associations of CTC counts with mCRPC outcomes in both baseline and longitudinal analyses. Our results also showed that the presence of CTC-clusters alone had prognostic value and that CTC-clusters may further improve CTC-based prognostic stratification in mCRPC. Our findings suggest the potential of combining CTCs and CTC-clusters as non-invasive means to monitor progression and predict survival in mCRPC and build a premise for in-depth genomic and molecular analyses of CTCs and CTC-clusters.p = 0.0185). mCRPC patients with both unfavorable CTCs and CTC-clusters had the highest risk for death , as compared to those with <5 CTCs. Analyses using longitudinal data yielded similar results. 
In conclusion, CTC-clusters provided additional prognostic information for further stratifying death risk among patients with unfavorable CTCs.Liquid biopsy-based biomarkers have advantages in monitoring the dynamics of metastatic castration-resistant prostate cancer (mCRPC), a bone-predominant metastatic disease. Previous studies have demonstrated associations between circulating tumor cells (CTCs) and clinical outcomes of mCRPC patients, but little is known about the prognostic value of CTC-clusters. In 227 longitudinally collected blood samples from 64 mCRPC patients, CTCs and CTC-clusters were enumerated using the CellSearch platform. The associations of CTC and CTC-cluster counts with progression-free survival (PFS) and overall survival (OS), individually and jointly, were evaluated by Cox models. CTCs and CTC-clusters were detected in 24 (37.5%) and 8 (12.5%) of 64 baseline samples, and in 119 (52.4%) and 27 (11.9%) of 227 longitudinal samples, respectively. CTC counts were associated with both PFS and OS, but CTC-clusters were only independently associated with an increased risk of death. Among patients with unfavorable CTCs (\u22655), the presence of CTC-clusters signified a worse survival (log-rank Prostate cancer (PCa) is the most commonly diagnosed cancer and the second leading cause of cancer-related death among men in the United States . In patiRemarkable progress has been made in the use of tissue-based molecular analyses to guide treatment decisions of many cancers. However, such tissue-based genomic profiling is challenging in mCRPC due to several reasons. First, mCRPC is a bone-predominant metastatic disease, thus tissue samples are not always obtainable. Second, the yield of tumor tissues from metastatic sites can often be quite low, particularly when sampling from bone metastases . Third, CTCs are shed from primary or metastatic tumors into the blood and have extremely high malignant potential. 
Since CTCs constitute \u201cseed cells\u201d for metastasis, they are arguably the most important subset of tumor cells to monitor and treat ,11. UnliIn addition to disseminating as individual cells, tumor cells also collectively migrate as clusters, in which cell-cell adhesion remains intact ,21. ClusAs yet, no study has been reported to evaluate whether CTC-clusters can further improve CTC-based prognostic stratification in mCRPC. Furthermore, no in-depth analysis has been conducted to explore the prognostic value of CTC-clusters, by using longitudinally collected data. Herein, based on an ongoing mCRPC cohort with longitudinal samples, we conducted, to our best knowledge, the first study that evaluated the prognostic value CTC-clusters in high-risk mCRPC patients with high CTC levels.A total of 64 mCRPC patients were included in this analysis . At the p < 0.0001 for progression-free survival (PFS); 6.0 months vs. not reached, log-rank p < 0.0001 for OS) . mCRPC p for OS) . When co for OS) . We thenp = 0.0003 for PFS; 4.2 months vs. not reached, log-rank p < 0.0001 for OS) (p = 0.0299) (CTC-clusters were identified in 8 (12.5%) of 64 patients. Representative immunofluorescent images of CTC-clusters are shown in for OS) . mCRPC p for OS) . After a 0.0299) .n = 44) included those with favorable CTC counts (<5 CTCs), and no patients in this group had a CTC-cluster. The medium-risk group (n = 12) included those with unfavorable CTC counts (\u22655 CTCs) but without a CTC-cluster. The high-risk group (n = 7) included those with both unfavorable CTC counts and CTC-clusters. Note that the one patient with one CTC that was a 2-cell cluster was excluded from this analysis because this subgroup only had one subject.We were interested in learning whether CTC-clusters could further stratify prognostic risk in patients with unfavorable CTC counts. To this end, we categorized patients into three risk groups. 
The low-risk group (p = 0.0185), suggesting improved prognostic stratification using CTC-clusters. In the univariate Cox analyses, compared with the patients in the low-risk group, those in medium- and high-risk groups had a 3.53-fold and 6.30-fold increased risk for progression, as well as a 3.96-fold and 21.53-fold increased risk for death (p = 0.0072) in the multivariate Cox analyses . Among por death . mCRPC panalyses .In the joint analyses of CTCs and CTC-clusters described above, we found that baseline CTC-clusters could further stratify patients with unfavorable baseline CTCs into different risk groups. However, using measurements with only one time point may underestimate prognostic values. In comparison, using longitudinal data obtained from repeated measurements of each individual over time is an effective approach to improve prediction power ,28.n = 1) from joint analyses. As shown in p < 0.0001). The association between the presence of CTC-clusters and outcomes remained significant in OS-related analyses, even after adjustment for covariates (p < 0.05). In the joint analysis using longitudinal CTCs and CTC-clusters at each time point, we found that the death risk for patients with both unfavorable CTCs and CTC-clusters almost doubled, as compared to those with unfavorable CTCs but without a CTC-cluster . These results from longitudinal analyses were consistent with the baseline analyses and further confirmed that CTC-clusters conferred additional prognostic information to CTC enumeration alone and improved prognostic stratification in patients with unfavorable CTCs.To confirm the additional prognostic value of CTC-clusters, and to clarify whether the non-significant finding in PFS analyses was due to relatively small sample size, we evaluated the associations of longitudinal changes in CTCs and CTC-clusters with clinical outcomes, using the Cox proportional hazards model with time-dependent covariates. 
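The three-level grouping described above reduces to a simple decision rule on the two measurements. A minimal illustrative sketch (the function and variable names are ours; the 5-CTC cutoff and the hazard ratios for death follow the univariate results quoted in the text):

```python
def risk_group(ctc_count, has_cluster):
    """Assign the three-level risk group used in the joint analysis:
    low    -> favorable CTC count (<5 CTCs);
    medium -> unfavorable CTC count (>=5) without a CTC-cluster;
    high   -> unfavorable CTC count (>=5) with a CTC-cluster.
    Note: the study excluded the single patient with <5 CTCs whose
    one CTC was itself a 2-cell cluster; this toy rule maps that
    case to "low" for simplicity."""
    if ctc_count < 5:
        return "low"
    return "high" if has_cluster else "medium"

# Hazard ratios for death reported in the univariate Cox analysis,
# with the low-risk group as the reference.
HR_DEATH = {"low": 1.0, "medium": 3.96, "high": 21.53}
```

The monotone increase of `HR_DEATH` across the three groups is what "improved prognostic stratification" means operationally here.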
The time-dependent CTC- and cluster-related variables, including risk groups, were re-defined for every patient at each time point of blood draw from baseline to first progression or death . In totaWe also plotted the dynamic changes of CTCs and CTC-clusters of individual patients. The prognostic values of CTC-clusters have been reported, but mostly only in breast cancer. Despite a seminal study on CTC-clusters in PCa , no studCTC-clusters are derived from multicellular groups of tumor cells that are held together through plakoglobin-dependent intercellular adhesion . ClusterIn vivo studies have shown that CTC-clusters had a 23- to 50-fold increase in metastatic potential compared to single CTCs in breast cancer . Recent The maintenance of cell-clusters, including cell-cell cohesive interactions, has been observed in the majority of invasive PCa ,36. AcetWe then evaluated the role of CTC-clusters in prognostic stratification. After stratifying mCRPC patients with unfavorable CTC counts according to the presence or absence of CTC-clusters, we noted that OS differed significantly between these two groups B. ComparThe major strengths of this study include the focus on mCRPC that ensures a homogenous study population and the innovative use of time-dependent analyses of longitudinal samples with repeated measurements ,37. Our We recruited men with mCRPC who visited the Sidney Kimmel Cancer Center at Thomas Jefferson University Hospital from March 2018. The enrolled patients had histologically confirmed prostate adenocarcinoma, a progressive disease despite castration levels of serum testosterone (<50 ng/dL), and radiographic metastases according to computed tomography (CT) or technetium-99 bone scan. Patients were excluded if they concurrently had other primary tumors. Demographic data , clinical data , and laboratory data were collected by reviewing medical charts. 
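Re-defining the time-dependent covariates at each blood draw, as described above, is conventionally encoded in counting-process (start, stop] format before fitting a time-varying Cox model. A hedged sketch of that bookkeeping step; the field names and toy draw times are illustrative assumptions, not the study's actual code:

```python
def to_counting_process(draws, followup_end, event):
    """Convert one patient's longitudinal draws into (start, stop] rows.

    draws: list of (time, ctc_count, has_cluster) tuples sorted by time;
    each covariate value is carried forward until the next draw.
    event: 1 if the patient progressed or died at followup_end, else 0;
    the event indicator is attached only to the final interval.
    """
    rows = []
    for i, (t, ctc, cluster) in enumerate(draws):
        stop = draws[i + 1][0] if i + 1 < len(draws) else followup_end
        rows.append({
            "start": t,
            "stop": stop,
            "unfavorable_ctc": int(ctc >= 5),  # 5-CTC cutoff from the text
            "cluster": int(cluster),
            "event": event if i == len(draws) - 1 else 0,
        })
    return rows
```

Each row is one risk interval; a time-varying Cox fitter would then estimate hazard ratios from the stacked rows of all patients.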
Blood samples were collected from each patient at baseline before initiation of a new therapy and at follow-up visits (approximately every 6\u20138 weeks) for CTC and CTC-cluster enumerations. Assessments of CTC/CTC-cluster, PSA, and tumor lesions were repeated. Follow-up imaging tests were conducted following the PCWG3 guideline . This reApproximately 8\u201310 mL of peripheral blood was drawn into a 10 mL evacuated blood draw tube , maintained at room temperature, and processed within 96 h of collection. CTC and CTC-cluster enumerations were conducted using the CellSearch System (Menarini Silicon Biosystems), which consists of the CellTracks Autoprep and the CellSearch CTC kit to immunomagnetically enrich cells expressing the epithelial cell adhesion molecule (EpCAM) and fluorescently label nuclei (DAPI), leukocytes with monoclonal antibodies specific for leukocytes (CD45), and epithelial cells . CTCs were defined as nucleated cells lacking CD45 and expressing cytokeratin (CK+/DAPI+/CD45\u2212) . CTC-cluhttp://www.r-project.org) software packages. Two-sided p values of <0.05 were considered to be of statistical significance. Clinical outcomes analyzed in this study included PFS and OS. PFS was defined as the time from the date of baseline blood draw to the date of radiologic progression ; on boneTo the best of our knowledge, this is the first comprehensive analysis of the role of prognostic stratification by CTC-clusters in mCRPC patients. Our findings suggest the potential of combining CTCs and CTC-clusters as non-invasive means to monitor progression and predict survival in mCRPC and build a premise for in-depth genomic and molecular analyses of CTCs and CTC-clusters."} {"text": "The potential clinical utility of circulating tumor cells (CTCs) in the diagnosis and management of cancer has drawn a lot of attention in the past 10 years. 
CTCs disseminate from tumors into the bloodstream and are believed to carry vital information about tumor onset, progression, and metastasis. In addition, CTCs reflect different biological aspects of the primary tumor they originate from, mainly in their genetic and protein expression. Moreover, emerging evidence indicates that CTC liquid biopsies can be extended beyond prognostication to pharmacodynamic and predictive biomarkers in cancer patient management. A key challenge in harnessing the clinical potential and utility of CTCs is enumerating and isolating these rare heterogeneous cells from a blood sample while allowing downstream CTC analysis. That being said, there have been serious doubts regarding the potential value of CTCs as clinical biomarkers for cancer due to the low number of promising outcomes in the published results. This review aims to present an overview of the current preclinical CTC detection technologies and the advantages and limitations of each sensing platform, while surveying and analyzing the published evidence of the clinical utility of CTCs. The advent of new diagnostic and treatment modalities have improved the 5-year relative survival rate for all types of cancers combined; the survival rate increased substantially from 39% to 70% among white patients and from 27% up to 63% among black patients . It is wSeveral CTC detection platforms have emerged over the past decade, each exploiting a distinctive characteristic of CTCs for sensitive selection and capture. Each technology differs in the biophysical or bimolecular trait leveraged for CTC capture, enrichment, and downstream cellular and molecular characterization, but all of them aim to enumerate CTC and draw clinically relevant conclusions regarding the prevalence of CTC for cancer management. The focus of these methodologies is the detection of CTC clinically, rapidly, and with high sensitivity, selectivity and specificity, while remaining minimally invasive. 
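For label-dependent platforms, the identification step amounts to a marker gate such as the CellSearch-style CK+/DAPI+/CD45\u2212 rule mentioned elsewhere in this collection. A toy sketch, assuming normalized per-channel intensities and a purely illustrative 0.5 threshold:

```python
def is_ctc(cell, threshold=0.5):
    """Classify a segmented object as a candidate CTC using a
    CellSearch-style marker gate: cytokeratin-positive (CK+),
    nucleated (DAPI+), and negative for the leukocyte marker CD45.
    `cell` maps channel names to normalized fluorescence intensities;
    the 0.5 threshold is illustrative, not a vendor specification."""
    return (cell["CK"] > threshold
            and cell["DAPI"] > threshold
            and cell["CD45"] <= threshold)

events = [
    {"CK": 0.9, "DAPI": 0.8, "CD45": 0.1},  # epithelial, nucleated
    {"CK": 0.1, "DAPI": 0.9, "CD45": 0.9},  # leukocyte, rejected
]
ctc_count = sum(is_ctc(c) for c in events)
```

The gate also makes the review's EMT caveat concrete: a cell that downregulates cytokeratin falls below the CK threshold and is missed.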
The differences between CTC and normal blood cells in gene/protein expression, morphology, volume, and biophysical properties had led to the establishment and commercialization of several CTC detection and enumeration devices during the past decade. These commercialized technologies can be categorized based on method of CTC identification as label dependent (affinity-based) or label independent. Then, each category is subdivided into different classes based on its functional detection approach .The most widely used approach for CTC detection and isolation is immune-based detection, whereby antibodies are used to selectively bind cell surface antigens . Tumor cOne of the leading platforms utilizing label-dependent technology is the CellSearch systems (Veridex), which employ EpCAM-coated ferrofluid nanoparticles for the selection of EpCAM-positive CTCs followed by confirmation with immunostaining for the high expression of CK 8, 18 and the absence of CD45 expression . More thImportant correlations between CTC count and cancer relapse have already been observed using the CellSearch system, but the technology has some limitations . Firstly\u00ae is another commercialized positive selection platform that relies on immunomagnetic beads coated with a cocktail of antibodies for the enhanced capture and enrichment of CTCs in breast, prostate, ovarian and colon cancer . A. A44]. Aic beads . In comp sample) . In a hellSearch . It is wllSearch . With dullSearch . In a stllSearch . Each caGiven that CTCs express EpCAM and CK to varying levels, with some displaying the complete downregulation of these proteins, alternative strategies have been developed and tested to isolate and enumerate CTCs based on their biophysical properties . These pMicrofiltration enrichment methods process whole blood through a range of microscale constrictions to capture target cells based on their size or a combination of size and deformability. 
A main limitation of this strategy is reduced recovery efficiency due to the buildup of filtration resistance resulting from the frictional drag on the blood sample as it passes through the filter. The most significant physical difference between CTCs and WBCs is size, with CTCs being larger on average. Several platforms aim at sieving CTCs from a blood sample and have been shown to be more selective and efficient than the CellSearch system. These platforms rely on microfiltration, which involves a single membrane with pore sizes varying between 6 and 9 \u00b5m that is used to capture CTCs while filtering out smaller blood cells. A novel filter-based size exclusion technology called ISET (isolation by size of epithelial tumor cells), developed by RareCell Diagnostics, Paris, France, was capable of isolating CTCs independent of their expression of any particular marker. Using this technique, CTCs were detected in patients with hepatocellular carcinoma, breast carcinoma, and melanoma [69,70]. Another commercialized filtration technology has been used for the enrichment and cultivation of CTCs in vitro; with it, CTCs were detected in 66.7% of patients, with comparable frequencies in patients with operable and inoperable tumors (60% vs. 77.8%) and comparable CTC fractions among patients with metastatic and nonmetastatic tumors (66.7% vs. 66.7%). The CTCs were then cultured in vitro for further downstream applications, thus confirming their viability. Microfluidic chips have also been developed for CTC size-based detection, termed \u201cthree-dimensional microfiltration\u201d, where 3D geometries are constructed and designed to allow the separation of CTCs from background blood components. 
The Parsortix system is an example of such an approach: it has a stair-like architecture that decreases gradually in width down to 4.5 \u00b5m to aid in the capture of larger cells and provide the necessary physical support above and below the captured cells to prevent morphological changes. CTCs larger than the channel width become trapped in the gaps, while smaller cells pass through. Its design maximizes the length of separation and allows reverse flow for the subsequent release of captured CTCs for downstream interrogation and analysis. Density gradient centrifugation is a typical method for segregating whole blood into its constituents based on differences in sedimentation coefficients. As whole blood is dropped into the liquid gradient while being subjected to centrifugation, cells are distributed along the gradient depending on their density: erythrocytes and polymorphonuclear leukocytes with higher cellular density are precipitated at the bottom, whereas the lighter mononuclear leukocytes and CTCs remain at the top. Several density gradient media, such as Ficoll-Hypaque\u00ae and Percoll, are mostly used in biomedical laboratories to recover peripheral blood mononuclear cells. Despite its long history of use in laboratory environments, there are some drawbacks linked to this technique, mainly the possible loss of CTCs that move either to the plasma region or to the bottom of the density gradient due to the formation of aggregates. It is worth noting that this cell loss could be due to the cytotoxicity of the density medium. Interestingly, the Percoll density gradient medium has some advantages over Ficoll, including reduced toxicity as well as a wider density gradient range [74]. Inertial focusing uses the effects of fluid inertia in microchannels of a certain shape to align microparticles and cells at high flow rates. 
When randomly dispersed particles, such as cells in blood, flow through a channel with a particle Reynolds number of one or greater, they are subjected to two counteracting inertial lift forces: a force that directs particles toward the channel walls and another that repels the particles toward the channel centerline. In square or rectangular channels, the combination of these forces leads to the migration of particles to two to four dynamic equilibrium positions located between the channel centerline and the wall. Following focusing, cells are collected in a smaller volume and significantly concentrated in a size-dependent fashion [77]. Another modality that makes use of inertial focusing is the ClearCell FX, developed by Clearbridge Biomedics. ClearCell FX is a spiral microfluidic device that combines inertial focusing with the secondary Dean\u2019s flow resulting from curved channels to trap CTCs from a blood sample; this allows for the proper positioning of CTCs within the channel. This modality can process 7.5 mL of blood in less than 10 min but requires red blood cell lysis prior to sample processing. Khoo et al. tested this system on patients with metastatic breast cancer or NSCLC. CTCs were detected in 100% of the patients, with ranges of 12\u20131275 CTCs/mL (median: 55 CTCs/mL) for breast cancer samples and 10\u20131535 CTCs/mL (median: 82 CTCs/mL) for NSCLC samples. In addition to size-dependent isolation, other physical traits observed in CTCs are exploited to distinguish them from leukocytes. Two innovative approaches to cell separation are dielectrophoresis and direct imaging, which both depend on cell composition, morphology, and phenotype. These two platforms are discussed in the following subsections. Imaging-based detection makes use of specific fluorescent tags to identify and count CTCs in blood. 
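The "particle Reynolds number of one or greater" criterion above can be made concrete with the commonly used scaling Re_p = Re_c * (a / D_h)^2, where Re_c is the channel Reynolds number. The sketch below is not from the review; the fluid and channel parameters are illustrative assumptions chosen only to show why a larger CTC focuses while a smaller red blood cell does not.

```python
# Sketch (illustrative, not from the review): particle Reynolds number
# Re_p = Re_c * (a / D_h)^2, with channel Reynolds number
# Re_c = rho * u_max * d_h / mu. All numeric values below are assumed.

def channel_reynolds(rho, u_max, d_h, mu):
    """Channel Reynolds number: density rho (kg/m^3), peak velocity
    u_max (m/s), hydraulic diameter d_h (m), viscosity mu (Pa*s)."""
    return rho * u_max * d_h / mu

def particle_reynolds(rho, u_max, d_h, mu, a):
    """Particle Reynolds number for a particle of diameter a (m)."""
    return channel_reynolds(rho, u_max, d_h, mu) * (a / d_h) ** 2

if __name__ == "__main__":
    rho, mu = 1000.0, 1e-3       # water-like carrier fluid (assumed)
    d_h, u_max = 100e-6, 1.0     # 100 um channel, 1 m/s peak flow (assumed)
    for label, a in [("CTC (~15 um)", 15e-6), ("RBC (~7 um)", 7e-6)]:
        re_p = particle_reynolds(rho, u_max, d_h, mu, a)
        # Inertial lift dominates (focusing occurs) roughly when Re_p >= 1
        print(f"{label}: Re_p = {re_p:.2f}, focuses: {re_p >= 1}")
```

Under these assumed conditions the ~15 um cell crosses the Re_p >= 1 threshold while the ~7 um cell does not, which is the size selectivity the text describes.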
Several imaging-based CTC detection technologies have been developed and tested, each unique in its sample preparation, detection algorithm, and fluorophores used. Somlo et al. developed a novel fiber optic array scanning technology (FASTcell\u2122) to detect CTCs in patients with locally advanced/inflammatory breast cancer (LABC/IBC), metastatic breast cancer (MBC), and non-small cell lung cancer. The principle of operation of the FASTcell\u2122 technology is based on an array of optical fibers that form a wide collection aperture to allow a wider field of view; this enables the rapid, high-fidelity localization of CTCs identified by conventional markers such as CK, DAPI, and CD45 without the need for enrichment. The FASTcell\u2122 system can scan a sample of blood on a glass slide at a rapid rate of 25 million cells/min without compromising image resolution, enabling the subsequent confirmation of potentially detected CTCs. Clinical studies using FASTcell\u2122 have successfully identified CTCs in 62% of LABC/IBC patients, 82% of MBC patients, and 42% of non-small cell lung cancer patients [79]. Another novel imaging approach that combines flow cytometry with fluorescence imaging for high-throughput analysis has been developed recently. The technology is referred to as ImageStream, and its detection of CTCs depends on the expression of EpCAM, CK, AFP, glypican-3, and DNA-PK, together with an analysis of size, morphology, and DNA content. In a clinical study of patients with hepatocellular carcinoma (HCC), between one and 1642 CTCs were detected in the blood samples of HCC patients (45/69), compared to zero CTCs in the controls (0/31). Dielectrophoresis (DEP) is a liquid biopsy separation method that relies on particles of differing polarizability moving differently under a nonuniform electric field. 
As with label-based enrichment technology, label-free enrichment methodologies have their own advantages and disadvantages. In 2015, a search for \u201ccirculating tumor cells\u201d on the ClinicalTrials.gov website revealed 296 studies involving CTC detection and capture in patients with metastatic disease. Among the clinical uses of CTC detection is cancer prognosis, which is significant for clinical decision making, as prognostic estimation is useful for assessing the risks and benefits of a proposed treatment. Huge efforts have been made to understand the clinical utility of CTCs to predict prognosis and guide therapeutic decisions. We selected seventeen studies to demonstrate and confirm the prognostic, and sometimes diagnostic, implications of CTCs in various types of cancer using different CTC detection technologies, including CellSearch. In these studies, as in most cases, patients with a higher CTC count (unfavorable count) had a worse prognosis, measured primarily by PFS and OS survival estimates. The results of these trials shed light on the real possibilities of CTC counting and gave deeper insights into the potential of using CTCs as liquid biopsies. In one study, CTC-positive patients showed significantly worse OS (p < 0.001) and relapse-free survival (RFS) compared to CTC-negative patients. Even in patients with non-metastatic tumors and lymph node invasion, CTC presence indicated worse OS and RFS. A multivariate analysis identified CTCs as strong independent prognostic indicators of tumor recurrence. The outcome of this study suggests the clinical relevance of CTCs as preoperative prognostic and staging parameters in esophageal cancer. 
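The OS and PFS estimates cited throughout these trials are typically Kaplan-Meier product-limit estimates. As a reference for how such survival curves are formed, here is a minimal Kaplan-Meier estimator; the follow-up times and event indicators below are invented toy data, not values from any cited study.

```python
# Minimal Kaplan-Meier estimator sketch for the OS/PFS survival estimates
# discussed above. Event indicator: 1 = death/progression, 0 = censored.
# The example data are invented for illustration.

def kaplan_meier(times, events):
    """Return a list of (t, S(t)) pairs at each distinct event time."""
    pairs = sorted(zip(times, events))
    n_at_risk = len(pairs)
    surv, s = [], 1.0
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        d = sum(e for tt, e in pairs if tt == t)   # events at time t
        m = sum(1 for tt, _ in pairs if tt == t)   # subjects leaving at t
        if d > 0:
            s *= 1.0 - d / n_at_risk               # product-limit update
            surv.append((t, s))
        n_at_risk -= m
        i += m
    return surv

# Toy cohort: 5 patients, censoring at months 3 and 8
print(kaplan_meier([2, 3, 3, 5, 8], [1, 1, 0, 1, 0]))
```

Comparing two such curves (e.g., CTC-high vs. CTC-low groups) with a log-rank test is the usual source of the p-values quoted in these studies.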
The pretreatment staging of patients with cancer and risk stratification based on clinical and histopathological findings have greatly improved prognostication and treatment allocation for many cancer subtypes; however, there remains much room for improvement. In one study of non-metastatic patients, CTC detection was associated with significantly worse outcomes (p < 0.001). Importantly, multivariate analysis revealed that CTC detection was the strongest prognostic predictor independent of other clinicopathological parameters; in fact, there was no association between primary tumor characteristics or clinicopathological parameters and CTC detection in non-metastatic patients. The prognostic significance of CTCs also holds for patients undergoing a new line of treatment. Using the CellSearch platform, the predictive outcome measures of CTC count and chromogranin A (CgA) were investigated in 138 patients with metastatic neuroendocrine neoplasms receiving a new line of treatment. Of all patients, 51% had received previous anticancer therapy, 41% were receiving long-term SST, and 60% of patients tested positive for CTCs (at least one CTC detected). Fifteen weeks after starting the new line of therapy, patients with a zero CTC count had the highest OS (49.1 months), patients with a CTC count from one to eight had lower OS, and patients with a CTC count above eight had the worst OS. The changes in CTC count significantly correlated with the response to treatment and OS, suggesting their potential use as surrogate markers to direct clinical decision-making. On the other hand, changes in CgA were not significantly associated with survival. In metastatic breast cancer, patients with five or more CTCs/7.5 mL at baseline had significantly worse OS (p < 0.0001) and PFS compared to patients with a CTC count lower than five CTCs/7.5 mL. After the new line of treatment, any increase in CTC count correlated with shortened OS and PFS. When the CTC count was added to the full clinicopathologic predictive models, prognostic accuracy improved according to likelihood ratio (LR) \u03c72 statistical analysis. 
It is worth mentioning that serum tumor markers (CEA and CA15-3) did not show any significant prognostic value, even when added to the clinicopathologic predictive model. In another study, a pooled analysis of 1944 patients with metastatic breast cancer from 20 different studies at 17 European centers validated the prognostic clinical utility of CTCs. Before a new line of treatment, 47% of patients tested positive for CTCs using the CellSearch system (threshold \u2265 5 CTCs/7.5 mL), and patients with elevated CTCs at baseline had an inferior OS. CTCs may also be used to determine therapeutic approaches. The CellSearch system was used to count CTCs expressing IGF-1R in patients treated with monoclonal antibodies against IGF-1R, either alone or in combination with docetaxel. Out of 26 patients, 23 had IGF-1R-positive CTCs and responded better to the combinatorial treatment compared to the remaining three patients whose CTCs were negative for IGF-1R. This study suggests the potential use of CTCs as predictive markers for the choice of administered chemotherapy. Several clinical studies have reported the use of CTCs together with circulating tumor DNA (ctDNA), tumor-derived DNA released into the blood via apoptosis or necrosis of cells shed from primary and metastatic lesions, as a complementary biomarker for treatment assessment. In a phase II clinical trial of erlotinib and pertuzumab in patients with advanced NSCLC, a decrease in CTC count upon treatment was correlated with longer PFS. In addition, patients with EGFR mutations showed a substantial reduction in CTC count throughout treatment. The mutational analysis of EGFR showed that ctDNA had higher sensitivity in detecting mutations compared to CTCs and, upon treatment, a decrease in mutational load suggested a partial response to treatment. 
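The likelihood ratio (LR) chi-squared comparison mentioned above tests whether a nested model (clinicopathologic predictors plus CTC count) fits significantly better than the reduced model. A minimal sketch for the one-added-parameter case is below; the log-likelihood values are invented placeholders, not numbers from the cited trials.

```python
import math

# Sketch of the LR chi-squared test used to ask whether adding CTC count
# to a clinicopathologic model improves fit. Log-likelihoods are invented.

def lr_test_df1(loglik_reduced, loglik_full):
    """LR statistic and p-value for nested models differing by ONE
    parameter (df = 1); for a chi-square with 1 df, sf(x) = erfc(sqrt(x/2))."""
    stat = 2.0 * (loglik_full - loglik_reduced)
    p = math.erfc(math.sqrt(stat / 2.0)) if stat > 0 else 1.0
    return stat, p

stat, p = lr_test_df1(loglik_reduced=-250.0, loglik_full=-243.0)  # placeholders
print(f"LR chi2 = {stat:.1f}, p = {p:.2e}")
```

The df = 1 closed form avoids any dependency on a stats library; for more added parameters one would use a general chi-square survival function instead.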
In the past two decades, our improved understanding of the biology of CTCs and their role in cancer metastasis has opened the door to a myriad of technologies aimed at exploring the clinical potential of CTCs as biomarkers for cancer. CellSearch remains the only platform to have obtained FDA approval, while many others are still in preclinical and clinical trial stages. Antigen-free approaches are showing considerable potential for clinical success, as they overcome the heterogeneous expression of membrane proteins on CTCs. Affinity-based detection platforms are primarily designed to detect and count CTCs based on the expression of epithelial markers, mainly EpCAM and CK, but such markers are proven to be downregulated upon EMT. This inadequacy necessitates either finding a universal epithelial and mesenchymal cell surface selection marker that avoids cross-reactivity with other types of cells in the blood or adopting physical, mechanical, and electrical properties as selection markers for isolating CTCs from their surroundings. In addition, any selection approach may also need to distinguish single cells from clusters. Notably, in patients with metastatic breast cancer and no CTC decrease in response to first-line chemotherapy, switching to second-line chemotherapy did not affect CTC count, nor did it improve OS. Despite all the advances in CTC detection technologies and their diverse capture and enrichment systems, many significant challenges are yet to be met, particularly those with respect to analytical and clinical sensitivities. Adopting such tools into routine clinical practice will demand laborious studies into their analytical validity, clinical validity, and clinical utility. 
These tools, coupled with bioinformatics tools and annotated databases, will provide evidence as to whether detected genomic aberrations in blood may aid in predicting the most suitable cancer therapy on a personalized level."} {"text": "Long noncoding (lnc)RNAs and glycolysis are both recognized as key regulators of cancers. Some lncRNAs are also reportedly involved in regulating glycolysis metabolism. However, glycolysis-associated lncRNA signatures and their clinical relevance in cancers remain unclear. We investigated the roles of glycolysis-associated lncRNAs in cancers.Glycolysis scores and glycolysis-associated lncRNA signatures were established using a single-sample gene set enrichment analysis (GSEA) of The Cancer Genome Atlas pan-cancer data. Consensus clustering assays and genomic classifiers were used to stratify patient subtypes and for validation. Fisher\u2019s exact test was performed to investigate genomic mutations and molecular subtypes. A differentially expressed gene analysis, with GSEA, transcription factor (TF) activity scoring, cellular distributions, and immune cell infiltration, was conducted to explore the functions of glycolysis-associated lncRNAs.Glycolysis-associated lncRNA signatures across 33 cancer types were generated and used to stratify patients into distinct clusters. Patients in cluster 3 had high glycolysis scores and poor survival, especially in bladder carcinoma, low-grade gliomas, mesotheliomas, pancreatic adenocarcinomas, and uveal melanomas. The clinical significance of lncRNA-defined groups was validated using external datasets and genomic classifiers. Gene mutations, molecular subtypes associated with poor prognoses, TFs, oncogenic signaling such as the epithelial-to-mesenchymal transition (EMT), and high immune cell infiltration demonstrated significant associations with cluster 3 patients. 
Furthermore, five lncRNAs, namely MIR4435-2HG, AC078846.1, AL157392.3, AP001273.1, and RAD51-AS1, exhibited significant correlations with glycolysis across the five cancers. Except for MIR4435-2HG, these lncRNAs were distributed in nuclei. MIR4435-2HG was connected to glycolysis, EMT, and immune infiltration in cancers. We identified a subgroup of cancer patients stratified by glycolysis-associated lncRNAs with poor prognoses, high immune infiltration, and EMT activation, thus providing new directions for cancer therapy. The online version contains supplementary material available at 10.1186/s12916-021-01925-6. Cancer is regarded as a type of metabolic disease. Tumor cells can drive certain metabolic pathways to sustain their biological processes for growth and to adapt to complex tumor microenvironments (TMEs). Long noncoding (lnc)RNAs, which are longer than 200 nucleotides, can modulate gene expression through various mechanisms and have been implicated in several oncogenic signaling pathways, such as the cell cycle and immune regulation. In this study, we used pan-cancer data from The Cancer Genome Atlas (TCGA) to identify glycolysis-associated lncRNAs across 33 tumor types. We performed a consensus clustering analysis to classify these glycolysis-associated lncRNAs into distinct clusters. We then identified glycolysis-associated lncRNAs that exhibited key clinical effects in five cancer types. Finally, we explored the potential pathways and functions of glycolysis-correlated lncRNAs in association with oncogenic signaling such as EMT and immune regulation. Genomic data were downloaded from UCSC Xena (https://xena.ucsc.edu/). Raw counts of RNA-Seq data were normalized to counts per million (CPMs). Gene-level copy number values were calculated using GISTIC2.0. Beta values derived from Illumina Human Methylation 450K arrays were used to analyze TCGA DNA methylation changes. LncRNA annotation was retrieved from GENCODE, which contains 17,910 lncRNAs. In total, 15,121 lncRNAs were detected in TCGA RNA-Seq data. 
Genomic profiles, including RNA sequencing (RNA-Seq) data, gene-level copy number, DNA methylation, and patient clinical characteristics of 33 TCGA cancer types, were downloaded from UCSC Xena. We defined an lncRNA as expressed in a certain cancer type if its gene count was >\u200910 in more than 90% of patients. To explore the transcriptome exhibiting distinct expression patterns within glycolysis score-stratified clusters, we used CPM-normalized counts from RNA-Seq data to perform a differentially expressed gene (DEG) analysis with the edgeR package. Genes were considered upregulated or downregulated (downregulation: FC\u2009<\u20090.7 and FDR\u2009<\u200910\u2212\u20094) in cluster 3 versus cluster 2 and in cluster 2 versus cluster 1. In total, 174 upregulated and 49 downregulated genes were identified in low-grade gliomas. Gene candidates were ranked based on log2 fold change to conduct a GSEA against the Hallmark pathway database, and a pathway with an FDR of <\u20090.01 was considered significantly enriched; functional annotation used the DAVID tools (https://david.ncifcrf.gov/tools.jsp). Genomic mutation data of BLCA, LGGs, mesotheliomas (MESOs), pancreatic ductal adenocarcinomas (PAADs), and uveal melanomas (UVMs) were retrieved from UCSC Xena. These mutation data, generated by the Multi-Center Mutation Calling in Multiple Cancers (MC3) project, were derived from exon sequencing of TCGA cancer patient samples, and genes were categorized into binary calls as either nonsilent mutation or wild-type. Fisher\u2019s exact test was performed to investigate genomic mutations that were significantly enriched in glycolysis score-classified cluster 3 or cluster 1 cancer patients. The identified mutations within each cancer type are shown as a heatmap. To compare established molecular subtypes with the glycolysis signature-classified groups, we selected two cancer types (LGG and BLCA) that have been classified into different groups in other studies. 
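The expression filter stated above ("count > 10 in more than 90% of patients") is simple enough to sketch directly. The count matrix below is a toy example; the function names are my own, not from the paper's code.

```python
# Sketch of the paper's lncRNA expression filter: keep an lncRNA in a
# cancer type if its count exceeds 10 in more than 90% of patients.
# The toy count matrix below is invented for illustration.

def expressed_lncrnas(counts, min_count=10, min_fraction=0.90):
    """counts: {lncRNA_id: [per-patient raw counts]}; returns the ids
    passing the 'count > min_count in > min_fraction of patients' rule."""
    kept = []
    for lnc, per_patient in counts.items():
        frac = sum(c > min_count for c in per_patient) / len(per_patient)
        if frac > min_fraction:
            kept.append(lnc)
    return kept

toy = {
    "lncA": [25, 40, 12, 90, 33],   # above threshold in 5/5 patients
    "lncB": [0, 3, 50, 2, 1],       # above threshold in 1/5 patients
}
print(expressed_lncrnas(toy))
```

The same dictionary-of-vectors shape extends naturally to the per-cancer-type filtering the methods describe.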
The lncATLAS database was used to assess the subcellular distribution of lncRNAs. To infer immune cell infiltration based on transcriptome profiles, we followed the method reported by \u015eenbabao\u011flu et al.; immune cell types with a p value of <\u20090.01 were considered to significantly differ among the three clusters. A first-order partial correlation was performed to explore interlinks among lncRNAs, glycolysis scores, and glycolysis-associated genes. The glycolysis score was denoted x, and glycolysis-associated gene expression was denoted y. The first-order partial correlation between x and y conditioned on lncRNA expression z was computed as r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2)(1 - r_yz^2)). We compared the cumulative distributions of Pearson correlation coefficients between glycolysis scores and gene expression, with or without removing the effect of lncRNA expression, by using the Kolmogorov\u2013Smirnov test. To identify gene candidates that were correlated with lncRNA expression and were independent of copy number variations and DNA methylation, we performed a multivariate linear regression to adjust for these covariates. We considered a gene candidate significantly associated with an lncRNA when its absolute correlation coefficient was >\u20090.3 and its FDR was <\u200910\u2212\u20096. Cluster 3 patients in five cancer types\u2014BLCA, LGGs, MESOs, PAADs, and UVMs\u2014demonstrated a significant association with a poor prognosis (p\u2009<\u20090.01). Because the cohorts of PAAD, MESO (n\u2009=\u200984), and UVM (n\u2009=\u200980) were small, we mainly focused on the effects of lncRNAs in BLCA (n\u2009=\u2009406) and LGG (n\u2009=\u2009524). Furthermore, the microarray data of PAAD, MESO, and UVM cancer patients lacked survival information, limiting our validation of the clinical importance of glycolytic lncRNA-stratified clusters. Thus, we focused mainly on LGG and BLCA to validate the clinical significance of the clusters. 
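The first-order partial correlation used here follows the standard formula r_xy.z = (r_xy - r_xz*r_yz) / sqrt((1 - r_xz^2)(1 - r_yz^2)). A self-contained sketch, with toy data vectors in place of the paper's glycolysis scores and expression values:

```python
import math

# Sketch of the first-order partial correlation linking glycolysis
# scores (x) and gene expression (y) while conditioning on an lncRNA (z).
# Data vectors used below are toy values, not from the study.

def pearson(a, b):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def partial_corr(x, y, z):
    """First-order partial correlation of x and y, conditioned on z."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz**2) * (1 - ryz**2))
```

Comparing the raw Pearson r_xy with r_xy.z over many genes is exactly the cumulative-distribution comparison the Kolmogorov-Smirnov step performs.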
We identified 26 gene candidates in BLCA and 45 gene candidates in LGGs by using the developed genomic classifiers, which allowed discrimination within these subgroups based on gene expression levels. Directly correlating lncRNAs with these genes would generate hundreds of correlation results. Relationships between glycolysis and the immune resistance of cancers have recently been reported. Tumor cells utilize glucose and metabolically compete with T cells, impairing mammalian target of rapamycin (mTOR) activity and glycolytic activity in T cells and thereby overriding the capability of T cell-mediated cytotoxicity. In addition, a multi-omics analysis integrating transcriptome, metabolite, and genomic data revealed that UVM cells with BAP1 mutations maintain their energy demand through oxidative phosphorylation and the glycolytic pathway. In our genomic mutation analysis, we identified specific gene mutations that were associated with glycolysis-associated lncRNA-stratified clusters in different cancers. Some of these genes have been reported to be involved in glucose metabolism; for example, in mice bearing RB1-null lung cancer, glycolytic genes such as SLC2A1, PDK1, LDHA, and SLC16A3 were upregulated. In our TF analysis, we determined that the prominent TFs activated in highly glycolytic clusters across BLCA, LGG, PAAD, and UVM were mainly involved in TGF-\u03b2 signaling and SMAD protein complex assembly. SMAD proteins are downstream signal transducers of the TGF-\u03b2 signaling pathway, which functions as an immune-suppressive regulator in cancers. For instance, Smad3-mediated TGF-\u03b2 signaling was reported to suppress the cytotoxic activity of NK cells by blocking CD16-mediated IFN-gamma production. Among the glycolysis-associated lncRNAs we identified, some have been reported to be directly involved in glycolysis signaling. 
For example, plasmacytoma variant translocation 1, which exhibited positive associations with glycolysis scores in six of thirty-three cancer types in our findings, was suggested to function as a microRNA sponge suppressing miR-497 expression, leading to the promotion of HK2 upregulation and osteosarcoma progression. Among the five lncRNAs that were consistently correlated with glycolysis scores across different cancers in our findings, MIR4435-2HG was identified as being highly associated with glycolysis and its correlated genes. By performing a multivariate linear regression adjustment, we also uncovered that immune- and EMT-involved genes are positively correlated with MIR4435-2HG expression. Several studies have demonstrated the oncogenic roles of MIR4435-2HG in cancer processes; for example, MIR4435-2HG promotes gastric cancer cell migration and proliferation through Wnt/\u03b2-catenin signaling. We identified a subgroup of cancer patients stratified by glycolysis-correlated lncRNA signatures with the poorest prognosis, a highly infiltrative immune microenvironment, and EMT activation, thus providing novel avenues for cancer therapy. Additional file 1: Table S1. Glycolysis candidate genes in each cancer type. Additional file 2: Figures S1. Consensus cumulative distribution function (CDF) and delta area for five cancer types. Additional file 3: Table S2. DEGs in LGG. Table S3. DEGs in BLCA. Additional file 4: Table S4. Gene candidates in TCGA BLCA data. Table S5. Gene candidates in TCGA LGG data. Additional file 5: Table S6. Number of tumor tissues from TCGA pan-cancer data. Additional file 6: Figures S2. Flowchart for filtering cancer types and selecting glycolysis-associated long non-coding (lnc) RNAs. Additional file 7: Figures S3. Stratification of patients into different clusters by a consensus clustering analysis. Additional file 8: Table S7. Glycolytic score-associated lncRNAs in UVM. Additional file 9: Figures S4. 
Flow chart for establishing genomic classifiers. Additional file 10: Figures S5. Flow chart for transcription factor activity analyses. Additional file 11: Figures S6. Negative correlation of lncRNA-MYC activity pairs. Additional file 12: Figures S7. Positive correlation of lncRNA-MYC activity pairs. Additional file 13: Figures S8. All high-resolution images in the results section."} {"text": "Wisdom is a multi-component trait that is important for mental health and well-being. In this study, we sought to understand gender differences in relative strengths in wisdom. A total of 659 individuals aged 27\u2013103 years completed surveys including the 3-Dimensional Wisdom Scale (3D-WS) and the San Diego Wisdom Scale (SD-WISE). Analyses assessed gender differences in wisdom and gender\u2019s moderating effect on the relationship between wisdom and associated constructs including depression, loneliness, well-being, optimism, and resilience. Women scored higher on average on the 3D-WS but not on the SD-WISE. Women scored higher on compassion-related domains and on SD-WISE Self-Reflection. Men scored higher on cognitive-related domains and on SD-WISE Emotion Regulation. There was no impact of gender on the relationships between wisdom and associated constructs. Women and men have different relative strengths in wisdom, likely driven by sociocultural and biological factors. Tailoring wisdom interventions to individuals based on their profiles is an important next step. Wisdom is one of six core virtues shared across cultures. The burgeoning positive psychiatry subfield focuses on improving outcomes like quality of life and well-being, to which wisdom is closely tied. In this article, we examine the association between gender and wisdom. Some reviews and theoretical work have pronounced that wisdom should be \u201candrogynous\u201d. The direction of the differences identified was mixed. 
In three of the nine studies where gender differences were found, women scored higher on overall wisdom. Examination of related psychological constructs yields support for gender differences in a subset of wisdom components. A meta-analysis of the 24 character traits making up the six virtues, including wisdom, found differences between women and men in 17 traits, although 13 of these were very small in size. Gender norms, differential approaches to upbringing and socialization, and other sociocultural factors may support the increased development of some areas of wisdom in women compared to men, and vice versa. Related work finds that women and men report some variance in how they conceptualize wisdom: women are somewhat more likely to endorse an \u201cintegrative\u201d model, while men are somewhat more likely to endorse a \u201ccognitive\u201d model. Thus, the existing work in this area argues for a gender-neutral wisdom construct, but the empirical studies show mixed findings regarding current gender-based differences in wisdom, which may exist for sociocultural and biological reasons. Identifying and characterizing gender differences is important because it will better illuminate the wisdom construct as it is commonly measured today, and whether it differs from the gender-neutral goal originally sought. It also allows for the consideration of individualized pathways to wisdom, which the existence of wisdom subdomains, alongside previous discussion regarding the individual development and pursuit of wisdom, suggests may be worthwhile. Therefore, in this study, we examined gender differences in a relatively large community-based sample across the adult lifespan using two validated rating scales. 
Based on the existing literature, our first hypothesis was that compassion-related domains, such as the Affective or Compassionate dimension of the 3-Dimensional Wisdom Scale (3D-WS) and Pro-Social Behaviors and Acceptance of Diverse Perspectives on the San Diego Wisdom Scale, would be higher among women. The second hypothesis was that men would score higher on cognitive-related domains like the Cognitive dimension of the 3D-WS and Decisiveness on the San Diego Wisdom Scale. The third hypothesis was that women would score higher on wisdom total scores. Given significant differences between women and men in this same sample in age, income, education, and marital status, these variables were also included in the model. We conducted two sets of exploratory analyses: first, to test whether the magnitude of gender differences in wisdom would vary between more wise and less wise individuals; and second, to test whether gender moderates the relationships between wisdom and associated constructs. This study was approved by the University of California, San Diego Human Research Protections Program (#171635). The participants in this study were recruited from the UCSD Successful Aging Evaluation (SAGE) study, an ongoing project which has been described in previous work. Two measures of wisdom with good to excellent psychometric properties, including reliability, convergent validity, and divergent validity, were included in this study. The first was the 39-item 3D-WS, which has three subscales capturing the Cognitive, Reflective, and Affective (Compassionate) dimensions of wisdom. The second was the San Diego Wisdom Scale (SD-WISE). Gender was self-reported with two categorical options: \u201cmale\u201d or \u201cfemale.\u201d This approach allowed for examining differences between people identifying as men and women, the two most frequently reported genders, though it has limitations. Additional measures were included to examine associated constructs: the Center for Epidemiologic Studies Depression Scale (CES-D), the UCLA Loneliness Scale (UCLA-3), the SF-36 Mental Well-being scale, the Life Orientation Test-Revised (LOT-R) for optimism, and the Connor-Davidson Resilience Scale (CD-RISC). Cohen\u2019s d was calculated for each gender effect. 
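Cohen's d, the effect-size measure reported for each gender comparison in these analyses, can be sketched with a pooled standard deviation. The sample values below are toy numbers, not study data.

```python
import math

# Sketch of Cohen's d with a pooled standard deviation, the effect-size
# measure used for the gender comparisons. Toy inputs only.

def cohens_d(group1, group2):
    """Cohen's d for two independent groups (positive when group1 > group2)."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)   # sample variances
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd
```

By the usual conventions, |d| around 0.2 is small and around 0.5 is medium, which matches how the effect sizes in the results are described.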
For the first set of exploratory analyses, a median split was calculated for the 3D-WS total score, the SD-WISE total score, and the subscale scores of each measure. Separate linear models were performed to identify how the interaction between gender and the dummy-coded median-split variable affected the relationship between gender and each wisdom variable. For the second set of exploratory analyses, separate linear models were performed to identify how the interaction between gender and each wisdom total score and subscale score impacted the CES-D (Depression), UCLA-3 Loneliness, SF-36 Mental Well-being, LOT-R Optimism, and CD-RISC Resilience scores. These analyses were adjusted for family-wise error by using the false discovery rate (FDR) correction. Linear models were performed to examine the relationship between wisdom (the dependent variable) and gender (the independent variable). Income, education, age, and marital status were also included as covariates because there were significant differences between women and men in this sample in those demographic areas. One model was calculated for the 3D-WS total score, the SD-WISE total score, and the subscale scores of each measure. A detailed demographic comparison of men and women in the sample is presented in the demographic table. On the 3D-WS, the mean score among women was significantly higher than among men on the Affective or Compassionate Dimension subscale (p = 0.008), with a medium effect size (Cohen\u2019s d = 0.481). Men scored significantly higher on the Cognitive Dimension (p = 0.019), with a small effect size (Cohen\u2019s d = 0.184). Women also had a higher 3D-WS total score (p = 0.01), with a small effect size (Cohen\u2019s d = 0.292). 
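The FDR adjustment mentioned above is conventionally the Benjamini-Hochberg step-up procedure. A minimal sketch is below; the p-values are toy inputs, and the function name is my own rather than from the study's analysis code.

```python
# Sketch of the Benjamini-Hochberg false discovery rate (FDR) correction
# applied to the exploratory analyses. Toy p-values only.

def fdr_bh(pvals):
    """Return BH-adjusted p-values in the same order as the input."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices by ascending p
    adjusted = [0.0] * m
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotonicity
    for rank_from_end, i in enumerate(reversed(order)):
        rank = m - rank_from_end          # 1-based rank of this p-value
        prev = min(prev, pvals[i] * m / rank)
        adjusted[i] = prev
    return adjusted

print(fdr_bh([0.01, 0.04, 0.03, 0.20]))
```

Adjusted values below the chosen threshold (e.g., 0.05) are then declared significant, controlling the expected proportion of false discoveries across the family of tests.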
There were significant differences between women and men in all six subscales of the SD-WISE, but no significant difference in the overall SD-WISE score, indicating that the relative strengths and weaknesses of each group balanced out in the overall score. Median splits created "high wisdom" and "low wisdom" categories for each wisdom measure and subdomain to identify whether there was variability in the size of gender effects by group. For the 3D-WS Reflective Dimension, people in the "low wisdom" category had a larger gender difference, such that women scored higher in wisdom than men, whereas there was no significant difference between women and men in the "high wisdom" group, p = 0.17. There were no other differences between "high wisdom" and "low wisdom" in any other categories. Exploratory analyses examined how gender may moderate the relationship between wisdom and measures of well-being. Main effects supported the relationship between these variables and wisdom regardless of gender; however, gender did not moderate the relationship between wisdom and any of the measures of well-being, including depression, loneliness, mental well-being, optimism, and resilience. This study on gender differences in wisdom found evidence that women and men differed on some components of wisdom. However, these differences were not uniform, but rather varied based on subdomains of wisdom, as hypothesized. Women scored higher on several subdomains associated with social connection and compassion, including the 3D-WS Affective or Compassionate Dimension and the SD-WISE Acceptance of Diverse Perspectives, Pro-Social Behavior, and Social Advising subscales. We did not hypothesize differences in the reflection subdomains; women also scored higher on the SD-WISE Insight subscale, and women in the lower half of the median split scored higher than men in the 3D-WS Reflective Dimension.
On the other hand, men scored higher on the SD-WISE Emotional Regulation and Decisiveness subscales and the Cognitive Dimension of the 3D-WS. Our third hypothesis was confirmed by only one of the wisdom measures: total wisdom scores were higher among women on the 3D-WS but not on the SD-WISE. The SD-WISE has six components and offers a more detailed examination of wisdom, with relative strengths in each gender being neutralized by relative weaknesses in others. Only one measure, the 3D-WS Reflective Dimension, supported theory and evidence that there are larger gender differences among people with less wisdom. We also did not find any evidence that gender moderated the relationship between wisdom and measures of well-being, including depression, loneliness, mental well-being, optimism, and resilience. Although some of these findings were unanticipated, others were well aligned with past research in this and related areas. Acceptance of Diverse Perspectives and Social Advising both require perspective-taking and interest in the well-being and values of others, even when they are misaligned with one's own. It seems likely that both of these domains are related to compassion, a domain in which women reliably score higher. There are two potential causes for the gender differences we identified. One is biological. In this regard, it is important not to conflate sex and gender, but rather to discuss the potential impact of biological processes, including sex, on differences in wisdom. The finding of greater empathy and compassion toward others in women has been reported across time periods and across cultures. Sex-based differences in oxytocin receptor gene polymorphism may lead to increased empathy in women.
These sociocultural processes are the second, and likely the most influential, cause of the observed gender differences in wisdom, including social expectations, gender norms, and how wisdom-relevant behaviors are differentially reinforced between boys/men and girls/women by parents, teachers, peers, and society at large. Boys and men tend to be socialized toward behaviors including toughness and leadership, which may translate into being more decisive and in control of emotions, whereas girls and women tend to be socialized toward behaviors including warmth and caretaking, which may translate into pro-social behaviors including being compassionate and accepting of diverse people and ideas. It has been argued that wisdom should be a broadly gender-neutral construct. Therefore, it seems to us that variance by subgroup in strengths and weaknesses at the wisdom subdomain level is not a flaw; it may not indicate gender bias so much as it allows for a diversity of paths toward wisdom. However, given the alignment between our findings and what has previously been noted to be valued as wise behavior by women and men, we wonder whether these differences might lessen as the divide in societal expectations of women and men fades. We would also note that these effects were small, indicating that although differences existed, women and men still had meaningful overlap as groups in their wisdom scores. Finally, it seems that, at the macro level, the SD-WISE measure does not show evidence of an overall gender difference, unlike the 3D-WS. The subcomponents included in a measure of wisdom will of course impact gender and other group differences. These two wisdom measures were developed conceptually and tested psychometrically, using theoretical and empirical findings on layperson and expert definitions of wisdom, without focusing on gender balance.
We did not collect any biomarkers relevant to gender or sex, so assessment of the potential impact of hormones or other factors was not possible. The study sample was predominantly white and came from an urban county in the United States; thus, the findings may not apply to other racial/ethnic groups and different cultural regions, and study of other groups is important. We should point out, however, that in a recent study using the SD-WISE and the ULS loneliness scale, we found that the constructs of wisdom and loneliness seemed to be largely similar in a San Diego sample of middle-aged and older adults and an age-comparable sample from rural Italy. The cross-sectional design prevents assessment of differences in how wisdom develops and evolves over the lifespan. Future longitudinal work will be able to fully describe this development by gender, and the influence of other important factors, including those associated with mental health and well-being. Longitudinal work may also be able to examine some of the hypotheses we and others have considered in regard to why and how gender differences occur. Participants self-identified their genders, and only binary options were offered, so there were no options for people who are non-binary or of other genders. Understanding the wisdom profiles of non-binary people in future studies would be particularly enlightening in understanding how wisdom develops among people who may be less bound to traditional gender norms. We did not collect data to identify how many participants were transgender, which would aid in understanding whether transgender women and men have unique wisdom profiles relative to cisgender women and men. On average, both women and men have strengths in wisdom subdomains that can be capitalized upon to promote their well-being. Helping people identify and lean on these strengths may promote related aspects of well-being, including social connection and happiness.
We also find that both groups have relative weaknesses that may benefit from individual and societal intervention to improve well-being and promote healthy living, including the growing set of positive psychiatry interventions. Consistent with past literature, we find a difference in compassion between women and men. There are a number of compassion interventions, including compassion-focused therapy. The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. The studies involving human participants were reviewed and approved by the University of California, San Diego Human Research Protections Program. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements. ET, BP, EL, and DJ conceptualized the study. ET completed the initial analyses with conceptual support from DJ and conceptual and pragmatic support from T-CW, MT, and XT, wrote the first draft of the manuscript, and made the first draft of the tables and figures. RD provided database management and analytic support. All authors contributed to revisions of the manuscript, tables, and figures. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers.
Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} {"text": "The health workforce is a vital aspect of health systems, both essential in improving patient and population health outcomes and in addressing contemporary challenges such as universal health coverage (UHC) and sustainable development goals (SDGs). There is an increasing body of research that indicates that if the health workforce were to be redesigned from the ground up\u2014based on population needs\u2014we would see a very different configuration of the health workforce. This makes us wonder how one could design or develop innovative health workforce solution(s) for the future in order to make the health workforce more responsive to population needs.The 21st century presents several challenges to the health workforce and the health professions that require thoughtful consideration and analysis. Health inequalities continue to exist both within and across countries, especially affecting vulnerable and disadvantaged groups. Disease patterns are changing, with a rise in chronic conditions and non-communicable diseases, the COVID-19 pandemic notwithstanding. Increased life expectancies also present us with the challenge of meeting care provisions for an ageing population. Workforce shortages, geographic maldistribution, and international migration are omnipresent.Health workforce solutions have been diverse and generally dependent on condition, context, or country-specific scenarios. New health occupations, as well as reforming the scopes of practice of existing occupations, have been widely debated as solutions. Of importance has been how different health personnel groups can work collaboratively as a team, and at different levels of care\u2014primary, secondary, and tertiary. 
Models of care specific to population groups as well as health conditions, and health strategies, are emerging, with varied success. In this special issue of the International Journal of Environmental Research and Public Health, we have brought together research that debates and provides innovative health workforce solutions directed towards meeting population needs, mainly through integrated solutions or models of care. We have also included papers that cover challenges at an education or regulatory level. This special issue, entitled "The Future Health Workforce: Integrated Solutions and Models of Care", features a compelling range of research that spreads across the health professions, including medicine, nursing, dentistry, and allied health. This edition embraces quantitative as well as qualitative research approaches, as well as methodological pluralism and a rapid review. A hallmark of each article is methodological rigor, and we are particularly pleased to have included research conducted with health workforce groups dealing with different conditions in a range of contexts and countries, including the USA, the UK, Canada, Australia, Sweden, South Korea, Japan, China, and Brazil. This special issue features 13 papers. The first research paper, from a multidisciplinary team of researchers based in the Rural Clinical School in the Faculty of Medicine at the University of Queensland in Australia, provides a theory that helps us understand the factors that affect doctors in choosing a generalist or specialist medical career. A survey of medical leaders (n = 295) in three categories of Chinese hospitals revealed that the informal and formal education received by these leaders has not been effective in developing the required medical and leadership management competencies. This provides a basis for recommendations regarding health system and higher education strategies to improve the management competencies of clinical leaders in China.
The second article reports research conducted with the health services management workforce in China. We then turn to a thematic analysis of Twitter data and newspapers extracted through a search for new forms of teamwork in the health and social care of older people in response to the COVID-19 pandemic. Catherine Cosgrave's study addresses chronic workforce shortages and unmet health care needs in rural and remote communities in Australia. The next paper in this special issue reports a national cross-sectional study of faculties supporting general medical practitioners (GPs). The following paper takes us to research conducted on an innovative model of workers' healthcare assistants by a group of Portuguese researchers in Brazil; this study (n = 965) showed a sustainable return on investment, covering workers with heart disease and diabetes. The study concludes that this model of workers' healthcare assistants is capable of enhancing workers' health in companies, while reducing costs for employers and improving workers' quality of life within the organisation. Luis Miguel Dos Santos has investigated reasons behind the shortage of public health, social work, and psychological counselling professionals who can provide multilingual services to minority groups and foreign residents in South Korea. The next paper in this special issue investigated the future of careers for public-health professionals with training in climate change, based on an analysis of 16 years' worth of job postings and a survey of prospective employers. Innovative health workforce solutions were needed for the Swedish mental health workforce due to the recent refugee crisis. Sandra Gupta and colleagues from Uppsala University, Sweden, explored the experiences of mental health workers towards new training solutions to effectively manage unaccompanied refugee minors.
The next paper, from Sierras-Davo and colleagues based in Spain and Greece, discusses how to transform the future healthcare workforce across Europe through improvement science. Yuki Ohara and colleagues based in Japan discuss an interesting paper on the job attractiveness and job satisfaction of dental hygienists, based on the 2019 Japanese dental hygienists survey. A very interesting commentary by Franklin et al., titled Broken Promises to the People of Newark, is featured as the penultimate article. We round out this special issue with a rapid review of contemporary techniques and practices in oral health workforce modelling, conducted by a team of researchers from England and Australia."} {"text": "Chronic hypoxia and cycling hypoxia are two types of hypoxia occurring in malignant tumors. They are both associated with the activation of hypoxia-inducible factor-1 (HIF-1) and nuclear factor κB (NF-κB), which induce changes in gene expression. This paper discusses in detail the mechanisms of activation of these two transcription factors in chronic and cycling hypoxia and the crosstalk between both signaling pathways. In particular, it focuses on the importance of reactive oxygen species (ROS), reactive nitrogen species (RNS) together with nitric oxide synthase, acetylation of HIF-1, and the action of MAPK cascades. The paper also discusses the importance of hypoxia in the formation of chronic low-grade inflammation in cancerous tumors. These factors induce angiogenesis and recruit various cells into the tumor niche, including neutrophils and monocytes which, in the tumor, are transformed into tumor-associated neutrophils (TAN) and tumor-associated macrophages (TAM) that participate in tumorigenesis.
Finally, we discuss the effects of cycling hypoxia on the tumor microenvironment, in particular on the expression of VEGF-A, CCL2/MCP-1, CXCL1/GRO-α, CXCL8/IL-8, and COX-2 together with PGE2. The growing knowledge of tumors indicates the significance of the tumor microenvironment, a collection of factors that act on cancer cells in the tumor. These factors include tumor-associated cells along with other components. An important aspect of hypoxia in the tumor microenvironment is chronic low-grade inflammation. The role of inflammation supports the fight of the immune system against pathogens and is an element strengthening the anti-tumor response. This review expands on the mechanisms of the activation of hypoxia-inducible factors (HIFs) and nuclear factor κB (NF-κB) presented in our previous reviews on the effects of hypoxia on the CC and CXC chemokines. The intense division of cancer cells results in the proliferation of tumor tissue. This process does not go hand in hand with angiogenesis, i.e., the formation of new blood vessels. In this way, due to the low availability of blood vessels, the tumor has areas with chronically reduced oxygen concentration. This microenvironment is called chronic hypoxia. The most important and best-known proteins activated in hypoxia are the three hypoxia-inducible factors. The first two, HIF-1 and HIF-2, are responsible for the transcription of genes induced by hypoxia, while HIF-3, in addition to inducing gene expression, also decreases the activity of HIF-1 and HIF-2. All three HIFs are composed of two subunits, alpha and beta. The HIF-β subunits, also known as aryl hydrocarbon receptor nuclear translocators (ARNT), are not regulated by changes in oxygen, although a study on high-risk multiple myeloma cells shows that chronic hypoxia increases HIF-1β expression via NF-κB.
In contrast to the HIF-β subunits, the expression levels of the HIF-1α, HIF-2α, and HIF-3α subunits are tightly regulated by changes in oxygen concentration through proteolytic degradation and transcriptional regulation. In addition, HIF-3α expression is upregulated by HIF-1 and HIF-2. In normoxia, HIF-α undergoes hydroxylation on proline residues in the N-terminal oxygen-dependent degradation domain (NODD) and C-terminal oxygen-dependent degradation domain (CODD) by three isoforms of prolyl hydroxylase (PHD), oxygen-dependent enzymes that hydroxylate Pro402 and Pro564 of HIF-1α, and Pro405 and Pro531 of HIF-2α. This hydroxylation marks HIF-α for polyubiquitination and proteasomal degradation. MAPK cascades, including p38, as well as Akt/PKB, also modulate this signaling, including through phosphorylation of IKKβ. After activation, NF-κB forms a complex with its coactivators; these complexes include PHD2 and PHD3. HIF-1 and HIF-2 also increase the expression of p65/RelA NF-κB in macrophages. Hypoxia is associated with a decrease in PHD1 activity, which leads to a decrease in the hydroxylation of IKKβ. Thanks to the aforementioned mechanisms, there is no simultaneous response of the cell to hypoxia and to pro-inflammatory factors. Nevertheless, NF-κB is activated in chronic hypoxia, leading to an increase in the expression of some inflammatory genes. The signaling pathways activated during chronic hypoxia are very well understood. Hydroxylation of HIF-α is reduced, which results in an accumulation of these subunits in the cell. Phosphorylation by MAPK kinases, changes in acetylation, and the influence of ROS are also responsible for the increase in HIF-α stability during chronic hypoxia. There is also an activation of NF-κB, which increases the expression of HIF-1α.
Ultimately, chronic hypoxia occurs in 23 to 54% of the tumor area, depending on the tumor model and the adopted threshold oxygen level below which hypoxia is defined. In the initial stages of tumor growth, the intense proliferation of tumor cells is not matched by the development of blood vessels that supply cells inside the tumor with nutrients and oxygen. Therefore, chronic hypoxia occurs inside the tumor. In cycling hypoxia, ROS activates nuclear factor erythroid 2-related factor (Nrf2). Cycling hypoxia also causes changes in HIF-1α acetylation: it results in decreased expression of the HDAC3 and HDAC5 proteins, but not the other HDACs. In cycling hypoxia, ROS also activates NF-κB, via TAK1 and activation of the entire NF-κB activation pathway. Chronic hypoxia is accompanied by an activation of NF-κB, which increases the expression of HIF-1α and some pro-inflammatory genes. Both types of hypoxia also increase vascular endothelial growth factor (VEGF)-A expression. This effect depends on the cancer cell line. VEGF-A expression in the tumor cell is increased much more under chronic hypoxic conditions than in cycling hypoxia, as has been shown in melanoma WM793B cells and prostate cancer PC-3 cells. VEGF-A is one of the best described pro-angiogenic factors in a tumor. The main mechanism of the proangiogenic properties of CCL2/MCP-1 is the recruitment of monocytes into the tumor niche, which are transformed into TAM; CXCL1/GRO-α and CXCL8/IL-8 act in part through the recruitment of neutrophils and the induction of MMP expression. PGE2 (prostaglandin E2), the product of COX-2 activity, is also a pro-angiogenic factor, although not directly.
It participates in angiogenesis and lymphangiogenesis by increasing the expression of various angiogenic and lymphangiogenic factors such as VEGF-A, VEGF-C, basic fibroblast growth factor (bFGF), platelet-derived growth factor (PDGF), and endothelin-1. PGE2, through its action on anti-tumor cells, is also one of the mechanisms of cancer immunoevasion. It inhibits the anticancer function of NK cells and dendritic cells and enhances the pro-cancer function of M2 macrophages and regulatory T cells (Treg). The aforementioned pro-inflammatory factors induced by cycling hypoxia also act on tumor-associated cells. For example, they recruit various cells into the tumor niche: CCL2/MCP-1 is a TAM-recruiting factor. Cycling hypoxia is a feature of all solid tumors. Cycling hypoxia is associated with elevated COX-2 expression and consequently an increase in PGE2 production. Additionally, cycling hypoxia increases CCL2/MCP-1 production in the tumor. In addition to CCL2/MCP-1, cycling hypoxia increases the expression of CXCL1/GRO-α and the production of CXCL2. As already mentioned, PGE2 has no direct angiogenic effect, but it increases the expression of pro-angiogenic factors; inhibiting PGE2 production therefore results in decreased expression of other pro-angiogenic factors. Another option is to improve anti-cancer anti-angiogenic therapy, e.g., by using bevacizumab, an anti-VEGF-A monoclonal antibody. Another possibility is to combine bevacizumab with a CXCR1/CXCR2 dual inhibitor. The vast majority of published in vitro experiments on hypoxia in cancer relate to chronic hypoxia. Most of the available work has not investigated the effect of cyclic changes in oxygen concentration on tumor cells.
For this reason, this type of research model does not reflect the actual state of the cancerous tumor, in which cycling hypoxia affects a considerable part of the tumor. In this way, the results of studies showing the effect of chronic hypoxia only reflect the situation in one area of a tumor. For this reason, it is advisable that each study on hypoxia in a tumor should use an in vitro model that includes cycling hypoxia."} {"text": "Medulloblastoma (MB) is an aggressive malignant tumor of the posterior fossa of the CNS that mainly affects children younger than 15 years of age. It is uncommon in the adult population compared to children. Any adult patient presenting with a cerebellar mass must be evaluated with brain tissue biopsy to rule out MB. Our patient is a 27-year-old female who presented with sudden onset of frontal headache and was diagnosed with MB. Medulloblastoma (MB) is an aggressive neoplasm of embryonal origin. It is most commonly located in the vermis of the cerebellum and commonly affects children. A 27-year-old female with a past medical history of migraine headache presented to the ED with a complaint of sudden-onset frontal headache, different from her usual migraine headache. The headache was frontal, worse at night and in the morning, and not relieved with over-the-counter acetaminophen. The physical exam was normal, and vitals were stable. A CT scan of the head without intravenous contrast showed a large right cerebellar mass measuring 1.4 × 2.2 × 1.5 cm with a midline shift in the posterior fossa. CSF cytology was negative for malignant cells. According to National Comprehensive Cancer Network (NCCN) guidelines, the treatment protocol included maximal safe resection, followed by adjuvant therapy involving chemotherapy and radiation.
The plan was to start the patient on adjuvant craniospinal radiotherapy with concurrent vincristine for a period of eight weeks, followed by maintenance multiagent chemotherapy including cisplatin, lomustine, and vincristine for eight cycles. MB is most commonly located in the vermis of the cerebellum in children but can involve the lateral hemispheres of the cerebellum in adults. It occurs more frequently in men than women. Patients with MB classically present with clinical signs and symptoms of increased intracranial pressure, such as night and early-morning headache, nausea, vomiting, confusion, and blurring of vision. Tumors located in the midline can manifest as gait ataxia or truncal instability, whereas those located in the lateral cerebellar hemisphere cause limb clumsiness or incoordination. Apart from the physical exam, CT and MRI of the brain are needed to support the diagnosis. MRI shows iso- to hypointensity on T1-weighted images and hyper- to hypointensity on T2-weighted images. MB is classified into several variants by the WHO classification of brain tumors, which divides the tumor based upon histopathologic criteria. The different variants of MB are classic, desmoplastic/nodular, desmoplastic with extensive nodularity, and large cell or anaplastic. Among these variants, classic MB is the most common variant among both children and adults (70-80%) and extensive nodularity is the least common one (3%). As MB in adults is rare, there is no clear guideline for treatment in adults; treatment is based on pediatric guidelines. In a high-risk patient with positive CSF cytology, combination treatment with standard-dose craniospinal radiotherapy with a posterior fossa boost followed by multiagent maintenance chemotherapy is considered effective. With the evolving treatment modalities of MB, the prognosis is variable in both adults and children.
In patients who received postoperative cranioradiation therapy, the prognosis was favorable for adults compared to children. Adult patients who have group 4 tumors are found to have poor prognoses compared to children. The desmoplastic variant has a better prognosis compared to the classic variant. Patients who have positive CSF cytology are found to have an increased rate of relapse and poor outcomes. Any patient with spinal seeding at presentation also has a poor prognosis. MB is the most common brain tumor in children but is rare in adults. Every patient with a posterior fossa mass must undergo a biopsy of the mass with histopathological and immunohistochemical examination to confirm the diagnosis, as radiographic imaging alone could be inadequate."} {"text": "Oryza species are the natural reservoir of favorable alleles that are useful for rice breeding. To systematically evaluate and utilize potentially valuable traits of new QTLs or genes for Asian cultivated rice improvement from all AA genome Oryza species, 6,372 agronomic trait introgression lines (ILs) from BC2 to BC6 were screened and raised based on variations in agronomic traits, by crossing 170 accessions of 7 AA genome species and 160 upland rice accessions of O. sativa as the donor parents with three elite cultivars of O. sativa, Dianjingyou 1 (a japonica variety), Yundao 1 (a japonica variety), and RD23 (an indica variety), as the recurrent parents, respectively. The agronomic traits, such as spreading panicle, erect panicle, dense panicle, lax panicle, awn, prostrate growth, plant height, pericarp color, kernel color, glabrous hull, grain size, 1,000-grain weight, drought resistance and aerobic adaptation, and blast resistance, were derived from more than one species. Further, 1,401 agronomic trait ILs in the Dianjingyou 1 background were genotyped using 168 SSR markers distributed across the whole genome.
A total of twenty-two novel allelic variations were identified to be highly related to the traits of grain length (GL) and grain width (GW), respectively. In addition, allelic variations for the same locus were detected from different donor species, which suggests that these QTLs or genes are conserved and that the different haplotypes of a QTL (gene) are valuable resources for broadening the genetic basis of Asian cultivated rice. Thus, this agronomic trait introgression library from multiple species and accessions provides a powerful resource for future rice improvement and genetic dissection of agronomic traits. Rice improvement depends on the availability of genetic variation, and AA genome Oryza species are an important source of such variation. Rice is one of the most important staple crops for almost half of the world's population. The Food and Agriculture Organization of the United Nations predicts that rice yield will have to be increased 50 to 70% by 2050 to meet human demand, which indicates that rice yield is still central to maintaining global food security. The genus Oryza contains twenty-two wild species and two cultivated rice species that represent 11 genomes: AA, BB, CC, BBCC, CCDD, EE, FF, GG, HHJJ, HHKK, and KKLL. The two cultivated species (O. sativa and O. glaberrima) are classified into the AA genome. Asian cultivated rice (O. sativa L.) was domesticated from the wild species O. rufipogon thousands of years ago. In an IL, the donor genome is represented by different segments in the genetic background of an elite variety. Genetic background noise in ILs can be eliminated significantly, so ILs can be evaluated for the improvement of any trait over the recurrent parent in rice breeding, and also used for QTL mapping and gene discovery as single Mendelian factors; in addition, potentially favorable genes hidden in the background of related species can be expressed in the genetic background of cultivated rice.
AA genome species are distributed in natural and wild environments and contain many useful alleles for improving rice yield and resistance to biotic and abiotic stresses. In this study, to explore and utilize wild relatives in rice improvement, we systematically introduced foreign segments from eight different AA genome species (O. longistaminata, O. barthii, O. glumaepatula, O. meridionalis, O. nivara, O. rufipogon, O. glaberrima, and upland rice of O. sativa) into three elite, highly productive O. sativa varieties. A total of six thousand three hundred and seventy-two agronomic ILs in three different backgrounds were screened and developed based on repeated evaluation and selection of agronomic traits. One thousand four hundred and one of the 6,372 agronomic ILs in the Dianjingyou 1 background were used to analyze genotypes and discover novel alleles for grain size. Thus, this agronomic introgression library provides a powerful resource for future rice improvement and genetic dissection of agronomic traits. Three elite cultivars, Dianjingyou 1 (a japonica variety), Yundao 1 (a japonica variety), and RD23 (an indica variety), were used as the recurrent parents. Except for the accession of O. longistaminata, the donor accessions were crossed with Dianjingyou 1 as the recurrent parent. A total of two hundred and twenty-six accessions as the donor parents, except for O. longistaminata and O. glaberrima, were used to cross with the recurrent parent Yundao 1. All the F1 plants were used as female parents and backcrossed to their respective recurrent parents to produce the BC1F1 generation. More than 200 BC1F1 seeds were generated for each of the combinations. Individuals with moderate heading dates were selected to backcross with the recurrent parents, and about 200 BC2F1 seeds were obtained. From each of the BC2F1 progenies, individuals that showed a significant agronomic difference from the recurrent parents were selected for further backcrossing or selfing.
After 2–6 rounds of backcrossing and 2–7 rounds of selfing, progeny with stable target traits differing from their recurrent parents were developed as agronomic ILs. A total of three hundred and twenty-nine accessions of AA genome species were used as donor parents, except for O. longistaminata. The F1 plants were obtained by the embryo rescue technique from the cross between one accession of O. longistaminata as the donor parent and the indica variety RD23 as the recurrent parent, and crossing and selfing from the BC1F1 generation were performed according to the above-mentioned procedure. All materials were grown at the Sanya Breeding Station, Sanya, Hainan province, China. Ten individuals per row were planted at a spacing of 20 cm × 25 cm. All materials were grown and managed according to the local protocol. A randomized complete block design with three replications was used for agronomic trait evaluation under two different environments. Each line was planted in three rows with 10 individuals per row. The five plants in the middle of each row were used for scoring traits. The recurrent parents, Dianjingyou 1, Yundao 1, and RD23, were used as controls in the experiment. Prostrate growth habit was assessed from the tiller angle at three main stages: the booting stage, heading stage, and grain filling stage. A tiller angle in ILs larger than that of the recurrent parent was regarded as prostrate growth. Lines whose primary branches at the base of the panicle extended outward were regarded as having a spreading panicle. Erect or drooping panicle was evaluated according to the angle between the line connecting the panicle pedestal with the panicle tip and the elongation line of the stem; spikelet number was measured as the total number of spikelets of the whole plant divided by its total number of panicles. 
Dense panicle was scored by the ratio of spikelet number to panicle length. Tiller number was recorded from five random plants; plant height was measured from ground level to the tip of the tallest panicle. To measure grain size, grains were selected from the primary panicle and stored at room temperature for at least 3 months before testing. Twenty grains per plant were used to measure grain length (GL), grain width (GW), and the ratio of grain length to grain width (RLW). Photographs of the grains of each individual were taken using a stereomicroscope, and grain size was then measured with the software ImageJ. The average value of the 20 grains was used as the phenotypic data. One-thousand-grain weight was measured by weighing fertile, fully mature grains from five panicles. Aerobic adaptation was evaluated by biomass, yield, harvest index, heading date, and plant height in both aerobic and irrigated environments. Drought tolerance was assessed by the same traits in both upland and irrigated environments. For the aerobic and upland treatments, we used direct sowing with 4 seeds per hole and retained one seedling at the three-leaf stage. Water management differed between treatments: under the aerobic treatment, rainfall provided the essential water for plant growth without extra irrigation, whereas under the upland treatment, mobile sprinkler irrigation facilities were used to maintain a humid soil environment at the sowing, tillering, and heading stages. For the irrigated treatment, sowing and transplanting of single seedlings were done, and the field was managed according to local standard practices. Seedlings were inoculated with Magnaporthe oryzae 3 weeks after sowing by spraying with a conidial suspension. After 7 days, lesion types on rice leaves were observed and scored according to a standard reference scale based on the dominant lesion type. When the japonica varieties Dianjingyou 1 and Yundao 1 used as the recurrent parents were crossed with the accession of O. 
longistaminata, the crosses failed despite many efforts. Only the cross using the indica variety RD23 as the recurrent parent and O. longistaminata as the donor was obtained, by embryo rescue. Fortunately, the female gametes from the interspecific hybrids were partially fertile, and some hybrid seeds in the different combinations could be harvested by backcrossing the F1 as the female parent with O. sativa as the male parent. Finally, an agronomic IL library containing 6,372 lines was developed based on agronomic trait selection, and the agronomic ILs showed significant differences from the recurrent parents, including spreading panicle, erect panicle, dense panicle, lax panicle, awn, prostrate growth, plant height, pericarp color, kernel color, glabrous hull, grain size, 1,000-grain weight, drought resistance and aerobic adaptation, and blast resistance in the two subspecies. The Pid3 locus was analyzed based on the 3K RGP sequencing data, and different strategies were developed to apply the functional Pid3 alleles to indica and japonica rice breeding. The library offers several advantages: (1) abundant genetic variation was introgressed into the cultivated rice genome; (2) target genes or QTLs for the same phenotype could be validated with different donors, indicating whether these target genes or QTLs are the same haplotype; (3) the genes or QTLs responsible for opposite phenotypes, for example long-grain size and short-grain size, could also be confirmed using the different populations from multiple donors, and these could be different haplotypes. Therefore, this agronomic IL library will aid rice breeding and the discovery and utilization of genes of interest. The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article. YZ and DT drafted the manuscript. DT designed the research. JZ, PX, WD, and XD developed the introgression lines. 
YZ, YYa, YYu, JL, and QP participated in the genotype and phenotype evaluation. YZ performed the data analysis. All authors reviewed and approved the final manuscript. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} {"text": "CHRIS supported the Victorian ICU response during the COVID-19 pandemic. The coronavirus disease 2019 (COVID-19) pandemic put an unprecedented strain on intensive care resources throughout the world, initially in Wuhan (China). In late March 2020, rising numbers of COVID-19-related admissions to ICUs were observed throughout Australia. A nationwide dashboard of ICU activity, the Critical Health Resources Information System (CHRIS), was rapidly developed as a collaboration between Telstra Purple, Ambulance Victoria, ANZICS and the Australian Government Department of Health. All adult and paediatric ICUs (public and private) in Australia were instructed to enter data twice daily. This manual data entry typically took 5 minutes. Each ICU was immediately able to see patient numbers and resources available within every ICU in their region, and also to see an aggregate summary of all ICUs in Australia. CHRIS was available to all state and territory health departments, to all patient transport and retrieval agencies, and also to ICUs in New Zealand. The system went live on 1 May 2020, after 26 days of development. 
Three weeks later, 184 out of 188 eligible ICUs (98%) in Australia were contributing data. After a decline in severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infections throughout Australia, notifications rose again in Melbourne at the end of June 2020. From the beginning of July to the end of September 2020, there were 237 ICU admissions with COVID-19 pneumonitis, of which 210 (88%) occurred in July and August. Admissions were predominantly to public hospitals in north-western Melbourne. Spare ventilators were available at all sites on all days. On six occasions in August, there were more than 140 ventilated patients (with or without COVID-19) in Victoria. On each of these days, there were more than 500 spare ICU ventilators available. Box 1: ACT = Australian Capital Territory; COVID-19 = coronavirus disease 2019; ECMO = extracorporeal membrane oxygenation; HDU = high dependency unit; ICU = intensive care unit; NSW = New South Wales; NT = Northern Territory; NZ = New Zealand; QLD = Queensland; SA = South Australia; TAS = Tasmania; VIC = Victoria; WA = Western Australia. Box 2: LOWESS = locally weighted scatterplot smoothing. CHRIS provided real-time data on ICU activity and capacity. In addition to facilitating the transfer of critically ill patients, CHRIS also enabled early diversion of ambulance presentations to emergency departments at hospitals where ICUs had capacity. These approaches were integral to ensuring standards of care were maintained by clinicians, retrieval agencies and the Victorian health department. 
At the same time, there was visibility to the Australian Government Department of Health, which would, if required, coordinate a national response to overwhelmed ICU services. Although several individual ICUs came under strain, retrieval and critical care systems in metropolitan Melbourne were not overwhelmed. Strategies to redistribute critical care demand are likely to have contributed to high survival rates for ventilated patients with COVID-19 in Victoria. The local application of a national tool (CHRIS) for real-time display of ICU activity and resources was a key component of the response to the COVID-19 pandemic in Victoria. CHRIS has the potential to augment existing ICU monitoring systems. The tool may also assist in the response to local and national public health emergencies, such as mass casualty events and bushfires. No relevant disclosures. Not commissioned; externally peer reviewed. Video S1."} {"text": "The continuous growth in energy demand requires researchers to find new solutions to enlarge and diversify the possible ways of exploiting renewable energy sources. Our idea is the development of a solar concentrator based on trapping luminous radiation with a smart window. This system is able to direct light towards photovoltaic cells placed on the window borders and produce electricity, without any movable parts and without changing its transparency. Herein, we report a detailed study of cellulose ethers, a class of materials of natural origin capable of changing their state, from transparent aqueous solution to scattering hydrogel, in response to a temperature change. Cellulose thermotropism can be used to produce a scattering spot in a window filled with the thermotropic fluid to create a new kind of self-tracking solar concentrator. 
We demonstrate that the properties of the thermotropic fluid can be finely tuned by selecting the cellulose functionalization and the co-dissolved salt, and by regulating their dosage. Lastly, the results of our investigation are tested in a proof-of-concept demonstration of solar concentration achieved by thermotropism-based light trapping. Thermotropic polymers are a class of materials able to switch their state, from clear to strongly scattering, in response to a temperature change. Thanks to the reversibility of this transparent/opaque transition, they are attractive for photonic applications, particularly in smart windows, where they can play a critical role in enhancing the energy efficiency and the comfort level of indoor spaces. The physical mechanism underlying thermotropism relies on a phase transition, occurring at the critical temperature, from a polymer homogeneously dissolved in the solvent to the appearance of partially undissolved/aggregated polymer chains. Below the phase transition temperature, which in the case of polymer gels or blends is often called the “lower critical solution temperature” (LCST), the refractive indices of the polymer and the solvent are almost identical, so that the system exhibits a transparent state. When the temperature rises above the LCST, the refractive index of the aggregated polymer phase increases, generating an index mismatch with respect to the matrix, which causes light scattering. In addition to reflecting back part of the light passing through the material, and therefore acting similarly to an automatic temperature-driven light protection, the scattering state of a thermotropic polymer suitably confined in a transparent window can be exploited to trap light into waveguide modes. In fact, as light is backscattered by the thermotropic polymer in every direction, part of it will hit the outer surfaces of the cell windows at angles satisfying the conditions for total internal reflection. 
Consequently, light will be trapped inside the window. In our idea, sunlight passing through a smart window is conveyed by the waveguide to the window edges, where small photovoltaic cells are positioned. The final goal is to feed the solar cells with more radiation than they would normally capture, hence boosting electricity conversion. Several efforts have been made to maximize the efficiency of both organic and inorganic solar cells. Our system is depicted in the figure. Chemically, thermoresponsive polymers have both hydrophilic and hydrophobic subunits. The hydrophilic subunits can form hydrogen bonds with water and keep the polymer chains in a random-coil-shaped, hydrated state. Thus, the polymer is dissolved in water, leading to a single, homogeneous, transparent phase. When the temperature increases beyond the LCST, the conformation changes from coil to globule. The hydrophilic subunits become inaccessible to water molecules, causing the dehydration of the polymer chains and, consequently, the formation of a biphasic, nonhomogeneous, scattering system. Given the requirement of hydrophilic and hydrophobic domains on their chain, LCST-type thermotropic polymers can belong to different chemical classes: ethers, alcohols, amides and polypeptides. Among thermotropic materials, one of the most widely studied is poly(N-isopropylacrylamide), which forms thick hydrogels with an LCST around 30–32 °C in its homopolymer form. For our application, the scattering medium should be chemically simple and economically affordable, with a transition temperature within the range of 45–60 °C: high enough not to be reachable under standard sun conditions, yet easily reachable with a little sun concentration. For these reasons, a more suitable class of materials to be considered for the scattering medium is represented by the cellulose ethers. Cellulose is a natural polymer characterized by high hydrophilicity along its chain structure. 
Because cellulose forms strong intermolecular hydrogen bonds, however, it is insoluble in water. When a certain fraction of hydroxyl groups is substituted by hydrophobic groups such as methoxide groups, the intermolecular hydrogen bonds are weakened, resulting in water solubility. Cellulose ethers, like native cellulose, are not digestible, not toxic and not allergenic, and they are extensively used as thickeners and emulsifiers in various food and cosmetic products, in laxative drugs and in the manufacturing of drug capsules. Their easy availability, directly connected to their extensive industrial usage, their LCST generally around 40 °C and above, and the possibility of different chemical substitutions leading to tunable optical properties make cellulose ethers the perfect candidate materials for our scattering-based solar concentrator, with respect to other synthetic and natural polymers with thermotropic properties. Furthermore, this material perfectly meets sustainability requirements such as low cost, easy availability, abundance and non-toxicity, and does not present disposal problems, creating a virtuous combination of energy and sustainability. Even though the concept of a self-tracking solar concentrator based on a scattering medium was proposed a few years ago, to our knowledge, no practical demonstration/proof-of-concept has ever been published. For our study, we initially considered six different cellulose derivatives that can be easily found on the market. In this series, the cellulose ethers are characterized either by different viscosities while sharing the same kind of substitution (methyl celluloses) or by different substituting groups, as summarized in the table. In a preliminary screening aimed at testing the scattering capability of the selected materials, the diluted aqueous solutions were gradually heated with a hair-drier until we observed the formation of a scattering phase. 
The cuvettes containing 1 wt % cellulose ether solutions quickly became opaque. By repeating the experiment in partially filled vials, we could verify, by turning them upside-down, that a liquid-gel transition accompanies the observed optical transition, increasing the viscosity of the mixture. By immersing the cuvette/vial in cold water, the temperature was quickly decreased, and the reversibility of the process could be visually confirmed. The visual comparison between the different cellulose derivatives before and after reaching the LCST revealed some differences. The scattering phase of the methyl celluloses still maintained a minimum of transparency, which allowed the text underneath to be glimpsed. HEC did not show any evident scattering phase upon heating; therefore, it was not considered further in this study. HPC produced a narrow scattering solid immersed in the transparent matrix. The HPMC scattering phase was highly opaque and perfectly hid the text behind it. Following this preliminary assay, the next step was to determine the LCST of the different cellulose derivatives. To do so, we recorded the variation in light transmission through the cuvette filled with the cellulose solution using the microscope camera while varying the temperature from 30 to 95 °C. After reaching the highest temperature, the temperature scan was repeated in reverse until it reached the initial cold point. As reported in the corresponding plot, this screening provided a first indication that cellulose ethers can be suitable for developing a self-tracking solar concentrator based on waveguiding, provided that the proper polymer functionalization, and thus the right LCST, is selected. However, the kind of functionalization is not the only parameter that must be taken into consideration. In fact, salts are known to influence the temperature-induced phase transitions in aqueous solutions of thermosensitive polymers. 
This is the well-known Hofmeister effect, and a typical Hofmeister order exists for anions. We therefore tested the effect of two “salting-in” anions, namely I− and SCN−, and one “salting-out” anion, namely Cl−, on the LCST of the cellulose ethers. To better elucidate the effect of the anions, the counterion was always K+, and the salt concentration was fixed at 0.5 M. As expected, in the presence of KI and KSCN all the cellulose ethers increased their LCST by 5–15 °C, while the addition of KCl lowered the LCST of the corresponding polymer by 5–10 °C. The CS values were obtained by measuring with an integrating sphere the light reaching the lateral side of the cell; for each wavelength, the AM 1.5 D solar spectrum was taken into account. The average CS values in the investigated wavelength range were equal to 0.49 and 0.32 for the measurements at 3 cm and at 8 cm, respectively. These results indicate that the system in this non-optimized setup failed to concentrate the solar radiation. In fact, for CS values smaller than 1, the concentrated light reaching the window's side is less than the non-concentrated light reaching the same area. Still, the amount of light trapped by our system and potentially usable to generate electricity is relevant, and the experiment demonstrated for the first time the potential of using thermotropic cellulose derivatives in self-tracking solar concentrators. The setup moreover demonstrated that a part of the light collected by the lens was deviated from its original direction through this optical device, and in principle such behavior should occur for a certain range of incidence angles without the need for any mechanical tracking system. To achieve a real working system, the main parameters must be optimized. 
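The CS figure of merit discussed above lends itself to a short numeric sketch. This is a hypothetical illustration, assuming CS is the ratio of the power density exiting the window edge to the power density incident on the window face; the function name and all numbers are invented for illustration and are not measured values from the study.

```python
# Hypothetical sketch of the CS (concentration factor) calculation.
# Assumption: CS = (edge power per unit area) / (incident power per unit area).
# All numeric inputs below are illustrative, not data from the experiment.

def concentration_factor(edge_power_mw, edge_area_cm2,
                         incident_power_mw, face_area_cm2):
    """Ratio of edge-exiting power density to incident power density."""
    edge_density = edge_power_mw / edge_area_cm2
    incident_density = incident_power_mw / face_area_cm2
    return edge_density / incident_density

# Illustrative numbers only:
cs = concentration_factor(edge_power_mw=12.5, edge_area_cm2=6.4,
                          incident_power_mw=1020.0, face_area_cm2=256.0)
print(round(cs, 2))  # → 0.49; CS < 1 means no net concentration
```

A value above unity would mean the edge receives more light per unit area than the face, i.e. genuine concentration.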
Specifically, by regulating the thickness of the thermotropic fluid, the size and efficiency of the scattering spot and the performance of the focusing optics, it would be possible, in principle, to maximize the amount of light reaching the window edges and to reach the goal of CS values exceeding unity. Cellulose ethers were purchased from TCI Europe N.V., except for the (hydroxypropyl)methyl cellulose batches, which were purchased from Merck KGaA. The transparency variation plots were obtained using a Nikon Eclipse Te2000 inverted microscope equipped with a Linkam hot-stage system. Micrographs were taken every 2 s while the sample temperature was varied from 30 to 95 °C and then back to 30 °C, at a rate of 1 °C s−1. The maximum temperature was maintained for 10 s before starting the return run. The same values of light intensity and exposure were used for all the experiments. The transparency data were extrapolated by measuring the average brightness of each micrograph with ImageJ. The solar concentration experiments were performed by focusing the solar radiation on a glass cell of 160 × 160 × 4 mm filled with the thermotropic fluid, using a PMMA Fresnel lens. The temperature of the glass surface at the concentrated light spot reached 50–55 °C during a sunny day. Data were acquired by positioning an integrating sphere on the edge of the window and on the window's face, towards the sun. In this study, we proposed to use the phenomenon of thermotropism to produce a scattering spot in a window filled with a thermotropic fluid, to create a new kind of self-tracking solar concentrator. By investigating different cellulose ethers in aqueous solution, we demonstrated that the properties of the thermotropic fluid can be finely tuned by selecting the cellulose functionalization and the co-dissolved salt, and by regulating their dosage. 
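The transparency analysis described above (average micrograph brightness versus temperature, measured with ImageJ) can be sketched in a few lines. This is a hypothetical re-implementation: the midpoint criterion used here to locate the LCST, and all frame data, are assumptions for illustration, not the study's actual procedure.

```python
# Hypothetical sketch: average the pixel brightness of each micrograph (as
# done with ImageJ in the text) and take the LCST as the first temperature
# where transmission falls below the midpoint of the clear and opaque
# plateaus. All brightness values below are synthetic.

def mean_brightness(frame):
    """Average pixel value of one micrograph (list of pixel rows)."""
    return sum(sum(row) for row in frame) / (len(frame) * len(frame[0]))

def estimate_lcst(temperatures, brightness):
    """Temperature at the midpoint of the clear-to-opaque transition."""
    midpoint = (max(brightness) + min(brightness)) / 2.0
    for t, b in zip(temperatures, brightness):
        if b < midpoint:          # first frame darker than the midpoint
            return t
    return temperatures[-1]       # no transition detected in the scan

temps = list(range(30, 96, 5))            # 30..95 °C heating scan
bright = [200] * 5 + [150, 60] + [40] * 7  # synthetic transition near 55-60 °C
print(estimate_lcst(temps, bright))        # → 60
```

Repeating the same procedure on the cooling scan would reveal any hysteresis between the heating and cooling transitions.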
Then, we tested the results of our study in a proof-of-concept demonstration of solar concentration, showing that a part of the light collected by the lens is effectively trapped and deviated from its original direction by the thermotropic effect without the need for any mechanical tracking, even though the system needs some optimization. It is worth noting that the perfectly sustainable thermotropic system developed here is highly tunable to respond to a given temperature range; specifically, fluids with different LCSTs can be used in different geographical areas or can easily be replaced in the same window depending on the season, thanks to the low cost, low toxicity and easy availability of the chosen approach. A further advantage is that this kind of transparent photovoltaic window utilizes only the direct sun radiation, while the diffuse skylight arising from the scattering of the direct solar beam by molecules or particulates in the atmosphere, which normally represents 10–15% of the total radiation, will still be available to illuminate the internal ambient. More importantly, in contrast to what happens with luminescent solar concentrators, here the natural illumination is maintained without any chromatic modification. Therefore, this system, based only on a lens protruding from a transparent window, can possibly become a design element that, by preventing the entry of direct light, is able to create a more pleasantly and softly illuminated ambient. In conclusion, after proper optimization, the proposed solution could be a viable alternative to enlarge and diversify the possible ways of exploiting renewable energy sources."} {"text": "Outer surface protein C (OspC) is a commonly used marker in population studies of Borreliella to differentiate types and establish evolution over time. Investigating the ospC genetic types of Borreliella burgdorferi across multiple organ tissues of white-footed mice has the potential to contribute to our understanding of Lyme disease and the wide spectrum of clinical presentations associated with infection. 
In this study, five unique tissue types were sampled from 90 mice and screened for B. burgdorferi infections. This initial screening revealed a 63% overall B. burgdorferi infection rate in the mice collected (57/90). A total of 163 tissues (30.4%) tested positive for B. burgdorferi infections, and when mapped to Borreliella types, 143,894 of the initial 322,480 reads mapped to 10 of the reference sequences in the ospC strain library constructed for this study at a 97% MOI. Two tissue types, the ear and the tongue, each accounted for 90% of the observed Borreliella sequence diversity in the tissue samples surveyed. The largest amount of variation was observed in an individual ear tissue sample with six ospC sequence types, which is equivalent to 60% of the observed variation seen across all tested specimens, with statistically significant associations observed between tissue type and detected Borreliella. There is strong evidence for genetic variability in B. burgdorferi within local white-footed mouse populations and even within individual hosts by tissue type. These findings may shed light on drivers of infection sequelae in specific tissues in humans and highlight the need for expanded surveillance on the epigenetics of B. burgdorferi across reservoirs, ticks, and infected patients. White-footed mice (Peromyscus leucopus) are the principal reservoir for Borreliella burgdorferi in the eastern United States, where it is transmitted by Ixodes (Acari: Ixodidae) species ticks. However, once the tick initiates feeding, the dominant outer surface protein expression switches from OspA to OspC. OspC is required for productive mammalian infection to occur, with peak expression at the 48-h mark of tick feeding. 
The strains responsible for Lyme disease infections in Europe are primarily associated with neurological and dermal clinical manifestations respectively, while infections in the United States, originating from the B. burgdorferi sensu stricto strains, are most associated with skin lesions and arthritis , Goodwood (38.834570\u201377.358750), Huntley Meadows Park (38.756417\u201377.115347), Graves (38.771210\u201377.095720), Stoneybrooke (38.770943\u201377.096315).Collected mice were necropsied to separate spleen, liver, ear, tongue, tail, heart, and kidney tissues prior to storage in separate sterile microcentrifuge tubes at \u221280\u00b0C. DNA extraction was performed using the Qiagen DNeasy Blood & Tissue Kit according to the manufacturer\u2019s instructions, diluted (1: 5 in DEPC water), and both the original DNA and dilutions were stored at \u221280\u00b0C for future use.B. burgdorferi ospC was performed on all mouse tissues using a semi-nested PCR protocol with the first round amplifying a 597 base pair fragment and the second amplifying a 314 base pair fragment. The first PCR was performed using extracted DNA as a template and a combination of two forward primers (WangEF) and (LinF) with a reverse primer (LinR). The second PCR was performed using template DNA from the first PCR, a M13 tagged forward primer (WangIF M13F), and a M13 reverse tagged primer (WangER M13R) found in . Nextgen sequencing was performed using the Ion Torrent Personal Genome Machine.via De Novo assembly, to conduct multiple alignments, develop a reference library, BLAST search, construct phylogenetic trees, and for reference mapping. The MBAC Galaxy Portal was then used to organize and rename sequence reads and with the program FigTree to customize phylogenetic trees. Several in-house Perl scripts were also used to create tables, sort reads, and rename sequences.The results obtained from the sequencing yielded 322,480 reads. 
The bioinformatics program Geneious, version 10.0, was used to analyze the sequence data. The 322,480 fasta reads were put through a de novo assembly in Geneious, and the resulting contigs were then uploaded into BLAST. The 20 unique strain reference matches from the BLAST search were recorded by accession number, along with 27 additional strain references from BorreliaBase covering Borrelia spp. and different strains found worldwide. These matched references were imported into Geneious to build a reference library of 47 sequences. The strain references in the library were aligned to each other and trimmed to the ospC primers utilized for the PCR. References within 3% similarity of each other were removed from the library to eliminate redundancy, after which a neighbor-joining tree was built in Geneious to examine phylogenetic distance. The final number of reference strain sequences remaining in the library was 30 unique sequences, which were used for the remainder of the analysis. The 322,480 sample reads were mapped to the reference library, of which 143,894 reads had a 97% minimum overlap identity (MOI), resulting in the production of 10 contigs corresponding to 10 unique references from the reference library. The remaining unused reads were further examined and mapped back to the reference library at a lower MOI to determine whether novel variations were being lost or whether these unused reads contained errors that precluded them from being incorporated into the original 97% MOI mapping. All data were analyzed and cleaned using STATA v. 14.1. Categorical data were coded in a binary fashion and analyzed using simple logistic regression to determine the likelihood of other tissues testing positive when compared with positive samples within the same mouse, as well as the association with the number of detected types by mouse, to determine whether specific tissues were prone to more genetic diversity. 
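The 97% minimum overlap identity (MOI) mapping described above can be sketched as a simple read-binning routine. This is a hypothetical simplification: Geneious performs full gapped alignment, whereas here identity is computed over ungapped, equal-length toy sequences, and all names and sequences are invented for illustration.

```python
# Hypothetical sketch of the 97% MOI read-binning step. Identity here is a
# simple fraction of matching positions over equal-length sequences; this is
# a simplification of Geneious's gapped reference mapping.

def identity(read, ref):
    """Fraction of matching positions between two equal-length sequences."""
    matches = sum(1 for a, b in zip(read, ref) if a == b)
    return matches / len(read)

def bin_reads(reads, references, min_identity=0.97):
    """Assign each read to its best reference if identity >= threshold."""
    bins = {name: [] for name in references}
    unused = []
    for read in reads:
        name, score = max(((n, identity(read, r)) for n, r in references.items()),
                          key=lambda item: item[1])
        if score >= min_identity:
            bins[name].append(read)
        else:
            unused.append(read)     # candidates for re-mapping at a lower MOI
    return bins, unused

refs = {"typeA": "ACGTACGTAC" * 10, "typeB": "TGCATGCATG" * 10}
reads = ["ACGTACGTAC" * 10,                      # perfect match to typeA
         "TGCATGCATG" * 9 + "TGCATGCATT"]        # 99% match to typeB
bins, unused = bin_reads(reads, refs)
print(len(bins["typeA"]), len(bins["typeB"]), len(unused))  # → 1 1 0
```

Lowering `min_identity` mirrors the study's second-pass mapping of the unused reads at a lower MOI.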
An analysis of variance (ANOVA) was attempted, but the variance was not equal between groups, making it an inappropriate test for this dataset. Z-scores and 95% confidence intervals for odds ratios were calculated to capture the magnitude of association, with a p-value of <0.05 considered significant. An IACUC protocol was completed and approved by George Mason University under Institutional Biosafety Committee (IBC) protocol# 12-28Mod1. Trapping was conducted under Virginia Department of Game and Inland Fisheries (VADGIF) permit numbers 032199, 035522, 046610, and 050364. Tissues were screened for B. burgdorferi infections. When mapped to the Borreliella types in the reference library at a 97% MOI, 143,894 sample sequences mapped to 10 of the original 30 reference sequences in the reference library. The frequency and distribution of the tissues with respect to the mapped reference sequences were tabulated and visualized. In terms of Borreliella variation, 143,894 of the initial 322,480 reads mapped to 10 of the reference sequences in the ospC strain library constructed for this study at a 97% MOI. Two types, including JQ951096.1, were 3.58 and 2.94 times more likely to infect ear tissue than other tissue types (p < 0.01). A statistically significant association was observed between tissue and the detected type. The Borreliella types identified from the 537 tissues tested in this study are all endemic to North America and were mostly described from the northeastern states of New York and Massachusetts. The observed diversity of these types differed between trapping locations and between tissue types. Interestingly, several tissues tested positive for more than one type, which should be studied further in terms of immune response and competition between types. Prior studies established a correlation between OspC types and the invasiveness of infections seen in humans, specifically OspC types A, B, C, D, H, K, and N. One trapping site had fewer tissues infected per mouse (2), but all 10 types were observed there over the entire study. 
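The odds-ratio statistics described in the methods (odds ratios with z-scores and Wald 95% confidence intervals) can be sketched from a 2×2 table. This is a minimal illustration with invented counts; it is not the study's data, and STATA's exact procedure may differ in details.

```python
# Hypothetical sketch of the odds-ratio analysis: OR, z-score and Wald 95% CI
# from a 2x2 table (positive/negative by tissue type). Counts are invented.
import math

def odds_ratio_ci(a, b, c, d, z_crit=1.96):
    """a,b = exposed pos/neg; c,d = unexposed pos/neg (2x2 table cells)."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    z = math.log(or_) / se_log                  # z-score for OR != 1
    lo = math.exp(math.log(or_) - z_crit * se_log)
    hi = math.exp(math.log(or_) + z_crit * se_log)
    return or_, z, (lo, hi)

# Illustrative 2x2 table: 30/60 positives in one tissue, 10/70 in the rest.
or_, z, (lo, hi) = odds_ratio_ci(a=30, b=60, c=10, d=70)
print(round(or_, 2), round(z, 2), round(lo, 2), round(hi, 2))
```

A confidence interval excluding 1 (and |z| > 1.96) corresponds to the p < 0.05 threshold used in the study.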
It is important to note that the total number of mice collected at each site was not evenly distributed, owing to differences in trap success rate. This lack of even distribution precluded statistical analysis of infection rate with respect to trapping location. The frequency distribution of the types was even less evenly distributed than the overall number of infected mice collected from trapping sites, and therefore statistical significance of types was not reported. The observed diversity of ospC in B. burgdorferi and its pervasive spread through all trapping locations surveyed in Fairfax County suggest that Borreliella is actively transmitted and maintained within white-footed mouse populations. Fragmented and highly disturbed habitats, such as those in Fairfax County, Virginia, naturally lend themselves to increased non-mouse reservoir populations and further increase the risk of creating infected tick vectors. Notably, some types, including JQ951096.1, were found only in ear tissue. To further explore the relationship between OspC type and tissue specificity, a larger sample size of white-footed mice would be required. Regardless, pairing genetic testing and surveillance of Borreliella in mammalian hosts with active cases of Lyme disease in neighboring human populations may hold important insights into the drivers of tissue-specific infection and the overall clinical management of acute and chronic Lyme disease. All datasets presented in this study are included in the article/Supplementary material. The animal study was reviewed and approved by George Mason University, Institutional Biosafety Committee. SZ: conceptualization, methodology, formal analysis, lab analysis, sequencing, sample collection, investigation, and writing—original draft. MvF: methodology, formal analysis, data visualization, and writing—original draft. TW: formal analysis, data visualization, and writing—editing and review. MS: methodology, lab analysis, sequencing, and writing—editing and review.
PG: supervision, methodology, data curation, data visualization, and writing—editing and review. All authors contributed to the article and approved the submitted version. This project was supported using funds provided by George Mason University, Microbiome Analysis Center. Funding to cover publication costs was provided by the George Mason University Libraries Open Access Publishing Fund. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} {"text": "To characterize one KL38-OCL6-ST220 carbapenem-resistant Acinetobacter pittii strain co-producing chromosomal NDM-1 and OXA-820 carbapenemases. The A. pittii TCM strain was isolated from a bloodstream infection (BSI). Antimicrobial susceptibility tests were conducted via disc diffusion and broth microdilution. Stability experiments on the blaNDM-1 and blaOXA-820 carbapenemase genes were further performed. Whole-genome sequencing (WGS) was performed on the Illumina and Oxford Nanopore platforms. Multilocus sequence typing (MLST) was analyzed based on the Pasteur and Oxford schemes. Resistance genes, virulence factors, and insertion sequences (ISs) were identified with ABRicate based on ResFinder 4.0, the virulence factor database (VFDB), and ISfinder. Capsular polysaccharide (KL) and lipooligosaccharide outer core (OCL) typing and plasmid reconstruction were performed using Kaptive and PLACNETw. PHASTER was used to predict prophage regions. A comparative genomics analysis of all ST220 A. pittii strains from the public database was carried out.
Point mutations, average nucleotide identity (ANI), DNA–DNA hybridization (DDH) distances, and pan-genome analysis were performed. A. pittii TCM was ST220Pas and ST1818Oxf with KL38 and OCL6, respectively. It was resistant to imipenem, meropenem, and ciprofloxacin but still susceptible to amikacin, colistin, and tigecycline. WGS revealed that A. pittii TCM contained one circular chromosome and four plasmids. The Tn125 composite transposon, including blaNDM-1, was located in the chromosome with 3-bp target site duplications (TSDs). Many virulence factors and the blaOXA-820 carbapenemase gene were also identified. The stability assays revealed that blaNDM-1 and blaOXA-820 were stably maintained through passage in an antibiotic-free medium. Moreover, 12 prophage regions were identified in the chromosome. Phylogenetic analysis showed that there are 11 ST220 A. pittii strains, and one collected from Anhui, China was closely related. All ST220 A. pittii strains presented high ANI and DDH values, ranging from 99.85% to 100% for ANI and from 97.4% to 99.9% for DDH. Pan-genome analysis revealed 3,200 core genes, 0 soft core genes, 1,571 shell genes, and 933 cloud genes among the 11 ST220 A. pittii strains. The coexistence of chromosomal NDM-1 and OXA-820 carbapenemases in A. pittii presents a huge challenge in healthcare settings. Increased surveillance of this species in hospital and community settings is urgently needed. The genus Acinetobacter is ubiquitous in diverse environments and clinical settings. The species belonging to the Acinetobacter calcoaceticus–Acinetobacter baumannii complex (ACB complex), including A. calcoaceticus, A. baumannii, A. dijkshoorniae, A. lactucae, A. nosocomialis, A. pittii, and A. seifertii, are of great clinical importance, causing infections such as pneumonia and urinary tract infections (UTIs). Carbapenem resistance in Acinetobacter spp., including A. pittii, is mediated by carbapenemases, including the New Delhi metallo-β-lactamase (NDM).
A. pittii is commonly considered to be a low-virulence pathogen in China. To the best of our knowledge, this is the first description of an ST220 A. pittii strain in which co-harbored blaNDM-1 and blaOXA-820 carbapenemase genes resided on the chromosome. A combination of Illumina and MinION whole-genome sequencing was conducted to provide comprehensive insight into the genomic and chromosomal structural features. In this study, we investigated the characteristics of one sequence type 220 (ST220) A. pittii strain. The A. pittii TCM strain was isolated from a BSI during routine diagnostic analysis on 19 January 2018 in Hangzhou, China. Isolate identification to the species level was conducted by matrix-assisted laser desorption ionization–time of flight mass spectrometry and confirmed by 16S rRNA gene-based sequencing. The antibiotic disks used in this study included imipenem, meropenem, amikacin, and ciprofloxacin on a Mueller-Hinton agar (MHA) culture medium. In addition to the antibiotics mentioned above, colistin and tigecycline were also investigated using broth microdilution. Escherichia coli ATCC 25922 served as the quality control strain. Minimum inhibitory concentrations (MICs) were determined by broth microdilution. For the stability assays, the A. pittii TCM strain was grown overnight in three separate cultures at 37°C in 2 ml of Luria broth (LB) without antibiotics, followed by serial passage of 2 µl of overnight culture into 2 ml of LB daily, yielding 10 generations per day over 7 days. The stability of blaNDM-1 and blaOXA-820 was confirmed via PCR. Primers were designed based on the full-length sequences of blaNDM-1 and blaOXA-820 on the A. pittii TCM chromosome (GenBank accession number: CP095407) using SnapGene BLAST 2.3.2 and BLAST software. Genomic DNA was extracted from the A. pittii TCM strain using a Qiagen minikit in accordance with the manufacturer's recommendations.
Whole-genome sequencing was performed using both the Illumina HiSeq platform and the Oxford Nanopore MinION platform. De novo assembly of the Illumina and MinION reads was constructed using Unicycler v0.4.8. Resistance genes were identified with ABRicate (https://github.com/tseemann/abricate) based on ResFinder 4.0, updated in 2020 (http://genomicepidemiology.org/). Virulence factors were identified based on the VFDB (http://www.mgc.ac.cn/VFs/), and insertion sequences with ISfinder. Capsular polysaccharide (KL) and lipooligosaccharide outer core (OCL) loci were typed using Kaptive v2.0.0, updated in 2021. Multilocus sequence typing was performed via the Center for Genomic Epidemiology (CGE) website, updated in 2020 (https://cge.cbs.dtu.dk/services/MLST/). The Phage Search Tool (PHASTER), updated in 2016, was used for the prediction of bacteriophages. Plasmid reconstruction was conducted using the PLACNETw tool, and genome maps were generated with Proksee (https://proksee.ca/). A comparative analysis of A. pittii strains from the PubMLST database (https://pubmlst.org/) was further performed using the Bacterial Isolate Genome Sequence Database (BIGSdb), and the resulting phylogenetic tree was visualized with iTOL (https://itol.embl.de/). Default parameters were used, with A. pittii HUMV-6483 as the reference strain and species positive control. The taxonomic relationships among these isolates were further evaluated using the average nucleotide identity (ANI) and DNA–DNA hybridization (DDH) distances. Pan-genome analysis was conducted via the Roary software and visualized with Phandango (https://jameshadfield.github.io/phandango/#/main). The genome and protein-coding sequences (CDS) were annotated and predicted using PGAP and RAST. According to the PGAP annotation, there are 4,330 genes, of which 4,102 are protein-coding genes, 131 are pseudogenes, and the remaining 97 are predicted RNA-coding genes, composed of 74 tRNAs, 18 rRNAs, and 5 ncRNAs. In contrast to PGAP, 4,280 genes belonging to 313 subsystems were annotated using RAST. The subsystem each CDS was classified into is shown in the corresponding figure. The antimicrobial susceptibility testing results revealed that the A. pittii TCM strain possessed a multidrug-resistant (MDR) profile when both CLSI and EUCAST breakpoints were used.
The inhibition zone diameters of imipenem, meropenem, ciprofloxacin, and amikacin were 17 mm (R), 13 mm (R), 7 mm (R), and 21 mm (S), respectively. The broth microdilution results showed that the MICs of imipenem and meropenem were 8 and 16 mg/L, respectively. The A. pittii TCM strain also exhibited resistance to ciprofloxacin (32 mg/L). In our case, A. pittii TCM was still susceptible to amikacin (2 mg/L), colistin (1 mg/L), and tigecycline (0.5 mg/L). The susceptibility of the A. pittii TCM strain to the antimicrobial agents above was consistent whether the isolate was classified as resistant or susceptible using CLSI or EUCAST breakpoints. Analysis of the genome of the A. pittii TCM strain revealed that, in addition to the co-harbored blaNDM-1 and blaOXA-820, a series of genes conferring resistance to β-lactams (blaADC-43), bleomycin (ble-MBL), streptomycin [ant(2'')-Ia], sulfonamides (sul2), and macrolides [msr(E) and mph(E)] were also identified. A mutation in gyrA (DNA gyrase) was found; however, no pmrAB or lpxACD mutations were identified in this strain. Based on these results, the genotype and the phenotype were consistent. Many virulence factors were identified in the A. pittii TCM strain. One was the outer membrane protein gene ompA. Another was the pga operon (pgaABCD) encoding poly-β-1,6-N-acetyl-d-glucosamine (PNAG), which is important for biofilm development. Others were the csu operon encoding Csu pili and pbpG encoding PbpG for serum resistance, together with the quite important two-component regulatory system bfmRS involved in Csu expression. Finally, there were lpxABC and lpxL encoding lipopolysaccharide (LPS) and many genes encoding acinetobactin for iron uptake. blaNDM-1 and blaOXA-820 were quite stable even after 70 generations under antibiotic-free conditions; these results were confirmed with PCR. According to the Oxford MLST scheme, the strain belonged to ST1818.
Based on the Pasteur MLST scheme, the strain was assigned to ST220. The specific positions of all housekeeping genes are shown in the corresponding table. Kaptive showed that the A. pittii TCM strain contains OC locus 6 (OCL-6), matching 98.98% coverage of the reference sequence with 80.55% nucleotide identity. The K locus in the A. pittii TCM strain is KL38; it matches 100% of the locus with an overall nucleotide identity of 96.89%. The A. pittii TCM strain had a 4,250,902-bp circular chromosome. The Tn125 composite transposon carrying blaNDM-1 was flanked by possible 3-bp (AAA) target site duplications (TSDs). Moreover, the composite transposon structure of Tn125, with another 3-bp (AAA) TSD, in the plasmid pDETAB2 (GenBank accession number: CP047975), which was isolated from a rectal swab sample in China, is identical to this Tn125 at 99.99% identity. The Proksee software family was utilized for generating high-quality, navigable maps of circular genomes; this tool was originally a Java program intended for bacterial genomes. In our case, four plasmids were identified in the A. pittii TCM strain, namely pTCM-1 to pTCM-4, with sizes between 6,078 and 84,108 bp and GC contents ranging from 33.29% to 39.54%. Carbapenem-resistant Acinetobacter has disseminated predominantly within ICUs in China. However, limited data and knowledge concerning NDM-1-positive A. pittii causing BSI have been acquired to date in China. NDM can neutralize the activity of β-lactam antibiotics. Among A. pittii isolates, an outbreak of an ST63 clone that carried a 45-kb blaNDM-1-bearing plasmid was reported in an ICU in China. However, no studies have completely clarified the chromosome and plasmid structures of a blaNDM-1-positive carbapenem-resistant A. pittii strain. Few studies highlight the importance of mobile genetic elements (MGEs) in A. pittii. MGEs, including insertion sequences (ISs), integrons, and transposons, play a particularly important role in resistance gene transfer between plasmids and chromosomes. A similar element has been described in a blaNDM-1-blaOXA-58-harboring plasmid from an A.
baumannii strain isolated from the rectal swab of a hospitalized patient in an ICU in Hangzhou, China. Mobile elements and a 28-bp recombination site, dif, could play key roles in the transfer of resistance genes such as blaOXA-40-like, blaOXA-499, and blaOXA-58, as well as of carbapenem resistance genes such as blaNDM-1 and blaOXA-23. This work was supported in part by the Natural Science Foundation of Zhejiang Province. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."} {"text": "Modelling in anaerobic digestion will play a crucial role as a tool for smart monitoring and supervision of process performance and stability. By far, the Anaerobic Digestion Model No. 1 (ADM1) has been the most recognized and exploited model to represent this process. This study aims to propose simple extensions of the ADM1 model to tackle some overlooked operational and metabolic aspects. Extensions for the discontinuous feeding process, the reduction of the active working volume, the transport of soluble compounds from the bulk to the cell interior, and biomass acclimation are presented in this study. The model extensions are implemented through a change in the mass balance of the process in batch and continuous operation, the incorporation of a transfer equation governed by the gradient between the extra- and intracellular concentrations, and a saturation-type function in which time has an explicit influence on the kinetic parameters, respectively.
By adding minimal complexity to the existing ADM1, the incorporation of these phenomena may help to understand some underlying process issues that remain unexplained by the current model structure, broadening the scope of the model for control and monitoring industrial applications. Anaerobic digestion (AD) is expected to play a crucial role in tackling climate change and assisting in the transition toward a circular economy. Regardless of the model complexity, modelling, as such, is an abstraction of reality, since several assumptions, considerations, and simplifications are included. In fact, even the ADM1 model can be considered a parametric gray-box approach. This study aims to analyze some aspects that have not been considered in AD modelling and to evaluate their incorporation into the ADM1 model in a simple way. Two operational aspects, namely the feeding strategy and the working volume reduction, and two metabolic aspects, the extra-/intracellular transport and the adaptation, are tackled with new extensions of the ADM1 model proposed in this study. We aim at broadening the scope of the model by adding minimal complexity to the existing common model base. Four model extensions are proposed, two regarding operational aspects and two regarding metabolic aspects. Before describing the model equations, we provide the theoretical background and the rationale behind the extensions proposed in the present study. For modelling purposes, a typical assumption is that the digester is continuously fed, which does not occur in reality. A continuous feeding rate is suggested to avoid process imbalances in digester operation. Along with the substrate, particularly with semi-solid waste such as sludge or manure, inorganic materials such as sand and grit will enter the digester tank and accumulate due to sedimentation at the bottom or in dead zones where the agitation system is less effective.
This buildup will lead to the need for maintenance and the cleanup of the tank, and it should be monitored to anticipate when that work will need to be carried out. The ADM1 model considers that any change in substrate composition will have an immediate impact on the growth rate of the microorganisms. Specifically, any change of soluble substrate will have an immediate influence on the biomass kinetics. This is clearly seen by looking at the expressions for the growth rates, which are based on Monod kinetics. Of course, the instantaneous effect can be partially smoothed depending on the values of the affinity constant and the maximum substrate consumption rate. Likewise, the same instantaneous effect will take place for substrate inhibition. It is known that the cell membrane is impermeable to water-soluble compounds, such as glucose, amino acids, and LCFA, which means that a transport mechanism carries them into the cell, and this mechanism has its own kinetics. Once inside the cell, these molecules enter their catabolic pathways. The transport takes time, which, compared to traditional modelling, leads to a certain delay in the biomass response. This delay has been studied in other microorganisms such as microalgae. Microbial adaptation has been a widely studied topic in AD because the capacity of microorganisms to adapt will determine the robustness of the process performance. Adaptation can be related to a change in the abundance of specific microbial species, a population shift or change in dominance, or a mutation that results in changes in the activity of an enzyme, providing a selective advantage by allowing growth under new conditions. In the base model, the composite material (Xc) is set to zero so that the particulate material enters the hydrolysis reactions directly, partitioned as carbohydrates, proteins, and lipids. The hydrolysis and disintegration constants were set at 0.5 d−1.
The latter remains active only for the transformation of the biomass that has gone through decay. The rest of the parameters were set equal to the values presented by Rosen and Jeppsson. The base for the model extensions is the ADM1 developed by Batstone et al., together with the implementation of Rosen and Jeppsson. The feeding strategy extension does not entail a modification of the model itself; instead, the algorithm that solves the equations is divided into two operation modes working in a cycle: a batch reactor when no feeding is occurring and a continuous reactor when the feeding is taking place. The final conditions of a batch period correspond to the initial conditions of a continuous one, and vice versa. The mass balance for a substrate is formulated accordingly for each mode. The working volume reduction comprises a modification of the mass balance for each state variable: since the volume is no longer constant, the classic mass balance of the chemostat needs to change. Because there is still an inlet and an exit of media, the resulting balance is a mix between the balance for a fed-batch reactor and that of a continuous one. The deposition rate is expressed in m3 d−1 units. This parameter depends on the substrate characteristics, the mixing, and the pumping system, just like other apparent parameters. So, the question is, how does the working volume diminish over time? Here, we assume that the volume reduction is linear as a function of time (modelled by Equations (5) and (6), in integral and differential form, respectively), where only one new parameter is added to the model, the deposition rate alpha. The expression that represents the rate of transfer between the bulk and the cell interior, given by Equation (9) for the case of sugar consumers, is similar to the one used to model liquid-gas transfer.
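The two operational extensions just described (cyclic batch/continuous feeding and linear working-volume loss) can be sketched numerically. The model below is a deliberately reduced single-substrate Monod toy, not the full ADM1, and all parameter values are assumed for illustration only:

```python
# Minimal sketch (hypothetical parameters): one substrate S [kg COD/m3] in a
# digester fed only part of each day, with working volume lost linearly
# (V(t) = V0 - alpha*t) due to inert deposition, as in Equations (5)-(6).
def simulate(days=30.0, dt=0.001):
    V0, alpha = 300.0, 0.3        # working volume [m3], deposition rate [m3/d]
    Q, S_in = 10.0, 30.0          # daily feed flow [m3/d], inlet conc. [kg/m3]
    k_m, K_S = 8.0, 0.5           # Monod uptake rate [1/d] and half-saturation
    X, S, t = 1.0, 1.0, 0.0       # biomass and substrate states (illustrative)
    feed_hours = 8.0              # digester fed 8 h per day, batch otherwise
    while t < days:
        feeding = (t % 1.0) < feed_hours / 24.0
        V = V0 - alpha * t                     # linear volume reduction
        rho = k_m * S / (K_S + S) * X          # Monod uptake rate
        # Continuous-mode balance during feeding; pure batch otherwise.
        # Feed flow is scaled so the daily load matches a continuous 10 m3/d.
        Qf = Q / (feed_hours / 24.0) if feeding else 0.0
        dS = Qf / V * (S_in - S) - rho
        dX = -Qf / V * X + 0.1 * rho           # 10% yield (assumed)
        S, X, t = max(S + dS * dt, 0.0), X + dX * dt, t + dt
    return S, X, V

S, X, V = simulate()
print(f"final S={S:.3f}, X={X:.2f}, V={V:.1f} m3")
```

Switching `Qf` on and off reproduces the batch/continuous cycle, with the end state of each period serving as the initial state of the next, while `V(t)` shrinks at the deposition rate alpha.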
This means that the phenomenon is governed by the gradient of the soluble monomer concentration between the exterior and the interior of the cell and by a transport coefficient. In the case of the adaptation model extension, an explicit time-dependent function was added to the model. Considering that the adaptation process usually requires the microorganisms to develop new enzymatic machinery, and as a first approach to the issue, the hydrolytic constant for the particulate carbohydrates was selected to be modified as the biomass adapts. Using Equation (10), the hydrolytic constant is now expressed as a function of time by a saturation-type equation. The model and the extensions were implemented and simulated in Matlab 2021®, and the solver ode15s was used to solve the ODE system. A virtual digester of 330 m3 total volume (300 m3 working volume) operating at a hydraulic retention time (HRT) of 30 d (continuous flow of 10 m3 d−1) at mesophilic conditions (35 °C) was considered. The inlet characteristics of the substrate and the assessment conditions for each model extension are presented in the corresponding tables. The feeding strategy affects the equilibrium between CO2 and HCO3; thus, more (or less) CO2 is dissolved in the liquid phase, causing a change in the CO2 and CH4 concentrations. The biogas production, the biogas composition, and the pH of the digestate for 18 days of operation for each of the assessed feeding strategies are shown in the corresponding figure. As observed, the methane content is in general below the average expected values (60–65%), which is explained by the fact that, for this study, the original values of the ADM1 were used. According to some studies, the value of some coefficients related to methane formation from carbon needs to be adjusted higher. The pH is less affected by the feeding strategy than the biogas flow, which is expected since the variables measured in the liquid media are normally more robust due to the influence of the working volume and the HRT of the process.
Nevertheless, some pH variations are observed, but they do not reach values that could be considered dangerous for the digester operation, since they remain close to neutrality (depending on the lower and upper limits for pH inhibition). The biogas production, composition, and pH behavior in the digester, for a year of operation at different accumulation rates of solid material (α), are shown in the corresponding figure. Up to a certain accumulation rate of inert material, the impact on biogas production is low; at this rate, the biogas decreases by 5% due to the loss of working volume. At a rate of 0.6 m3 d−1, the biogas production decreases by 16% compared to the values when no working volume is lost. At higher accumulation rate values, the reactor collapses, with a sharp drop in the biogas production and pH associated with the washout of the methanogenic biomass. It is interesting to note that the loss of working volume has, for a long period, a steady negative effect on the biogas production, which decreases gradually due to the reduction of the biomass activity; however, the collapse of the system can occur in a matter of a few days. In this case, the system will need to be stopped and reinoculated completely, with all the operational and waste-management consequences that such a situation entails. With a well-established model, this accumulation rate could be estimated from the biogas production and used to anticipate the need for a clean-out and digester maintenance. At sufficiently high values of the transport coefficient, the system operates without a limitation imposed by this transport process. This can be related to a system well adapted to the substrate, where the transport of monomers to the interior of the cell is a consolidated mechanism. Under these conditions, the concentration inside the cell is similar to the one in the bulk; thus, similar biogas production values are obtained.
At lower values of the transport coefficient, the measured variables begin to be limited by this extra- and intracellular transport process, and the system reaches a steady state with lower biogas production, accompanied by a modified abundance of the microbial species given by the steady-state substrate concentrations. Similar results were obtained for a culture of Escherichia coli in a chemostat. At low transport coefficient values, the biogas is mostly produced by non-methanogenic biomass; thus, it is composed mainly of CO2, whereas at values of 0.01 d−1, the biogas drops to almost zero, with a minimum generation of biogas made up of H2 and CO2. As the transport coefficient values decrease, the pH values tend to level off at lower values, which is explained by the low generation of VFAs due to the transport restrictions. The fact that no extra- or intracellular transport is normally considered can lead to an overestimation of the digester performance, as well as an anticipation of the system's response, since the delay related to this transport is not accounted for. This delay depends on the types of compounds that need to enter the cell. In this study, the same transport coefficient value was considered for all the monomers, which is probably not very accurate in a real case. For instance, the light required in a microalgae culture has a very fast transport kinetic, and thus a minimum delay is observed. The response of the biogas production flow, the biogas composition, and the pH of the digestate regarding the proposed extra- and intracellular transport kinetics extension is shown in the corresponding figure. The extra- and intracellular transport can cover some aspects of the biomass adaptation response, but it still considers a fixed set of parameters. In this case, the adaptation extension of the model includes the dynamic evolution of the hydrolytic coefficient over time.
In the incorporated function for this extension, higher biogas production values are eventually obtained. This behavior can be explained by a higher accumulation of VFAs as time goes on and the adaptation process moves along, and it comes as a response to the previous period, in which the biogas values are notoriously below the base case because the process is strongly limited by the hydrolysis reaction. The fact that this extension affects the hydrolysis reaction leads the methane content to show the opposite behavior: as adK increases, the methane content also increases. This effect upon the hydrolysis reaction has a greater influence on CO2 production, which makes the biogas composition behave this way. In general, the pH does not seem to be significantly affected in the representation of the adaptation process; it remains between the recommended values for operating digesters. The simulated response of a digester when a new (co)substrate is added on day 100 is presented in the corresponding figure. Making the kinetic parameters or stoichiometric coefficients variables of the model has been evaluated in a handful of studies. Overall, most of these studies have focused on the stoichiometry of the ADM1 representation of the anaerobic digestion process. The catabolic yields for acetate and butyrate were defined as a function of the hydrogen concentration and the pH in the digester by Rodríguez et al. As one may expect, the theoretical approaches presented here need to be validated with experimental results, particularly the extensions where new parameters were incorporated. For the working volume reduction extension, the operation in (semi-)continuous mode of a sludge digester at the same HRT and OLR for a long period of time can be used to calibrate the accumulation rate parameter. Under those conditions, one can expect that the only variable that can change the reactor performance is the reduction of working volume due to the accumulation of inert material.
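The gradient-driven transport extension (Equation (9)) can be illustrated with a reduced two-pool sketch. The functional form mirrors liquid-gas transfer, but the coefficients below are assumptions for illustration, not values from this study:

```python
# Sketch of gradient-driven uptake (in the spirit of Equation (9); all values
# hypothetical): the monomer enters the cell at k_T*(S_bulk - S_in), and only
# the intracellular pool is consumed, delaying the response to a feed pulse.
def pulse_response(k_T, days=2.0, dt=0.0005):
    S_bulk, S_in = 5.0, 0.0       # bulk pulse at t=0; empty intracellular pool
    k_m = 8.0                     # first-order consumption of the internal pool
    consumed, t = 0.0, 0.0
    while t < days:
        transfer = k_T * (S_bulk - S_in)   # gradient-driven transport
        uptake = k_m * S_in                # catabolism of internal substrate
        S_bulk -= transfer * dt
        S_in += (transfer - uptake) * dt
        consumed += uptake * dt
        t += dt
    return consumed

# A high transport coefficient behaves almost like the instantaneous model;
# a low one leaves most of the pulse unconsumed after two days.
fast, slow = pulse_response(k_T=100.0), pulse_response(k_T=0.1)
print(fast, slow)
```

This makes the delay discussed above visible: with a small `k_T`, the overall conversion is throttled by transport even though the catabolic rate `k_m` is unchanged.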
In the case of the extra-/intracellular transport extension, the addition of a specific soluble substrate as a pulse can be carried out while observing the system response in terms of biogas flow and pH. This way, the transport coefficient can be estimated from the observed delay between the system's disturbance and its reaction. For the adaptation extension, an experiment with a sudden change in the type of substrate used, without changing the total OLR and the HRT, could be used to estimate the adaptation coefficient. Four new extensions to the ADM1 model have been proposed to cover two operational aspects, namely the feeding strategy and the reduction of the working volume as a result of solid material accumulation, and two metabolic aspects, namely the transport of monomers from the bulk to the cell and the biomass adaptation to a substrate given the time of exposure. The feeding strategy can influence the curve of biogas production and, consequently, the quality sensors used to measure it, and it has an effect on the downstream process for biogas purification and storage. The average daily production is not affected as long as inhibitory levels of some intermediate variables are not exceeded. The reduction of the working volume of the reactor depends on the rate of material accumulation, which can be estimated from the biogas production data. This phenomenon leads to a reduction of the activity of the microorganisms and eventually to a total collapse of the anaerobic digester. The extra- and intracellular transport kinetics for soluble compounds considered in the model allow us to account for the delay in the response of the system compared to a conventional instantaneous model kinetic.
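A saturation-type time dependence of the hydrolytic constant, in the spirit of Equation (10), might look like the following. The exact functional form and all numeric values here are assumptions for illustration, not the study's calibrated parameters:

```python
# Saturation-type adaptation of the hydrolytic constant (illustrative form):
# k_hyd rises from its initial value toward an adapted value as exposure
# time t grows, with half of the gain reached at t = tau (all values assumed).
def k_hyd(t, k0=0.5, k_adapted=1.5, tau=20.0):
    """Hydrolytic constant [1/d] after t days of exposure to the substrate."""
    return k0 + (k_adapted - k0) * t / (tau + t)

for t in (0, 20, 100):
    print(t, round(k_hyd(t), 3))
```

Because the function saturates, the model reproduces an initial hydrolysis-limited period followed by a gradual recovery, without ever letting the constant exceed its adapted value.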
Variable kinetic parameters as a function of the time of exposure can be considered to obtain a more realistic picture of the adaptation that takes place during, e.g., the start-up of the digester or the addition or replacement of the substrate fed into the reactors."} {"text": "Background: Knee osteoarthritis (KOA), a chronic degenerative disease, is mainly characterized by destruction of articular cartilage and inflammatory reactions. At present, there is a lack of economical and effective clinical treatments. Zhuifeng Tougu (ZFTG) capsules have been clinically approved for the treatment of OA, as they relieve joint pain and inflammatory manifestations. However, the mechanism of ZFTG in KOA remains unknown. Purpose: This study aimed to investigate the effect of ZFTG on the TLR4/MyD88/NF-κB signaling pathway and its therapeutic effect on rabbits with KOA. Study design: In vivo, we established a rabbit KOA model using the modified Videman method. In vitro, we treated chondrocytes with IL-1β to induce a pro-inflammatory phenotype and then intervened with different concentrations of ZFTG. Levels of IL-1β, IL-6, TNF-α, and IFN-γ were assessed with histological observations and ELISA data. The effect of ZFTG on the viability of chondrocytes was detected using a Cell Counting Kit-8 and flow cytometry. The protein and mRNA expressions of TLR2, TLR4, MyD88, and NF-κB were detected using Western blot and RT-qPCR, together with immunofluorescence observation of NF-κB p65 protein expression, to investigate the mechanism by which ZFTG inhibits inflammatory injury of rabbit articular chondrocytes and alleviates cartilage degeneration. Results: The TLR4/MyD88/NF-κB signaling pathway in rabbits with KOA was inhibited, and the levels of IL-1β, IL-6, TNF-α, and IFN-γ in blood and cells were significantly downregulated, consistent with the histological results.
Both the protein and mRNA expressions of TLR2, TLR4, MyD88, and NF-κB, and the level of NF-κB p65 protein in the nucleus, decreased in the ZFTG groups. Moreover, ZFTG promotes the survival of chondrocytes and inhibits the apoptosis of inflammatory chondrocytes. Conclusion: ZFTG alleviates the degeneration of rabbit knee joint cartilage, inhibits the apoptosis of inflammatory chondrocytes, and promotes the survival of chondrocytes. The underlying mechanism may be inhibition of the TLR4/MyD88/NF-κB signaling pathway and of the secretion of inflammatory factors. Knee osteoarthritis (KOA) is a chronic osteoarticular disease with a higher incidence in middle age, mainly associated with the spine, knee, and hip joints. KOA is characterized by degeneration of articular cartilage, subchondral bone sclerosis, and osteophyte formation. Early joint pain is the first symptom. The degree of pain becomes increasingly serious with increases in joint activity, affecting the quality of life of patients and eventually causing disability. Modern medicine recognizes that KOA is a disease of the whole joint, including lesions of various tissues. Unlike the acquired immune system, the innate immune mechanism is mainly involved in tissue repair, wound healing, apoptosis, and cell debris removal, serving as the initial line of defense against infection. The immune cells of the innate immune mechanism mainly include macrophages, granulocytes, mast cells, and dendritic cells. Fast-acting innate immune cells play an important role in inducing inflammation. The TLR family comprises ten types of transmembrane protein receptors in humans (TLR 1–10). TLR 1, 2, 4, 5, 6, and 10 are located on the cell surface and mostly rely on the cytoplasmic adaptor myeloid differentiation factor 88 (MyD88), further activating the inflammatory response through nuclear factor kappa B (NF-κB).
The other portion of TLRs, including TLR 3, 7, 8, and 9, is located intracellularly and can directly activate IRFs independently of MyD88 to induce inflammatory responses. Among these, the TLR4 receptor is the only two-way TLR that, after activation of MyD88, also triggers internalization into the intracellularly activated IRF pathway. TLR10, the newly discovered TLR in humans, is also the only known anti-inflammatory molecule in the TLR family. Slowing down the occurrence and development of OA is the current research focus. As a national treasure of the Chinese nation, traditional Chinese medicine (TCM) plays an irreplaceable role in maintaining human health. Zhuifeng Tougu (ZFTG) capsules, a clinical drug approved for OA, are composed of classic famous prescriptions, including Xiaohuoluo pills and the Linggui Zhugan and Jiuwei Qianghuo decoctions, which effectively relieve joint pain and inflammation in patients with OA. It can expel wind, eliminate dampness, dredge meridians and collaterals, dispel cold, and relieve pain. ZFTG was produced and provided by Hinye Pharmaceutical Co., Ltd. according to an automatic production line, whose composition and dosage are illustrated in the accompanying figure. Approximately 2 g of ZFTG was weighed accurately and placed in a conical flask with a stopper before adding 50 ml of pure water for ultrasonic treatment for 30 min. Then, the capsules were removed, centrifuged for 10 min, and the supernatant was filtered, followed by continuous filtration. A proper amount of paeoniflorin reference standard was accurately weighed and mixed with methanol to prepare a solution containing 20 μg per 1 ml. Altogether, we used a UPLC H-Class system from Waters Corporation, an MSA3-6P-OCE-DM millionth electronic balance from Sartorius Co., Ltd., an MS105DU balance from the Mettler-Toledo Group, and a KQ250-DB ultrasonic instrument from Kun Shan Ultrasonic Instruments Co., Ltd.
Chromatographic conditions were as follows. Column: Waters HSS T3; mobile phase: A was acetonitrile, B was 0.1% aqueous formic acid solution; flow rate: 0.5 ml min−1; column temperature: 40°C; injection volume: 1 μl; detection wavelength: 230 nm; number of theoretical plates: not less than 5000 according to the paeoniflorin peak; separation degree from other peaks: greater than 1.0. All components were detected within 60 min. The gradient elution process is illustrated in the corresponding table. Identified peaks included one belonging to Paeonia veitchii Lynch; peak 4 belonging to Gentiana macrophylla Pall; peak 5 belonging to Aconitum kusnezoffii Reichb; peak 8 belonging to Notopterygium incisum Ting. ex H. T. Chang; peak 9 belonging to Saposhnikovia divaricata (Turcz.) Schischk; peak 10 belonging to Ligusticum chuanxiong Hort; and peak 11 belonging to Glycyrrhiza uralensis Fisch. According to the chromatographic conditions, six batches of ZFTG were prepared as test solutions for UPLC analysis using paeoniflorin as the reference substance, purchased from the National Institutes for Food and Drug Control. The chromatograms were recorded for 60 min, and the UPLC fingerprints of the six batches of ZFTG were obtained. All rabbits were kept in individual cages in the Animal Center Laboratory of Hunan University of Chinese Medicine [establishment license number: SYXK]. The feeding temperature was between 24 and 26°C, and the humidity was between 50% and 70%. After 2 weeks of acclimatization with ad libitum access to normal water and a standard diet, the rabbits were randomly separated into two groups: the control (BC) group (n = 9) and the KOA group (n = 33). To generate a model of OA, we chose the modified Videman method. Control rabbits (n = 6) received saline gavage (10 ml/kg body weight).
The gavage doses of ZFTG and GS in rabbits were equivalent to those used in patients. Six weeks after modeling, three rabbits were randomly selected from the BC and KOA groups for KOA model validation, according to the random number table method. After model validation, the KOA model rabbits were randomly divided into five groups: model control (MC), high-dose (HD), medium-dose (MD), low-dose (LD), and positive control (PC) groups, with a total of 30 rabbits (six rabbits in each group). Samples from each group were collected for histological and western blotting (WB) analyses. Individuals were euthanized according to the IAEC animal experimentation guidelines. All experimental protocols were approved by the Committee of Ethics on Animal Experiments at the Hunan University of Chinese Medicine (LLBH-202007070001). The validation method included comparing the functional activities and general observation of the knee cartilage to assess cartilage damage. After the general observation, frozen sections of the knees (5 mm thick) were collected and observed under a microscope at 100× and 400× magnification using a modified OA Research Society International (OARSI) scoring system. All collected sections were dewaxed with dimethylbenzene, soaked in graded ethyl alcohol, and washed with distilled water. Parts of the sections were stained with hematoxylin and eosin (H&E), dehydrated with gradient alcohol (95%–100%), and sealed. Urea and trypsin antigen retrieval was used for other parts of the sections, which were subsequently treated with primary and secondary antibodies; DAB was then added for 5 min to develop color, and the sections were observed with a microscope. Other sections were immersed in EDTA buffer (pH 9.0), heated, and then cooled to room temperature. Before adding the primary and secondary antibodies, the samples were first cleaned with 0.01 M PBS (pH 7.4–7.6) and submerged in 75% alcohol; Sudan black dye solution was added, and the sections were then stained with DAPI working solution at 37°C until sealed.
The H&E staining solution and immunohistochemistry reagents and kits were purchased from Wellbio (K435960 and K484350) and Beijing ZSGB company (600D54 and 600W23), respectively. The immunofluorescence reagents were purchased from ProteinTech (SA00013-2). The stained cartilages were observed and imaged using a light microscope. Immunohistochemical sections were examined for IL-1β, IL-6, TNF-α, and IFN-γ. Immunofluorescence sections were used to observe the expression of NF-κB p65. Primary chondrocytes were obtained from rabbit knee cartilage. Chondrocytes were maintained in 10% Dulbecco's modified Eagle's medium (DMEM). After digesting the cells with trypsin, adherent cells were collected and subcultured. Chondrocytes from passage 3 (P3) were used for further analysis. The P3 generation of chondrocytes was randomly divided into nine groups and then treated with different concentrations of ZFTG to select the best concentration. The chondrocytes were modeled with 10 ng/ml IL-1β and grown on glass slides fixed with 4% paraformaldehyde. After inactivation of endogenous enzymes, the cells were identified with toluidine blue and collagen type II immunocytochemical staining under microscopic observation. The P3 generation of chondrocytes was randomly divided into five groups. In addition to the control group, the other groups were treated with IL-1β and then with different concentrations of ZFTG; the selection of these concentrations was based on the screening results of the previous drug-concentration experiment. The Cell Counting Kit-8 (CCK-8) was used to assess the cytotoxicity of various concentrations of ZFTG against the P3 generation of rabbit chondrocytes. Cells were seeded in 96-well plates at a density of 10^4 cells/well, followed by treatment with IL-1β and various ZFTG concentrations at the indicated dosages for 24 h. After cell intervention for 24 h, 20 μl of CCK-8 solution was added into each well and incubated for 4 h in a 5% CO2 incubator at 37°C. The optical density (OD) values at 450 nm were measured using a microplate reader. The apoptosis of P3 chondrocytes treated with IL-1β (10 ng/ml) and various ZFTG concentrations was measured using flow cytometry. Cells were digested with 0.25% trypsin containing 0.02% EDTA before collecting the cell suspension. The cells were collected after the suspension was centrifuged for 5 min (1500 rpm), washed with PBS, and mixed with Annexin V-FITC and propidium iodide. Apoptotic chondrocytes were labeled with fluorescein (FITC)-conjugated Annexin V using the Annexin V-FITC apoptosis detection kit. For the in vivo and in vitro experiments, cartilage tissue weighing approximately 0.025 g and chondrocytes were precooled, washed with ice-cold PBS, and crushed with 300 µl of RIPA lysate in a biological sample homogenizer, and the complete protein was extracted. The protein supernatant was mixed with loading buffer, boiled in water for 5 min, and placed in an ice box for medium-speed cooling. BCA detection was used to quantify the protein, and electrophoresis was carried out for 130 min according to the results. After blocking with 5% non-fat dried milk at room temperature for 1 h, membranes were incubated with primary antibodies overnight at 4°C and then with horseradish peroxidase-conjugated secondary antibodies at room temperature for 90 min. The ECL reagent was incubated with the membrane for 1 min, and the exposure was performed in the chemiluminescence imaging system. Relative gene expression was calculated using the 2^−ΔΔCt method according to the manufacturer's instructions. Total mRNA was used as a template for reverse transcription of cDNA following the manufacturer's protocol.
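The 2^−ΔΔCt normalization mentioned here is a simple arithmetic rule; a minimal sketch follows, assuming β-actin as the reference gene as in the text (the function name and the Ct values are illustrative, not from the study):

```python
def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ΔΔCt method.
    ΔCt = Ct(target) - Ct(reference); ΔΔCt = ΔCt(treated) - ΔCt(control);
    fold change = 2^(-ΔΔCt)."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Illustrative Ct values, target normalized to β-actin:
# treated ΔCt = 24 - 18 = 6; control ΔCt = 26 - 18 = 8; ΔΔCt = -2
print(ddct_fold_change(24.0, 18.0, 26.0, 18.0))  # → 4.0 (4-fold up-regulation)
```

A fold change below 1 (e.g. ΔΔCt = 2 gives 0.25) corresponds to down-regulation relative to the control, which is the direction reported for TLR4/MyD88/NF-κB in the ZFTG groups.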
Reverse transcription products were retained for PCR and fluorescence quantitative PCR reactions. Each sample was analyzed in triplicate, and the expressions of TLR2, TLR4, MyD88, and NF-κB were normalized to the expression level of β-actin and quantified with the 2^−ΔΔCt method. A p < 0.05 was considered statistically significant. To reduce the uncertainty and contingency of the data, we reported the mean difference between groups and the upper and lower limits of the 95% confidence interval. All data were expressed as the mean ± standard deviation. Data from two groups were compared using the Mann–Whitney test. A comparison among three or more groups was performed using one-way analysis of variance (ANOVA). SPSS for Windows (version 26) was used to analyze the data. The graphs were plotted using GraphPad Prism (version 8). According to the experimental design, some rabbits were treated with modified Videman modeling at 3 months. To verify the success of the modified Videman modeling, we randomly selected three experimental rabbits from the BC and KOA groups for model verification 6 weeks later. The difference between groups was significant (p = 0.0058), and all the abovementioned results indicate that the modeling in the KOA group was successful. After successful modeling, the rabbits in each group were treated intragastrically according to the experimental plan. Six weeks after dosing, all rabbits were killed, and the left knee joint was collected for anatomical and histological examination. Macroscopically, we found that the knee joints of the MC group were more worn than those of the BC group, manifesting as grayer, thinner, and lusterless cartilage. Because the cartilage in the bearing area of the articular surface was damaged, the articular surface was unsmooth and even dented downward, forming ulcers. Moreover, osteophytes were formed. The cartilage of the four medication groups was better than that of the MC group.
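The statistical plan above (Mann–Whitney for two groups, one-way ANOVA for three or more, mean ± SD, and a 95% confidence interval on the group difference) can be sketched in code. This is a minimal illustration assuming SciPy, not the SPSS/GraphPad workflow the study actually used; the helper name, the normal-approximation CI, and all data values are ours:

```python
import numpy as np
from scipy import stats

def compare_groups(*groups, alpha=0.05):
    """Mann-Whitney U for two groups, one-way ANOVA for three or more;
    also reports mean ± SD per group and a normal-approximation 95% CI
    for the difference between the first two group means."""
    if len(groups) == 2:
        stat, p = stats.mannwhitneyu(groups[0], groups[1], alternative="two-sided")
        test = "Mann-Whitney U"
    else:
        stat, p = stats.f_oneway(*groups)
        test = "one-way ANOVA"
    a = np.asarray(groups[0], dtype=float)
    b = np.asarray(groups[1], dtype=float)
    diff = a.mean() - b.mean()
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    ci = (diff - 1.96 * se, diff + 1.96 * se)  # normal-approximation 95% CI
    return {"test": test, "p": float(p), "significant": p < alpha,
            "mean_sd": [(np.mean(g), np.std(g, ddof=1)) for g in groups],
            "diff_ci95": ci}

# Illustrative data: two groups of six animals with clearly separated values
res = compare_groups([12.1, 13.4, 11.8, 12.9, 13.0, 12.5],
                     [15.2, 16.1, 14.8, 15.9, 16.4, 15.5])
print(res["test"], res["p"], res["significant"])
```

With three or more argument groups the same helper falls back to one-way ANOVA, mirroring the description in the text.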
In the ZFTG groups, especially the HD group, the cartilage tissue was less worn, and the functional performance was better. The articular cartilage in the HD group was yellowish, more lustrous, and thicker than that in the MC group, and only a slight abrasion was observed in the mid-posterior region of the medial tibial plateau, without significant pitting or osteophyte formation. The H&E staining results were consistent with these observations. To explore how ZFTG relieves joint injury, we conducted a series of tests on the blood and cartilage samples of the experimental rabbits according to the previous investigation. In the MC group, the serum levels of IL-1β (p = 0.028), IL-6, TNF-α, and IFN-γ increased significantly compared with the BC group; the MC group presented inflammatory manifestations. The serum IL-1β (p = 0.024), IL-6, TNF-α, and IFN-γ levels of the HD group were lower than those in the MC group, but the differences for the LD and MD groups were not statistically significant compared with the MC group. The abovementioned results indicated that high-dose ZFTG could significantly inhibit inflammation by reducing IL-1β, IL-6, TNF-α, and IFN-γ in the serum of rabbits with OA. Immunohistochemical results of the articular cartilage sections demonstrated that the levels of IL-1β (p < 0.001), IL-6, TNF-α, and IFN-γ in the model group were significantly higher than those in the normal group. These results demonstrated that the model group had obvious inflammatory manifestations in the articular cartilage, consistent with the serum test results. The expression levels of the related factors in the ZFTG groups were lower than those in the MC group, and the resultant trends were consistent with those in the PC group. Among these, the IL-1β, IL-6, TNF-α, and IFN-γ levels of the HD group were significantly lower than those in the MC group. Based on the results of the abovementioned blood and cartilage tests, we found that ZFTG can reduce the expression of inflammatory factors in the body and cartilage, thereby relieving the inflammatory response of KOA.
This was also confirmed by the anatomical observation of the knee joint and the results of the H&E-stained sections. The expression levels of inflammatory factors in the ZFTG groups decreased, following the same trend as the results of the positive drug group. The expression of the TLR4/MyD88/NF-κB signaling pathway would induce differential expression in the knee joint. The protein expressions of TLR4, MyD88, and NF-κB in the ZFTG groups were consistent with those in the PC group and were lower than those in the MC group. With the increase in ZFTG concentration, the corresponding expressions of TLR4, MyD88, and NF-κB proteins decreased gradually. The protein expressions of TLR4, MyD88, and NF-κB in the HD group were significantly different from those in the MC group. The protein expression of NF-κB in the LD and MD groups was significantly different from that in the MC group; however, the differences in the protein expression of TLR4 and MyD88 were not significant. The mRNA expression results for IL-1β (p < 0.001), TLR4, MyD88, and NF-κB were consistent with the protein expression results in cartilage. Compared with the MC group, the mRNA expression level of the PC group decreased significantly. Resembling the PC group, the mRNA expression in the ZFTG groups was significantly lower than that of the MC group. Similar to the positive control group, the level of serum TNF-α in the ZFTG groups was significantly lower than that in the model group, and the HD group demonstrated the lowest mRNA expression. The NF-κB p65 protein was evenly expressed in the cytoplasm of the BC group, but the MC group showed significantly higher levels in the nucleus. The expression of NF-κB p65 protein in the nucleus gradually decreased following intervention with ZFTG, together with a lower apoptotic rate, suggesting that the intra-articular inflammation in OA was indeed closely related to the wear of the articular cartilage.
ZFTG significantly affected the activity of the damaged chondrocytes and inhibited their apoptosis. We treated the chondrocytes with ZFTG and detected cell survival; the best ZFTG concentration was 400 ng/μl, which was similar to the previous experimental result. Moreover, we detected the expression of inflammatory factors in the cell supernatants. The levels of IL-1β (p < 0.001), IL-6, and TNF-α changed consistently with the in vivo experiment; therefore, ZFTG had a stable anti-inflammatory effect on chondrocytes, consistent with the results of the previous experiment. The protein expression data for TLR2 (p < 0.001), TLR4, and MyD88 suggest a positive correlation between the severity of OA and the expression of TLR2, TLR4, and MyD88 proteins. With the increase of drug concentration, the protein expressions of TLR2, TLR4, and MyD88 in the ZFTG groups decreased gradually, and the best ZFTG concentration was 400 ng/μl. TLR2, TLR4, and MyD88 mRNA expressions in each group were consistent with the protein expression trend. ZFTG could significantly reduce gene expression, and TLR2, TLR4, and MyD88 mRNA expression levels were lowest in the 400 ng/μl group. Furthermore, with the decrease in drug concentration, the inhibition of ZFTG on the TLR2, TLR4, and MyD88 genes weakened gradually, manifesting as a gradual increase in gene expression level in each group. These data indicate that ZFTG can reduce chondrocyte inflammation, inhibit chondrocyte apoptosis, and repress the expression of the genes and proteins of TLR2, TLR4, and MyD88. As an age-related joint degeneration, OA has been treated with general therapy, exercise plans, and medication, with existing clinical treatment strategies aggravating the burden and risk on the gastrointestinal tract and kidneys. The occurrence and development of OA involve complex networks of inflammatory mediators.
The imbalanced remodeling driven by inflammatory mediators is an important factor in inducing and promoting the development of OA. The TLR and NF-κB signaling pathways are important for the inflammatory expression and activation of chondrocytes and are highly expressed in chondrocytes stimulated by inflammation. Recently, the treatment of OA with TCM has been frequently reported, and its curative effect has been gradually recognized. The modified Videman method is a classic OA model for New Zealand rabbits. In addition, IL-1β, IL-6, and TNF-α are closely related to the degeneration of the articular cartilage matrix. In the in vivo experiments, we found that the expressions of IL-1β, IL-6, TNF-α, and IFN-γ in the blood and articular cartilage of the MC group were higher than those in the BC and ZFTG groups. In the in vitro experiments, high expression of IL-1β, IL-6, and TNF-α was also detected in the supernatant of the chondrocytes in the IL-1β group. In addition, after ZFTG intervention, the expression levels of inflammatory factors were reduced significantly, similar to the results of the in vivo experiments. This not only verifies that various factors are involved in the formation of OA but also demonstrates that ZFTG has an anti-inflammatory effect. In the ZFTG groups, the concentration gradient was negatively correlated with the expression of each factor, which also provided a reference for our subsequent in vitro experiments. IL-1, IL-6, TNF, and other cytokines are encoded and expressed through the NF-κB classical pathway. The in vivo experiments found that the TLR4/MyD88/NF-κB signaling pathway was activated in the model group, similar to previous studies. As an upstream target of NF-κB and an important component of the innate immune mechanism, TLRs also play a crucial role in OA.
With the discovery of the anti-inflammatory effects of TLR10, the role of TLRs in OA has received increasing attention. In our in vitro experiments, the high expressions of the genes and proteins of TLR2, TLR4, and MyD88 in the IL-1β group were consistent with the results of previous in vitro studies. One in vitro study demonstrated that the aggrecan 32-mer fragment induces TLR-2-dependent gene expression to activate NF-κB in mouse and human chondrocytes, accelerating cartilage destruction. It is worth noting the limitations of this study. First, ZFTG is a Chinese patent medicine produced with modern equipment; however, the complexity of Chinese medicinal components makes it difficult to explore their effective mechanisms. Second, this study focused on the anti-inflammatory effect of ZFTG on KOA without comprehensively exploring the efficacy mechanism. Finally, further in-depth research is warranted to clarify the efficacy of ZFTG in treating KOA. The modified Videman method provided the experimental model of OA for this study. ZFTG inhibits IL-1β-induced inflammatory injury in rabbit chondrocytes by repressing the TLR4/MyD88/NF-κB signaling pathway and the secretion of inflammatory factors, and it promotes the survival of chondrocytes while reducing their apoptosis. In summary, TCM is a potential reservoir for the prevention and treatment of KOA. Our study not only provides an important reference for the treatment of KOA, revealing that ZFTG can be used as a new drug and clarifying its curative mechanism, but also clarifies the vital function of the TLR4/MyD88/NF-κB signaling pathway in KOA. Our study can provide direction for modern Chinese medicine research, a basis for the clinical treatment of KOA, and a foundation for the TCM treatment of KOA.

Microbial colonization of the animal intestine impacts host metabolism and immunity.
This study aimed to investigate the diversity of the intestinal microflora in specific pathogen free (SPF) and non-SPF Beagle dogs of different ages by direct sequencing analysis of the 16S rRNA gene. Stool samples were collected from four non-SPF and four SPF healthy Beagle dogs. From a total of 792 analyzed operational taxonomic units (OTUs), four predominant bacterial phyla were identified: Firmicutes (75.23%), Actinobacteria (10.98%), Bacteroidetes (9.33%), and Proteobacteria (4.13%). At the genus level, Streptococcus, Lactobacillus, and Bifidobacterium were dominant. Among these, Alloprevotella, Prevotella_9, and Faecalibacterium were present exclusively in non-SPF beagles; these genera have potentially anti-inflammatory capability, which could protect non-SPF beagles in a complex microbial environment. The number and diversity of intestinal flora for non-SPF Beagle dogs were the highest at birth and gradually decreased with growth, whereas the results for the SPF beagle samples were the opposite, with the number and diversity of the intestinal microbiota gradually increasing as the beagles grew. In a nutshell, the microbial complexity of the rearing environment can enrich the gut microbiota of beagles, much of which is anti-inflammatory microbiota with the potential to increase the adaptability of the animal to the environment. However, the gut microbiota of SPF beagles was more sensitive to environmental changes than that of non-SPF beagles. This study is of great significance for understanding the bionomics of the intestinal microflora in non-SPF and SPF beagles and improving experimental accuracy in scientific research. Experimental animals play a key role in scientific and medical research. With the development of modern life science, the quality requirements for experimental animals are increasing. Conventional experimental animals have complex microbial states, which cannot meet the requirements of scientific research and production.
Therefore, SPF experimental animals have become an increasingly important focus of research work. The Beagle, as a standard non-rodent experimental animal, has been widely applied due to its submissive behavior, medium size, long life span, and consistent genetic transmission. It is therefore very necessary to understand the biological characteristics of SPF beagles in research work. Gut microbiota, which is closely associated with host nutrition, metabolism, and immunity, acts as a "second genome" modulating the health phenotype of the superorganism host. High-throughput sequencing plays an increasingly important role in studying the population structure, microbiome diversity, and evolution of the bacterial flora of humans and animals. Four non-SPF and four SPF newborn Beagle dogs from Qingdao Bolong Experimental Animal Co., Ltd were enrolled in this study. Throughout the study, none of the dogs received drugs that could affect their gut microbiota, such as gastrointestinal disease drugs, antibiotics, or diabetes drugs, nor did they receive additional nutritional supplements. SPF beagles were bred in pairs in sterilized isolators and fed artificially sterilized breast milk until they were 1 month old, then gradually weaned onto a dry diet. After weaning, they were fed a sterilized maintenance commercial extrusion (dry-type) diet with free access to sterilized water. The isolator maintained a positive pressure of 118–127 Pa, humidity of 40–70%, and 30 air changes per hour. The temperature inside the isolator was 34–35°C for the first week and was then decreased by 2°C every week until it reached 24–26°C.
Non-SPF beagles were raised in pairs in a spacious indoor enclosure, kept at the same temperature and humidity as the SPF beagles' isolation unit, breastfed naturally until 1 month of age, then gradually weaned, after which they were fed the same sterilized maintenance commercial extruded (dry-type) diet as the SPF dogs and given free access to water. All dogs gained behavioral enrichment through interaction with each other, playtime with their caregivers, and access to toys. Fresh stool samples from newborn, 1- and 3-month-old non-SPF Beagle dogs or 1- and 3-month-old SPF Beagle dogs were collected immediately after spontaneous defecation, immediately frozen at −80°C without any additives or pretreatment, and sent to Shanghai Majorbio Bio-pharm Technology Co., Ltd. for 16S gene library construction, quantification, and sequencing. Twelve stool samples from non-SPF Beagle dogs were divided into three groups, named F1, F3, and F5, collected from newborn, 1- and 3-month-old non-SPF Beagle dogs, respectively. Eight stool samples from SPF Beagle dogs were divided into two groups, named F2 and F4, collected from 1- and 3-month-old SPF Beagle dogs, respectively. After sequencing, paired-end reads were assembled using FLASH software (v1.2.11) based on overlap. MOTHUR (v1.30.2) software was used for quality control and filtering of the assembled sequences. Uparse (v7.0.1090) was used for OTU cluster analysis, and Usearch (v7.0) was used for taxonomic analysis. Taxonomy based on the 16S rRNA gene sequence was assessed using the Ribosomal Database Project (RDP) classifier (v2.11) against the Silva database at a confidence level of 0.7. Microbial diversity in the individual stool samples was estimated using rarefaction analysis. Alpha diversity and beta diversity indexes were calculated using MOTHUR. PICRUSt (v1.1.0) was used to predict the KEGG and COG functions of the 16S sequences. The alpha diversity for each sample was measured by OTUs using the Sobs, Shannon, Simpson, ACE, Chao, and Coverage indexes. Statistical analyses were performed using Student's t-test or analysis of variance (ANOVA) on at least three independent replicates; P values of < 0.05 were considered statistically significant for each test. Stool samples of non-SPF (n = 12) and SPF (n = 8) Beagle dogs were assessed using high-throughput sequencing. After removing low-quality sequences and non-target regions, 914,442 cleaned sequences were obtained, approximately 45,722 reads per sample. Among them, 547,960 filtered sequences were obtained from the non-SPF Beagle dogs and 366,482 from the SPF Beagle dogs. All the stool samples together yielded 792 OTUs, with an average high-quality sequence length of 423 bp; the non-SPF Beagle dogs yielded 726 OTUs (average 422 bp) and the SPF Beagle dogs 347 OTUs (average 426 bp). The number of OTUs for the non-SPF and SPF Beagle dogs fit a normal distribution. For the non-SPF Beagle dogs, there was a significant difference in the number of OTUs (P = 0.007) between the F1 (newborn) and F3 (1-month-old) groups, but no significant difference (P = 0.103) between the F3 (1-month-old) and F5 (3-month-old) groups. For the SPF Beagle dogs, there was a significant difference in the number of OTUs (P = 0.003) between the F2 (1-month-old) and F4 (3-month-old) groups. As the number of sequences increased, the rarefaction curves tended to flatten, indicating that the sequencing depth of each sample was sufficient to reflect its species diversity. These results indicated that the number and diversity of intestinal flora for the non-SPF Beagle dogs were the highest at birth and gradually decreased with growth.
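The Shannon and Simpson indexes named above are simple functions of an OTU count vector; a minimal sketch follows (the toy counts are illustrative, and MOTHUR's exact estimators may use slightly different bias-corrected forms):

```python
import math

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over non-zero OTU counts."""
    n = sum(counts)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

def simpson(counts):
    """Simpson index D = sum(p_i^2); smaller D means higher diversity.
    (One common form; 1 - D and the inverse Simpson index are also used.)"""
    n = sum(counts)
    return sum((c / n) ** 2 for c in counts if c > 0)

# Toy OTU table: an even community is more diverse than a skewed one
even = [25, 25, 25, 25]
skewed = [97, 1, 1, 1]
print(shannon(even), shannon(skewed))   # the even community has higher H'
print(simpson(even), simpson(skewed))   # and a lower Simpson D
```

Applied per sample, such indexes produce exactly the kind of alpha-diversity comparison reported between the F1–F5 groups.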
In SPF beagles, in contrast, the number and diversity of the intestinal microbiota gradually increased as the beagles grew. The results of the microbiota analysis at the phylum level for the five groups of stool samples demonstrated that, of the 4 phyla identified, the content of Firmicutes (75.23%) was the highest, followed by Actinobacteria (10.98%), Bacteroidetes (9.33%), and Proteobacteria (4.13%). Although Firmicutes was the dominant taxon in all five groups, its percentage differed among them. In non-SPF beagles, with the dogs' growth, the proportion of Firmicutes in the intestinal flora gradually increased, reaching 92.27% at the age of 3 months, while the proportions of Actinobacteria, Bacteroidetes, and Proteobacteria decreased gradually. In SPF dogs, the proportions of Firmicutes and Proteobacteria declined slightly with age, while the proportions of Actinobacteria and Bacteroidetes increased. This differs from previous reports that Fusobacteria ranks third in the intestinal microbiota of healthy beagles. The microbiota in the stool samples from SPF beagles was composed of 12 main genera. PICRUSt software was used to compare the species composition information obtained from the 16S sequencing data and to predict the composition of functional genes in the samples, in order to analyze the functional differences between samples. KEGG pathway analysis predicted many basic metabolic function genes, among which the abundant ones were metabolic pathways, biosynthesis of secondary metabolites, microbial metabolism in diverse environments, biosynthesis of amino acids, etc. In general, the functional distribution of the intestinal microbiota in non-SPF and SPF beagles was similar.
For non-SPF Beagle dogs, Lactobacillus, Blautia, and Prevotella were abundant in the intestinal microflora. It has been reported that Lactobacillus is involved in carbohydrate and protein metabolism, Blautia is a major producer of short-chain fatty acids (SCFAs), especially acetic acid, and Prevotella is involved in de novo synthesis of amino acids and the synthesis of several B vitamins. At the same time, Bifidobacterium, Bacteroides, unclassified_o_Lactobacillales, Faecalibaculum, and Peptostreptococcus showed higher abundance in the intestinal microbiota of SPF Beagle dogs. Bifidobacterium and Bacteroides are known to carry bile salt hydrolase enzyme-coding genes and also participate in carbohydrate metabolism; Bacteroides, Lactobacillus, and Streptococcus have been identified as microbial species with a role in proteolysis or amino acid production; Bacteroidetes is the major group involved in the synthesis of vitamin B and conjugated linoleic acid; and Faecalibaculum is a major producer of SCFAs. By comparing the functional distribution of the intestinal microbiota of 1-month-old non-SPF and SPF Beagle dogs, we found that the abundance of pathways related to Microbial metabolism in diverse environments and Biosynthesis of amino acids was higher in the non-SPF group than in the SPF group. This may be related to the high levels of these genera in the non-SPF microflora. In this study, the fecal microbiota of non-SPF and SPF Beagle dogs at different ages were characterized to understand the biological characteristics of the intestinal microbiota in different rearing environments. The alpha diversity of the fecal microbiota of the non-SPF and SPF beagles was compared using the Coverage, Sobs, Shannon, Simpson, ACE, and Chao 1 indexes. The results showed that the predominant phyla were Firmicutes, Bacteroidetes, Proteobacteria, and Actinobacteria. 
Firmicutes was the main phylum in both groups of beagles, which is consistent with many studies on animal intestinal flora. In non-SPF beagles, the proportion of Firmicutes increased gradually, reaching 92.27% by the age of 3 months, whereas in SPF beagles the proportion of Firmicutes decreased slightly with growth. This was contrary to the trend of fecal flora diversity. Most previous studies reported Firmicutes, Bacteroidetes, Fusobacteria, Proteobacteria, and Actinobacteria as the dominant phyla in the dog's gut. However, Fusobacteria was not a dominant phylum in the beagles' intestinal flora in the present study, which may be attributable to animal breed, age, diet, living environment, or experimental methods. We believe the most important reason may be the age of the beagle dogs: most studies used adult beagle dogs, whereas this study followed the beagle dogs only until they were 3 months old, focusing on the impact of the living environment on the process of intestinal flora construction in puppies. The predominant phyla in healthy human intestinal flora are Firmicutes, Bacteroidetes, Actinobacteria, and Proteobacteria, suggesting that the intestinal microflora of young dogs may be more similar to that of humans than that of adult dogs. There was no significant difference in the composition of fecal microbes between the two groups of beagles at the phylum level, with these phyla predominating in both. At the genus level, Streptococcus, Lactobacillus, and Bifidobacterium were the main fecal flora of both non-SPF and SPF beagle dogs. 
The three dominant genera are bacterial genera that exist widely in the digestive tract of animals; some strains of Lactobacillus and Bifidobacterium can also act as probiotics to improve the distribution of intestinal flora and antagonize the colonization of harmful bacteria, thereby protecting intestinal health. Based on the analysis at the genus level, Burkholderia-Caballeronia-Paraburkholderia, Alloprevotella, Faecalibacterium, Prevotella_9, [Ruminococcus]_gnavus_group, and Blautia were present exclusively in non-SPF beagles. Alloprevotella, Prevotella_9, and Faecalibacterium can produce SCFAs. Blautia is a relatively abundant taxonomic group in the microbiome of mammalian gastrointestinal tracts, which plays certain roles in host metabolism, inflammation, and biotransformation, and has potential probiotic properties. We speculate that the presence of Burkholderia-Caballeronia-Paraburkholderia, Prevotella_9, Faecalibacterium, and Blautia could be a consequence of the complexity of the microbial environment, protecting the dogs from inflammation and non-infectious intestinal diseases. Moreover, Veillonella and Escherichia-Shigella, as well as members of Bacteroidetes, which are also SCFA-producing bacteria, were observed. At 3 months of age, the abundance of many functional genes in SPF beagles was higher than that in non-SPF beagles, which may be due to the higher content of Bifidobacterium, Bacteroides, unclassified_o_Lactobacillales, Faecalibaculum, and Peptostreptococcus in the intestinal flora of SPF beagles. By participating in the metabolism of carbohydrates, proteins, SCFAs, bile salts, and vitamins in the gut, these bacteria help the host decompose food, obtain nutrients, and improve the utilization of proteins, carbohydrates, and vitamins. KEGG pathway analysis predicted many basic metabolic function genes of the intestinal flora. 
The abundance of functional genes in the intestinal flora of non-SPF Beagle dogs decreased with growth, while that of SPF beagle dogs increased with growth. At 1 month of age, the abundance of functional genes in the intestinal microbiota of non-SPF beagles was mostly higher than that of SPF beagles, which may be related to the high concentration of Lactobacillus, Blautia, and Prevotella in their intestinal microbiota. The experimental animals used in this study were newborn beagles raised in a general environment and in an SPF environment. In order to obtain newborn puppies with a similar genetic background, of the same strain, from the same breeding farm, and at the same time, only 4 animals were obtained in each group, 2 females and 2 males. The number of experimental animals is relatively small but representative. In addition, due to the limitations of the experimental conditions, we failed to collect fecal samples from newborn SPF beagles, which is a limitation of this study. This study lays a foundation for the study of SPF and non-SPF beagle fecal microbiota. By comparing the composition and diversity of the fecal microbiota of non-SPF and SPF beagles at different ages, we found that living and dietary exposure to a large variety of environmental microbes could increase the abundance of potentially beneficial bacteria, enriching the microbiome and enhancing the anti-inflammatory capacity of the intestinal flora. This may be the result of beagles adapting to the complexity of the microbial environment, indicating a role of the microbiota in protecting beagles from pathogens. The gut microbes of SPF beagles were more sensitive to environmental changes than those of the non-SPF beagles, which may lead to weaker environmental adaptability in SPF beagles. 
These findings are of great significance for understanding the bionomics of the intestinal microflora in non-SPF and SPF beagles and for improving experimental accuracy in scientific research. The data presented in the study are deposited in the NCBI repository, SRP402573, accession number PRJNA890408. The animal study was reviewed and approved by the Laboratory Animal Ethics Committee of Shandong Laboratory Animal Center. CY retrieved the literature, provided tables and figures, and wrote the manuscript. ZG retrieved the literature and provided methods and techniques. KW proposed the topic and provided the outline. ZL revised the manuscript. XM and SC reviewed the final manuscript. All authors contributed to the article and approved the submitted version."} {"text": "The differences in nutritional intake of total calories and carbohydrates were lower in the monozygotic twin group than in the dizygotic twin group. The differences in total body fat were lower in monozygotic twins than in dizygotic twins. Monozygotic twins had more similar dietary habits for total calorie and carbohydrate intake. Other nutritional factors did not show differential similarity between monozygotic and dizygotic twins. Total body fat was more concordant in monozygotic twins. The present study aimed to investigate the coincidence of obesity and nutritional intake in monozygotic twins compared to dizygotic twins. The data from the Korean Genome and Epidemiology Study (KoGES) from 2005 through 2014 were analyzed. Participants ≥ 20 years old were enrolled. The 1006 monozygotic twins and 238 dizygotic twins were analyzed for differences in self-reported nutritional intake, total body fat, and body mass index (BMI) using a linear regression model. The estimated values (EV) with 95% confidence intervals (95% CI) of the differences in dietary intake, total body fat, and BMI score were calculated. 
The monozygotic twin group and the dizygotic twin group showed similar differences in nutritional intake, DEXA fat, and BMI (all P > 0.05). Obesity is a common disease whose prevalence is estimated to be approximately 36.9% in men and 38.0% in women in the worldwide adult population. Multiple factors can contribute to the occurrence of obesity. Genetic predispositions to obesity have been suggested in twin studies. Previous studies reported a number of genetic factors that result in obesity. In addition, because nutritional intake is a critical factor for obesity, it was questioned whether nutritional intake can contribute to obesity as an inherited trait. This study aimed to estimate the inherited portion of obesity compared to shared environmental factors. To examine these questions, twin cohorts were analyzed for differences in total body fat, body mass index (BMI), and nutritional intake. This study is novel in analyzing nutritional intake in twin cohorts and comparing BMI and total body fat. The findings of the current study may enhance knowledge on the inherited trait for the occurrence of obesity. The current research was permitted by the ethics committee of Hallym University (2021-03-004). The ethics committee waived the requirement for written informed consent. The Korean Genome and Epidemiology Study (KoGES) from 2005 through 2014 was used [10,11,12]. Among a population of 1300, participants who did not complete the survey on nutritional intake, the dual-energy X-ray absorptiometry (DEXA) exam, and sleep time were excluded. As a result, 1006 monozygotic and 238 dizygotic twin participants were enrolled. Body mass index (BMI, kg/m2) was measured by an automated height-weighing machine in the Frankfort Horizontal Plane. The income group was classified based on household income. Education level, marriage status, physical activity, walking time, and sitting time were surveyed. Smoking and the frequency of alcohol consumption were self-reported. 
Sleep time was surveyed with a categorized questionnaire of ≤5 h/day, >5 and ≤7 h/day, >7 and ≤9 h/day, and >9 h/day. Self-reported surveys were conducted for the nutritional intake of total calories, protein (g), fat (g), carbohydrate (g), calcium (mg), phosphorus (mg), iron (mg), potassium (mg), vitamin A (mg), sodium (mg), vitamin B1 (mg), vitamin B2 (mg), nicotinic acid (mg), vitamin C (mg), zinc (ug), vitamin B6 (mg), folic acid (ug), retinol (ug), carotene (ug), ash (mg), fiber (g), vitamin E (mg), and cholesterol (mg) by trained interviewers using a validated questionnaire. Total body fat was measured using DEXA. The absolute differences in dietary intake (including fat intake), total body fat, and BMI score between the matched twin participants were estimated. Categorical variables were compared using the chi-square test. Continuous variables were compared using the Wilcoxon rank-sum test. We calculated the estimated values (EV) (absolute difference between monozygotic twins minus absolute difference between dizygotic twins) with 95% CI of the absolute differences in dietary intake, total body fat, and BMI score using a linear regression model. p values < 0.05 were regarded as statistically significant. SPSS v. 24.0 was used. The levels of income and education, marital status, obesity, smoking status, alcohol consumption, and sleep time were not different between the two groups (p > 0.05). The nutritional intake, DEXA fat, and BMI were not different between the monozygotic and dizygotic twin groups. The differences in dietary intakes of twin pairs were compared between the monozygotic and dizygotic twin groups. The difference in total body fat was higher in dizygotic twins than in monozygotic twins (adjusted EV = 2427.86 g, 95% CI = 1777.19–3078.53 and adjusted EV = 1.90%, 95% CI = 1.33–2.46). Total body fat was more concordant within monozygotic twin pairs than in dizygotic twins. 
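The EV described above is a between-group contrast of within-pair absolute differences. With a single binary zygosity predictor and no covariates, the OLS slope reduces to a difference of group means. The sketch below uses made-up pair values and an assumed sign convention (dizygotic minus monozygotic, so that a positive EV means dizygotic pairs differ more); the study's adjusted EVs come from a covariate-adjusted linear regression in SPSS.

```python
# Hypothetical within-pair intake values (kcal) for illustration only.
mz_pairs = [(2100, 2250), (1800, 1750), (2500, 2420)]  # monozygotic pairs
dz_pairs = [(2300, 1900), (2000, 2600), (1700, 2200)]  # dizygotic pairs

def abs_diffs(pairs):
    """Absolute within-pair difference for each twin pair."""
    return [abs(a - b) for a, b in pairs]

def mean(xs):
    return sum(xs) / len(xs)

mz, dz = abs_diffs(mz_pairs), abs_diffs(dz_pairs)
# Unadjusted EV: mean DZ difference minus mean MZ difference
# (equivalent to the OLS coefficient on a 0/1 zygosity indicator).
ev = mean(dz) - mean(mz)
print(mz, dz, round(ev, 2))
```

A positive EV here would be read as the trait (intake) being more concordant within monozygotic pairs, consistent with an inherited component.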
In addition, the concordances in the nutritional intakes of total calories and carbohydrates were higher in monozygotic twins than in dizygotic twins in this study. The current results suggest inherited portions in the development of obesity and nutritional intake. This study improved the evidence on the inherited contribution to obesity by analyzing monozygotic and dizygotic twins. Total body fat was more concordant in monozygotic twins than in dizygotic twins in the current study. The metabolic pathways related to body fat storage and adipocyte lipolysis have been reported to be related to shared genetic loci. In addition, nutritional intakes of total calories and carbohydrates were more similar between monozygotic twin pairs than between dizygotic twins in this study. Eating behavior is one of the main determining factors for weight gain. However, there was no higher concordance of BMI in monozygotic twin pairs compared to dizygotic twins in the present study. With advances in genome-wide association studies, multiple predisposing genetic factors have been reported for obesity. This study evaluated the differences in nutritional intake, total body fat, and BMI in monozygotic and dizygotic cohort populations. Total body fat was measured using DEXA, and BMI was calculated based on the measured weight and height. In addition, the socioeconomic factors of income level, education, and marital status were considered in the analyses. Furthermore, the lifestyle factors of physical activity, smoking, alcohol consumption, and sleep duration were assessed to minimize potential confounding effects from these variables. The KoGES HTS data are generated and regularly monitored by statisticians in the Korean government. However, the cross-sectional study design limits causal inference about obesity and nutritional intake in monozygotic twins in this study. 
In addition, although validated questionnaires were used to examine nutritional intake, there could still be limitations inherent to self-reported measures. The nutritional intakes of total calories and carbohydrates were more similar in monozygotic twin pairs than in dizygotic twins. Total body fat was more similar between monozygotic twins than between dizygotic twins."} {"text": "Candida (NAC) species remarkably increase azole resistance in developing countries. We aimed to study candidemia trends and associated risk factors in oncology patients, since they vary geographically and rapid, appropriate treatment improves outcomes. Vitek 2 was used to identify the Candida species, and the E-test determined their susceptibility to azoles. Candida was the cause of 3.1% (n = 53/1701) of bloodstream infections (BSIs) during a 1-year study. Candida tropicalis was the most predominant species among the 30 candidemia episodes studied (36.7%), followed by C. albicans (33.3%). However, C. krusei, C. guilliermondii, C. pelliculosa, C. parapsilosis, C. famata, and C. inconspicua accounted for 30.0% of the isolates. An increased risk of NAC BSI was significantly associated with chemotherapy and leucopenia. However, the multivariable analysis revealed that leucopenia was the only independent risk factor (P = 0.048). Fluconazole and voriconazole resistance rates were 58.3% and 16.7%, with NAC species showing higher resistance rates than C. albicans. Both fluconazole and voriconazole minimum inhibitory concentration (MIC) median values were higher in NAC than in C. albicans, but only voriconazole was significantly higher. In conclusion, the increased prevalence of NAC BSIs and the remarkably high fluconazole resistance rates in cancer patients emphasize the necessity of antifungal stewardship to preserve voriconazole effectiveness, continued surveillance of candidemia, and future studies into the molecular mechanisms of azole resistance. Candidemia is a life-threatening invasive fungal infection in immunocompromised patients. 
The widespread use of azoles has driven a shift toward non-albicans species. The online version contains supplementary material available at 10.1007/s00284-023-03468-w. Candida bloodstream infections (BSIs) have emerged as a global cause of invasive fungal infection in critically ill patients in healthcare settings. Death rates caused by this life-threatening infection range from 35 to 53% and have been linked to higher morbidity and hospital costs. In vitro antifungal susceptibility results and MIC values of the isolates are presented in the accompanying Table. The overall resistance rate for fluconazole was high, including 5 C. albicans, 6 C. tropicalis, and 1 C. parapsilosis isolates, besides 2 C. krusei strains that have an inherent resistance to fluconazole. Despite a high median fluconazole MIC against all Candida species of >256 μg/ml (range 0.190 to >256 μg/ml), C. albicans had a lower resistance rate, median MIC value, and range than the NAC species, whose median MIC was >256 (0.190–>256) μg/ml (P = 0.981). Furthermore, the MIC50 and MIC90 of voriconazole and fluconazole in NAC species were higher than in C. albicans. Candida was the third-leading cause of BSIs in the United States and the seventh cause in Europe. In the present study, Candida species isolates represented 3.1% of all positive blood cultures. This isolation rate parallels the results of Lim et al., where 3.0% of all positive blood cultures were fungal infections, with the majority being Candida species (95%). Although C. albicans has historically been the predominant pathogenic Candida species, a global rise of NAC species has been reported in recent decades. Studying local trends in Candida species distribution and antifungal susceptibility is essential, since they differ considerably among countries and institutions and are influenced by patients' underlying conditions. Candidemia is one of the most common invasive fungal infections in immunocompromised cancer patients. 
Despite these conditions, Candida tropicalis was the most common species in our patients, followed by C. albicans (36.7 and 33.3%), while C. krusei, C. guilliermondii, C. pelliculosa, C. parapsilosis, C. famata, and C. inconspicua were isolated less often. The distribution of Candida species varies across geographical areas, with C. albicans reported more often in some regions and C. tropicalis predominating across Africa, Asia, and Latin America. Variation in Candida species distribution among patients with certain underlying disorders has also been noticed. Similar to our results, a higher proportion of C. tropicalis and C. krusei has consistently been recorded among candidemia patients with hematological malignancies. Some studies have implicated broad-spectrum antibiotics and immunosuppression as risk factors for candidemia caused by certain non-albicans species, such as C. famata and C. pelliculosa. Interestingly, C. famata was also detected in children with cancer in a study by Vasileiou et al. Although C. pelliculosa has been documented primarily in neonates, it has also been reported at a lower frequency in other age groups, consistent with our findings. The worldwide rise in azole resistance has caused severe therapeutic challenges. This rise may be due to the widespread use of azoles, particularly in low-income countries, and their fungistatic mode of action against Candida. A previous study found that azole resistance was significantly associated with cancer and transplantation settings (P < 0.001) and was ten times more prevalent in larger institutions (P < 0.001). Fluconazole and voriconazole were the most frequently utilized antifungals in our patients; such azoles were used in 87.5% of prophylactic and 73.3% of therapeutic antifungal courses. After excluding the Candida isolates with no established CBPs, fluconazole and voriconazole resistance accounted for 58.3% and 16.7% of total Candida BSIs, respectively. All species showed high resistance rates and MIC values for azoles, notably fluconazole, with higher resistance in NAC than in C. albicans (64.3% versus 50.0%). 
These results agreed with an Egyptian study that found high fluconazole resistance rates in NAC and C. albicans species causing BSIs in pediatric patients at Cairo University pediatric hospitals. Another study reported azole resistance among C. albicans, C. krusei, C. glabrata, C. tropicalis, and C. parapsilosis in ICU patients at Ain Shams University Hospital. A previous study conducted on pediatric candidemia patients at Egypt's Mansoura University Children's Hospital likewise reported high resistance rates in NAC and C. albicans species, respectively. Several studies from other countries also found high MICs and resistance rates for azoles, mainly fluconazole; however, these high values were detected only in NAC species and at lower rates. According to a study conducted in China, 50.0% and 56.5% of C. tropicalis isolates were resistant to fluconazole and voriconazole, respectively. Another study reported fluconazole resistance in C. tropicalis (47.6%) as well as in uncommon Candida species (45.5%). Epidemiologic cutoff values (ECVs) can identify Candida isolates with and without acquired resistance based on their phenotypes. An organism with a MIC greater than the ECV is assumed to have mutational resistance (non-WT). ECVs should not replace the published CBPs in clinical practice, since they do not categorize isolates into sensitive and resistant (treatable and untreatable); however, they can guide physicians in making clinical decisions when CBPs are unavailable. Our Candida isolates showed increased rates of non-WT phenotypes for azoles, with MIC values higher than the ECVs. Fluconazole and voriconazole non-WT phenotypes were identified in 88.5% and 48.0% of all isolated Candida species. 
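The WT/non-WT dichotomy described above is a simple threshold rule on the MIC. A minimal sketch, using placeholder ECVs (illustrative numbers only, not the published CLSI values and not clinical breakpoints):

```python
# Hypothetical fluconazole ECVs (μg/ml) for illustration only.
ECV_FLUCONAZOLE = {
    "C. albicans": 0.5,
    "C. tropicalis": 2.0,
    "C. parapsilosis": 2.0,
}

def classify(species, mic, ecvs):
    """Return 'non-WT' if the MIC exceeds the species ECV, 'WT' otherwise,
    or None when no ECV has been established for the species."""
    ecv = ecvs.get(species)
    if ecv is None:
        return None
    return "non-WT" if mic > ecv else "WT"

isolates = [("C. albicans", 0.25), ("C. tropicalis", 8.0), ("C. krusei", 32.0)]
for sp, mic in isolates:
    print(sp, mic, classify(sp, mic, ECV_FLUCONAZOLE))
```

The None case mirrors the situation discussed in the text: species without established ECVs (or CBPs) cannot be categorized and must be interpreted with clinical judgment.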
Furthermore, we found severe azole cross-resistance among our isolates. All voriconazole-resistant Candida isolates were fluconazole-resistant (100%). In addition, all isolates with non-WT phenotypes for voriconazole also had non-WT phenotypes for fluconazole (100%). Most of our isolates showed elevated MIC values for azoles compared to published studies from other countries. Non-WT phenotypes for fluconazole and voriconazole were observed in 61.9% and 33.3% of C. tropicalis, 60.0% and 45.0% of C. glabrata, and 20.8% and 16.7% of C. albicans in Korea; in 17% and 7% of C. guilliermondii, 16% and 7% of C. lusitaniae, and 8% and 17% of C. kefyr in the USA; and in 36.9% of C. tropicalis and 5.6% and 5.6% of C. albicans in Algeria. Cancer patients are a unique population subjected to multiple courses of antibiotics and antifungals whenever they manifest a persistent fever that does not respond to antibiotics; thus, they are at greater risk of developing resistance. Because fluconazole has been linked to high rates of resistance in many parts of the world, it should be used cautiously in high-risk patients whose response is uncertain. Furthermore, based on the high fluconazole resistance rates shown in our study and other Egyptian studies, a stewardship program is needed to preserve voriconazole so that it is not subjected to increasing resistance through misuse. Conventional techniques alone are insufficient for identifying the Candida species that cause invasive candidiasis. 
Broader-spectrum techniques, such as matrix-assisted laser desorption/ionization-time of flight (MALDI-TOF), more advanced methods such as PCR-restriction fragment length polymorphism (PCR-RFLP) and sequencing, or combined methods are recommended whenever applicable, particularly in patients at high risk for candidemia. In this study, although the sn-PCR and CHROMagar results showed significant agreement with those of Vitek 2, sn-PCR and CHROMagar could not identify 26.6% and 30.0% of the isolates to the species level, respectively. Broad-spectrum antibiotics, prolonged hospital stays, ICU admission, chemotherapy, neutropenia, recent surgeries, coexisting bacteremia, and multifocal Candida infections were all observed as risk factors in our patients. Several studies have also revealed these variables as significant factors in the development of candidemia [38, 39]. We observed Candida BSIs in patients with hematologic and solid tumors equally. Surprisingly, in studies by Zheng et al. and Liu et al., solid cancers were more prevalent than hematological malignancies. Early recognition of risk factors for Candida infections is critical for optimal patient care. Candidemia caused by NAC species was significantly associated with chemotherapy and leucopenia. On the other hand, in adult patients, surgeries, prolonged hospitalization, and multifocal infections were predictive factors linked to candidemia caused by C. albicans. In multivariable analysis, leucopenia was the independent factor associated with an increased risk of NAC BSIs. Likewise, hematological malignancies, chemotherapy, leucopenia, and neutropenia were identified by other studies as increasing the risk of NAC, whereas surgeries, old age, and catheters increase the risk of C. albicans BSIs. The current study has some limitations. Although the NCI treats cancer patients from all over the country, our findings were limited to a single institute. Our study's sample size was relatively small, which may impact the risk factor analysis. Moreover, since C. auris is rare in our country and the tested species account for more than 90% of cases, we focused on the prevalence of common Candida species rather than C. auris among our patients. A recent study in our country failed to detect C. auris in 414 candidiasis-causing isolates. We revealed the predominance of NAC-causing candidemia in oncology patients, with C. tropicalis being the most prevalent species. NAC candidemia was significantly linked to chemotherapy and leucopenia. We believe that chemotherapy-induced leucopenia and neutropenia in cancer patients are strongly linked to NAC BSIs rather than the underlying tumor itself, such as hematological malignancies. Our findings demonstrated a marked reduction in fluconazole and voriconazole susceptibility, particularly in NAC species. The generally critical condition of cancer patients and numerous risk factors, such as prolonged hospitalization, all contributed to these susceptibility trends. Furthermore, the extensive use of azoles at our institute enhanced the selection pressure for resistance. The high fluconazole resistance rates and the increasing prevalence of NAC suggest a potential decrease in therapeutic efficacy and point to the necessity of stewardship programs to preserve voriconazole efficacy. Moreover, this alarming rise in azole resistance rates and elevated MIC values highlights the importance of physician awareness, early detection, and continued surveillance of candidemia, especially in this critically ill group of patients. Investigating the underlying molecular mechanisms of azole resistance is also highly advised, as this may improve the understanding of this critical clinical situation. Below is the link to the electronic supplementary material: Supplementary file1 (DOCX 369 KB)."} {"text": "We performed coarse-grained molecular dynamics simulations of DNA polymers pushed inside infinite open chiral and achiral channels. 
We investigated the behavior of the polymer metrics in terms of span, monomer distributions, and changes of the topological state of the polymer in the channels. We also compared the regime of pushing a polymer inside an infinite channel to the case of polymer compression in finite channels of knot factories investigated in earlier works. We observed that the compression in the open channels affects the polymer metrics to different extents in chiral and achiral channels. We also observed that the chiral channels give rise to the formation of equichiral knots with the same handedness as the handedness of the chiral channels. Polymers are long molecules consisting of many building units called monomers. The way the monomers are connected defines the polymer's topology. A relatively new topology is represented by polymer knots. It is difficult to pinpoint when and how this scientific interest in knotted molecules began, whether it was sparked by imagination or, as is often the case, by observing nature. But it is now clear that the topological state of molecules has strong biological and technological effects and implications. In biology, knots can be very harmful to a genome [5,6,7,8]. As pointed out above, polymers, especially with regard to their topology, represent a problem well suited to being studied using computers. While progress in polymer synthesis has enabled controlled preparation of knotted molecules of up to eight crossings, in computer simulations knots of essentially arbitrary complexity can be prepared and analyzed. As we also mentioned, knots are formed naturally by biological processes [14]. The next layer of complexity is added by investigating polymer knotting in confined spaces. The confinement state is the most typical state in which polymers occur in nature [19,20]. Consequently, the level of complexity of the problem can be extended by adding external forces into consideration that induce compression of the confined polymer. 
The compression of polymer chains under confinement has been studied by means of Monte Carlo (MC) [24,25] and molecular dynamics simulations. It has been demonstrated that the theoretical and computational insights into polymer compression within nanochannels can be experimentally validated by confining DNA within nanofluidic channels and inducing compression through specific experimental setups [42]. Chirality is a prominent property of knots. As mentioned above, knots are formed by a substantially long polymer chain winding around itself. One of the parameters used to characterize knots is the crossing number, which quantifies how many times the polymer winds around itself. The direction of the winding defines the chirality of the knot. The crossing number is a combinatorial property, and the number of possible knot types that can be constructed with a given crossing number increases substantially. This is why it is possible to construct as many as 1.7 million prime knots with up to 16 crossings, of which fewer than 2000 are achiral. In the context of knot formation, a recent computer simulation was devised to explore an intriguing scenario: whether and how knotting could be induced by simply pushing DNA inside nanochannels, without the need for the more complex lab-on-chip nanofluidic experiments. In our current study, we explore the formation of knots in polymers as they are pushed into open, infinitely long nanochannels of varying sizes and geometries. To simulate chiral environments, we designed these channels with a helical geometry and induced different chirality by altering the winding direction of the helical loops within the nanochannels. We developed a novel computational approach to identify the chiral properties of the knots that form in the DNA strands as they are pushed inside these channels. 
Our method utilizes Knoto-ID, a topological analysis tool. The dsDNA is modeled as a discretized beaded chain consisting of N = 300 beads, each representing a DNA portion with a width of σ = 2.5 nm. The non-bonded excluded-volume interaction between beads is the purely repulsive, truncated and shifted Lennard-Jones (WCA) potential, Uex(rij) = 4ε[(σ/|rij|)^12 − (σ/|rij|)^6 + 1/4] if |rij| < 2^(1/6)σ, and Uex(rij) = 0 otherwise, where |rij| is the distance between the position vectors ri and rj of a pair of beads with i ≠ j. The helical channels are characterized by the channel radius Rch and a central axis described by the parametric equation of a helix, where t is a periodic parameter in radial space and ω gives the subtended angle as t increases. Physical units are recovered using [σ] = 2.5 nm and [τ] = 74 ns × (η/η0), where η0 is the viscosity of pure water, η0 = 1 cP, and η is the viscosity of the actual buffer used in the nanofluidic experiment. The buffers in nanofluidic applications often contain polymers, saccharose, agarose, etc., to increase the hydrodynamic drag of the medium on the molecule. The span is shown as a function of the velocity of the piston pushing the polymer along the infinite channels. The push by the piston is realized by applying an external force to the piston; hence, the velocity of the piston was obtained from the simulated trajectories as the total distance traveled by the center of the piston from the beginning of the simulation to its final position, divided by the simulation time. We would like to note that we opted for realizing the push via an applied external force instead of directly moving the piston by a constant distance at a time, as simulated in some existing works. The comparison of polymer metrics for polymers pushed in helical and cylindrical geometries at D/P = 0.5 shows a difference that seems to be higher than for compression in channels with an impenetrable wall. The difference in polymer span also seems to disappear at weak confinement strengths in terms of D/P, especially if strong compressive forces are applied. 
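The excluded-volume interaction described above is the standard truncated and shifted (WCA) Lennard-Jones form. A minimal sketch in reduced units (σ = ε = 1), assuming the conventional 4ε prefactor:

```python
def wca(r, sigma=1.0, epsilon=1.0):
    """Weeks-Chandler-Andersen (purely repulsive) Lennard-Jones potential:
    U(r) = 4*eps*[(sigma/r)^12 - (sigma/r)^6 + 1/4] for r < 2^(1/6)*sigma,
    and U(r) = 0 at and beyond the cutoff."""
    r_cut = 2 ** (1 / 6) * sigma
    if r >= r_cut:
        return 0.0
    sr6 = (sigma / r) ** 6
    return 4 * epsilon * (sr6 * sr6 - sr6 + 0.25)

# The +1/4 shift makes the potential vanish continuously at the cutoff,
# so the force has no discontinuity there.
print(wca(2 ** (1 / 6)))  # 0.0 at the cutoff
print(wca(1.0))           # equals epsilon at r = sigma: 4*(1 - 1 + 1/4) = 1.0
```

Because the potential is zero beyond 2^(1/6)σ, it models pure excluded volume with no attraction, which is why it is the standard choice for good-solvent coarse-grained DNA beads.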
As the external force acting on the particle increases, the hydrodynamic drag force remains proportional to the velocity of the particle. The comparison of the computed span shows, in general, smaller compaction of the polymer when compressed by pushing inside the open channels than in the case when the polymer is compressed against the impenetrable wall in the nanochannels. We understand the obtained results as follows. Given the form of the equations of motion in the Langevin dynamics, provided in the Methodology section, the drag on the beads is proportional to their velocity. We also showed in the previous work that the helical confinement in narrow channels acted to a certain extent like cylindrical channels with a smaller diameter, i.e., channels with higher confinement strength. The conformation of the DNA molecule is determined by a balance of several ongoing forces: the confinement force, the elastic force, and the hydrodynamic force intermediated by the pushing force. As investigated in the existing studies, the compaction of the polymer under external forces is relevant and related to conformational changes [25,36]. At intermediate D/P and small compressive forces, the existence of a special regime was discovered, forming a shoulder on the dependencies of the span versus compressive forces [25]. The polymer metrics provide one-dimensional information on the polymer behavior and the effects of confinement and compressive force. Another convenient property that is directly accessible from computer molecular simulations to represent the situation of a dynamically moving polymer molecule is the distribution of monomers. Here, we analyze the radial and axial distributions of monomers across the major axes of inertia of the confining channels. The pitch is dH = 2πkσ, where k = D/2π in helical channels, while in cylindrical channels dH simply equals D. 
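The statement that the drag stays proportional to velocity implies a drift velocity set by the ratio of force to friction. A minimal sketch of the drift term of the Langevin equation (noise omitted, hypothetical reduced units) shows the velocity relaxing to F/γ:

```python
def terminal_velocity(force=1.0, gamma=2.0, mass=1.0, dt=1e-3, steps=20000):
    """Integrate m*dv/dt = F - gamma*v (Langevin drift term, thermal noise
    omitted) with explicit Euler; v relaxes toward the drift value F/gamma."""
    v = 0.0
    for _ in range(steps):
        v += dt * (force - gamma * v) / mass
    return v
```

This is why, in the simulations, a constant applied force on the piston translates into an approximately constant pushing velocity read off from the trajectories.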
There are five heatmaps corresponding to five settings of the external force, ordered from the bottom up following the increasing velocity of pushing, as shown in the inset. The heatmaps indicate an expulsion of monomers toward the lateral sides of the channel with increasing velocity of pushing. This effect is numerically captured in the graphs showing radial distribution functions that are displayed adjacent to the heatmaps in each of the panels. The radial distributions show a similar shape to those already obtained for cylindrical channels [68]. To the right of the radial distributions on each panel, we also show the axial distribution of monomers along the main axis of inertia of the channels. These distributions show the number density or concentration of monomers measured from the position of the piston; in the axial distributions, the position of the piston is always at the origin. The values on the x-axis also reflect the direction of push in the simulations, which went from right to left in Cartesian coordinates. The shape of the concentration profiles is very similar to what is observed experimentally in the dynamic nonequilibrium segmental concentration profile of a single nanochannel-confined DNA molecule [43]. The distributions are projected onto the channel axis (the x-coordinate); this kind of projection was used in some existing works studying polymers in nanochannels by other authors [35]. Distributions are shown for Fσ/ε0 = 0.1 and 5. Each panel also shows a representative snapshot from the simulation, showing the polymer in the channel with a particular geometry, obtained at the end of the simulation for the setting of Fσ/ε0 = 1. Results are compared for D/P = 0.5, 1, and 2. We can see that, in absolute numbers, a larger maximum on the axial distribution near the position of the piston surface is achieved in the case of cylindrical channels. Also, the observed flattening of the radial distributions is more extensive in the case of cylindrical channels. 
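The radial and axial monomer distributions described here can be computed with simple histograms once the channel's main axis of inertia is aligned with x. A minimal sketch, assuming a hypothetical (N, 3) array of bead coordinates and a known piston position on the axis:

```python
import numpy as np

def monomer_distributions(positions, piston_x, r_max, bins=50):
    """Axial and radial monomer histograms for a channel whose main axis of
    inertia lies along x. `positions` is a hypothetical (N, 3) bead array
    and `piston_x` the piston position on the axis."""
    axial = positions[:, 0] - piston_x                   # distance from piston
    radial = np.hypot(positions[:, 1], positions[:, 2])  # distance from axis
    axial_hist, _ = np.histogram(axial, bins=bins)
    radial_hist, _ = np.histogram(radial, bins=bins, range=(0.0, r_max))
    return axial_hist, radial_hist
```

Placing the piston at the origin of the axial histogram reproduces the convention used in the panels, where concentration profiles are measured from the piston surface.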
This indicates that when pushing the polymers inside open channels, the force is less effective in compressing the polymers, and the monomers do not fully explore the helical grooves of the helical channels, consistently with the polymer metrics shown above. In this section, we evaluate the topological state of DNA polymers under compression induced by pushing through open infinite channels. The knotting probability is evaluated as the frequency of finding knots in ten runs along the trajectories, each containing 5000 structures for topological analyses. The occurrence of knots was evaluated by the Knoto-ID software (v1.3.0) for each D/P ratio. The analysis shows that, in general, the probability of knotting increases with the velocity of pushing, for example at D/P = 1. It is also shifted towards the occurrence of more complex knots with increasing velocity of pushing. This observation is consistent with previous investigations of knotting in DNA pushed through square channels. On the other hand, the distinctive feature of the pushing inside the infinite channels seems to be an apparently lower knotting probability in nanochannels with helical geometry as compared to the cylindrical channels. As discussed above, the parameter k = D/2π determines the distance between helical loops, or the size of the helical turns, dH = 2πkσ. It is important to note that the current setting of the pitch was chosen based on our previous work, where we investigated chiral effects in terms of the mobility of localized knots with a given chirality. It is probable that for the current experimental setting of the polymers pushed inside the open channels, the pitch has to be fine-tuned, perhaps towards larger values above the deflection length λ = (D²/P)^{1/3}, so that dH > 2πkσ for k fixed to k = D/2π. 
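The knotting probability defined here, the frequency of knotted frames along a trajectory, can be tallied from per-frame knot labels. A sketch assuming the labels have already been parsed from Knoto-ID output; the Rolfsen-style label "0_1" for the unknot is an assumption of this illustration:

```python
from collections import Counter

def knotting_statistics(knot_types):
    """Estimate the knotting probability and the knot spectrum from a list
    of per-frame knot labels (hypothetical format; '0_1' = unknot)."""
    counts = Counter(knot_types)
    total = len(knot_types)
    p_knot = 1.0 - counts.get("0_1", 0) / total
    return p_knot, counts
```

Applying this per run and averaging over the ten runs gives the knotting probability, while the full `counts` spectrum shows the shift toward more complex knots at higher pushing velocities.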
The decreased knotting probability in helical channels can, however, be due to the specific geometric parameters of the helical channels determined by the pitch. In order to gain insight into this behavior, we simulated a DNA polymer pushed into a helical channel with the size of the helical loops well above the persistence length, set to dH = 2P, and analyzed the knotting probability for strong confinement (D/P = 0.5) and very small compressive forces (Fσ/ε0 = 0.1). Since detailed refinement and thorough exploration of the parameter settings of the channels' geometry are clearly beyond the scope and extent of a single work, we will readdress it in future works, providing more information on polymer behavior for various parameter settings of the helical channels, polymer chain length, and persistence lengths. The midsection of the figure shows the cases D/P = 0.5 and D/P = 2. This might be related to the fact that for both of these settings, the knottedness is lower than in the case of D/P = 1, and the amounts of existing twist and torus knot types as a function of crossing number do not evolve equally; in other words, with increasing crossing number, there are more twist knots than torus knots. For the cases outside intermediate confinement, D/P = 0.5 and 2, we see an abundance of unknots. In the case of the strong confinement, D/P = 0.5, the unknots can be related to lower degrees of compaction and the prevailing effects of confinement keeping the chain extended. In the case of weak confinement, D/P = 2, chain length effects or timescale and velocity effects might be taking place. The polymer at its given length is much more diluted; hence, on the one hand, this leads to much more spooling, but the chain also might not have enough time to explore the geometrical spaces of the larger channels at the fixed rate of pushing. 
This may lead to a higher extent of writhing. Note that there are only 20 amphichiral knots out of the 801 knots that can be constructed from knotted lines with up to 11 crossings. We further investigate whether the effect of the handedness of the helical channels on the knot chirality is preserved to some extent, and whether channels with helical geometry and given handedness can be used to control the handedness of the knots that are created during the pushing of the DNA through the channels. In the previous study, we probed knot chirality using the KymoKnot software (http://kymoknot.sissa.it/). In the current work, we directly use the information on the chirality of the knots provided by the Knoto-ID software. Mathematically, knots occur only on closed curves, and the algorithms for finding knots often involve some kind of closure method that constructs a connection between the free ends of the linear polymer chain; if the end-to-end distance were as short as ℓ = 1σ, there would be no need to construct a closure. In order to eliminate the possible bias coming from the closure method, we evaluated only the knots found in conformations with an arbitrarily chosen very short end-to-end distance. It is noteworthy that although we do not know how the end-to-end distance and the bias from the closure method are related quantitatively, one intuitively expects that the number of entanglements introduced by closing the arc grows with the distance spanned by the added closing segments. The distance was set to 10σ based on the average variation of the end-to-end distance found in consecutive frames in simulated trajectories, and we consider it a ligation distance. Furthermore, evaluating knots at short end-to-end distances can be of practical relevance, as the knots could be chemically embedded in the polymer by closing the polymer ring chemically. After computing the numbers of right-handed and left-handed knots, we evaluated their statistics, which are summarized in the last column of the corresponding table. 
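The ligation-distance filter described here is a simple selection on the end-to-end distance of each conformation. A minimal sketch, assuming a hypothetical trajectory array of shape (frames, N, 3) and the 10σ threshold from the text:

```python
import numpy as np

def ligatable_frames(trajectory, lig_dist=10.0):
    """Indices of conformations whose end-to-end distance is below the
    ligation distance (10 sigma in the text), used to minimize the bias of
    the closure method. `trajectory` has hypothetical shape (frames, N, 3)."""
    ends = trajectory[:, -1, :] - trajectory[:, 0, :]
    e2e = np.linalg.norm(ends, axis=1)
    return np.flatnonzero(e2e < lig_dist)
```

Only the frames returned by this filter would then be passed to the topological analysis, so that the added closing segments span a short distance and introduce few spurious entanglements.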
In addition to the information on knotting statistics, computer simulations can also help in understanding the mechanisms by which knotting occurs [75]. An earlier work investigated this by inspecting the positions of emerging knots along the polymer chains. The occurrence of backfolding is evident from the evolution of the polymer metrics in terms of the chain span, investigated above. Additionally, we show the evolution of the writhe throughout the simulations. By means of coarse-grained molecular dynamics simulations, we studied the behavior of polymers in terms of polymer metrics, monomer distributions, and the topology of the polymer chain pushed inside infinite nanochannels with both cylindrical and helical geometries, using a DNA biopolymer model system. The simulations showed that the polymer undergoes a compaction upon increasing pushing velocity that could be used for controlling knottedness. When compared to simulations of polymers compressed in finite channels, distinct features emerge. Primarily, the geometry of the channels exerts varying effects on the extent of polymer chain compaction. Consequently, when the polymer is pushed inside open channels, it forms fewer knots compared to when it is compacted by compression within finite channels against an impenetrable wall. The confined environment of the open channels limits the polymer's ability to explore the helical loops of the chiral nanochannels, but it still generates equichiral knots and equichiral writhe. The lower degree of knotting observed during the pushing inside helical channels may seem inconsistent with Ralf Metzler's argument regarding enhanced knotting resulting from irregularities in nanochannels. Our findings also prompted additional simulations involving variations in the pitch of the channels and the radius of the helix. 
Some of these simulations were included in the discussion of the results."} {"text": "Spontaneous echo contrast (SEC) observed in transesophageal echocardiography (TEE) is a reliable predictor of the risk of future ischemic stroke in patients with non-valvular atrial fibrillation (NVAF). Left atrial strain globally reflects atrial function, remodeling, and distensibility. The left atrial appendage (LAA) is a myogenic remnant of the left atrium, which can actively relax and contract, and it is an important part of releasing the pressure of the left atrium. The key role of the left atrium is to regulate the left ventricular filling pressure, act as a reservoir for pulmonary venous return during ventricular contraction, and act as a conduit, transferring blood to the left ventricle during early ventricular diastole. The purpose of this study was to explore the relationship between left atrial function and left atrial appendage spontaneous echo contrast (LAASEC). A retrospective study of 338 patients with non-valvular AF was conducted. Two-dimensional speckle-tracking echocardiography provided the following metrics of LA strain: LA strain during the reservoir phase (LASr) and LA strain during the conduit phase (LAScd). Dense SEC of more than grade 3 in the LA or LAA, showing a mud-like change, was defined as pre-thrombosis. Patients with grade 3 or higher SEC (n = 81) had lower LASr than those with lower grades of SEC (n = 257). A multivariate logistic regression model showed that the type of atrial fibrillation (persistent), increased heart rate, and decreased LASr were independently associated with dense LAASEC (OR (95% CI): 5.558 (1.618–19.09), 1.016 (1.006–1.026), and 1.224 (1.085–1.381), respectively; all P < 0.01). A Venn diagram showed that lower CHA2DS2-VASc score groups also contained dense SEC cases. A receiver operating characteristic (ROC) curve was used for analyzing the results and selecting cut-off values. 
The cut-off point was LASr < 8.85% combined with a CHA2DS2-VASc score > 2, with a sensitivity and specificity of 79% and 85%, respectively. Lower LASr is independently associated with dense LAASEC in NVAF and has incremental value superior to clinical scores. The decrease of LASr may be a potential non-invasive parameter for evaluating a higher risk of LAA thrombosis. 2. A total of 338 patients with non-valvular atrial fibrillation who were admitted to the Department of Cardiology of the First Affiliated Hospital of Suzhou University from December 1, 2018 to June 10, 2021 were included in this retrospective study. All 338 patients with AF included in the study were scheduled to undergo radiofrequency catheter ablation. All patients were examined by 12-lead electrocardiogram or 24-hour dynamic electrocardiogram, and the onset of AF was clearly recorded. All of them received transthoracic echocardiography (TTE) and transesophageal echocardiography (TEE). Paroxysmal atrial fibrillation was defined as AF that recovers spontaneously within 7 days or under intervention and recurs at different frequencies; persistent atrial fibrillation refers to continuous AF lasting more than 7 days. The inclusion criteria were as follows: patients with non-valvular atrial fibrillation (NVAF) with adequate image quality and complete clinical data. Informed consent was obtained from each patient. Basic information collected about the patients included age, height, weight, and the type of AF. Previous history of hypertension, diabetes, coronary heart disease, prior stroke, or congenital heart disease was recorded. The exclusion criteria were as follows: patients with congenital heart disease, valvular heart disease, thrombocytopenic purpura, severe mitral regurgitation, or tumor, and patients whose data included low-quality images. All patients underwent TEE and TTE before treatment, and each participant signed a written informed consent form. 
This study was conducted in accordance with the Declaration of Helsinki (as revised in 2013) and approved by the Ethics Committee of The Affiliated Hospital of Soochow University (No. 225/2022). 2.1. A Vivid E95 ultrasonic system equipped with an M5S probe was used for imaging. Imaging was performed with the patient in the left recumbent position and with ECG connected. We selected cardiac cycles with minimal differences and took the average heart rate when obtaining images, especially for persistent AF. Standard two-dimensional echocardiographic views were obtained, including the parasternal long-axis, apical four-chamber, and apical two-chamber views. The biplane Simpson's method was used to obtain the maximum and minimum left atrial (LA) volumes from the apical four-chamber and two-chamber views, and exponential standardization was performed according to the Body Mass Index (BMI) to calculate the total left atrial emptying fraction (LAEF). The early diastolic flow-velocity peak of the mitral valve (peak E) was measured by pulsed Doppler in the apical four-chamber view. Doppler tissue imaging (DTI) was used to measure the early diastolic tissue Doppler velocity of the mitral annulus at the septal and lateral sides (E'sep and E'lat). The ratio of E to E' (E/e') was then calculated. Simpson's rule was used to measure the left ventricular ejection fraction (LVEF) in the apical four-chamber and two-chamber views. Cardiac function was evaluated according to the current guidelines of the American Society of Echocardiography. 2.2. TEE was routinely performed in all patients with the Vivid E95 system equipped with a multiplane 6VT (3.0–8.0 MHz) transducer. All TEE assessments were performed by professional physicians, and multiple views were used to evaluate the left atrial appendage (LAA). The LAA was evaluated comprehensively to find any evidence of SEC or thrombus. 
During left atrial appendage imaging, the pulsed Doppler velocities of forward (emptying) and backward (filling) blood flow in the left atrial appendage were also recorded. Before TEE, lidocaine hydrochloride spray was used for local anesthesia of the throat. The left atrial appendage was evaluated from 0° to 180° in the middle esophagus. The left atrial appendage ejection fraction (LAAEF) was measured and calculated by the 2D Simpson's method: LAAEF = [LAA Vmax − LAA Vmin]/LAA Vmax × 100%. When the left atrium or left atrial appendage shows dense SEC of more than grade 3, it has a jelly-like, non-solid appearance that lasts throughout the cardiac cycle and has a tendency to deposit, which is defined as mud-like change or pre-thrombosis [12]. 2.3. EchoPAC has been used in most previous studies on left atrial (LA) deformation, with high feasibility and good consistency. Speckle-tracking echocardiography was performed using offline analysis (version 201). The starting point of the QRS wave in the electrocardiogram (ECG) was used as the zero baseline. 2D-STE is a speckle-tracking analysis technique using standard B-mode images. During brief breath holding and stable ECG recording, conventional 2D grayscale echocardiography was used to obtain apical four-chamber and two-chamber views. For patients with atrial fibrillation, about five consecutive cardiac cycles were measured, and three representative consecutive cardiac cycles were recorded and averaged. The recommended frame-rate setting is 60–80 frames per second. During processing, the left atrial endocardial surface was manually traced in the two-chamber and four-chamber views. To obtain the region of interest (ROI), the system automatically generates atrial endocardial surface tracking. The width and shape of the ROI can be manually adjusted; the software then divides it into six segments, and the tracking quality of each segment is automatically adjusted. 
For each segment, the software gives the longitudinal strain curve and the average curve of all segments, reflecting the pathophysiology of left atrial function. If more than three segments were excluded, the subject was removed from the study. The left atrial strain curves were then obtained. Repeatability evaluation of measurement parameters: two imaging cardiologists analyzed the acquired images in the EchoPAC software. Each researcher analyzed the images twice to avoid intra-observer differences, and the average of the data collected by the two researchers was taken as the final data to avoid inter-observer differences. 3. Continuous variables with a normal distribution were expressed as mean ± standard deviation; non-normally distributed variables were expressed as median (interquartile range), and categorical variables were expressed as frequency (percentage). The t-test or Mann–Whitney U-test was used to compare continuous variables between groups. The χ²-test or Fisher's exact test was used to compare categorical variables between groups. Univariate and multivariate logistic regression analyses were conducted to determine the independent risk factors for LAASEC. In univariate analysis, variables with P < 0.05 were included in multivariate analysis. Subsequently, variables with P < 0.05 in multivariate analysis were included in the final multiple logistic regression model. Finally, receiver operating characteristic (ROC) curve analysis was conducted to evaluate the predictive ability of left atrial function indicators and CHA2DS2-VASc scores. Statistical significance was defined as P < 0.05. SPSS software (version 27.0) was used for statistical analysis. 4. (1) A total of 338 patients with atrial fibrillation were included, 40.2% of whom were female, with an average age of 63.20 ± 10.36 years. 
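The ROC cut-off selection described in the statistics section can be illustrated by maximizing Youden's J (sensitivity + specificity − 1) over candidate thresholds. This is a minimal sketch on synthetic values, not the SPSS workflow the authors used; it assumes lower values (e.g. lower LASr) indicate the positive class:

```python
import numpy as np

def best_cutoff(values, labels):
    """Pick the cut-off maximizing Youden's J = sensitivity + specificity - 1,
    assuming lower values (e.g. lower LASr) indicate the positive class.
    Returns (cutoff, J, sensitivity, specificity)."""
    values, labels = np.asarray(values, float), np.asarray(labels, int)
    best = None
    for c in np.unique(values):
        pred = values < c  # positive prediction below the candidate cut-off
        tp = np.sum(pred & (labels == 1)); fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0)); fp = np.sum(pred & (labels == 0))
        sens, spec = tp / (tp + fn), tn / (tn + fp)
        if best is None or sens + spec - 1 > best[1]:
            best = (c, sens + spec - 1, sens, spec)
    return best
```

Sweeping all observed values as candidate thresholds and reporting the sensitivity/specificity pair at the optimum mirrors how a cut-off such as LASr < 8.85% with 79%/85% sensitivity/specificity would be read off a ROC analysis.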
81 patients (23.96%) had LAASEC above grade 3; the characteristics of all patients were recorded in the corresponding table. (2) According to the level of LAASEC, patients were divided into two groups: those with dense LAASEC were older (P = 0.003), with a higher proportion of women and of persistent atrial fibrillation, a higher incidence of heart failure (26.3% vs. 4.7%, P < 0.001), higher diastolic pressure, higher heart rate, higher CHA2DS2-VASc scores, larger left atrial size, larger diastolic and systolic left atrial volume indices, lower left ventricular ejection fraction and left atrial emptying fraction, lower LASr and LAScd, larger LAA ostia and LAA depth, lower LAAEV and LAAFV, and decreased LAAEF. (3) Factors associated with LAASEC: univariate logistic regression identified a variety of clinical, TTE, and TEE parameters as significant contributors to LAASEC, including age and sex. In multivariate analysis, persistent atrial fibrillation (P = 0.006), decreased LASr (95% CI 1.085–1.381, P = 0.001), and higher heart rate (95% CI 1.006–1.026, P = 0.002) remained independently associated with dense LAASEC. (4) The ROC curve analysis showed that the AUCs of the CHA2DS2-VASc score incorporated with LASr and of LASr + CHA2DS2-VASc were significantly higher than that of the CHA2DS2-VASc score alone (P < 0.001). Using a LASr cut-off value of <8.86% combined with a CHA2DS2-VASc cut-off value of 2, patients with dense LAASEC were identified with a sensitivity and specificity of 79% and 85%, respectively. (5) Linear relationships: the Venn diagram showed that there were dense SEC cases in the low anticoagulation score group. LASr was positively correlated with LAAEV, LAAFV, and LAAEF (r = 0.651, 0.459, and 0.768, respectively; P < 0.001) and negatively correlated with LAD, LAVI (diastole), and LAVI (systole). These changes are more obvious with increasing age; the overall strain impairment of the left atrium and the increase in left atrial volume index are both aging phenomena. 
Previous research results show that, compared with aging men, women may have worse left atrial mechanical properties. Previous reports provided evidence that elevated blood pressure may have a direct impact on left atrial mechanics [22]. Recent studies have shown that structural and functional abnormalities of the left atrium/left atrial appendage may also lead to stroke/transient ischemic attack. Enlargement of the left atrial appendage, SEC and thrombus in the left atrium/left atrial appendage, and decreased left atrial appendage emptying velocity (LAAEV) are recognized stroke risk markers. Anatomical remodeling, atrial fibrosis, and decline of left atrial myocardial contractility are factors related to left atrial thrombus/spontaneous echo contrast (LAT/SEC), reducing the flow rate of the left atrial appendage (LAA); the decreased LAA flow rate found by transesophageal echocardiography (TEE) is likely to lead to stroke. The area of the left atrial appendage (LAA) accounts for about 10% of the total left atrial volume and plays an important role in left atrial function. With the development of ultrasound technology, researchers have gradually come to understand the structure and function of the left atrial appendage. The LAA is a myogenic remnant of the left atrium, which can actively relax and contract. It has much stronger adaptability and blood storage capacity than other areas of the left atrium, so it is an important part of releasing the pressure of the left atrium and ensuring the filling of the left ventricle. Because of its complex structure and trabecular muscle, the left atrial appendage is the most common site of left atrial thrombosis. 
Once atrial fibrillation occurs, the left atrium and left atrial appendage lose their contractility, so that blood cannot be effectively drained and becomes stagnant, and eventually SEC or thrombosis of the left atrial appendage forms. The structural and functional remodeling of the left atrium and left atrial appendage during AF, including cavity expansion, endocardial elastic fiber hyperplasia, and myocardial function inhibition, are potential markers of left atrial appendage thrombosis (LAAT) and LAASEC [34]. Newer software allows simultaneous assessment of strain and volume changes frame by frame from 3D images. In addition, 3D speckle-tracking echocardiography is becoming a clinical reality and is expected to become a clinical tool for spatial analysis of local or global left atrial strain. One of the most significant limitations of this study is the patient population, as small samples may lack the statistical power to demonstrate significant differences."} {"text": "To evaluate patient characteristics, risk factors, disease course, and management of cervical vertebral osteomyelitis in patients who had radiation for head and neck cancers. A retrospective cohort study (case series) was conducted of patients diagnosed with post-radiation osteomyelitis of the cervical spine between 2012 and 2021. Data were collected from the patients' medical files. Seven patients with post-radiation cervical osteomyelitis were reviewed. The median patient age was 64 years. The mean interval between the diagnosis of osteomyelitis and the first and last radiotherapy course was 8.3 and 4.0 years, respectively. A medical or surgical event preceded the diagnosis in four patients (57%) by a mean of 46.25 days. Common imaging findings were free air within the cervical structures and fluid collection. 
Four patients recovered from osteomyelitis during the follow-up, within an average of 65 days. Post-radiation osteomyelitis is characterized by a subtle presentation, challenging diagnosis, prolonged treatment, and poor outcome. Clinicians should maintain a high index of suspicion for the long term after radiotherapy. Multidisciplinary evaluation and management are warranted. The study describes post-radiotherapy osteomyelitis of the cervical spine, a rare and devastating complication; literature data regarding this complication are sparse. Head and neck cancer (HNC) is the seventh most common malignancy worldwide, with approximately 900,000 new cases and half a million deaths annually. Squamous cell carcinoma is the most common histological type of HNC, accounting for about 90% of patients. Radiotherapy is the mainstay of treatment, with or without chemotherapy or surgery. Variations in the combination and order of treatments depend on tumor site, histology, stage, and the patient's medical history and preference. However, despite advances in diagnosis and treatment, locoregional recurrence occurs in 15–50% of patients, and up to 27% of recovered patients are at risk of secondary primary head and neck tumors. Radiotherapy to the head and neck often causes immunological and vascular changes to the upper aerodigestive mucosa. As a result, it may induce mucosal ulceration, tissue breakdown, and the formation of non-healing wounds and fistulas, through which microorganisms colonizing the irradiated mucosa can penetrate and infect the soft tissue and bone. Furthermore, radiation may have substantial adverse effects on cellular physiology, including inhibition of osteoblast and osteoclast activity, vascular injury, and cellular metabolic imbalance, leading to osteolysis, increased susceptibility to infection, and tissue necrosis. ORN affects 2% of irradiated patients with HNC, commonly presenting in the mandible and maxillary bones. 
Osteoradionecrosis (ORN) occurs when an irradiated bone becomes exposed through a wound in the overlying skin or mucosa and persists without healing for 3–6 months. ORN of the mandible was described in up to 5% of patients after head and neck irradiation and was attributed to the bone density, which absorbs a more considerable amount of radiation, and to a poorer vascular supply than other bones. Conversely, ORN of the cervical spine and skull base is a rare condition. As opposed to the mandible and maxilla, the cervical spine and skull base are deeper, beyond anatomical barriers, and not in close contact with the contaminated biofilm of the oral cavity mucosa; also, the cervical spine and skull base are rarely positioned within the maximal radiation field, as in the case of oral cavity cancer and the jaws. Cervical vertebral osteomyelitis refers to the infection and inflammation of the bone and bone marrow in the cervical spine (neck region), typically caused by bacteria such as Staphylococcus aureus, which can enter the vertebrae through various routes, including direct trauma, surgery, or bloodstream infections. Osteomyelitis can affect any bone in the body, including the cervical vertebrae. Compared to osteomyelitis, radionecrosis refers to tissue death or damage due to exposure to radiation therapy. It most commonly affects tissues that have been irradiated as part of cancer treatment. Radionecrosis can occur in various body regions, including the head and neck, where the cervical vertebrae are located. To summarize, cervical vertebral osteomyelitis is an infection and inflammatory condition of the cervical spine bones, usually caused by bacteria; radionecrosis, conversely, is tissue death or damage resulting from prior radiation therapy, commonly seen in the cervical vertebrae following treatment for HNC. 
In addition, radiation may damage the cervical vertebrae and the adjacent ligamentous structures, which are close to the radiation field, resulting in cervical spine deformity and instability and, ultimately, spinal cord compression, neurological deficit, and myelopathy. The treatment of head and neck osteomyelitis may require specific expertise. Besides the tendency for polymicrobial infections, important anatomic considerations arise owing to the challenging drainage approach and the proximity of essential major blood vessels and the skull base. In some cases, a prolonged antibiotic regimen may be sufficient, whereas resistant disease often requires additional surgical debridement. Although ORN and osteomyelitis of the cervical spine and skull base are devastating complications of HNC radiation, they have not been thoroughly described in the medical literature. Therefore, this study aims to present a series of such patients, analyze their characteristics, identify potential risk factors, and describe disease course and management. A retrospective case series study was conducted in the tertiary radiation oncology center, Davidoff Cancer Center for the Treatment and Research of Cancer of Rabin Medical Center. The institutional ethics committee approved the study protocol (IRB-XXX 0731–2020). The cohort consisted of patients previously treated with radiation, with or without chemotherapy and surgery, for HNC who presented to the institute's multidisciplinary head and neck boards (either tumor boards or radiology rounds) between 2012 and 2020 with a diagnosis of osteomyelitis of the cervical spine or skull base region, based on a combination of clinical findings, radiological features, and a culture-positive finding from the infection site without another infectious site. 
Clinical data for the study were collected from the patients\u2019 medical files: demographics, past medical and surgical history, detailed radiation therapy protocols, clinical presentation, imaging characteristics, clinical course, and management of osteomyelitis. The duration of follow-up was calculated in months from diagnosis to the last follow-up visit. Recovery time was calculated in days from diagnosis to resolution of osteomyelitis, as noted in the discharge form or follow-up report. Categorical variables are presented as absolute values and percentages; continuous variables are presented as median and range. Seven patients were enrolled in the study: five males (71%) and two females, with a median age of 64 years (range 54\u201385) at diagnosis. The characteristics and medical history of the patients are detailed in . All patients had imaging studies, as detailed in . Symptoms included dysphagia (71%), cervicalgia (71%), weight loss (29%), cough (14%), and hoarseness (14%). Six patients (86%) presented with multiple symptoms (two or more). On physical examination, fever was present in two patients (29%), and posterior pharyngeal mucosal defects were found in three patients (43%), showing an ulcer with exposed bone tissue. Finally, abnormal reflexes in all four limbs were present in one patient (14%). Initial blood tests revealed leukocytosis in four patients (57%), with an average leukocyte count of 11.14\u2009K\u2009ml\u22121 (range 8.06\u201315.56\u2009K\u2009ml\u22121). In four patients (57%), a surgical or medical event preceded the diagnosis of osteomyelitis, as detailed in . Five patients (71%) died during follow-up; three deaths were related to osteomyelitis: two patients died from septic shock and one from cervical hemorrhage. Those patients died a median of 35 days after the diagnosis of osteomyelitis (range 11\u201352 days).
The remaining two patients died due to sepsis unrelated to osteomyelitis: one due to pneumonia and the other to urosepsis. Osteomyelitis of the cervical spine and skull base is an infrequent but dangerous head and neck radiation complication. Sparse data are available in the literature. The present case series describes the characteristics and clinical course of seven affected patients treated in a tertiary medical center over 8 years. Notably, in most patients, a medical or surgical event occurred in the months preceding the diagnosis of osteomyelitis. Multidisciplinary treatment was required, including broad-spectrum antibiotics and surgery. Recovery time was long, and outcomes were poor. The mean time in our cohort between the first radiation treatment and the diagnosis of osteomyelitis was long and had a wide range, similar to the experience of other centers. Thus, clinicians should maintain a high index of suspicion even decades after radiotherapy, alongside regular evaluation for recurrent or persistent squamous cell carcinoma. When post-radiation osteomyelitis is suspected, a thorough evaluation should be conducted, including flexible nasal endoscopy to assess discharge, local swelling, edema, and pharyngeal wall defects, in conjunction with a complete radiological assessment. The radiological assessment may be complex because significant scar tissue from prior radiotherapy limits the diagnostic value of soft tissue swelling on lateral cervical spine X-rays. MRI is the modality of choice. Hyperintense signals on T2-weighted images indicate infection, and hypointensity on T1-weighted images indicates loss of the marrow fat signal. Contrast enhancement on T1 may be present with a soft-tissue inflammatory mass or a low-grade infection.
However, MRI cannot differentiate infection superimposed on ORN from pure osteomyelitis. Post-radiation osteomyelitis has a variable presentation. In our study, patients were evaluated using MRI, CT, PET-CT, or a combination of those studies, which yielded findings that might indicate active inflammation, including contrast material enhancement of the vertebral body, epidural and meningeal enhancement, fluid collection, free air caused by gas-forming bacteria, and reactive lymphadenopathy. Thus, several imaging modalities may help diagnose active chronic infection, which would require an aggressive treatment approach. Samples should include the suspected bone and surrounding soft tissue. Some authors advocated a transoral rather than a CT-guided biopsy via an anterior or transverse approach, because CT-guided biopsy poses a risk to vital structures such as the carotid triangle. This is especially true in irradiated necks, in which extensive scarring distorts the anatomy. In equivocal cases, bone biopsies can help identify ORN. All patients should undergo biopsies, since tumor recurrence is an important differential diagnosis. In our study, most patients required debridement and cervical fusion. In addition, surgical decompression would likely be necessary in the event of spinal cord compression by an epidural abscess. Furthermore, a few case reports have suggested possible advantages of hyperbaric oxygen, including abscess reduction and improvement of mucosal defects. Thus, given the potential benefits and relatively minor side-effects, hyperbaric oxygen should be considered in these dreadful complications. Further prospective research should better evaluate the advantages of hyperbaric oxygen. The treatment of osteomyelitis is complex, let alone osteomyelitis of the head and neck central compartment, owing to the complicated anatomy and the proximity to critical vascular and parenchymal structures and the skull base.
It usually includes surgery in addition to long-term broad-spectrum antibiotics. Nevertheless, we believe routine prophylactic antibiotics are not advised because of the complications of prolonged antibiotic use. A critical finding in the present study was that a precedent event may have triggered osteomyelitis in most patients. Invasive procedures and localized infections in irradiated patients might cause temporary bacteremia, leading to infection in the damaged tissue. In addition, prior case reports have described anaerobic bacteremia after tracheostomy that might further complicate osteoradionecrotic tissue, causing osteomyelitis in the head and neck central tissues. In our research, the majority of patients who died during follow-up had an osteomyelitis-related death, and the others died of systemic inflammatory response syndrome due to infection. Cervical spine osteomyelitis is a complication that may cause mortality and indicates a poor patient prognosis, since all those patients were malnourished and had active or cured advanced-stage HNSCC after one or more high-dose radiation treatments with possible concurrent chemotherapy. Cervical spine osteomyelitis is a dreadful complication with devastating short- and long-term outcomes, causing high degrees of morbidity, debilitation, and mortality. Given the subtle presentation, challenging differential diagnosis, long treatment duration, and poor outcomes, we suggest an individually tailored management approach for patients with radiation-induced osteomyelitis, carried out by a multidisciplinary team consisting of a head and neck surgeon, neurosurgeons, orthopedic surgeons, a head and neck-oriented radiologist, a nuclear medicine expert, an infectious disease specialist, and a radiation oncologist.
In recent decades, substantial advances have been made in radiotherapy, such as the transition from a two-dimensional to a three-dimensional technique. Additionally, the introduction of stereotactic radiotherapy allows for accurate delivery of high-dose radiation from multiple directions, thereby causing fewer side-effects. Nevertheless, head and neck radiotherapy poses a high risk of spinal complications, requiring careful monitoring of patients. This study was limited, first and foremost, by a small cohort, a single-institution experience, and the lack of a comparable control group. In addition, due to the retrospective design, the reported data might be incomplete. Midline structure osteomyelitis is a devastating complication of head and neck radiotherapy. Data in the literature remain sparse. Diagnosis may be difficult because symptoms are often delayed and may be subtle. A thorough evaluation, including endoscopic examination, MRI, and possibly biopsy, must be performed in all cases to rule out tumor recurrence or metastasis, and infections should be treated aggressively with antibiotics. Internal stabilization and fusion should be performed in cases of instability and deformity. The present case series highlights the multidisciplinary protocol used in our institute, Rabin Medical Center, to evaluate patients with radiation-induced osteomyelitis, narrow the differential diagnosis, optimize work-up, and initiate prompt, appropriate integrative treatment. Its application in various oncology centers worldwide is an attainable goal. We conclude that our initial results merit a continued effort in this direction."} {"text": "Following publication, concerns were raised regarding the integrity of the images in the published figures.
The authors failed to provide a satisfactory explanation during the investigation, which was conducted in accordance with Frontiers\u2019 policies. This retraction was approved by the Chief Editors of Frontiers in Oncology and the Chief Executive Editor of Frontiers. The authors have not responded to correspondence regarding this retraction."} {"text": "The journal and Chief Editors retract the 10 July 2019 article cited above. Following publication, concerns were raised regarding the integrity of some of the images in the article. After a corrigendum was published, additional concerns were identified, and a further investigation was conducted in line with Frontiers\u2019 policies. The images were determined to have been manipulated, and the authors were unable to provide the raw data for the images in question. As a result, the data and conclusions of the article have been deemed unreliable and the article is therefore retracted. The retraction of the article was approved by the Chief Editor of Frontiers in Chemistry and the Chief Executive Editor of Frontiers. The authors did not respond to the retraction."} {"text": "Following publication, concerns were raised regarding the validity of the data in the article. The authors failed to provide a satisfactory explanation during the investigation, which was conducted in accordance with Frontiers\u2019 policies. Given the concerns, the editors no longer have confidence in the findings presented in the article. This retraction was approved by the Chief Editors of Frontiers in Immunology \u2013 Inflammation and the Chief Executive Editor of Frontiers. The authors did not respond to correspondence regarding this retraction."} {"text": "Following publication, the authors contacted the Editorial Office to request the retraction of the cited article, stating that cell lines used in the article were contaminated by Mycoplasma. This was confirmed by the authors when they tried, unsuccessfully, to repeat their experiments. The findings and conclusions reported in the article are no longer supported by the data. An investigation was conducted in accordance with Frontiers\u2019 policies that confirmed this; therefore, the article has been retracted. This retraction was approved by the Chief Editors of Frontiers in Pharmacology and the Chief Executive Editor of Frontiers. The authors agree to this retraction."} {"text": "The journal retracts the 5 April 2023 article cited above. Following publication, concerns were raised regarding the scientific validity of the article. An investigation was conducted in accordance with Frontiers\u2019 policies. It was found that the complaints were valid and that the article does not meet the standards of editorial and scientific soundness for Frontiers in Public Health; therefore, the article has been retracted. This retraction was approved by the Chief Editors of Frontiers in Public Health and the Chief Executive Editor of Frontiers. The authors did not agree to this retraction."} {"text": "The journal retracts the 2021 article cited above. Following publication, concerns were raised regarding the validity of the data in the article. The authors failed to provide the raw data or a satisfactory explanation during the investigation, which was conducted in accordance with Frontiers\u2019 policies. Given the concerns, and the lack of raw data, the editors no longer have confidence in the findings presented in the article. This retraction was approved by the Chief Editors of Frontiers in Pharmacology and the Chief Executive Editor of Frontiers. The authors agree to this retraction."} {"text": "The journal retracts the September 27, 2021 article cited above. Following publication, concerns were raised regarding the integrity of the images in the published figures.
The authors failed to provide the raw data or a satisfactory explanation during the investigation, which was conducted in accordance with Frontiers\u2019 policies. Given the concerns about the validity of the data, and the lack of raw data, the editors no longer have confidence in the findings presented in the article. This retraction was approved by the Chief Editors of Frontiers in Oncology and the Chief Executive Editor of Frontiers. The authors agree with this retraction."} {"text": "The journal retracts the 2021 article cited above. Following publication, concerns were raised regarding the contributions of the authors of the article. Our investigation, conducted in accordance with Frontiers\u2019 policies, confirmed a serious breach of our authorship policies and of publication ethics; the article is therefore retracted. This retraction was approved by the Chief Editors of Frontiers in Cell and Developmental Biology and the Chief Executive Editor of Frontiers. The authors do not agree to this retraction."} {"text": "The journal retracts the 2018 article cited above. Following publication, concerns were raised regarding the integrity of the images in the published figures. Image duplication concerns were identified in Figures 1C, 2B, 6C, E, and 7E. The authors failed to provide a satisfactory explanation during the investigation, which was conducted in accordance with Frontiers\u2019 policies. As a result, the data and conclusions of the article have been deemed unreliable and the article has been retracted. This retraction was approved by the Chief Editors of Frontiers in Pharmacology and the Chief Executive Editor of Frontiers. The authors have not responded to correspondence regarding this retraction."} {"text": "The journal retracts the 2021 article cited above. Following publication, concerns were raised regarding the contributions of the authors of the article.
Our investigation, conducted in accordance with Frontiers\u2019 policies, confirmed a serious breach of our authorship policies and of publication ethics; the article is therefore retracted. This retraction was approved by the Chief Editors of Frontiers in Medicine and the Chief Executive Editor of Frontiers. The authors do not agree to this retraction."}