diff --git "a/deduped/dedup_0843.jsonl" "b/deduped/dedup_0843.jsonl"
new file mode 100644
--- /dev/null
+++ "b/deduped/dedup_0843.jsonl"
@@ -0,0 +1,67 @@
+{"text": "Currently in the U.S. it is recommended that tuberculosis screening and treatment programs be targeted at high-risk populations. While a strategy of targeted testing and treatment of persons most likely to develop tuberculosis is attractive, it is uncertain how best to accomplish this goal. In this study we sought to identify geographical areas with on-going tuberculosis transmission by linking Geographic Information Systems (GIS) technology with molecular surveillance. This cross-sectional analysis was performed on data collected on persons newly diagnosed with culture-positive tuberculosis at the Tarrant County Health Department (TCHD) between January 1, 1993 and December 31, 2000. Clinical isolates were molecularly characterized using IS6110-based RFLP analysis and spoligotyping methods to identify patients infected with the same strain. Residential addresses at the time of diagnosis of tuberculosis were geocoded and mapped according to strain characterization. Generalized estimating equations (GEE) models were used to identify risk factors associated with clustering. Evaluation of the spatial distribution of cases within zip-code boundaries identified distinct areas of geographical distribution of same-strain disease. We identified these geographical areas as having an increased likelihood of on-going transmission. Based on this evidence we plan to perform geographically based screening and treatment programs. Using GIS analysis combined with molecular epidemiological surveillance may be an effective method for identifying instances of local transmission. These methods can be used to enhance targeted screening and control efforts, with the goal of interrupting disease transmission and ultimately reducing incidence. Molecular characterization of Mycobacterium tuberculosis (TB) strains, in combination with traditional surveillance, has yielded insights into tuberculosis transmission. 
One hundred and seventy-one (32.4%) patients were African-American, 165 (31.3%) were Caucasian, 109 (20.7%) were Hispanic, and 82 (15.6%) were Asian. African-Americans with tuberculosis were significantly more likely to have a clustered strain. Conversely, Asians and Hispanics were significantly more likely to have a unique strain. Three hundred and twenty-nine (67.4%) of the patients were males, and of these, 214 (65.0%) had clustered strains; 78 of 159 females (49.1%) had clustered strains. Males were more likely than females to have a strain that matched at least one other person in Tarrant County. Previous homelessness was strongly associated with clustering, suggesting a high rate of on-going transmission in this population (Table ). Three hundred and twenty-one (65.7%) patients were born in the United States. Of those, 235 (73.2%) had clinical isolates that matched the isolate from at least one other person living in Tarrant County. One hundred and sixty-seven patients were born outside of the United States. Of those, 57 (34.1%) had clinical isolates that matched the isolate from at least one other person living in Tarrant County. U.S.-born individuals were significantly more likely to be genotypically clustered than their foreign-born counterparts. The birth country of foreign-born patients varied: of those born outside of the U.S., 77 (46.1%) were born in Latin America, 47 (28.1%) in Southeast Asia, 14 (8.4%) in Sub-Saharan Africa, 12 (7.2%) in Pacific Asia, 11 (6.6%) in South Asia, and 6 (3.6%) in Europe. Evaluation of the spatial distribution of the number of cases within zip-code boundaries displayed a distinct geographical distribution of disease. The average incidence for the entire county during the study period was 5.9 cases per 100,000. 
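The clustering proportions reported above can be reproduced with a short sketch; the counts are taken directly from the text, and the helper name is ours:

```python
def clustered_pct(clustered, total):
    """Percentage of patients whose isolate matched at least one other case."""
    return round(100 * clustered / total, 1)

# Counts reported for Tarrant County, 1993-2000
print(clustered_pct(214, 329))  # males: 65.0
print(clustered_pct(78, 159))   # females: 49.1
print(clustered_pct(235, 321))  # U.S.-born: 73.2
print(clustered_pct(57, 167))   # foreign-born: 34.1
```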
Zip code 1 recorded the highest incidence, 94.3 cases per 100,000 population, followed by zip code 2 with an average incidence of 55.2 cases per 100,000 population. The highest proportion of persons with molecularly clustered TB isolates occurred in the same zip code with the highest incidence. Similarly, zip code 2, on the southeast border of zip code 1, recorded the second highest proportion of persons with molecularly clustered TB isolates, with 76.6% of all reported cases clustered. Cases reported in zip code 1 were more than six times as likely as those in any other zip code to have isolates that matched at least one other person living in Tarrant County. In zip code 3, we observed morbidity more than triple the county average. Unlike other high-morbidity areas, zip code 3 had a strong preponderance of unique strains. In this zip code, 17 out of 26 (65.4%) patients had isolates that did not match any other patient in Tarrant County. Cases reported in this zip code were 70% less likely to have a clustered strain, suggesting that the high rates of tuberculosis did not result from local on-going transmission. The number of tuberculosis cases in the United States is at its lowest point in history, with 15,075 cases reported in 2002. The use of molecular strain characterization methods in conjunction with traditional surveillance has led to the recognition of a number of risk factors associated with on-going transmission, and has identified numerous outbreaks of tuberculosis undetected by conventional approaches. These findings are similar to those reported in Los Angeles, where specific locations, notably homeless shelters, were identified as important sites of tuberculosis transmission. We identified that 55% of our patients were clustered and 47% were attributable to ongoing community transmission. This differs from a study conducted in a high-incidence area of South Africa, where 72% of cases were clustered and 58% were attributable to ongoing community transmission. 
Although the majority of tuberculosis morbidity within the developed world is strongly influenced by imported tuberculosis from high-prevalence countries, there are some limitations to this research approach. This analysis is based on secondary data, with variables collected over a cross-sectional period of time. Although each case is an incident case at the time of diagnosis, under this cross-sectional design exposure and disease outcomes are assessed simultaneously. In addition, patients with tuberculosis may have moved shortly before their diagnosis. However, this should not cause systematic error (bias) or result in an association of clustering with specific locations, because these events would be expected to produce random misclassification. Also, persons exposed within certain zip codes may go on to reside elsewhere and later develop the disease, resulting in an underestimate of the morbidity that may be reflected in the calculated associations. Finally, genotyping results were not available for a proportion of TB cases in this study. Some unique isolates might have clustered if some of the missing isolates had been available, or if other cases with the same strain moved or are located outside the study area. When using this approach, TB control programs must select the appropriate geographical boundary to examine transmission in their area. For example, zip codes may be too large a boundary in very populous metropolitan areas. Census block groups may provide greater resolution in determining localized transmission. Nor are the molecular techniques used without limitation. Patients are clustered according to their isolates having the same genotype. While IS6110 RFLP is recognized as the most discriminatory method for genotyping M. tuberculosis isolates, the discriminatory ability of the technique decreases when there are fewer than 6 IS6110 insertions in the genome. In this case, spoligotyping was used for further strain discrimination. 
However, it is still possible that some isolates classified as being the same strain based on identical genotypes may represent distantly related, but distinct, strains. Moreover, demonstration that particular patients have the same strain supports, but does not irrefutably prove, direct transmission between these patients as opposed to another source of infection. Conversely, strains continue to evolve, and the resulting genotypic differences over time can result in assigning isolates from cases of direct transmission to distinct strain lineages. Given that a small minority of the isolates had fewer than 6 IS6110 bands (18.2%) or differed by the presence or absence of one band in an otherwise conserved pattern (3.7%), we believe that estimates of the degree of clustering and the size of clusters are conservative. Using GIS analysis combined with molecular epidemiological surveillance can be an effective method for identifying tuberculosis transmission not identified during standard contact tracing. The application of these methods can be utilized in countries where contact tracing is routinely performed. These methods can enhance targeted screening and control efforts, with the goal of interrupting disease transmission and ultimately reducing incidence. This study demonstrates that, using existing health data, GIS can identify previously undetected TB transmission. These results were used to design new targeted screening efforts. 
CI: Confidence Interval; GEE: Generalized Estimating Equations; GIS: Geographic Information Systems; HIV: Human Immunodeficiency Virus; IDW: Inverse Distance Weighting; NCTCG: North Central Texas Council of Governments; OR: Odds Ratio; RFLP: Restriction Fragment Length Polymorphism; TB: Tuberculosis; TCHD: Tarrant County Health Department; TDH: Texas Department of Health. Study concept and design: PM, MB, SW. Acquisition of data: PM, TQ, KJ, DD, GB, SW. Analysis and interpretation of data: PM, MB, TQ, JO, SW. Drafting of the manuscript: PM, SW. Critical revision of the manuscript for important intellectual content: SW, PM, JO, TN. Statistical expertise: KS, MB, PM. Obtained funding: TQ, SW. Administrative, technical or material support: GB. This work was supported in part by the Centers for Disease Control and Prevention, National Tuberculosis Genotyping and Surveillance Network Cooperative Agreement U52/CCU600497-18, and Tuberculosis Epidemiologic Studies Consortium 200-2001-00084."}
+{"text": "Income is known to be associated with cerebrovascular disease; however, little is known about the more detailed relationship between cerebrovascular disease and income. We examined the hypothesis that the geographical distribution of cerebrovascular disease in New York State may be predicted by a nonlinear model using income as a surrogate socioeconomic risk factor. We used spatial clustering methods to identify areas with high and low prevalence of cerebrovascular disease at the ZIP code level after smoothing rates and correcting for edge effects; geographic locations of high and low clusters of cerebrovascular disease in New York State were identified with and without income adjustment. To examine effects of income, we calculated the excess number of cases using a non-linear regression with cerebrovascular disease rates taken as the dependent variable and income and income squared taken as independent variables. The resulting regression equation was: excess rate = 32.075 - 1.22*10^-4(income) + 8.068*10^-10(income^2), and both the income and income-squared variables were significant at the 0.01 level. When income was included as a covariate in the non-linear regression, the number and size of clusters of high cerebrovascular disease prevalence decreased. Some 87 ZIP codes exceeded the critical value of the local statistic, yielding a relative risk of 1.2. The majority of low cerebrovascular disease prevalence geographic clusters disappeared when the non-linear income effect was included. For linear regression, the excess rate of cerebrovascular disease falls with income; each $10,000 increase in the median income of a ZIP code resulted in an average reduction of 3.83 observed cases. The significant nonlinear effect indicates a lessening of this income effect with increasing income. Income is a non-linear predictor of excess cerebrovascular disease rates, with both low and high observed cerebrovascular disease rate areas associated with higher income. 
Income alone explains a significant amount of the geographical variance in cerebrovascular disease across New York State, since both high and low clusters of cerebrovascular disease dissipate or disappear with income adjustment. Geographical modeling, including non-linear effects of income, may allow for better identification of other non-traditional risk factors. Cerebrovascular disease disproportionately affects certain areas of the United States, including many areas within New York State. Epidemiologic studies, including our own, that have explored causes of cerebrovascular disease among various populations have historically focused on identifying associations between vascular disease and traditional risk factors. This research is well-founded since cerebrovascular disease, the third leading cause of death in the US, has long been associated with such traditional risk factors as hypertension, diabetes, elevated cholesterol, obesity, and tobacco use. A number of studies have explored the relationships between vascular disease and socioeconomic risk factors and have identified associations. In addition, there is considerable geographic variation in cerebrovascular disease mortality across various geographical scales: in the US, the world, and even within New York State. There is cause to believe that income is contributing to the heretofore unexplained variance in disease rates carried by certain areas of New York State. We questioned whether the rates generated during our previous work would change once income was accounted for. 
We utilized a new geographic analysis method to examine clustering of cerebrovascular disease and the non-linear effects of income, one that has not been previously applied to cerebrovascular disease and socioeconomic risk data analysis in New York State. The purpose of this study was to 1) pursue a cross-disciplinary, innovative approach to identifying a significant nontraditional socioeconomic risk factor, 2) correlate this socioeconomic risk factor with the prevalence of cerebrovascular events at the ZIP code level in New York State in the year 2000, and 3) apply income-adjusted geographic clustering analyses to identify geographic patterns of cerebrovascular disease correlated with income within ZIP codes in New York State. We hypothesized that the novel approach of geographical cluster analysis, together with nonlinear regression associating spatial distributions of income with cerebrovascular disease spatial distributions, would enhance the power to predict event rates as compared to traditional risk factors. Such factors would facilitate the identification of high-risk ZIP codes or groups of ZIP codes for direct interventions, as well as low-risk ZIP codes or groups of ZIP codes for further exploration. Figure shows the locations of significantly high prevalence. 
We also tested different kernel bandwidths, ranging from \u03c3 = 0.6 to \u03c3 = 2.5. These correspond to searching for clusters of different sizes; best results were obtained with \u03c3 = 1.0. We also identified locations of significantly low prevalence: (a) areas with local statistics less than -3.85 are depicted in dark blue, and (b) areas with local statistics between -3.85 and -2.5 are shown in light blue. The minimum local statistic was -8.42 (corresponding to ZIP code 14892 in Chemung County). 153 ZIP code areas have local statistics below the critical value; these areas contain 5,412 observed cases and 7,161.3 expected cases, yielding a relative risk of 0.76. To determine whether the differences between the observed and expected prevalence of cerebrovascular disease could be attributed to income, we performed a nonlinear regression; the age-adjusted cerebrovascular disease rate was taken as the dependent variable, and income and income squared were taken as independent variables. Although income has been previously noted as a risk factor, we also tested the hypothesis of a nonlinear effect of income. The resulting regression equation was: y = 32.075 - 1.22*10^-4(income) + 8.068*10^-10(income^2), where y is the predicted age-adjusted cerebrovascular disease rate per 10,000 population. The value of r^2 is 0.045, but this is significantly different from zero, given the large number of ZIP codes examined. In addition, both the income and income squared variables were significant at the 0.01 level. As expected, the excess number of cases declines with increasing income. Each $10,000 increase in a ZIP code area's median income results in an average reduction in the age-adjusted cerebrovascular disease rate of 1.22 per 10,000 individuals. In addition, the significant nonlinear effect indicates a lessening of the income effect with increasing income. For example, an increase in income from $20,000 to $30,000 results in a decrease in the cerebrovascular disease rate, on average, from 29.96 to 29.14 cases per 10,000 population, while an increase from $60,000 to $70,000 results in a smaller decline, from 27.66 to 27.49 cases per 10,000 individuals. The age-adjusted rate is at a minimum at an income level of approximately $75,600; for ZIP codes with median income levels above this, the rate, on average, begins to increase. Using the standardized residuals from the regression analysis, we identified geographic clusters of cerebrovascular disease in New York State with income adjustment. 
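As a quick check, the quadratic regression equation reproduces the worked figures in the text, and its vertex gives the income at which the predicted rate is minimal (the function name is ours):

```python
def predicted_rate(income):
    """Age-adjusted cerebrovascular disease rate per 10,000 population,
    from the quadratic regression reported in the text."""
    return 32.075 - 1.22e-4 * income + 8.068e-10 * income**2

print(round(predicted_rate(20_000), 2))  # 29.96
print(round(predicted_rate(30_000), 2))  # 29.14
print(round(predicted_rate(60_000), 2))  # 27.66
print(round(predicted_rate(70_000), 2))  # 27.49

# Vertex of the parabola: income level at which the predicted rate is minimal
income_min = 1.22e-4 / (2 * 8.068e-10)
print(round(income_min))  # roughly 75,600, matching the minimum cited in the text
```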
In this case, the clusters in Figure decreased in number and size. We also identified the geographic locations of low prevalence after taking into account the non-linear effects of income. Again, areas with local statistics less than -3.85 are shown in dark blue, and areas with local statistics between -3.85 and -2.5 in light blue. The minimum local statistic was -4.42 (ZIP code 10069 in Westchester County). Thirteen ZIP code areas had local statistics below the critical value; these contained 917 observed cases and 1,262.3 expected cases, yielding a relative risk of 0.73. The majority of low prevalence areas in Figure disappeared after income adjustment. We found that income was statistically significantly associated with cerebrovascular disease prevalence after taking into account age. After adjusting for income, the relative risk of having cerebrovascular disease for residents of the Buffalo-Niagara and Long Island regions was 1.2 times greater than for residents of other areas. For residents of Westchester County, the relative risk of cerebrovascular disease was 0.73, suggesting a protective effect of residence in that area after adjusting for age and income. The magnitude of the income effect in the nonlinear regression equation is small in part because the effects of income on prevalence are averaged over the large number of individuals who live in each ZIP code area. Initial analyses conducted with age but not income adjustment during this study corroborated findings from earlier work produced by our group. Of the one hundred ZIP code areas throughout New York that exceeded the critical value in the high clustering analyses without income adjustment, many are within the western and north-central parts of New York State and the Long Island region. The maximum local statistic of 9.39 was found in the center of the Buffalo-Niagara cluster. Also, in the analyses performed without income adjustment, highly concentrated areas of low clustering were found in the Finger Lakes area, namely Chemung County. 
The minimum local statistic was -8.42. This result is not unexpected, since it substantiates previous findings as well. Our analyses demonstrated that a number of areas of high and low prevalence of cerebrovascular disease are explainable by income when it is included as a covariate, since the majority of clusters were absent when income adjustment was applied during nonlinear regression analysis. This supports our conviction that income, in fact, is a strong predictor of cerebrovascular disease. However, high clusters in the Buffalo-Niagara and Long Island regions remain above both the 3.85 and 2.5 critical values, as described earlier. As well, a significant low clustering area remains in the Hudson Valley region, indicating that other factors are moderating low rates of cerebrovascular disease in this region. Excess numbers of cerebrovascular disease cases decline with increasing income. Each $10,000 increase in a ZIP code's median income corresponds to a decrease in the rate of 1.22 per 10,000 population. This nonlinear effect weakens with increasing income, and above about $75,600 the rate begins to increase slowly with income. The finding of high cerebrovascular disease prevalence in high income areas is unusual; one would expect this relationship to be inverse. Our study analyzed morbidity and not mortality data. Therefore, it is possible that residents in higher income areas survive longer with cerebrovascular disease than do those in lower income areas, and deaths are not as prevalent. There may be some bias related to spatial mismatch, since we used ZIP-code-level hospitalization data and ZCTA-level population and income data in our analysis. The US Census Bureau recently developed a new statistical entity, the ZCTA, to represent United States Postal Service-defined ZIP code areas in a more cohesive way. These ZCTAs may differ from traditional ZIP codes, even though the ZCTA code equals the ZIP code in most cases. 
Our study did not distinguish between types of cerebrovascular disease, and therefore it is not known whether income has a greater effect when correlated with one type of cerebrovascular disease than another. We do not know whether income is a causal factor or only a precipitating factor of cerebrovascular disease, since we did not analyze individual-level data, nor did we adjust for other potential confounders. Based on our findings, it will be of great interest to further examine geographic distributions of traditional and non-traditional risk factors, such as education levels, occupation, measures of community deprivation, and environmental pollutants, to determine their contribution to geographic variations or clustering of cerebrovascular disease in New York State. In addition, it may be useful to examine other variables, such as race and ethnicity, to explore potential roles and relationships with those non-traditional risk factors. In summary, income is a nonlinear predictor of cerebrovascular disease. Income alone explains a significant amount of the geographical variance in cerebrovascular disease across New York. These associations were observed after taking into account age. These findings support the contention that cerebrovascular disease cases are susceptible to the influence of socioeconomic factors, notably income. Where clusters failed to disappear, further analysis is indicated to determine what factors are implicated. Additional analysis may also be conducted to further explain the relationship with income. We suspect that a number of factors affect this relationship, including access to and utilization of care, and treatment patterns. These geographic analyses of multiple variables at the ZIP code level allow researchers to determine more precisely where disease events are occurring, along with the causative factors. 
Further analyses at other geographical scales, such as the census tract level, may confirm these findings. This evidence-based information is necessary in order to affect public policy and isolate small areas, such as ZIP codes or groups of ZIP codes, for direct health interventions. We obtained the Administratively Releasable (ADREL) inpatient hospitalization dataset for New York State from the Statewide Planning and Research Cooperative System (SPARCS) at the New York State Department of Health. Observed prevalence of cerebrovascular disease was extracted from the SPARCS inpatient dataset by ZIP code according to codes listed under \"cerebrovascular disease\" in the International Classification of Diseases (ICD). Income variables were extracted from US Bureau of the Census 2000 ZIP code level data files. For mapping purposes, ZIP code tabulation area (ZCTA) boundaries were also obtained from the US Bureau of the Census. Several ZCTAs were excluded for purposes of this study since they corresponded to hydrographic features such as lakes, parks, or forested lands. The final merged dataset prepared for geographic clustering analysis contained about 1600 ZCTAs, after restricting to those areas found in both the ZCTA and ZIP code tables. The cerebrovascular disease hospitalization rates were calculated using the principal diagnosis code issued at discharge that is included in each individual record within the SPARCS dataset. The following inpatient records from the SPARCS dataset were eliminated from the analyses: a) patients who lived out of state, and b) patients who were discharged to another acute care hospital. Note that the above numbers are not necessarily mutually exclusive. The ICD-9-CM codes used to determine cerebrovascular disease were ICD-9-CM codes 430.00 through 438.99, cerebrovascular disease. 
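The inclusion rule above reduces to a numeric range check on the principal ICD-9-CM diagnosis code. A minimal sketch (the function name is ours; non-numeric V/E codes are assumed out of scope):

```python
def is_cerebrovascular(icd9_code: str) -> bool:
    """True if a numeric ICD-9-CM code falls in 430.00-438.99,
    the cerebrovascular disease range used in this analysis."""
    try:
        value = float(icd9_code)
    except ValueError:
        return False  # non-numeric codes (e.g. V or E codes) are not in scope here
    return 430.0 <= value < 439.0

print(is_cerebrovascular("431"))     # True  (within 430-438)
print(is_cerebrovascular("438.99"))  # True
print(is_cerebrovascular("410.1"))   # False (outside the range)
```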
The Census dataset provided population counts by gender and race in five-year age increments for each of the 1600 ZIP codes that had recorded populations. These five-year age groups were collapsed into the following 11 age groupings: 0\u201324, 25\u201334, 35\u201344, 45\u201354, 55\u201359, 60\u201364, 65\u201369, 70\u201374, 75\u201379, 80\u201384 and 85+. The 11 age groupings were appropriate for analysis of cerebrovascular disease; they were chosen to be wide enough to include a reasonably large population in each group, yet narrow enough that the hospitalization rates would not vary too much within each grouping. Using age-specific population data from the Census, the age-adjusted expected number of cerebrovascular disease events was determined for each ZIP code using the indirect method of standardization. Age-adjustment allowed for comparisons without the influence of differences in the age structure of the populations. Several steps were required to obtain the age-adjusted hospitalization rates applying the indirect method of standardization. The standardized rate (SR) for each ZIP code was calculated as the ratio of the total number of hospitalizations observed in the ZIP code (O) divided by the total number of hospitalizations expected in the ZIP code (E), and the standard error of SR was calculated by applying the formula SE^2(SR) = O/E^2. The percentage of excess risk (R) was calculated as SR - 1.0, with R having the same standard error as SR. The 95% confidence interval for R was calculated as the interval from R - 1.96SE to R + 1.96SE. Under the null hypothesis, the standardized score (O - E)/sqrt(E) will have an approximate normal distribution with mean 0 and variance 1, where O is the observed number of cases and E is the expected number of cases in ZIP code i. 
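A minimal sketch of these formulas, assuming O and E have already been tallied per ZIP code (all names are ours):

```python
import math

def standardized_rate_summary(observed, expected):
    """SR = O/E, SE(SR) = sqrt(O)/E, excess risk R = SR - 1, and a 95% CI for R."""
    sr = observed / expected
    se = math.sqrt(observed) / expected  # from SE^2(SR) = O / E^2
    r = sr - 1.0                         # percentage of excess risk
    return sr, r, (r - 1.96 * se, r + 1.96 * se)

# Hypothetical ZIP code with 120 observed and 100.0 expected hospitalizations
sr, r, ci = standardized_rate_summary(observed=120, expected=100.0)
print(round(sr, 2), round(r, 2))  # 1.2 0.2
```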
To test for the existence of geographic clusters of cerebrovascular disease exhibiting significantly higher or lower observations than could be expected on the basis of age structure, we used the statistical test suggested by Rogerson. To optimize the detection of geographic clusters of a given size, the standardized scores need to be smoothed, by calculating for each ZIP code a z-score (zi) that is a weighted sum of the scores in the geographic neighborhood of the ZIP code, where the weights are large near the ZIP code and get smaller with distance, declining with a Gaussian kernel in exp(-dij^2/2\u03c3^2); here dij is the distance between the centroids of ZIP code areas i and j, and \u03c3 is a parameter indicating how quickly the weights change with distance. The zi scores are known as local statistics, and they also each have a normal distribution with mean 0 and variance 1. If the null hypothesis of no geographic clustering is true, 95% of the time a map of the z scores will have a maximum that is no larger than a critical value determined by A, the number of subareas (1600), and by \u03c3. In our case, we used \u03c3 = 1, corresponding to defining, for each ZIP code, a neighborhood of approximately one ZIP code area in each direction. This yields a critical value of z* = 3.85. An additional step is taken to correct for edge effects before carrying out the weighting described above. We began by overlaying a square grid containing lattice points at intervals equal to 5.5 miles (the median distance between ZIP code centroids) onto the study area. We then created additional, hypothetical ZIP code centroids around the border of New York State, and assigned them hypothetical standardized scores, in keeping with the null hypothesis that there was no raised prevalence in these hypothetical locations. We conducted a similar analysis of geographic clustering after adjusting for income in each ZIP code area. 
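The smoothing step can be sketched as follows, under the assumptions above: Gaussian kernel weights in centroid distance, normalized so that each local statistic keeps unit variance under the null (all names are ours, and this is an illustration rather than the authors' exact implementation):

```python
import numpy as np

def local_statistics(coords, scores, sigma=1.0):
    """Smooth standardized scores into local statistics z_i.

    coords: (n, 2) array of ZIP code centroids, in units of the typical
    inter-centroid spacing; scores: (n,) standardized scores.
    Weights w_ij follow a Gaussian kernel in centroid distance and each row
    is normalized so that sum_j w_ij^2 = 1, keeping Var(z_i) = 1 under the null.
    """
    coords = np.asarray(coords, dtype=float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    w = np.exp(-d**2 / (2 * sigma**2))
    w /= np.sqrt((w**2).sum(axis=1, keepdims=True))
    return w @ np.asarray(scores, dtype=float)

# Two far-apart ZIP codes: smoothing leaves each score essentially unchanged.
z = local_statistics([[0, 0], [100, 0]], [2.5, -1.0], sigma=1.0)
print(z)
```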
In this case, a regression analysis was carried out by first assuming a quadratic relationship between the excess number of cerebrovascular cases in a ZIP code area and income. We then used the standardized regression residuals as input into the geographic clustering analysis. These geographic clustering analyses were carried out using S-Plus and exported into ArcView GIS for visualization.DH and SSC performed statistical and spatial clustering analysis, and wrote the manuscript. PAR and FEM provided critical review and input. DH, SSC, PAR and FEM participated in the design of the study, participated in interpretation, as well as in data acquisition efforts. All authors read and approved this manuscript."}
+{"text": "Gastrointestinal illness is an important global public health issue, even in developed countries, where the morbidity and economic impact are significant. Our objective was to evaluate the demographic determinants of acute gastrointestinal illness in Canadians. We used data from two population-based studies conducted in select communities between 2001 and 2003. Together, the studies comprised 8,108 randomly selected respondents; proxies were used for all respondents under 12 years and for respondents under 19 years at the discretion of the parent or guardian. Using univariate and multivariate logistic regression, we evaluated the following demographic determinants: age, gender, cultural group, and urban/rural status of the respondent, highest education level of the respondent or proxy, number of people in the household, and total annual household income. Two-way interaction terms were included in the multivariate analyses. The final multivariate model included income, age, gender, and the interaction between income and gender. After adjusting for income, gender, and their interaction, children under 10 years had the highest risk of acute gastrointestinal illness, followed by young adults aged 20 to 24 years. For males, the risk of acute gastrointestinal illness was similar across all income levels, but for females the risk was much higher in the lowest income category. Specifically, in those with total annual household incomes of less than $20,000, the odds of acute gastrointestinal illness were 2.46 times higher in females than in males. Understanding the demographic determinants of acute gastrointestinal illness is essential in order to identify vulnerable groups to which intervention and prevention efforts can be targeted. Gastrointestinal illness (GI) remains an important global public health issue. 
Understanding the relationships between GI and determinants of health in the general population is essential to identify vulnerable groups to which intervention and prevention efforts can be targeted. Therefore, the objective of this study was to investigate the demographic determinants of acute gastrointestinal illness (AGI) in Canadians using available data from population-based studies. At the time of this analysis, the Public Health Agency of Canada had conducted two studies designed to ascertain the burden and distribution of self-reported AGI in defined Canadian populations. One study was conducted in Hamilton, Ontario, Canada from February 2001 to February 2002, and one was conducted in three communities in the province of British Columbia from June 2002 to June 2003; these studies used the same methodology and core survey tool, and have been described in detail elsewhere. Briefly, respondents were asked whether they had experienced any vomiting or diarrhea in the 28 days prior to the interview. Cases were those respondents who reported vomiting or diarrhea in the four weeks prior to the interview, excluding those who reported that their vomiting or diarrhea was due to a chronic condition, including pregnancy, medication use, colitis, diverticulitis, Crohn's disease, irritable bowel syndrome, or other chronic condition. Respondents who did not report vomiting or diarrhea, as well as those whose symptoms were due to chronic conditions, were included in the non-case group. A broad case definition for AGI was deliberately chosen to ensure high sensitivity and case capture. Ethical approval for these studies was obtained from one or more of the following boards: the Research Ethics Board of St. Joseph's Hospital, McMaster University, the Human Subjects Committee of the University of Guelph, and the University of British Columbia Behavioural Research Ethics Board. 
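The case definition above amounts to a simple filter over two survey responses; a minimal sketch (the field and function names are ours):

```python
def classify_agi(reported_symptoms: bool, chronic_cause: bool) -> str:
    """Apply the AGI case definition: vomiting or diarrhea in the past
    28 days counts as a case unless the respondent attributed it to a
    chronic condition (pregnancy, medication use, colitis, Crohn's
    disease, IBS, etc.); such respondents stay in the non-case group."""
    if reported_symptoms and not chronic_cause:
        return "case"
    return "non-case"

print(classify_agi(True, False))   # case
print(classify_agi(True, True))    # non-case (symptoms due to a chronic condition)
print(classify_agi(False, False))  # non-case
```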
The response rates for the surveys were 36.6% and 44.3%. The demographic determinants of illness and possible confounding factors included in this analysis are listed in Table . Variable selection required that the P-value of the score chi-square test be less than 0.05. Individuals with missing data for a given variable were excluded from any models in which that variable was present. To assess whether any variables in the final model were subject to confounding by any variables that had been omitted from the final model, each omitted variable was re-introduced individually (results not shown). The impact on the sign, magnitude, and significance of each of the original coefficients was examined; a change from significant to non-significant (or vice versa) at P = 0.05, or a change in the resulting odds ratio of \u00b1 0.5, was considered biologically significant enough to retain the variable in the final model as a confounder. Logistic regression was used to determine how the risk of AGI related to demographic variables. To examine whether the relationship between each demographic factor and the risk of AGI varied between the two study areas, we fit (separately for each demographic factor) multivariate models with the demographic factor, study area, and their interaction as independent variables (results not shown). However, no significant interactions were found and data from the two studies were combined for all further analyses. Univariate models were then fit for each demographic variable (Table ). All statistical analyses were carried out in SAS version 9.1. Likelihood ratios were used to compare models and Wald tests were used for tests of global hypotheses and tests involving individual parameters. In the univariate analysis, the risk of AGI was significantly associated with household size (P = 0.005), cultural group (P < 0.001), income (P = 0.033), age (P < 0.001), and gender (P = 0.001), but not with urban/rural status (P = 0.080) or education (P = 0.150). 
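The confounder-retention rule described above can be expressed as a small predicate. The following is an illustrative sketch only; the function and variable names are ours and do not come from the original analysis code:

```python
def retain_as_confounder(p_before, p_after, or_before, or_after, alpha=0.05):
    """Retention rule described in the text: keep an omitted variable as a
    confounder if re-introducing it flips significance at alpha, or shifts
    the resulting odds ratio by at least 0.5."""
    significance_flip = (p_before < alpha) != (p_after < alpha)
    odds_ratio_shift = abs(or_after - or_before) >= 0.5
    return significance_flip or odds_ratio_shift

# Hypothetical example: re-introducing a variable shifts an OR from 1.30 to 1.90.
print(retain_as_confounder(0.03, 0.04, 1.30, 1.90))  # True (shift >= 0.5)
```

Note that either criterion alone triggers retention, so a variable that changes significance without moving the odds ratio would also be kept.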
Specifically, the risk of AGI in the past four weeks increased significantly as the number of people in the household increased. Results of the univariate analysis are shown in Table . Respondents with household incomes between $40,000 and $60,000 had odds of AGI that were 0.76 times those of respondents with household incomes less than $20,000 (P = 0.047). A significantly higher risk of AGI was observed in children less than 10 years (P < 0.001), and young adults 20 to 24 years (P = 0.003), compared to those 25 to 64 years. The odds of AGI in females were 1.28 times higher than in males (P = 0.001). The odds of AGI for respondents who identified themselves as Asian were 0.37 times those of respondents who identified themselves as North American. Re-introducing the variables omitted from the final model did not impact the sign, magnitude, or significance of any of the coefficients for income, age, gender, or the interaction between income and gender. Therefore, income, age, gender, and the income-gender interaction were the variables included in the final multivariate model (Table ). The results of the multivariate analysis and univariate analysis were consistent with respect to age. Odds ratios by age group, adjusted for income, gender, and their interaction, are shown in Figure . Even after this adjustment, children under 10 years and young adults 20 to 24 years remained at higher risk. We found a significant interaction between income and gender (P < 0.001; Figure ). In those with total annual household incomes of less than $20,000, the odds of AGI were 2.46 times higher in females than in males. It is interesting to compare the odds of AGI in males and females with the same income. For instance, the odds ratio for females earning $60,000\u2013$80,000 versus males with the same income was 1.38 (1.74/1.26). This difference is not significant (P = 0.107). 
The difference between the odds of AGI in males and females was statistically significant only at the lowest income level. No significant differences between the two study areas were found, such that the increased risk in low income females and children may be due to underlying factors unrelated to geography. Additionally, income, age, and gender were significantly associated with the risk of AGI whether or not the following variables were controlled for: total number of people in the household, the urban/rural status of the respondent, education, and cultural group, suggesting that the observed associations are not confounded by these variables. In the univariate analysis, we observed a significantly higher risk of AGI in children under 10 years, and young adults between 20 and 24 years, as compared to adults aged 25 to 64 years. As noted above, even when income, gender, and the income-gender interaction were accounted for, children under 10 years and young adults between 20 and 24 years remained at a higher risk of AGI. In children, this increased risk likely reflects an increased susceptibility due to immune status. In young adults, this increased risk may reflect behavioural factors. Our findings are somewhat consistent with one study from the Netherlands, which reported highest incidences in children and the elderly ,9,18,19. Past studies report higher rates of AGI in females than males ,14,20,21, including higher rates of Campylobacter jejuni infection. It is possible that the higher rate of AGI observed in females may be due to recall bias, or to reporting bias. However, such explanations are unlikely given the plausibility of greater exposure to infectious causes of AGI in females. 
In addition, if reporting bias were the reason for the higher rate of AGI observed in females, we would expect higher rates in females consistently across age groups and income levels, which was not observed here. There is a growing body of literature linking health to income or income inequality, with lower household income associated with an increased risk of morbidity and mortality -28. Here, we found that total annual household income was associated with the risk of AGI in females, but not males. In males, the risk of AGI was consistent regardless of income. In females, a higher risk occurred in those in the lowest income category (less than $20,000). In this category, the odds of AGI for females were 2.5 times higher than the odds for males, regardless of age. Unfortunately, no literature exists which provides explicit reasons for this observation. Specific hypotheses may include different occupational risk settings in low income males versus females, or an increased susceptibility in females (due perhaps to increased foodborne exposure or increased exposure to infected children) that is exacerbated by low income living conditions. Further research evaluating reasons for this apparent increased risk in low-income females is warranted. Regardless of the cause, this finding calls for targeting information and interventions to this segment of the population, potentially via local public health outreach programs. The results of the multivariate analysis can be interpreted to yield stratum-specific odds ratios; for example, the odds of AGI in a female child under 10 years of age who resides in a low income household are 0.554, and the odds of AGI for a male child in the same age and income category are 0.225, yielding an odds ratio of 2.46. 
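The stratum-specific odds-ratio arithmetic above can be checked directly; this is a minimal sketch using the odds values quoted in the text (the values are reported estimates, not re-derived here):

```python
# Stratum-specific odds of AGI quoted in the text for children under 10
# living in low-income households.
odds_female_child = 0.554  # odds of AGI, female child, low-income stratum
odds_male_child = 0.225    # odds of AGI, male child, low-income stratum

# The female-to-male odds ratio within the stratum is simply the ratio of
# the two stratum-specific odds.
odds_ratio = odds_female_child / odds_male_child
print(round(odds_ratio, 2))  # 2.46
```

This reproduces the reported female-to-male odds ratio of 2.46 at the lowest income level.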
Examining the risk of AGI in females across income categories showed that those in households with total annual incomes of $20,000 to $40,000 (OR = 0.566), $40,000 to $60,000 (OR = 0.530), $60,000 to $80,000 (OR = 0.706), and over $80,000 (OR = 0.424) all had a lower risk of AGI than females in low income households. Low response rates in the original studies (36.6% in Hamilton and 44.3% in B.C.) were the main limitation of this study, and are a limitation typical of such telephone surveys. Other similar studies report response rates ranging from 27% to 71% ,15,16. Item non-response is also a concern in surveys. Here, urban/rural status was missing for 9% of respondents, and total annual household income was missing for 25% of respondents. Thus our final multivariate model, which included income, used data from 5,732 respondents. This may be of concern if the likelihood of responding to specific questions differed across income levels. For example, if non-response to the income question is greater in those with lower incomes, and low income is a risk factor for AGI, the odds ratio for low versus high income will be a biased underestimate of the true risk of illness. To address this, we also examined models without income (results not shown). Since such analyses yielded consistent conclusions with respect to the other demographic variables, the impact of income non-response on the results presented here is likely minor. In our study, the education variable measured the highest level of education attained by either the respondent (for those over 18 years) or their proxy (for those 18 years and younger), in an attempt to capture the education of the person guiding the behaviour of the respondent. Thus, for those 18 years and under, we assumed that the proxy was more instrumental in guiding the behaviour of the respondent than the respondent themselves. 
Although this is likely true for very young respondents, it may be less true as the age of the respondent approaches 18 years. Thus for teenage respondents, we may have over-estimated the education level of the person guiding the behaviour of the respondent. However, in our analysis, we found no interaction between education and age, suggesting that the lack of association between education and the risk of AGI observed here is the same regardless of the age of the respondent. In any case, our findings do not negate the need for future in-depth analyses of specific components of education (e.g. food handling or hygiene training) that may decrease the risk of AGI. Cases of AGI in this analysis were those who reported vomiting or diarrhea in the past four weeks, excluding those whose symptoms were due to a chronic condition. In these data, no attempt was made to differentiate infectious AGI from other causes such as food allergies or intolerances, over-indulgence of drugs or alcohol, or other causes of AGI. Although respondents reported what they believed to be the cause of their illness, this information was not used to exclude cases since the validity of these self-diagnosed causes was highly variable. Thus our case definition of AGI, although highly sensitive for infectious GI (i.e. includes most true cases of infectious GI), should not be considered specific for infectious GI (i.e. includes some non-infectious cases of AGI). Another possible limitation of this study is that the data were collected via telephone interview. Thus, the results presented here may not be applicable to those without telephones, such as the homeless, those in institutions, or those who are incarcerated. Lastly, it is debated whether analyses such as these should be weighted by the number of persons in the household. 
In Canada, it appears that children less than 10 years, young adults 20 to 24 years, and females in households with annual incomes under $20,000 are at an increased risk for AGI. In children, this increased risk may reflect an increased susceptibility to gastrointestinal infections due to immune status, and in young adults, this increased risk may be due to behavioural factors. In low income females, however, the specific reasons for this increased risk are unclear, and further research is needed. Understanding these relationships between AGI and determinants of health in the population is necessary to guide intervention and prevention efforts. These results suggest that children, young adults, and low income females should be targeted by public health programs aimed at decreasing the incidence of AGI in Canada. The author(s) declare that they have no competing interests. SM and JH planned the analysis of the data. KB performed the statistical analysis. All authors were involved in interpreting the results and writing the manuscript. All authors have read and approved the final manuscript. The pre-publication history for this paper can be accessed here:"}
+{"text": "Exposure to environmental pollutants may contribute to the development of coronary heart disease (CHD). We determined the ZIP codes containing or abutting each of the approximately 900 hazardous waste sites in New York and identified the major contaminants in each. Three categories of ZIP codes were then distinguished: those containing or abutting sites contaminated with persistent organic pollutants (POPs), those containing only other types of wastes (\u201cother waste\u201d), and those not containing any identified hazardous waste site (\u201cclean\u201d). Effects of residence in each of these ZIP codes on CHD and acute myocardial infarction (AMI) hospital discharge rates were assessed with a negative binomial model, adjusting for age, sex, race, income, and health insurance coverage. Patients living in ZIP codes contaminated with POPs had a statistically significant 15.0% elevation in CHD hospital discharge rates and a 20.0% elevation in AMI discharge rates compared with clean ZIP codes. In neither of the comparisons were rates in other-waste sites significantly greater than in clean sites. In a subset of POP ZIP codes along the Hudson River, where average income is higher and there is less smoking, better diet, and more exercise, the rate of hospitalization for CHD was 35.8% greater and for AMI 39.1% greater than in clean sites. Although the cross-sectional design of the study prevents definite conclusions on causal inference, the results indirectly support the hypothesis that living near a POP-contaminated site constitutes a risk of exposure and of development of CHD and AMI. Coronary heart disease (CHD) is the most common, serious, chronic, life-threatening illness in the United States, affecting more than 11 million people . 
Several reports have indicated elevations in rates of various diseases and birth defects among residents living near hazardous waste sites. Fat-soluble POPs, such as PCBs, dioxins/furans, and persistent pesticides such as dichlorodiphenyltrichloroethane (DDT), are of special concern because of their persistence in both the human body and the environment. This class of compounds bioconcentrates in food and can be ingested, inhaled, or absorbed through the skin. These compounds alter normal functioning of the immune, nervous, and endocrine systems and are carcinogenic. There is evidence in animals that PCBs induce lipogenic enzymes. Hospital discharge data were obtained from the New York Statewide Planning and Research Cooperative System (SPARCS), which is an administrative database. Like other administrative databases, it contains formalized information derived/abstracted from data sets such as clinical charts. We used SPARCS data from 1993\u20132000, with some 2.5 million hospital discharge records collected annually. Up to 15 diagnoses and 15 procedures, coded according to the International Classification of Diseases, 9th Revision, are recorded. Every SPARCS record contains information on patient\u2019s age, sex, race, source of payment, and the ZIP code of their residence. We used this information to adjust for potential confounders. Income was employed as a proxy measure of socioeconomic status (SES), which is another potential confounder in our study. Data on income (median household income on the ZIP code level) were obtained from the 2000 U.S. Census. The ZIP codes were classified as POPs, other waste, or clean, depending on whether they contained or abutted contaminated sites, where POPs were considered to be PCBs, dioxins/furans, or chlorinated and persistent pesticides. We considered the hazardous waste sites in New York identified by the U.S. Environmental Protection Agency (EPA) and the New York State Department of Environmental Conservation (NYSDEC). Chronic forms of CHD would be expected to be coded often in the SPARCS database as one of the 14 \u201cother diagnoses\u201d rather than as the \u201cprincipal diagnosis.\u201d This may result in a bias problem. 
Nonsevere comorbidities that do not require treatment during the hospital stay may be undercoded, resulting in underestimation of the association between CHD hospitalization rates and exposure (bias toward null). But the relatively high prevalence of CHD in the general population makes it quite a common comorbidity. Higher hospitalization rates for any disease caused by the contaminants of interest would result in higher CHD prevalence among hospitalized patients, resulting in overestimation of the association between pollution and hospital care use by CHD patients. To control for these possible biases, we analyzed the association between the most severe form of CHD (AMI) and ZIP code of residence. AMI is a very serious and often life-threatening disease, and it is less likely to be a comorbidity. Although other factors associated with AMI hospital discharge rates were not of primary interest for this study, they can be used for quality control purposes and thus merit careful examination. Male sex and older age are well-known risk factors for AMI and other forms of CHD, and more frequent hospitalizations should be expected in these population groups. Consequently, male sex and older age should be associated with higher hospital discharge rates for AMI. So adequacy of the model describing association between any exposure and AMI hospital discharge rates can be questioned if the model fails to indicate the contribution of sex and age. The hospital discharge rate for AMI among males was about twice that among females, and it increased with age. There are other important risk factors for CHD, especially smoking, diet, and exercise. Information at an individual level for these risk factors is not available in our data sets. However, by use of BRFSS, we have county-level information, as reported previously. Particulate air pollution is well documented to be an important risk factor for CHD and AMI. It is difficult to control for local differences in air pollution in an ecologic study such as this, but we have used what information is available from the air monitoring stations operated by the NYSDEC. Of the 43 stations in New York, 20 are outside of New York City, but in only 16 is regular monitoring of 2.5-\u03bcm particulates obtained. The mean 24-hr 2.5-\u03bcm particulate levels reported were 11.5 \u03bcg/m3 in the POP ZIP codes, 11.1 \u03bcg/m3 in the other-waste ZIP codes, and 11.2 \u03bcg/m3 in the clean ZIP codes. Although the number of ZIP codes for which mean particulate information is available is small, the information that can be obtained does not suggest that air pollution is a major confounder. The results of this study are consistent with the hypothesis that exposure to certain environmental contaminants increases the risk of development of CHD and AMI. Those persons residing in POP-contaminated ZIP codes have significantly higher rates of diagnosis of CHD and/or AMI on hospital discharge than do those living in noncontaminated areas. Residency in areas contaminated with other waste is also associated with an elevation in hospital discharge rates, but this relationship did not reach the traditionally used significance level of \u03b1 = 0.05. Others have reported health effects of living near hazardous waste sites. The observations reported in this investigation raise two important questions: What is the mechanism(s) involved, and what is the route(s) of exposure? Exposure to PCBs and dioxins is known to increase atherogenic serum lipid levels in both animals and humans. With regard to the route of exposure, the present and our previous observations are most consistent with inhalational and/or ingestional exposure. Several important confounders could explain these observations, particularly SES and behavioral risk factors. Harmful behavioral patterns and unfavorable environmental exposures associated with development of diseases have higher prevalence among lower social classes. 
Health insurance coverage is also related to SES. Fine particulate air pollution is another well-documented risk factor for cardiovascular disease. Our study is not free from limitations above and beyond the usual limitations of ecologic investigations. In summary, we determined that residency in POP-contaminated sites is associated with increased rates of hospitalization for CHD and AMI. Although the cross-sectional design of the study prevents us from making definitive conclusions on causal inference, the results support the hypothesis that exposure to PCBs, dioxins/furans, and/or persistent pesticides as a result of living near a hazardous waste site results in an elevated risk of CHD."}
+{"text": "Environmental exposure to persistent organic pollutants (POPs) may lead to elevation of serum lipids, increasing risk of atherosclerosis with thromboembolism, a recognized cause of stroke. We tested the hypothesis that exposure to contaminants from residence near hazardous waste sites in New York State influences the occurrence of stroke.The rates of stroke hospital discharges were compared among residents of zip codes containing hazardous waste sites with POPs, other pollutants or without any waste sites using information for 1993\u20132000 from the New York Statewide Planning and Research Cooperative System (SPARCS) database, containing the records of all discharge diagnoses for patients admitted to state-regulated hospitals.After adjustment for age and race, the hospitalization rate for stroke in zip codes with POPs-contaminated sites was 15% higher than in zip codes without any documented hazardous waste sites . For ischemic stroke only, the RR was 1.17 . Residents of zip codes containing other waste sites showed a RR of 1.13 as compared to zip codes without an identified waste site.These results suggest that living near a source of POPs contamination constitutes a risk of exposure and an increased risk of acquiring cerebrovascular disease. However further research with better control of individual risk factors and direct measurement of exposure is necessary for providing additional support for this hypothesis. Cerebrovascular disease is a major public health problem . In addiPOPs are chlorinated organic compounds that are resistant to degradation and able to bio-accumulate in fatty tissues of living organisms. These compounds are semivolatile, and present in the atmosphere as vapors or adsorbed on suspended particles . MultiplThere is an increased incidence of some chronic diseases among individuals living near hazardous waste sites . 
SPARCS was used to obtain data on hospital discharge diagnoses of cerebrovascular disease among New York State residents. SPARCS contains records of discharge diagnoses for all persons admitted as inpatients in all public and private New York hospitals, excluding federally regulated facilities and mental health facilities. Non-hospitalized cases are not collected by the SPARCS registry. The database available to us contained the primary and up to 14 secondary discharge diagnoses in the format of the International Classification of Disease, Ninth Revision (ICD-9), in addition to the zip code of residence, sex, age, and race/ethnicity of each patient. We used the SPARCS data from 1993 to 2000. Data on hazardous waste sites were obtained from the New York State Department of Environmental Conservation (NYSDEC). The major contaminants present and the zip code(s) for each site were extracted from the NYSDEC database. From a total of 818 State and Federal hazardous waste sites in New York State (excluding New York City), 396 sites contained POPs. These hazardous waste sites (\"POPs\") were located in 192 zip codes. Two hundred thirteen zip codes contained hazardous waste sites where the listed contaminants of concern did not include any POP, and these were categorized as \"other waste\". All of the other 994 zip codes, which contained no identified hazardous waste sites, were classified as \"clean\", although we recognize that these zip codes may contain wastes that have not been characterized. One subset of the \"POPs sites\" was examined separately: 78 zip codes along the PCB-contaminated portion of the Hudson River from Hudson Falls to New York City. 
Using the information from the Behavioral Risk Factor Surveillance System (BRFSS) we were able to compare behavior of the population along the Hudson River to the rest of New York State.Demographic data on New York State residents was obtained from Claritas Inc., an information resource company that provides information derived from the U.S. Census, with the zip codes used being the same as those used by the U.S. Postal Service.The data from SPARCS, Claritas, and NYSDEC were merged on the basis of a zip code of residence to determine the rate of stroke hospital discharges among the individuals residing in three categories of zip codes for the years 1993\u20132000. Exposure was defined as a patient's residence in a zip code that contained or abutted at least one hazardous waste site. We used primary and all secondary ICD-9 codes 430 to 436 for cerebrovascular disease (with the fourth and fifth digits). Ischemic stroke was defined as codes 433.x1, 434.x1, and 436, while hemorrhagic cerebrovascular disease was identified as codes 430, 431, and 432.We excluded all zip codes that were not constant from 1993 to 2000 and post office box zip codes . Since New York City maintains its own hospitalization dataset and has unique sociodemographic characteristics, it was also excluded. After all the exclusions were made, 1399 zip codes remained in the study.Finally, the analysis was restricted to White and African-American races because the numbers of Asians and Native Americans were small. 
We restricted analysis to patients between 25 and 64 years old in order to evaluate stroke frequency at an age at which stroke is relatively rare, expecting that this would provide a better indication of an elevation in risk should it exist. The stroke hospital discharge rate per 100,000 was calculated as the number of people discharged with cerebrovascular disease divided by the estimated total population. We used a Negative Binomial regression model, with the GENMOD procedure from SAS software. The Negative Binomial model was log linear: log(expected number of stroke discharges) = log(population) + \u03b20 + \u03b21*AGE5 + \u03b22*AGE4 + \u03b23*AGE3 + \u03b24*GENDER + \u03b25*RACE + \u03b26*POPs sites + \u03b27*Other waste, where \"POPs\" and \"other waste\" represented the exposure; age, gender, and race represented other independent variables with a value of zero or one. Before formulating the final regression model, we assessed confounding by demographic variables. All statistical analyses were conducted using the SAS statistical software package, version 8.2. The initial Negative Binomial regression analysis included all four quartiles of the median household income, estimated on a zip code level. However, the analysis of zip codes with the lowest and highest median incomes (the first and the fourth quartiles) showed the greatest population variability. Therefore, we restricted the Negative Binomial regression model to the middle-income zip codes (second and third quartiles), with the median household income ranging from $30,388.0 to $48,213.5. From 1993 to 2000 there were 28,216 stroke discharges in the three study zip code classes (Table ). The results of negative binomial regression are presented in Table . Our results suggest that living in zip codes that contain hazardous waste sites is associated with an increased rate of hospital discharges for stroke, especially ischemic stroke. 
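The log-linear mean structure of the Negative Binomial model lends itself to a short sketch. The coefficient values below are hypothetical placeholders (only the log-population offset and the exp(\u03b2) = rate-ratio relationship follow the text; the function name is ours):

```python
import math

def expected_discharges(population, indicators, betas, intercept):
    """Expected stroke discharges for one zip-code stratum under the
    log-linear model: log(mu) = log(population) + intercept + sum(b_i * x_i),
    i.e. mu = population * exp(linear predictor)."""
    eta = intercept + sum(b * x for b, x in zip(betas, indicators))
    return population * math.exp(eta)

# A coefficient on the POPs indicator translates to a rate ratio of exp(beta).
# For illustration, beta_POPs = log(1.15) corresponds to the reported 15%
# elevation; the intercept of -7.0 is an arbitrary placeholder.
beta_pops = math.log(1.15)
baseline = expected_discharges(100_000, [0], [beta_pops], intercept=-7.0)
exposed = expected_discharges(100_000, [1], [beta_pops], intercept=-7.0)
print(round(exposed / baseline, 2))  # 1.15
```

Because the population enters as a multiplicative offset, the exposure coefficients can be read directly as rate ratios, independent of stratum size.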
The regression model, when limited to middle income, showed a 15% elevation of hospital discharge rates for stroke in zip codes with POPs waste sites, independent from patient's age, race, or gender, even though increased age, being male, and being African-American were all significant but independent risk factors. In POPs-contaminated zip codes along the Hudson River the increase of hospital discharge rates for stroke was even larger. The stroke discharge rates in zip codes that have a hazardous waste site, but not one with POPs, were also found to be elevated (12%) after the negative binomial regression analysis. However, the substantial heterogeneity in this group of pollutants and the existence of toxicological/biological interactions among them prevents us from drawing definite conclusions about their influence on stroke occurrence. The results of this study are consistent with the findings of previous investigations that show elevations in various diseases in residents living near hazardous waste sites ,13,17-19. Stroke has many pathologic factors similar to those in cardiovascular disorders. Ischemic stroke is a \"brain attack\" and its etiology is similar to that of myocardial infarction, while hemorrhagic stroke has a different etiology. We have previously reported an elevation in hospital discharges for infectious respiratory disease in \"POPs\" zip code residents as compared to \"clean\" and \"other waste\" zip codes. There are clear limitations in determining cause and effect in partially ecologic study designs such as we have used. Given that the study was based on aggregated data, we do not have a direct measure of exposure (a zip code of residency is a crude surrogate for exposure assessment), and have no information on the duration of individual residence in each specific zip code. 
It is possible, and indeed likely, that some individuals residing in a POPs zip code were not exposed because of a short length of residence, or residence within the zip code but far from the hazardous waste site. The information on income was available only at a zip code level, which allowed only marginal adjustment for socio-economic status. It does not adjust for the range of income in any one zip code, nor the possibility that within a zip code the income is less among those living closer to the waste sites. Therefore, it is possible that the observed associations were influenced by some undetermined factors and might not be applicable to every socioeconomic subpopulation. There are many other potential sources of confounding within the groups, and these are only partially controlled for by use of the BRFSS for the Hudson River population. BRFSS information is currently available only at a county level, not at the zip code level. While the BRFSS provides information on average behaviors within a county, it is still possible that those individuals who experience strokes differ from this average. While the absence of personal identifiers restricted our ability to account for some potential confounders/effect modifiers, hospitalization data obtained in a mandatory manner for a number of years, because of the large numbers involved, has considerable potential for generating and testing hypotheses regarding the causes of disease. We found a statistically significant elevation of hospital discharge rates for stroke in zip codes with POPs-contaminated hazardous waste sites, and to a lesser degree with \"other waste\" sites, when compared to zip codes that do not have any identified waste sites. These observations suggest that living near a waste site contaminated with POPs is associated with the risk of inhalational and/or ingestional exposure and an increased risk of acquiring cerebrovascular disease. 
Further research involving control for individual risk factors and direct exposure assessment techniques is necessary for providing additional evidence for this hypothesis. BRFSS \u2013 Behavioral Risk Factor Surveillance System; ICD-9 \u2013 International Classification of Disease, Ninth Revision; NYSDEC \u2013 New York State Department of Environmental Conservation; PCBs \u2013 polychlorinated biphenyls; POPs \u2013 persistent organic pollutants; RR \u2013 rate ratio; SPARCS \u2013 New York Statewide Planning and Research Cooperative System. The author(s) declare that they have no competing interests. IS performed the data analysis as a requirement of his MPH program, performed the statistical analysis, and wrote the first draft of the paper. XH coordinated the use of the SPARCS and hazardous waste datasets. LL provided overall statistical direction for the study. DOC designed the study and supervised the data analysis. All authors read and approved the final manuscript."}
+{"text": "In this study, we have investigated the expression of the proto-oncogene c-erbB2 in a total of 70 human primary breast tumours. In agreement with other workers, we observed c-erbB2 gene amplification in 17.5% of the tumours studied. In addition, we carried out a comprehensive analysis of c-erbB2 mRNA expression in the tumours using RNase mapping and in situ hybridisation techniques. Our results indicated a more frequent (30%) overexpression of c-erbB2 mRNA, which was associated only with breast carcinomas of a ductal origin. Furthermore, analysis of the c-erbB2 mRNA gene locus in the same tumours demonstrated that enhanced c-erbB2 expression could occur in the presence or absence of gene amplification, suggesting that additional molecular mechanisms may result in overexpression of c-erbB2 mRNA in human mammary tumours. In situ hybridisation showed that elevated levels of c-erbB2 mRNA were specific to malignant cells within the breast tumour. Analysis of the association between c-erbB2 mRNA overexpression and clinicopathological factors revealed a significant correlation with poor tumour grade, but not with steroid receptor status or patient menopausal status. No significant correlation was observed between overexpression of c-erbB2 mRNA and early disease recurrence in our group of patients, although there was a definite trend towards poorer prognosis."}
+{"text": "Low income individuals with diabetes are at particularly high risk for poor health outcomes. While specialized diabetes care may help reduce this risk, it is not currently known whether there are significant clinical differences across income groups at the time of referral. The objective of this study is to determine if the clinical profiles and medication use of patients referred for diabetes care differ across income quintiles. This cross-sectional study was conducted using a Canadian, urban, Diabetes Education Centre (DEC) database. Clinical information on the 4687 patients referred to the DEC from May 2000 \u2013 January 2002 was examined. These data were merged with 2001 Canadian census data on income. Potential differences in continuous clinical parameters across income quintiles were examined using regression models. Differences in medication use were examined using Chi square analyses. Multivariate regression analysis indicated that income was negatively associated with BMI (p < 0.0005) and age (p = 0.023) at time of referral. The highest income quintiles were found to have lower serum triglycerides (p = 0.011) and higher HDL-c (p = 0.008) at time of referral. No significant differences were found in HBA1C, LDL-c or duration of diabetes. The Chi square analysis of medication use revealed that despite no significant differences in HBA1C, the lowest income quintiles used more metformin (p = 0.001) and sulfonylureas (p < 0.0005) than the wealthy. Use of other therapies, including lipid lowering medications, was similar across income groups. High income patients were more likely to be treated with diet alone (p < 0.0005). Our findings demonstrate that low income patients present to diabetes clinic older, heavier and with a more atherogenic lipid profile than do high income patients.
Overall medication use was higher among the lower income group, suggesting that the differences in clinical profiles are not the result of under-treatment and invoking lifestyle factors as potential contributors to these findings. Individuals with low income are at increased risk for the development of diabetes. There is an extensive literature that explores the association between income and health outcomes among the general population. The relationship between income and health outcomes is complex and is mediated by a number of factors. Potential mediating factors include differential access to care and health-related behaviours. Low income patients with diabetes are at greater risk for adverse health outcomes, but the factors influencing this relationship are unclear. There is emerging evidence that income does not appear to affect access to specialty diabetes care. In recognizing our incomplete understanding of the relationship between income and diabetes, this study set out to explore whether there are clinical and/or biologic differences across income groupings among patients referred to an urban diabetes education centre (DEC). The study's objectives specifically included an assessment of the clinical profiles (including medication use) of patients across income groupings at the time of referral for specialized diabetes care. To conduct this work, we used a regional DEC database that captures basic demographic information on all attendees to the regional clinic situated in Calgary, Alberta, a large Canadian city. The sampling frame was all active patients at the DEC from May 1, 2000 to January 9, 2002. The sample consisted of 4687 patients. All patients included were from a single health region within the province of Alberta. This DEC is the single regional provider of diabetes education services. Access is dependent upon physician referral to the centre.
The postal codes of patients registered in the DEC database were linked to their corresponding dissemination area (DA) using the Statistics Canada Postal Code Conversion File (PCCF). Neighborhood income data were obtained from Statistics Canada Census data (2001). We defined a neighborhood as equivalent to a census dissemination area (DA), a small geographic area covered by a single census data collector which typically contains 400\u2013700 persons. Therefore, median household income per DA was the income measure used in this study. These data were merged with the DEC database on the variable DA. Neighbourhood income has been shown to be reasonably concordant with individual income in urban settings. Household income quintiles were generated from DA annual income data. All income data are reported in Canadian dollars. The sizes and associated incomes of the quintiles were as follows:
1) Income quintile 1, n = 940, less than $40877
2) Income quintile 2, n = 937, $40878 \u2013 $53065
3) Income quintile 3, n = 936, $53066 \u2013 $62921
4) Income quintile 4, n = 938, $62922 \u2013 $79828
5) Income quintile 5, n = 936, more than $79829
Physicians referring patients to the DEC complete a standardized referral form that includes clinical data. This information was then entered into the DEC patient registry. Clinical information examined in this study included: serum hemoglobin A1C (HBA1C); serum lipid profiles including levels of low density lipoprotein (LDL-c), high density lipoprotein (HDL-c) and triglyceride; microalbumin to creatinine ratios; and medications used at time of referral. Height and weight are measured upon presentation to clinic; these measures were used to calculate the body mass index (BMI), which was then entered into the DEC database. Potential differences in continuous clinical parameters across income quintiles were examined using regression models.
If inspection of the distribution of these variables suggested a linear relationship between income and the variable of interest, then income quintile was modeled as a single ordinally-coded predictor variable. If, on the other hand, the relationship was not linear, then regression was performed using dummy variables for each income quintile relative to the lowest income quintile as a reference group. Covariates considered in these models included sex and medication use. Differences in categorically-coded medication use across income quintiles, meanwhile, were examined using Chi square analyses. All statistical analyses were performed in STATA, version 8. Clinical characteristics of patients referred for diabetes care and education are listed, by income quintile, in the accompanying table. Visual inspection of the distribution of the variables age, body mass index (BMI), and duration of diabetes (see Figure) suggested linear relationships with income, and these variables were modeled ordinally. Visual inspection of the distribution of the clinical variables LDL-c, HDL-c, triglycerides, HBA1C and microalbumin:creatinine ratio did not reveal an obvious linear relationship in the associations with income. An inverse gradient was noted in the use of oral diabetes medications. Metformin was used by 37.3% of patients in the lowest income group, compared to 30% in the highest income group. Sulfonylureas were also more commonly used in the lower income quintiles compared to the highest income quintiles. No significant differences were found across income quintiles in the use of glucosidase inhibitors, thiazolidinediones (TZD) or subcutaneous insulin. The proportions of patients, by income quintile, prescribed specific medications are presented in the accompanying table. Individuals with low income and diabetes are at increased risk for developing vascular complications. While the processes mediating this low income/poor health outcome relationship have been examined in the general population, little is known about the factors mediating this relationship among those with diabetes.
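The chi-square comparison of medication use across income groups can be sketched with a stdlib-only Pearson statistic. The 2 \u00d7 2 counts below are hypothetical, back-calculated from the reported percentages (37.3% of 940 vs 30% of 936); they are not taken from the study's tables:

```python
def chi_square(table):
    """Pearson chi-square statistic and degrees of freedom for an
    r x c contingency table (rows: income groups, cols: user / non-user)."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    grand = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / grand  # expected count under independence
            stat += (obs - exp) ** 2 / exp
    df = (len(table) - 1) * (len(col_tot) - 1)
    return stat, df

# Hypothetical metformin counts: [users, non-users] per quintile.
table = [[351, 589],   # lowest income quintile (~37.3% of 940)
         [281, 655]]   # highest income quintile (30.0% of 936)
stat, df = chi_square(table)
```

The statistic would then be compared against a chi-square distribution with `df` degrees of freedom (in practice via `scipy.stats` or a printed table) to obtain a p-value.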
Previous research has shown that access to specialty diabetes care appears equitable across income groups. This study demonstrates that there are clinically significant differences in some biologic parameters across income quintiles and that low income patients present to clinic with higher risk profiles. Low-income patients are older at time of referral and have more atherogenic metabolic profiles, with higher serum triglycerides and lower HDL levels, which are associated with a higher risk for developing cardiovascular disease. This study also suggests that differences in metabolic status are not due to overt under-treatment of the economically disadvantaged. The lowest income groups were using more sulfonylureas and metformin compared to the wealthiest groups. Even the use of more costly therapies such as TZD and lipid-lowering therapies was similar across income groups. This study also provides some insight into potential health-related behavioural differences across income groups. It has been shown in previous research that sedentary lifestyles are more common among lower income populations. In this study, the lower income groups had the highest BMIs. This raises the possibility that the lower income groups are less physically active than their wealthy counterparts. The lower HDL levels and higher triglycerides might also reflect behavioural differences with respect to diet and/or exercise. High income is frequently associated with higher health literacy and a greater ability to apply health-related knowledge. While we did not find a significant difference with respect to the duration of the diagnosis of diabetes at the time of referral, examination of the distribution of this variable certainly suggests that it may, in part, be mediating some of the clinical differences noted.
The wealthiest patient group was also younger, and more likely to be controlled with diet alone, suggesting that these patients may be presenting at an earlier point in the natural history of their diabetes. If wealthy patients were being referred earlier (perhaps due to earlier diagnosis), this may also help explain the inverse relationship between income and complication risk, as there is now clear evidence that aggressive management of blood glucose, high blood pressure and high serum lipids will effectively prevent the micro- and macrovascular complications of diabetes. This study has limitations. This is a cross-sectional study that examined the clinical profiles of patients at one point in time. These referrals were not necessarily index referrals, and had we compared clinical profiles at first contact with specialty care, it is possible that some of the clinical differences noted may have been attenuated. It is noteworthy that clinical data were entered into the DEC database from a standardized clinic referral form. All clinical data examined in this study, therefore, were provided by the referring physician. If doctors differ in the manner in which they complete, or do not complete, this form, an information bias could be introduced into this study. We do not have any evidence, however, that physicians' documentation practices should differ based on the neighbourhood income of their patients, and we would assert that information bias relating to income is unlikely. This study provides important information on how the clinical profiles of patients with diabetes differ based on income, given that elevated serum lipids, HBA1C and microalbumin to creatinine ratios are all significant predictors of atherosclerosis and mortality. All listed authors would like to declare that there were no competing interests involved with this research or the preparation of this manuscript. DMR conceived the study. DMR and WAG collaborated on the study design.
WAG, ALE, PMS, PN and ETL were all involved in the establishment of the database used in this study. DMR led the writing of this manuscript but all listed authors contributed substantially to the editorial process and approved the final manuscript."}
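The quintile boundaries published in the study above can be turned into a simple lookup. A small sketch (the dollar cutoffs are the study's reported bounds; the function name is ours, and the treatment of a value exactly equal to a bound is an assumption):

```python
from bisect import bisect_left

# Upper bounds of income quintiles 1-4 in CDN$ (DA median household income,
# as reported in the study); anything above the last bound is quintile 5.
CUTOFFS = [40877, 53065, 62921, 79828]

def income_quintile(median_income):
    """Map a dissemination-area median household income to quintile 1-5.
    A value equal to a cutoff is kept in the lower quintile (assumption)."""
    return bisect_left(CUTOFFS, median_income) + 1
```

For example, a DA with a median income of $60,000 would fall in quintile 3 under these bounds.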
+{"text": "The Alberta Healthy Living Network (AHLN) mapped the inter-organizational structure of its members to examine the effects of the network environment on organizational-level perceptions. This exploratory analysis examines whether network structure, specifically partnership ties among AHLN members, influences organizational perceptions of support after controlling for organizational-level attributes. Knowledge of the structure and character of inter-organizational relationships found among health promotion organizations is a prerequisite for the development of evidence-based network-level intervention activities. Organizational surveys were conducted with representatives from AHLN organizations as of February 2004 (n = 54). Organizational attribute and inter-organizational data on various network dimensions were collected. Organizations were classified into traditional and non-traditional categories. We examined the partnership network dimension. In- and out-degree centrality scores on partnership ties were calculated for each organization and tested against organizational perceptions of available financial support. Non-traditional organizations are more likely than traditional organizations to view financial support as readily available for their HEALTR programs and activities. After controlling for organizational characteristics, organizations that were frequently identified by other organizations as valuable partners in the AHLN network were significantly more likely to perceive a higher sense of funding availability. Organizational perceptions of a supportive environment are framed not only by organizational characteristics but also by an organization's position in an inter-organizational network. Network contexts can influence the way that organizations perceive their environment and potentially the actions that organizations may take in light of such perceptions.
By developing evidence-based understandings of the influence of network contexts, the AHLN can better target the particularities of its specific health promotion network. The Alberta Healthy Living Network (AHLN) was formed in July 2002. The AHLN's mission is to provide leadership for integrated, collaborative action to promote health and prevent chronic disease. Integrated approaches can be described as multi-sectoral, multi-strategic, multi-disease and multi-risk factor approaches to reduce chronic disease. With the increasing popularity of integrated, collaborative approaches, however, there exists the corresponding need to understand better the structure and effects of inter-organizational relations on organizational perceptions and actions. Organizations are increasingly bringing their expertise and resources together to develop chronic disease prevention and health promotion programs in an integrated and collaborative manner. Network analysis has become increasingly regarded in the health promotion literature as a reliable method for describing and assessing levels of community capacity and organizational collaboration. Telephone and face-to-face interviews were conducted with representatives from AHLN member organizations as of February 2004. Organizations self-identified a representative who was best qualified to discuss that organization's AHLN-related activities. In general, respondents worked in the area of healthy living, specifically around healthy eating, active living, or tobacco reduction. Their organizational roles included executive directors, managers, and service providers. Different organizational roles can potentially provide different descriptions of an organization's relationships. Formal network analytic methods were used to identify and measure the character and intensity of ties among organizations.
For the network modules, organizational representatives referred to the list of AHLN members as they answered questions regarding their relationships with those organizations. To assess organizational perceptions of the funding environment, organizational representatives were asked to respond on a four-point Likert scale from strongly disagree to strongly agree to the following statement: \"Financial support for your organization's programs and activities in Healthy Eating, Active Living and Tobacco Reduction (HEALTR) is readily available.\" The question was designed to tap into an organization's perception of whether the funding environment was one in which they felt that they could readily obtain the resources necessary to carry out their programs and activities. To maintain stable statistical estimates with the small sample size, the number of organizational-attribute variables was kept to a minimum in this analysis. Two organizational characteristics were included in the analysis: i) organizational type and ii) size. Organizational size was based on the total number of employees hired by an organization in either a full- or part-time capacity. For organizational type, each organization was classified as belonging to either the traditional or non-traditional health sector (Minke S.W. and Simpson T., AHLN Network Mapping: Report on Intersectoral Involvement in the Alberta Healthy Living Network, unpublished report, September 2004). The criteria used to classify organizations were established in consultation with key stakeholders in the AHLN, and based on rules around membership, mandates, and action strategies for AHLN organizations. For example, the primary mandate of traditional health-sector organizations was to improve health status. Provincial and federal government health departments, regional health authorities, chronic disease prevention charities, and health professional associations were classified as traditional members of the AHLN (n = 31). These organizations varied in their health promotion activities, with some focusing on primary prevention, others concentrating on secondary prevention, and a few targeting tertiary prevention. In contrast, the mandate of non-traditional health-sector organizations did not explicitly include improving health status, although the value of health activities may have been incorporated into their agendas. The organizations deemed to be non-traditional were active living organizations, education departments, recreation and sport organizations, aboriginal organizations, and private businesses (n = 23). The value of inter-organizational partnership ties within the AHLN network was ascertained by asking organizational representatives the following question for each of the other AHLN members: \"Do you have a partnership arrangement with (name of other AHLN member)? If so, on a scale of 1\u20135 where 5 is critically valuable and 1 is marginally valuable, how would you rate your partnership with (name of other AHLN member) to the success of your work in Healthy Eating, Active Living, and Tobacco Reduction (HEALTR)?\" The importance of such partnership ties for the overall work of the AHLN was determined through consultation with the Partnership Development and Community Linkages Working Group (PDCLWG) of the AHLN Coordinating Committee. In secondary analyses, we also constructed a network effects model, which allows adjustment for the influence of other organizations' perceptions on the ego organization's perception. The weight matrix for this model was defined by i) organizations having equal levels of perceived support and ii) organizations having direct reciprocal partnership ties. The basic premise is that the influence of other organizations' perceptions is strongest when those organizations perceive the same level of support and have reciprocal relationships. Using the statistical package SPSS, the analysis proceeded in three steps. First, we analyzed the distribution of organizational and inter-organizational variables for all AHLN members and then according to traditional or non-traditional organizational type. We examined whether significant mean differences in our study variables existed between traditional and non-traditional organizations. Second, we used Pearson and Spearman-rho correlation analyses to examine significant associations among variables. Since there were no significant differences to report between Pearson and Spearman-rho correlation values, Pearson correlation analysis results are reported. Third, we constructed two ordinal logistic regression models of organizational-level perceptions that financial support for HEALTR programs and activities is readily available. Model 1 regressed these perceptions on two organizational characteristics. In model 2, we added our network measures of in-degree (prestige), out-degree (influence), and tie homophily to model 1. Results are reported using maximum likelihood estimates. Organizations that receive a greater number of partnership ties, or receive more highly valued ties, i.e., are identified as being important partners by others, are significantly more likely to perceive higher levels of available financial support. The influence of non-traditional organizational status on perceptions of support remains after adjusting for AHLN network features.
Percentage tie homophily, or its opposite, tie heterophily, across organizational types appears to have no direct influence on perceptions of support, although secondary analyses suggest that tie homophily may attenuate the influence of non-traditional organizational status. In secondary analyses, the network effects term was not significant, nor did it alter the significance of the other variables in model 2. For this reason, the network effects model is not reported, although it is available upon request to the corresponding author. Our analysis of the AHLN suggests that both network and organizational characteristics influence members' perceptions of available support. Our study's findings raise three questions that require further elaboration: 1) why do in-degree and not out-degree partnership ties have an influence on organizational perceptions of support?; 2) why do non-traditional organizations have higher perceptions of support than traditional organizations?; and 3) how might in-degree ties help traditional organizations create a more secure funding environment? First, we found that in-degree has a significant, positive influence on organizational perceptions of support. In other words, if an organization receives more ties, it reports a higher perception of readily available financial support. These findings held when we also adjusted for social influence, or network effects, on organizational perceptions. Although organizations with higher in-degree scores tend also to have higher out-degree scores, an organization's sending ties do not have a significant association with organizational perceptions of support. Why would receiving partnership ties have an influence on perceptions of support while sending partnership ties do not? The simplest interpretation may be that partnership ties provide supportive resources that contribute to an organization's general pool of available resources.
Organizations receiving more support through partnership linkages would tend to perceive a more supportive environment. This is not the case with sending ties, since they represent partnership relations in which resources are potentially flowing outwards from an organization. Organizational influence might emerge through partnership ties in which resources are sent, but this does not appear to contribute to an organization's sense of support available through partnership ties. Second, we found that non-traditional organizations, such as active-living centres, private businesses, and educational centres, were significantly more likely than traditional health organizations to view funding support as readily available for their HEALTR programs. Although tie homophily did not have a direct influence on perceptions of support, non-traditional organizations do have a greater diversity in their ties across traditional and non-traditional organizational types. While our data do not allow us to confirm this empirically, non-traditional organizations may maintain more diverse networks across a range of other organizational types and domains, thus having more avenues of support for their activities than traditional organizations. Third, we found that traditional health organizations have on average significantly lower perceptions of support, despite receiving on average more partnership ties in the AHLN network. In the case of traditional organizations, receiving partnership ties appears to increase their access to network resources, informational or financial. Since traditional health organizations are more explicitly tied to health-related mandates, i.e., their mandates specifically include \"improving health status,\" such organizations may have a reduced range of overall activities and less access to diverse funding sources than non-traditional health organizations.
In this sense, the greater development of partnership ties among traditional health organizations may represent an important organizational strategy that has helped such organizations buffer the potential funding insecurity surrounding their more specialized organizational activities. Further research, including the use of longitudinal data, is required to confirm the potential factors that might help explain our present findings. While organizational characteristics are important, our study has shown how network environments also play a role in shaping the way organizations see the availability of support for their programs and activities. In studying the association among network structure, organizational characteristics, and perceptions, our analysis highlights the importance of receiving partnership ties in influencing organizational perceptions of readily available support. For traditional organizations, these receiving ties appear to be a particularly important mechanism by which such organizations develop or enrich their avenues of possible support. Given the AHLN mission to provide leadership for integrated, collaborative action to promote health and prevent chronic disease, we see this exploratory study as encouraging the continued development of evidence-based health promotion activities and contributing to the use of network mapping activities to assess the dynamics of inter-organizational collaboration. Abbreviations: Alberta Healthy Living Network (AHLN); Healthy Eating, Active Living and Tobacco Reduction (HEALTR); Partnership Development and Community Linkages Working Group (PDCLWG). The author(s) declare that they have no competing interests. SM led the study design, analyses, and writing. CS, TS, and SWM assisted with the study and analyses. All authors helped to conceptualize ideas, interpret findings, and review drafts of the article. The pre-publication history for this paper can be accessed here:"}
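The in- and out-degree scores used in the analysis above count, for each organization, the partnership nominations it receives and sends. A minimal stdlib-only sketch over a hypothetical directed tie list (the organization names are invented):

```python
from collections import Counter

def degree_scores(ties):
    """Count directed partnership ties: out-degree for the nominating
    organization, in-degree ('prestige') for the nominated one."""
    out_deg = Counter(sender for sender, _ in ties)
    in_deg = Counter(receiver for _, receiver in ties)
    return in_deg, out_deg

# Hypothetical nominations: ("A", "B") means organization A names B as a partner.
ties = [("A", "B"), ("A", "C"), ("B", "C"), ("D", "C"), ("C", "A")]
in_deg, out_deg = degree_scores(ties)
```

Here organization C has the highest in-degree (it is nominated by A, B and D), which under the study's interpretation would mark it as a prestigious partner; a valued-tie variant would sum the 1\u20135 ratings instead of counting nominations.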
+{"text": "Socioeconomic status could affect the demand for hospital care. The aim of the present study was to assess the role of age, socioeconomic status and comorbidity in acute hospital admissions among the elderly. We retrospectively examined the discharge abstract data of acute care hospital admissions of residents in Rome aged 75 or more years in the period 1997\u20132000. We used the Hospital Information System of Rome, the Tax Register, and the Population Register of Rome for socio-economic data. The rate of hospitalization, a modified Charlson index of comorbidity, and the level of income in the census tract of residence were obtained. Rate ratios and 95% confidence limits were computed to assess the relationship between income deciles and the rate of hospitalization. Cross-tabulation was used to explore the distribution of the index of comorbidity by deciles of income. Analyses were repeated for patients grouped according to selected diseases. Age was associated with a marginal increase in the rate of hospitalization. However, the hospitalization rate was inversely related to income in both sexes. Higher income was associated with lower comorbidity. The same associations were observed in patients admitted with a principal diagnosis of chronic condition or stroke, but not hip fracture. Lower social status and associated comorbidity, more than age per se, are associated with a higher rate of hospitalization in very old patients. As the population ages, patients over 64 account for a continuously growing proportion of acute hospital care.
Besides being a relevant correlate of self-rated health status, functional status, morbidity and mortality, socioeconomic status may also shape the demand for hospital care. We planned the present study to evaluate whether socioeconomic status, as measured by a proxy variable such as area-based income, also affects acute hospitalization rates in very old people. We examined the discharge abstract data of acute care hospital admissions of residents in Rome aged 75+ years in the period 1997\u20132000. Discharge abstract data are routinely collected by the regional Hospital Information Systems (HIS) and include: patient demographic data, admission and discharge dates, admission referral source, discharge status, up to six discharge diagnoses (ICD-9-CM), up to six hospital procedures (ICD-9-CM), regional code of the facility, up to four in-hospital transfers, and date of in-hospital transfer. The information system covers all hospitals in the region and also includes hospitalizations of residents that occurred outside the region. The study protocol was approved by the Ethical Committee of the Local Health Authority RME, Rome, Italy. As a surrogate of individual socioeconomic status, we considered the income level of the population living in the census tract (CT) of residence. A median familiar equivalent income index was derived for each of the 5736 census tracts (CT) of Rome (average population = 480 inhabitants), with income data referring to the 1st of January 1998. A record linkage between the Tax Register and the Population Register of Rome connected family status information to income data for each subject; the family equivalent income, weighted for the number of family members, was then calculated. Data were aggregated at the CT level, and the median value for each CT was calculated. Due to confidentiality of information, only details about income for each CT were available in our study database.
In order to obtain categorical values for the income indicator, we calculated the deciles of the income distribution (1st decile very underprivileged, 10th decile very well off) on the basis of the whole adult population. We computed age-standardised rates of hospitalization (per 1000 inhabitants) by gender and income decile for the three age groups \u2265 75 years, 75\u201384 years, and \u2265 85 years. The cut-off of 75 years was chosen because it marks a dramatic increase in the prevalence of comorbidity and disability. Analyses were repeated for patients grouped according to selected diseases, including heart failure (ICD-9 code 428), stroke, Chronic Obstructive Pulmonary Disease (COPD), and hip fracture (ICD-9 code 820). Three of these conditions may be considered ambulatory care sensitive conditions, i.e. a high hospitalization rate for these chronic conditions suggests that community health care is inappropriate. To quantify the burden of comorbidity, i.e. of diseases coexisting with the main disease during the hospital admission, we computed for each hospitalized subject a modified version of the Charlson Index of comorbidity: individual diagnoses codified according to ICD-9-CM were given weights, which were then summed. We used Rate Ratios (RRs) to compare hospital admission rates among income deciles, using the first income decile (the lowest) as the reference group. Confidence intervals (CI) were calculated at the 95% level by using the standard error of the age-adjusted rates. We used multiple linear regression analysis to evaluate the association between the log transformation of duration of hospital stay and income deciles among men and women. Age was considered in the regression models.
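A Charlson-style comorbidity computation of the kind just described assigns a weight to each coexisting ICD-9-CM diagnosis and sums the weights. A sketch with illustrative weights only; the study's actual modified weight table is not reproduced in the text, so the prefix-to-weight mapping below is hypothetical:

```python
# Hypothetical ICD-9 prefix -> weight mapping, for illustration only;
# the study used its own modified Charlson weights.
WEIGHTS = {"428": 1, "496": 1, "250": 1, "571": 3, "196": 6}

def charlson_score(diagnoses):
    """Sum the weight of each coexisting diagnosis; unmatched codes add 0."""
    score = 0
    for code in diagnoses:
        for prefix, weight in WEIGHTS.items():
            if code.startswith(prefix):
                score += weight
                break  # count each diagnosis at most once
    return score
```

For a discharge abstract listing heart failure (428.0) and COPD (496) alongside the principal diagnosis, this sketch would return a score of 2.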
Statistical analysis was performed using the STATA 8 statistical software package. Age-standardised rates of hospitalization by income decile, separately for males and females, are reported in Table 1. The absolute difference in hospitalization rates between the 1st and the 10th decile of SES was greater for men (481-397 = 84) than for women (283-246 = 37). An inverse association with income was evident for disease-specific admissions (1st vs 10th income decile: males RR = 2.59, 95% CI = 2.05\u20133.27; females RR = 4.92, 95% CI = 4.07\u20135.94), as well as for heart failure, COPD and stroke, but not for hip fracture. Mean duration of hospital stay among those in the lowest income decile was longer for both genders (12.9, SD 14.4, days for women) than among those in the highest income decile (9.7, SD 11.5, and 11.3, SD 13.2, days for men and women, respectively). When we adjusted for age in the multivariate linear regression analysis, the strong statistically significant inverse relationship remained (p < 0.001). When comorbidity was examined among hospitalized individuals, higher income was associated with lower comorbidity in both genders (Tables 2 and 3). Our data show that lower social status, more than age, is correlated with the rate of hospitalization in a population older than 74 years. A longer hospital stay was also detected in the lowest socioeconomic group when compared with those in the upper income category. Comorbidity was also greater in low income patients admitted to the hospital. Thus, socioeconomic inequalities are relevant to explaining differences in health care use also in a very old population. In keeping with our findings, a study conducted in the UK showed that elderly tenants had a higher institutionalisation rate than owner-occupiers, who represent a higher income population. The inverse association between income level and hospitalization rate may reflect two concurrent phenomena: higher incidence and prevalence of diseases among people in less advantaged conditions, and inadequate community care, especially secondary care, among poor people, resulting in higher demand for hospitalization. 
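The rate ratios and 95% confidence intervals used in this analysis can be derived from two age-adjusted rates and their standard errors; a sketch on the log scale, with hypothetical inputs rather than the paper's data:

```python
import math

# Rate ratio between two age-adjusted rates, with a 95% CI built on the log
# scale from the standard errors of the adjusted rates (delta method:
# SE of log(rate) is approximately se/rate).
def rate_ratio_ci(rate1, se1, rate0, se0, z=1.96):
    rr = rate1 / rate0
    se_log_rr = math.sqrt((se1 / rate1) ** 2 + (se0 / rate0) ** 2)
    lower = rr * math.exp(-z * se_log_rr)
    upper = rr * math.exp(z * se_log_rr)
    return rr, lower, upper

# Hypothetical example: 1st vs 10th decile all-cause rates (SEs are invented)
rr, lo, hi = rate_ratio_ci(481.0, 12.0, 397.0, 10.0)
print(round(rr, 2), round(lo, 2), round(hi, 2))
```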
The former phenomenon is testified by lower comorbidity and healthier life style characterizing high income subjects in developed countries . CompliaAt variance from chronic diseases and stroke, the rate of hospitalization for hip fracture was not associated with income, as if the risk of fall were independent from socio-economic status. This finding is unlikely to suffer from \"collection bias\" because hip fracture requires hospital care and, thus, the recorded figures cannot be biased by alternative home care. Thus, it is a true finding which contrasts with most of previous observations showing that different measures of income are inversely correlated with the incidence of hip fracture -34. HoweSome limitations of this study should be cited. First, the Charlson's index quantifies comorbidity and was available only among hospitalized subjects. Therefore, it was not possible to evaluate the effect of income while \"adjusting\" for comorbidity level. Furthermore, Charlson's index only to some extent assesses the severity of illness, which might be relevant to explain the observed pattern of hospitalization. Indeed, computing an index of disease severity would require a detailed clinical information which is not available on administrative databases. Second, we had no information on the type or cost of care as a function of age. However, there is a consistent evidence that aging is associated with under treatment of many conditions -43. AccoOur findings add to previous observations by showing that income can conveniently target subjects at greater risk of hospitalization even in the very old population, while age per se cannot. Accordingly, measures of income might help targeting older people who could benefit the most from dedicated health care programs. 
Income data are easily available in administrative databases for the whole population and qualify as a cumulative index of health status or health risk, whereas medical databases have far less complete coverage in most countries and, thus, provide information on only a minority of the population. Efforts are needed to identify the factors mediating the relationship between income and health status. Interventions counteracting individual mediators are highly desirable, but, in a broader perspective, attempts at removing social inequalities would be the main health care intervention. Such an intervention would decrease the need for hospital care, and this would translate into an important saving of resources. Thus, physicians, health care managers and political authorities should be aware that medical and social dimensions interact to determine health status and health care needs also in the very old. This underscores the need for a comprehensive view of health needs and an integrated approach to them. The author(s) declare that they have no competing interests. RAI, CA, FF, VB, and CAP planned and conducted the study, performed the statistical analysis, and drafted the first version of the manuscript. AC contributed to the study design and to the final version of the manuscript. The pre-publication history for this paper can be accessed here:
+{"text": "An examination of where in the income distribution income is most strongly associated with risk of mortality will provide guidance for identifying the most critical pathways underlying the connections between income and mortality, and may help to inform public health interventions to reduce socioeconomic disparities. Prior studies have suggested stronger associations at the lower end of the income distribution, but these studies did not have detailed categories of income, were unable to exclude individuals whose declining health may affect their income and did not use methods to determine exact threshold points of non-linearity. The purpose of this study is to describe the non-linear risks of all-cause and cause-specific mortality across the income distribution.We examined potential non-linear risk of mortality by family income level in a population that had not retired early, changed jobs, or changed to part-time work due to health reasons, in order to minimize the effects of illness on income. We used data from the US National Health and Nutrition Examination Survey (1988\u20131994), among individuals age 18\u201364 at baseline, with mortality follow-up to the year 2001 . Differential risk of mortality was examined using proportional hazard models with penalized regression splines in order to allow for non-linear associations between mortality risk and income, controlling for age, race/ethnicity, marital status, level of educational attainment and occupational category.We observed significant non-linear risks of all-cause mortality, as well as for certain specific causes of death at different levels of income. Typically, risk of mortality decreased with increasing income levels only among persons whose family income was below the median; above this level, there was little decreasing risk of mortality with higher levels of income. 
There was also some variation in mortality risk at different levels of income by cause and gender.The majority of the income associated mortality risk in individuals between the ages of 18\u201377 in the United States is among the population whose family income is below the median . Efforts to decrease socioeconomic disparities may have the greatest impact if focused on this population. Despite longstanding knowledge of an inverse association between income and mortality in the United States ,2 and caWithin the US, only two studies have explicitly examined the shape of the relationship between income and mortality ,10 and bThe aim of this paper is to describe the shape of the income and all-cause and cause-specific mortality associations among US adults age 18 to 64 at baseline (who were age 25\u201377 by the end of follow-up). We examined the association of income and mortality, restricting our analysis only to those individuals who were free from health conditions that caused them to change jobs, change to part time work, or retire early due to health reasons. By using data with a large number of income categories and by modelling the association without using a pre-specified functional form or pre-specified inflection points we are able to more accurately estimate the shape of the income and cause-specific mortality associations. We also compare the fit of models with baseline covariates and either a linear income term, a log-income term, or a smoothed spline income term in order to determine which income-mortality model provides the best fit to the data.The US Third National Health and Nutrition Examination Survey (NHANES III), 1988\u20131994, was designed to be representative of the non-institutionalized population of the U.S. when analyzed using weights to account for over-sampling and non-response . 
Our anaWe examined all-cause mortality and three cause-specific categories of adult mortality as defined by the following ICD-10 classifications: 1) heart disease , 2) cancer (C00-C97) and 3) injury .Total combined pre-tax family income for the 12 months prior to the survey included wages, salaries, income from self-employment, veteran's benefits, interest dividends, rental income and public assistance. Family income data were available in 28 income categories . Income from each half of the survey was adjusted to 1991 dollars using the Urban Consumer Price Index. For all analyses we used the midpoint of each income category and calculated the mid-point of the upper category of income by assuming a Pareto distribution of family income per standard methodology . Income Additional covariates included: (a) education (0\u201317 or more years), (b) race/ethnicity , (c) age (in years), and (d) occupation, referring to the longest held occupation, divided into 5 categories: (1): white collar and professional ; (2): white collar, semi-routine ; (3): blue collar, high skill ; (4): blue collar, semi-routine ; and (5): never worked. Detailed NHANES III occupational categories were used to create this variable [see Additional file In order to be consistent with prior work on the shape of the association of income and mortality ,8,9 we mThe income and cause-specific mortality associations were modelled with penalized splines (with a cubic basis) in proportional hazard survival models in order to allow for possible non-linear dependence of mortality hazard on income (as well as for education when included as a covariate),20. We uWe first present the unadjusted incidence rates of all-cause mortality by gender and income table . 
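The "Pareto midpoint" convention for the open-ended top income category, mentioned above, can be sketched like this; the cut points and counts are hypothetical, and the exact variant of the standard methodology used in the paper may differ:

```python
import math

# Pareto-tail treatment of a top-coded income category: estimate the Pareto
# shape parameter from the counts above the top two cut points, then use the
# mean of the fitted tail as the representative value for the open category.
def pareto_top_value(k1, k2, n_above_k1, n_above_k2):
    # Shape parameter alpha from the two tail counts
    alpha = math.log(n_above_k1 / n_above_k2) / math.log(k2 / k1)
    # Mean of a Pareto tail starting at k2 (finite only for alpha > 1)
    return alpha * k2 / (alpha - 1)

# e.g. 1200 families above $35,000 and 400 above $50,000 (hypothetical counts)
print(round(pareto_top_value(35_000, 50_000, 1200, 400)))
```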
For all-cause mortality (Figure), among both genders, risk of mortality decreased with increasing income mainly below the median income level. For cause-specific mortality (Figure), risk of death from cancer and injury followed a similar pattern, whereas risk of death from heart disease declined more evenly across the income range. We repeated the analyses shown in the Figures. We also examined the extent to which specifying a non-linear functional form of income improved the overall model fit for prediction of mortality as compared to: 1) baseline covariates and no income variable; 2) baseline covariates and a linear income variable; or 3) baseline covariates and a log transformation of income. We did so by comparing the likelihood ratios of each model, taking into account the increased degrees of freedom of the income and the non-linear income models (Table); a lower value indicates a better fit. Among US women and men age 18 to 64 at baseline, with follow-up of up to 13 years, we found evidence of a generally stronger association of income with all-cause mortality at the lower end of the income distribution, i.e., under median income. Similar patterns occurred for deaths due to cancer and injury; by contrast, a more linear association across the full income range was evident for death due to heart disease. These results are unlikely to be substantially driven by contemporaneous effects of illness on income because of the restriction of our sample to individuals with more than one year of follow-up who had not ever changed jobs, changed to part-time work, or retired early due to health reasons. In fact, our results are likely a conservative estimate of the association due to the potential effects of income on illness, given that we restricted our analysis to a healthy sample that has not left the labor force due to health reasons.
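The model comparison described above, a likelihood-ratio test between nested income models that charges the spline for its extra degrees of freedom, can be sketched as follows (the log-likelihoods are hypothetical):

```python
# Likelihood-ratio comparison of nested models: twice the improvement in
# log-likelihood is compared against a chi-square critical value whose
# degrees of freedom equal the extra parameters spent (e.g. spline terms).
def lr_statistic(loglik_restricted, loglik_full):
    return 2.0 * (loglik_full - loglik_restricted)

# Hypothetical log-likelihoods: linear-income model vs spline-income model
stat = lr_statistic(-1050.0, -1042.0)

CHI2_CRIT_3DF_05 = 7.815   # chi-square critical value, df=3, alpha=0.05
print(stat > CHI2_CRIT_3DF_05)   # True -> the spline model fits better
```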
First, the data we used lack specific income categories above $50,000 a year, which limits our understanding of the impacts of income for the 8% of families that had the highest equivalized income. Our estimates at the upper end of the income distribution are less precise, as indicated by the widening confidence intervals in the plots, and the point estimates in these regions should be interpreted cautiously. A second limitation of this analysis is that income is measured at only one point in time, thus not capturing household income dynamics that influence health outcomes -31. ThisBased on a qualitative inspection of the smoothed plots of the income-mortality association, and the overall tests of model fit, we have shown that a non-linear association with a stronger association below median income is the most prevalent pattern of association. There were however variations in this association by cause and by gender. For heart disease, in particular among men, there appears to be less of a threshold at median income. While this may be due to mortality risk for heart disease more evenly distributed across the income distribution, an alternative explanation is that we have limited power to detect the shape of the association due to a relatively small number of heart disease events above median income . Supporting this later speculation is prior work examining a two slope model of income and cardiovascular mortality that has shown there is a stronger association at lower income levels .While significant associations were observed for both men and women, and the shape of the relation was similar for all-cause, heart disease and injury mortality, there were different associations observed among women and men for cancer above median income \u2013 no association for men, and a slightly increasing association among women. 
Tests of overall model fit showed that while a linear or logarithmic model of income was an equally good fit to the data among men, a non-linear model was a better fit among women. This difference may be due to the positive association between socioeconomic position and rates of breast cancer mortality in women that does not exist as strongly for any site of cancer in men ,35. ThesOur results are generally consistent with the two other US studies that examined the shape of the income-mortality association ,10, despThe results presented have implications for understanding the etiological links between income and mortality. Based on the observed associations, income disparities in mortality chiefly among lower income populations (below the median income) appear to be driving the commonly reported socioeconomic gradients in all-cause, cancer among men and injury mortality. These findings also underscore why efforts to address income disparities in mortality cannot be restricted simply to persons below the US poverty line but instead should include persons with income at least up to the median level. The difference in size of these two populations is large: in 1991, the mid-point of the income data collection, 13% of families in the U.S. were below the poverty line as compared to 50% of families below the median family income , an absolute difference of 37% of US families, and similar to what we have in our study population (Table In the US context, in adults aged 18\u201364 at baseline, the non-linear risk of mortality with income arises from the stronger relationship between income level and mortality among lower compared to higher income populations. This evidence is supportive of the hypothesis that policies to improve the health of individuals among the lower half of the income distribution will have the most impact on reducing US income-based disparities in mortality, although from our data we cannot establish why this association exists. 
Second, if the associations presented are not due to residual confounding, measurement error, or unaccounted for reverse causation, they may also have implications for the importance of income in contributing to premature mortality. Future studies determining the pathways connecting income and mortality will benefit from a consideration of where in the income distribution the burden of disparity exists, and that this association varies by cause.The authors declare that they have no competing interests.DHR designed the study, conducted the analysis, and lead the writing of all sections of the manuscript. NK contributed to the design of the study and contributed to writing all sections of the manuscript. BC contributed to the design of the study, data analysis and contributed to writing all sections of the manuscript. LFB contributed to the design of the study and contributed to writing all sections of the manuscript.The pre-publication history for this paper can be accessed here:Table of occupational classifications. The data provided describes the categorization of the NHANES III occupational categories in order to create the occupation variable used as a covariate in the analysis.Click here for file"}
+{"text": "ERBB2 amplification/overexpression in gastric cancer remains unclear. In this study, we evaluated the ERBB2 status in 463 gastric carcinomas using immunohistochemistry (IHC) and fluorescence in situ hybridisation (FISH), and compared the findings with histopathological characteristics and with disease-specific survival. ERBB2 overexpression (2+ and 3+) and amplification (ratio ERBB2/CEP17\u2a7e2) were found in 43 (9.3%) and 38 (8.2%) gastric carcinomas, respectively. Perfect IHC/FISH correlation was found for the 19 cases scored as 0 , and also for the 25 cases scored as 3+ . One out of six carcinomas scored as 1+ and 12 out of 18 carcinomas scored as 2+ were positive by FISH. ERBB2 amplification was associated with gastric carcinomas of intestinal type (P=0.007) and with an expansive growth pattern (P=0.021). ERBB2 amplification was detected in both histological components of two mixed carcinomas, indicating a common clonal origin. A statistically significant association was found between ERBB2 amplification and worse survival in patients with expansive gastric carcinomas (P=0.011). We conclude that ERBB2 status may have clinical significance in subsets of gastric cancer patients, and that further studies are warranted to evaluate whether patients whose gastric carcinomas present ERBB2 amplification/overexpression may benefit from therapy targeting this surface receptor.The clinical significance of Despite the trend for decreasing incidence, gastric adenocarcinoma is still the second cause of cancer death worldwide (ERBB2 gene maps to 17q12\u2013q21 and encodes a 185-kDa transmembrane tyrosine kinase receptor (p185), which is a member of the epidermal growth factor receptor family ranged from 3.8 to 12.2% , which included all patients with ERBB2 amplification (n=38) and randomly selected patients with no amplification (n=218).A two-step study design was used to select the gastric cancer patients. 
First, a cross-sectional study was used to select 463 consecutive primary gastric adenocarcinoma patients who underwent gastrectomy at the Portuguese Oncology Institute\u2014Porto (between 1996 and 2000) to assess the frequency of ERBB2 amplification/overexpression. The tissue specimens for IHC and FISH analyses were archival tumour samples of surgically resected gastric carcinomas from the 463 patients. Patient age at diagnosis ranged from 26 to 91 years. Clinical data were collected by a group of clinicians blinded to ERBB2 status, using a datasheet specifically developed for this study, including the following parameters: age, gender, date of and status on last follow-up, surgery type and date, TNM stage, and treatment other than surgery (if any). Time to clinical outcome was considered from the date of surgery until the last clinical appointment attended, and each patient was classified under one of the following categories: alive with no evidence of disease, alive with disease, dead with no evidence of disease, and dead from disease. For histological data collection, pathologists reviewed a representative H&E-stained slide. Immunohistochemistry targeting the ERBB2 protein was carried out in 4-\u03bcm-thick tissue sections. The monoclonal antibody NCL-CB11, recognising the intracellular portion of the protein, was used. Tissue sections were deparaffinised followed by antigen retrieval in citrate buffer at high temperature (water bath at 98\u2009\u00b0C). After blocking for non-specific binding, the primary antibody was added in a pre-standardised dilution (1 out of 60), and incubated for 30\u2009min at room temperature. A standard avidin\u2013biotin\u2013peroxidase complex technique was used for visualisation, with diaminobenzidine as chromogen. The tissue sections were then lightly counterstained with haematoxylin and cover-slipped. The following scoring system was used: score 0, no membrane staining or <10% of cells stained; 1+, incomplete membrane staining in >10% of the cells; 2+, weak to moderate complete membrane staining in >10% of the cells; and 3+, strong and complete membrane staining in >10% of the cells. An appropriate positive control (ERBB2-overexpressing breast carcinoma) was included in each run and each section was analysed by a pathologist. From each gastric adenocarcinoma sample, 4-\u03bcm-thick sections of a representative tissue block were cut onto SuperFrost Plus adhesion slides. The slides were then deparaffinised in two series of xylol followed by two series of ethanol (8\u2009min each), rinsed in 2 \u00d7 SSC, and placed in a solution of NaS/CN 1\u2009M at 80\u2009\u00b0C for 10\u2009min. The tissue was then digested with 6\u2009mg\u2009ml\u22121 pepsin for 22\u2009min at 37\u2009\u00b0C, after which the slides were rinsed in 2 \u00d7 SSC and dehydrated in an ethanol series. To assess ERBB2 amplification, a commercial probe targeting ERBB2, direct-labelled with rhodamine, and a control probe for the chromosome 17 centromere (CEP17), direct-labelled with fluorescein, were used. The slides and probes were placed in a HYBrite denaturation/hybridisation system and co-denatured at 80\u2009\u00b0C for 7\u2009min. Hybridisation was carried out for 18\u2009h at 37\u2009\u00b0C, followed by post-hybridisation washes in 2 \u00d7 SSC/0.5% Igepal at 73\u2009\u00b0C for 5\u2009min and 2 \u00d7 SSC/0.1% Igepal at room temperature, after which slides were counterstained with DAPI. Fluorescent images were sequentially captured with a Cohu 4900 CCD camera, using an automated filter wheel coupled to a Zeiss Axioplan fluorescence microscope and a CytoVision system. Gene amplification was scored when a minimum of 60 cancer cell nuclei exhibited a ratio ERBB2\u2009/\u2009CEP17 \u22652, or when an ERBB2 signal cluster was observed. Categorical data were analysed using the \u03c72 test. For parametric data, Student's t-test was used when comparing two means. Survival curves were calculated according to the Kaplan\u2013Meier method. Cases lost to follow-up and deaths caused by reasons other than gastric cancer were censored during survival analysis. The significance of differences between survival curves was determined using the log-rank or Breslow's tests. All statistical analyses were conducted using SPSS v.15. The ERBB2 protein status was determined by IHC for the 463 gastric carcinoma tissues. In situ hybridisation analysis was performed in all cases (n=43) in which IHC showed complete membrane immunostaining (2+ and 3+). In addition, 25 cases that were regarded as negative for ERBB2 overexpression by IHC were also analysed by FISH. Gene amplification was detected in 38 gastric carcinomas. Amplification was most frequent in intestinal-type carcinomas, but this genetic alteration was also observed in diffuse/isolated cells, and solid and mixed carcinomas. Two mixed carcinomas showed ERBB2 amplification and overexpression in the two histological components. Venous invasion, assessed through orcein staining, was not associated with ERBB2 amplification. No differences were observed between ERBB2-amplified and ERBB2 non-amplified cases in terms of age, gender, type of surgery, and clinical stage. 
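The IHC scoring system and the FISH amplification rule in the Methods can be restated as small functions; this is an illustrative sketch only (actual scoring is done by a pathologist on the slide, and a reported ERBB2 signal cluster would also qualify as amplification but is not modelled here):

```python
# IHC score from the rules above: 0 if <10% of cells stain; 1+ for incomplete
# membrane staining in >10% of cells; 2+ for weak/moderate complete staining;
# 3+ for strong complete staining.
def ihc_score(pct_stained, membrane_complete, intensity):
    # intensity: "weak", "moderate", or "strong" (relevant for complete staining)
    if pct_stained < 10:
        return "0"
    if not membrane_complete:
        return "1+"
    if intensity in ("weak", "moderate"):
        return "2+"
    return "3+"

# FISH rule: amplification is called when >=60 evaluable cancer cell nuclei
# are scored and the summed ERBB2/CEP17 signal ratio is >=2.
def fish_amplified(erbb2_signals, cep17_signals, n_nuclei):
    if n_nuclei < 60:
        return None                      # not evaluable
    return erbb2_signals / cep17_signals >= 2.0

print(ihc_score(60, True, "strong"))   # 3+
print(ihc_score(40, True, "weak"))     # 2+
print(ihc_score(25, False, "weak"))    # 1+
print(ihc_score(5, True, "strong"))    # 0
print(fish_amplified(480, 120, 60))    # True  (ratio 4.0)
print(fish_amplified(130, 120, 60))    # False (ratio < 2)
```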
Survival analysis was performed on 256 patients, including all 38 who showed ERBB2 amplification. Patients with ERBB2 amplification had, in general, worse 10-year survival rates than those without this genetic alteration. Differences in survival were more evident when we compared similar subgroups of patients. Patients with expansive gastric carcinoma and ERBB2 amplification had a statistically significant worse survival than those without this genetic alteration (P=0.011). No such difference was seen in patients with infiltrative gastric carcinoma (P=0.863). Among patients with no lymph node metastases, those with ERBB2-amplified carcinomas had a trend toward worse survival when compared with those without this genetic alteration (P=0.085). Accurate assessment of ERBB2 status is essential for efficient selection of patients who might benefit from targeted therapy with trastuzumab or other drugs targeting this surface receptor. This therapeutic option proved useful in extending the survival of breast cancer patients, particularly when selected by FISH. The patients in whom ERBB2 amplification was found (n=38), as well as 218 ERBB2-negative patients, were selected for survival analysis. Using clinicopathological and follow-up data from these 256 patients, we investigated whether ERBB2 amplification is a prognostic factor in different subgroups of gastric cancer patients. Given the nature of the biological material used in this study, the techniques chosen to assess ERBB2 protein overexpression (IHC) and ERBB2 amplification (FISH) are the most appropriate to ensure reliable and reproducible results. Moreover, both techniques allow for the specific detection of ERBB2 alteration in individual cells, while maintaining critical architectural tissue information. The overexpression (9.3%) and amplification (8.2%) frequencies observed in the current study are within the range reported in earlier reports. 
There was a perfect correlation between IHC and FISH findings in cases with strong (3+) complete membrane staining (100%), as well as in those scored as 0. CDH1 mutations in diffuse gastric carcinomas , as reported earlier in similar studies , which could be explained by the patient heterogeneity regarding histological and clinical characteristics. According to ERBB2-amplifying expansive carcinomas have worse prognosis than those with the same growth pattern lacking that genetic alteration (P=0.011), which we did not observe in infiltrating gastric carcinomas (P=0.863). Recent data published by ERBB2 amplification in carcinomas with different growth patterns. These authors suggested that ERBB2 overexpression promotes increased cell migration but has minimal effect on cell proliferation in cells stimulated by epidermal growth factor or heregulin (ERBB2 amplification could increase cell migration in expansive carcinomas, whereas infiltrative carcinomas, which already have a strong invasive potential, do not acquire additional advantage from ERBB2 amplification. A similar reasoning may explain our observation of a trend to worse 10-year survival among ERBB2-amplified node-negative gastric carcinomas, something that was also reported by others in breast carcinomas (Our findings indicate that the influence of ERBB2 amplification and overexpression, we conclude that ERBB2 status may have clinical significance in subsets of patients and that further studies are warranted to evaluate whether gastric cancer patients whose tumours present ERBB2 amplification/overexpression may benefit from the therapy targeting this surface receptor.By studying the largest series of gastric cancer patients so far for"}
+{"text": "The aim of the present study is to investigate the role and signal pathway of c-erbB2 in onset of rat primordial follicle development.The expression of c-erbB2 mRNA and protein in neonatal ovaries cultured 4 and 8 days with/without epidermal growth factor (EGF) were examined by in situ hybridization, RT-PCR and western blot. The function of c-erbB2 in the primordial folliculogenesis was abolished by small interfering RNA transfection. Furthermore, MAPK inhibitor PD98059 and PKC inhibitor calphostin were used to explore the possible signaling pathway of c-erbB2 in primordial folliculogenesis.The results showed that c-erbB2 mRNA was expressed in ooplasm and the expression of c-erbB2 decreased after transfection with c-erbB2 siRNA. Treatment with EGF at 50 ng/ml significantly increased c-erbB2 expression and primary and secondary follicle formation in ovaries. However, this augmenting effect was remarkably inhibited by c-erbB2 siRNA transfection. Furthermore, folliculogenesis offset was blocked by calphostin (5 \u00d7 10(-4) mmol/L) and PD98059 (5 \u00d7 10(-2) mmol/L), but both did not down-regulate c-erbB2 expression. In contrast, the expressions of p-ERK and p-PKC were decreased obviously by c-erbB2 siRNA transfection.c-erbB2 initiates rat primordial follicle growth via PKC and MAPK pathways, suggesting an important role of c-erbB2 in rat primordial follicle initiation and development. Folliculogenesis is a complex process consisting of sequential and ordered follicular development and growth. Although much is known about the events and regulation of the later stages of ovarian follicular development, the early follicular development is very poorly understood. More recently, attention has focused on regulation of the initiation of follicular growth (follicle activation) ,2. 
The ic-erbB2, a member of the EGF receptor family, encoding a transmembrane EGF receptor [c-erbB2 is also reported as a marker for chemosensitivity and prognosis of breast and ovarian cancer [There is accumulating evidence implicating EGF as a key regulator of primordial follicle development in mammals. EGF has been shown, as mitogen for cultured granulosa cells, to stimulate oocyte growth during the primordial to primary follicle transition in vitro ,13, and receptor ,19, is ereceptor ,21. c-ern cancer ,23.c-erbB2 on oocyte maturation and found that c-erbB2 induced oocyte maturation via activation of mitogen-activated protein kinase (MAPK) [c-erbB2 mRNA and protein translation and investigated the role and signaling pathway of c-erbB2 in primordial follicle development. In addition, we explore the molecular mechanism of EGF effect on primordial folliculogenesis.Recently, we focused on characterizing the effect of e (MAPK) . MAPKs aAnimal use was approved by the Committee of Nanchang University for Animal Research. Sprague Dawley rats were used for all the experiments. EGF, PD98059 (a MAPK inhibitor) and calphostin (a PKC inhibitor) were purchased from Sigma (St. Louis MO).-3 mm), and were stained with hematoxylin and eosin. The number of follicles at each developmental stage was counted in two serial sections from the largest cross-section through the center of the ovary [2 in 4-well plates , ovaries were randomly assigned to treatment groups with 1-3 ovaries per well. The medium was changed every 2 days. During organ culture, ovaries were treated with EGF (50 ng/ml) and c-erbB2 small interfering RNA alone or in combinations. In addition, ovaries were challenged with PD98059 (5 \u00d7 10-2 mmol/L) or calphostin (5 \u00d7 10-4 mmol/L).Ovaries from 2-day-old rats were collected fresh or cultured for 4 and 8 days, with 20 ovaries in each group. 
Fresh ovaries were fixed in Bouin's solution for 1-2 h, embedded in paraffin, and sectioned. Localization of c-erbB2 mRNA was assayed by in situ hybridization with the following oligonucleotide probes: 1) 5'-GGAAGGACGTCTTCCGCAAGAATAACCAACTGGCT-3'; 2) 5'-GCTTTGTACACACTGTACCTTGGGACCAGCTCTTC-3'; 3) 5'-CGGACCTCTCCTACATGCCCATCTGGAAGTACCCG-3'. Before hybridization, the liquids and containers were strictly treated with 0.1% DEPC. Slides were deparaffinized, treated with 3% H2O2, subjected to enzymatic digestion with pepsin for 2-3 min, and then incubated in pre-hybridization solution at 37\u00b0C for 2 h. After discarding the pre-hybridization solution, the slides were incubated overnight in hybridization solution, with the specimens covered by coverslips designed for in situ hybridization. The next morning the coverslips were removed and the slides washed three times in 2 \u00d7 SSC, 0.5 \u00d7 SSC, and 0.2 \u00d7 SSC, and then incubated with the incubating solution at 37\u00b0C for 30 min. Slides were exposed to biotinylated mouse anti-digoxigenin IgG for 60 min. Finally, the immunoreactions were detected using the SABC (Strept-Avidin-Biotin Complex) system. Slides were counterstained with haematoxylin before observation. As a negative control, pre-hybridization solution without probe was used in place of the probe-containing hybridization solution. Expression of c-erbB2 mRNA was assayed by RT-PCR. Ovaries from the same culture well were pooled to make a single RNA sample. RNA was extracted using the Trizol reagent. Total RNA from each sample was reverse transcribed into cDNA using a standard oligo-dT RT protocol. cDNA samples were used as template for polymerase chain reaction (PCR) analysis. The 2 \u00d7 EasyTaq PCR Supermix kit (TRansGen Biotec) was used according to the manufacturer's instructions. The c-erbB2 primers were forward: 5'-CAGTGTGTCAACTGCAGTCA-3', reverse: 5'-CAGGAGTGGGTGCAGTTGAT-3'; the housekeeping reference gene GAPDH primers were forward: 5'-GCAAGTTCAACGGCACAG-3', reverse: 5'-AGGTGGAAGAATGGGAGTTGCT-3'. 
The protocol was 94\u00b0C for 4 min, then 35 cycles of 95\u00b0C for 15 sec, 55\u00b0C for 60 sec and 72\u00b0C for 2 min. Detection data were analyzed and c-erbB2 mRNA levels were normalized to GAPDH mRNA levels. The identities of the PCR products were confirmed by direct sequencing. For western blotting, tissue protein extracts were electrophoretically separated under reducing conditions using NuPAGE 4-12% Bis-Tris gels. Standard Mark (Invitrogen) was used as the molecular weight standard. Proteins were then electrotransferred to nitrocellulose membranes and the immunoblots were subsequently blocked for 2 h at room temperature in TBST (TBS containing 0.1% Tween 20) containing 5% nonfat dry milk. The membranes were incubated overnight at 4\u00b0C with antibodies against PCNA, ErbB2, p-ERK1/2, p-PKC or \u03b2-actin (Sigma). The \u03b2-actin bands were used as an internal control for equal loading. After rinsing with TBST, the membranes were incubated for 30 min at room temperature with horseradish peroxidase-conjugated anti-rabbit or anti-mouse secondary antibodies. Finally, the membranes were stained with DAB according to the manufacturer's instructions and analyzed with a gel image analysis system. The siRNA targeted the c-erbB2 gene (NM_017003). The c-erbB2 siRNA was as follows: sense, 5'-CAGUACCUUCUACCGUUCAtt-3'; antisense, 5'-UGAACGGUAGAAGGUACUGtt-3'. Transfection followed the manufacturer's instructions. Briefly, 3 \u00d7 10^-3 ml of 2 \u00d7 10^-2 mM siRNA and 2 \u00d7 10^-3 ml of liposomes (METAFECTENE) were each added to 5 \u00d7 10^-2 ml of serum- and antibiotic-free medium, and the two solutions were combined without any mixing procedures and incubated at room temperature for 15-20 min. After incubation, siRNA-lipid complexes were added to culture flasks (containing 0.5 ml of medium and the ovaries), which were gently swirled and incubated at 37\u00b0C in a CO2 incubator. The final siRNA concentration for transfection was 0.1 mmol/L. 
Ovaries were cultured with or without 0.1 mmol/L targeting siRNA for 12 h. After 12 h of transfection, the medium was replaced with fresh medium containing no siRNA, and ovaries were cultured for a further 24 h and then collected to detect gene expression and protein translation by RT-PCR and western blot. Ovaries without transfection were used as the control, and the group transfected with a non-targeting siRNA served as the negative control. In addition, ovaries were processed for morphometric assessment of the development of primordial follicles. The EASY siRNA kit was purchased from Shanghai Chemical Technology Co., Ltd. Each experiment was repeated three times. All data were presented as means \u00b1 SEM and analyzed by ANOVA and Duncan's new multiple range tests; p < 0.05 was considered a significant difference. To examine the expression of c-erbB2, in situ hybridization and RT-PCR were performed. Hybridization histochemistry demonstrated that c-erbB2 mRNA was expressed in the ooplasm of primordial follicles from 2-day postnatal ovaries through ovaries cultured for 8 days. Moreover, c-erbB2 mRNA expression increased with prolonged culture, especially in proliferating cumulus cells of cultured ovaries. To investigate more direct actions of EGF, ovaries were incubated in the absence or presence of EGF before RNA collection and analysis. After treatment with EGF (50 ng/ml), the ovaries showed more intense labeling for c-erbB2 mRNA than the control, and the RT-PCR results were consistent with the in situ hybridization analysis. To assess the growth of primordial follicles, the expression of proliferating cell nuclear antigen (PCNA) protein was detected by western blotting analysis. PCNA was expressed in the rat ovary, with a positive band in most of the samples. Differences in band intensity between groups of ovaries were observed, depending on the number of days in culture. 
Our data indicated that PCNA protein levels increased with the days in culture, and EGF further enhanced PCNA protein levels by promoting primordial follicle development. To determine whether the c-erbB2 pathway was involved in the initiation of primordial follicle growth, we synthesized in vitro three siRNAs targeting the c-erbB2 mRNA and transfected them into newborn rat ovaries to examine the effect of c-erbB2 on primordial follicle development. The siRNA with the maximal effect was used in the present study (data not shown). The specificity of the c-erbB2 siRNA effect was verified by examining the levels of c-erbB2 mRNA in ovaries exposed to c-erbB2 siRNA. Although nontargeting control siRNA did not affect the basal transcript level of the gene, c-erbB2 siRNA specifically and appreciably knocked down the levels of c-erbB2 mRNA in ovaries cultured for 4 days. Meanwhile, ErbB2 protein expression was also reduced. siRNA treatment significantly inhibited the growth of primordial follicles and reduced the percentage of secondary follicles. The highest percentage of secondary follicles was observed after 8 days of culture with 50 ng/ml EGF. However, the EGF-stimulated increase of secondary follicles was markedly inhibited by concurrent treatment with siRNA. To investigate the signaling pathway of c-erbB2, siRNA was transfected into the cultured neonatal rat ovaries in vitro by liposome. After 8 days of culture, western blot analysis was performed to measure the expression of ErbB2, p-ERK and p-PKC protein after c-erbB2 siRNA transfection. As shown in Fig. 2, p-ERK and p-PKC protein were remarkably inhibited by c-erbB2 siRNA, compared with the control. The effects of PD98059 and calphostin on c-erbB2 mRNA were examined by RT-PCR. The results showed that PD98059 and calphostin did not significantly affect the expression of c-erbB2 mRNA. 
When some follicles leave the resting pool and start the initiation of follicular growth (follicle activation), the granulosa cells (GC) become cuboidal and begin to express markers of cell proliferation, such as PCNA. EGF is essential to initiate the growth of primordial follicles. In the present study, we examined the expression of c-erbB2 during primordial folliculogenesis and investigated the influence of EGF on c-erbB2 expression, as well as the effects of c-erbB2 down-regulation on the initiation of primordial follicle growth and on the activating role of EGF. The ErbB2 protein functions as an epidermal growth factor receptor (EGFR) but has no specific ligand of its own. ErbB receptors have distinct signaling properties depending on their dimerization; ErbB2, the preferred heterodimerization partner of all ErbB receptors, is a mediator of lateral signaling. We investigated c-erbB2 and the MAPK and PKC signaling pathways during primordial folliculogenesis. Our study revealed that c-erbB2 mRNA was expressed in oocytes of primordial follicles and also appeared in cuboidal granulosa cells after the initiation of follicular growth. The expression of c-erbB2 mRNA increased in proliferating multilayer granulosa cells after prolonged culture. EGF promoted PCNA protein expression and follicular growth by initiating primordial follicle development. In addition, EGF promoted the expression of c-erbB2 mRNA. Therefore, we conjecture that EGF and c-erbB2 might be involved in the onset of primordial follicle development. To further examine the role of c-erbB2 during primordial folliculogenesis, we used the synthetic siRNA for c-erbB2 to transfect ovarian cells in organ culture and observed the initiation of primordial follicle growth after inducing c-erbB2 gene silencing. 
In the current experiment, most of the primordial follicles in the control group developed into primary follicles, whereas the numbers of primary and secondary follicles were significantly decreased by c-erbB2 siRNA. Furthermore, c-erbB2 siRNA blocked the promoting effect of EGF on the initiation of primordial follicle growth. ErbB2, an orphan receptor tyrosine kinase that can dimerize with other ligand-activated members of the EGF receptor family, might be a selective marker for the initiation of follicular growth. We observed that c-erbB2 siRNA inhibited the expression of ErbB2 protein. These results suggest that c-erbB2 plays an important role in the initiation of primordial follicle growth and mediates the regulating role of EGF as a key signal molecule. The downstream action of c-erbB2 during primordial folliculogenesis is still unclear. Therefore, we investigated the MAPK and PKC pathways as possible mediators of c-erbB2 using calphostin and PD98059. Neither inhibitor changed the expression of c-erbB2 mRNA, while the expression levels of PKC and MAPK protein were significantly decreased by c-erbB2 siRNA transfection in primordial follicles. These results indicated that c-erbB2 might be an upstream activator of MAPK and PKC, which regulated the initiation of primordial follicle growth at least in part via the activation of the MAPK and PKC signal pathways. A variety of signaling pathways, including the MAPK and PKC regulating systems, are involved in the initiation of the growth of primordial follicles. c-erbB2 might have roles in the growth of primordial follicles beyond that of mediating EGF signaling. It also might regulate the proliferation of granulosa cells and cumulus cells, which have close signaling communication with oocytes, to govern the initiation of follicular growth, development, and steroidogenesis. 
Further research on c-erbB2 functions may provide novel information for understanding the mechanisms of follicular initiation and development. A complex signaling network composed of a variety of autocrine, paracrine and endocrine factors regulates the growth of primordial follicles via intercellular communication, and it has been demonstrated that the growth of primordial follicles is associated with the precise spatiotemporal expression of multiple genes and interactions between these genes. In conclusion, we showed that EGF promoted the initiation of primordial follicle development and the expression of c-erbB2 in ovaries, whereas the promoting effect of EGF was blocked by c-erbB2 siRNA transfection. In addition, the initiation of primordial follicle growth was inhibited by MAPK or PKC inhibition. The expression of ErbB2, p-ERK and p-PKC protein and primordial follicle development were inhibited by c-erbB2 siRNA transfection. These results indicated that c-erbB2 played an important role in primordial follicle initiation and development and that the effect of c-erbB2 might be mediated by a mechanism involving the PKC and MAPK pathways. The authors declare that they have no competing interests. ZLP carried out all the experiments. ZDL, HJ, XLQ, XAX, DXY and TDF performed statistical analysis and drafted the paper. ZYH designed the study and amended the paper. All authors read and approved the final manuscript."}
+{"text": "Some of the Census Enumeration Areas' (CEA) information may help in planning the sample of population studies, but it can also be used for some analyses that require information that is more difficult to obtain at the individual or household level, such as income. This paper verifies whether the income information of CEA can be used as a proxy for household income in a household survey. A population-based survey conducted from January to December 2003 obtained data from a probabilistic sample of 1,734 households of Niter\u00f3i, Rio de Janeiro, Brazil. Uniform semi-association models were adjusted in order to obtain information about the agreement/disagreement structure of the data. The distribution of nutritional status categories of the population of Niter\u00f3i according to income quintiles was obtained using both CEA- and household-level income measures and then compared using Wald statistics for homogeneity. Body mass index was calculated using body mass and stature data measured in the households and then used to define nutritional status categories according to the World Health Organization. All estimates and statistics were calculated accounting for the structural information of the sample design, and a significance level lower than 5% was adopted. The classification of households in the quintiles of household income was associated with the classification of these households in the quintiles of CEA income. 
The distribution of the nutritional status categories in all income quintiles did not differ significantly according to the source of income information (household or CEA) used in the definition of quintiles. The structure of agreement/disagreement between quintiles of the household's monthly per capita income and quintiles of the head-of-household's mean nominal monthly income of the CEA, as well as the results produced by these measures when they were associated with the nutritional status of the population, showed that the CEA's income information can be used when income information at the individual or household level is not available. The place of health on the international agenda for development has been broadened. For these reasons, it is common for population surveys to collect socioeconomic information, either because the purpose is exploratory or descriptive (and this information becomes the main focus) or because socioeconomic information is to be associated with outcomes or other variables of interest. Income and education are the variables most used to characterize and/or discriminate among socioeconomic groups. However, the collection of this information, particularly income, is sometimes difficult and can be influenced by other factors in population-based studies. These interferences may result in either total failure to obtain it or misreporting (under- or overestimation). In Brazil, the Census Enumeration Areas (CEA) are used to collect the data of the Brazilian Demographic Census, but they are also used as conglomerates of households for other population-based surveys. They are defined as contiguous groups of approximately 300 households, respecting administrative and political boundaries and identified by stable and easily located reference points. 
Although the use of this kind of information would be especially useful in developing countries, the few available studies in this area found in the literature were conducted exclusively in high-income countries in North America or Europe and in Australia. Additionally, no study has empirically compared the trade-offs in terms of cost savings, potential bias or loss of accuracy due to the use of area-level instead of individual- or household-level information. It has been suggested that the census-aggregated information is complementary because it may have a different construct meaning, depending on how it is defined in association with the health outcomes. The gap between the year of the census (every 10 years in Brazil) and the year of a given survey may play a crucial role in the socioeconomic characteristics of the population. Additionally, the fact that some countries' economic growth may be stationary, or that there is very discrete social mobility, may facilitate the comparisons because there may not be expressive changes in family or individual income or socioeconomic status between the year of the census and the survey. On the other hand, if the country's economic growth is reflected in individual and family income, one may not be able to use the census information. The purpose of the present study was to assess the validity of household income data from CEA to represent household income obtained in a household survey. In practical terms, it sought to verify whether the CEA income information could be used as a proxy for household income in a household survey conducted to assess the nutritional status of the population of Niter\u00f3i, a city in the state of Rio de Janeiro, Brazil. The Nutrition, Physical Activity and Health Survey (Pesquisa de Nutri\u00e7\u00e3o, Atividade F\u00edsica e Sa\u00fade - PNAFS) was the first household survey conducted to assess the nutritional status and health conditions of adolescents and adults living in Niter\u00f3i, Rio de Janeiro, Brazil. 
Data collection was carried out between January and December 2003. Niter\u00f3i is located in the metropolitan region of Rio de Janeiro and had 459,451 inhabitants in 2000, according to the last Brazilian census. In the first stage, 110 CEA were selected systematically, with probabilities proportional to the number of permanent private households. Prior to selection, the CEA were ordered from lowest to highest according to the head-of-household's mean nominal monthly income, thus implicitly stratifying the CEA by mean income and ensuring the selection of CEA from all income levels. In the second stage, 16 households were selected in each CEA with equal probability, using an inverse sampling procedure. The sample weights were calculated as the product of the inverse selection probabilities in each stage, using the estimator proposed by Haldane. The household natural weight (Wij) was multiplied by a calibration factor (gij), providing the household calibrated weight, where i represents the index of the selected CEA, j the index of the selected household and d the 14 post-strata domains, as indicated below. The Generalized Regression Estimator proposed by Deville & S\u00e4rndal was used to obtain gij, where qij is a constant usually defined as 1, xij represents the vector of auxiliary variables, and tx denotes the vector of known population totals. The calibration post-strata were defined using the variables age and sex. The combination of the two categories of sex and age -- categorized as seven brackets: 10-19.9; 20-29.9; 30-39.9; 40-49.9; 50-59.9; 60-69.9; and 70 years or more -- resulted in 14 post-strata (2 sexes \u00d7 7 age brackets). For the calibration of the sampling weights, the household natural weight served as the starting weight. Uniform semi-association models of the Poisson family with a log link function, which consider the ordinal structure of the variables' categories, were adjusted to describe the agreement/disagreement structure of the data. 
Three components of the structure of agreement and disagreement compose this model: (1) the agreement at random; (2) the agreement due to the association between classifications; and (3) the agreement after eliminating the effects of the agreement at random and the association between variables. Despite its extensive use, Cohen's Kappa index does not provide this kind of information and cannot be used to analyze ordinal scale categories. The analyses were also performed separately using the information on household income from (1) male-headed and (2) female-headed households. This was motivated by the hypothesis that when the woman is the head of the household, she may not know exactly her spouse's income, and vice versa. Therefore, household income might be estimated with different errors depending on whether the head of the household knows the spouse's income. The adjusted model agreement grades were estimated for each cell in terms of the odds ratio (OR), using the measure \u03c4ij (where i indicates the line and j the column of the cell) as proposed by Darroch & McCloud. In addition to the adjustment of the model, Cohen's weighted Kappa, Kendall's coefficient of concordance, Krippendorff's alpha and Spearman's \u03c1 were also calculated. To illustrate the comparison between the two income measures applied to an epidemiologic study, the distribution of nutritional status categories of the population of Niter\u00f3i (\u2265 10 years of age) according to income quintiles (CEA and household) was obtained. To test the hypothesis that the distribution of the population by nutritional status categories according to the household income quintiles is equal to the distribution according to the quintiles constructed with the income of CEA, the Wald statistic for homogeneity based on the sampling design was used. Body mass and stature data were collected in the households and used to calculate the body mass index (BMI = body mass in kilograms divided by stature in squared meters) as described elsewhere. BMI values < 18.5 kg/m2, \u2265 25 kg/m2 and \u2265 30 kg/m2 were used to define the categories of low-BMI/underweight, overweight and obesity, respectively. 
The Institutional Review Board of the Sergio Arouca National School of Public Health of the Oswaldo Cruz Foundation approved all research procedures. All estimates and statistics were calculated using the calibrated weights, based on the structural information of the sample design, and a significance level lower than 5% was adopted. The analyses were conducted in the R language and environment, version 2.6.1. The parameters, expressed as the log of the expected frequency of being categorized in the first quintile by both household- and CEA-level income information, are presented in the Tables, together with the agreement grade estimated for each ij cell and its respective 95% confidence interval (95% CI). As an example of interpretation, a value of two in the matrix's diagonal represents that the OR of a household being categorized in the 1st quintile of income by the CEA income is two times greater. Adopting this measure, the values express the OR of the assessment measures being concordant rather than discordant. The agreement statistics were: Cohen's weighted Kappa = 0.49 (p < 0.001); Kendall's coefficient of concordance = 0.41 (p < 0.001); Krippendorff's alpha = 0.48; and Spearman's \u03c1 = 0.49 (p < 0.001). According to the agreement classifications more widely used, these values indicate moderate agreement. The results of studies that investigate the use of area-level socioeconomic information as a proxy for household or individual information are still controversial, regarding both the agreement of income measures and the results produced by each measure when related to an outcome. This is expected because the analysed outcomes, the methods employed in the definition of socioeconomic strata, and the partitioning criteria used to define the territories vary between studies. On one hand, the literature indicates that the information from both levels can be used without jeopardizing analyses of health inequities because they produce similar results. 
In the present study, the structure of agreement/disagreement between quintiles of household monthly per capita income and quintiles of the head-of-household's mean nominal monthly income of the CEA, as well as the results produced by these measures when they were associated with the nutritional status of the population of Niter\u00f3i, showed that the CEA's income information can be used when income information at the individual or household level is not available. The hypothesis that the sex of the head of the household would not influence the structure of agreement/disagreement of income categories could not be rejected. Other factors, such as race, may also play a role. It is also important to pay attention to the time elapsed between the assessments of the two sources of information. This is particularly important in countries undergoing fast economic growth or greater socioeconomic mobility. The survey used in the present analysis was conducted only three years after the 2000 Brazilian census. Additionally, it is important to note that the aggregated census information comes from individually collected information, which may raise the question of whether the individual information collected in the census is reliable. The census information cannot be regarded as a gold standard, but it is expected to constitute more robust information than that collected in surveys because there are many more quality control mechanisms, proportionally fewer missing values, higher trust in the interviewer as an employee of a known institution (the Census Bureau), and no variance due to sample design. Another issue that could be raised when dealing with aggregated data is that the income distribution within a given aggregated level may vary according to the distance from a predetermined center. 
However, the adjusted models have not taken into account the modifiable areal unit problem and ecological fallacies due to aggregation, a limitation of this analysis. Furthermore, the inference and conclusions of this study may not apply to different variables of interest, countries, sizes and boundaries of enumeration areas, and possibly survey designs. Therefore, comparisons by other studies should be made carefully, taking this limitation into account. It is remarkable that, until this paper, the few studies on this theme had been derived solely from high-income countries. The study indicates that the CEA's income information may be used as a proxy for household income in the absence of individual- or household-level information. The sex of the source of household income information did not influence the structure of agreement/disagreement of income categories. Additionally, the association between income quintiles and nutritional status was similar whether CEA- or household-level income measures were used. The authors declare that they have no competing interests. LAA and MTLV planned the research. MTLV designed the sample. FSG and MTLV calculated the natural and calibrated sampling weights and performed the statistical analysis. FSG wrote the first draft of the paper, which was revised and approved by the other authors."}
+{"text": "Fibrovascular polyps of the esophagus are rare benign lesions that arise from the cervical esophagus and can reach a very large size before they become symptomatic. Surgical excision is the treatment of choice, since endoscopic removal is not always feasible. We present this case in order to emphasize the significance of localizing, preoperatively, the exact origin of the pedicle when planning the surgical approach. We consider the accurate pre-operative assessment of the origin of the pedicle essential for the proper surgical treatment of such a polyp. In this respect, imaging provides important information concerning the exact location of the pedicle, the vascularity of the polyp and even tissue elements of the mass. Fibrovascular polyps (FVP) are rare, benign \u201ctumorlike\u201d lesions of the esophagus that usually remain asymptomatic. Symptoms are present when the polyp reaches a large size (resulting in their common appellation as \u201cgiant fibrovascular polyps\u201d) and include progressive dysphagia (more than 50% of the patients), odynophagia, respiratory symptoms and, most distinctively, regurgitation of a fleshy mass into the mouth, which can lead to subsequent aspiration and even life-threatening asphyxia secondary to mechanical obstruction of the larynx. Treatment consists of either endoscopic or surgical excision. If the stalk can be adequately visualized endoscopically, endoscopic ligation can be performed. The patient, a 62-year-old Caucasian male of Greek origin, was referred to our center due to progressive dysphagia for 2 years, an episode of regurgitation of a fleshy mass into the mouth and occasional attacks of dyspnea. A previous consultation with an Ear, Nose and Throat specialist had suggested a psychiatric evaluation. On admission to our hospital the patient underwent a radiographic study of the esophagus using barium as the contrast medium. 
The esophagogram demonstrated a contrast-filling defect extending from the cervical esophagus to the cardioesophageal junction. A mobile, elongated endoluminal polypoid mass was revealed during esophagoscopy, arising from the level of the upper esophageal sphincter and extending to just above the cardioesophageal junction. This soft tissue polypoid mass caused a marked dilatation of the proximal and mid esophagus. Endoscopic excision of the polyp was not attempted due to the inability to visualize the base of the polyp adequately, and the patient was therefore recommended for surgery. MRI of the neck and thorax demonstrated that the origin of the pedicle pointed to the right anterior mucosal wall of the cervical esophagus (Figure 3). Knowing preoperatively the site of origin of the polyp, a cervical incision was planned opposite to the origin. For this reason, through a left lateral cervical approach, a longitudinal esophagotomy, 5 cm in length, was performed on the left posterior esophageal wall. The mucosal origin of the stalk was completely visualized, resected and suture-closed. The mucosal defect was repaired with single interrupted absorbable stitches. The polyp was retracted and removed. The esophagotomy was sutured in a two-layered fashion. The dimensions of the polyp were 10.5 \u00d7 5.5 \u00d7 3.5 cm. A nasogastric feeding tube was introduced and left in place for 4 days. Histopathologic examination revealed that the specimen corresponded to a fibrovascular polyp. The patient had an uneventful recovery period. He has been followed up for 1 year postoperatively without any sign of recurrence. 
Because these lesions are pedunculated, they may have a spectacular clinical presentation, including regurgitation of a fleshy mass into the mouth. Usually these polyps arise from the cervical esophagus, inferior to the cricopharyngeal muscle at Laimer's triangle, which explains their tendency to prolapse into the mouth, causing the characteristic \u201cregurgitation of a fleshy mass\u201d. Their elongated, characteristic \u201csausage-like\u201d appearance is believed to be the result of traction during peristalsis and swallowing. Initial diagnosis in the majority of cases is made by barium esophagogram. Resection, in most cases, is advocated as soon as a large fibrovascular polyp is detected, to eliminate the potential risk of asphyxiation. Less usual indications for surgery include dysphagia and anemia due to gastrointestinal bleeding from the ulcerated tip of the polyp. Malignant transformation is extremely rare. Endoscopic removal of small FVPs seems feasible. Surgical excision is mandatory whenever the polyp reaches large dimensions and is performed, preferably, through a cervical esophagotomy. Surgical removal remains the treatment of choice. Since the pedicle has to be resected under direct vision, the incision needed to expose the esophagus has to be made opposite to the site of origin of the lesion. Making the esophagotomy on the side where the polyp originates can be disastrous, with unpleasant incidents such as severe hemorrhage and even inability to excise the polyp as a whole, leaving material that can recur. Consequently, knowing the exact site of origin of the pedicle of the FVP is extremely important when deciding to proceed to surgical removal of such a polyp. 
This knowledge can be provided preoperatively, most of the time, by modern imaging techniques. Today, planning the proper surgical approach for the resection of a giant fibrovascular polyp has an important ally in modern imaging, which can provide important information concerning the exact location of the pedicle."}
+{"text": "Interviews offer the potential for capturing experiences in great depth, particularly the experiences of organizations that may be under-represented in surveys. Only a small number of previous efforts have been made to describe the experiences of organizations that produce clinical practice guidelines (CPGs), undertake health technology assessments (HTAs), or directly support the use of research evidence in developing health policy. We conducted interviews in 25 organizations, of which 12 were GSUs. Using rigorous methods that are systematic and transparent (sometimes shortened to 'being evidence-based') was the most commonly cited strength among all organizations. GSUs more consistently described their close links with policymakers as a strength, whereas organizations producing CPGs, HTAs, or both had conflicting viewpoints about such close links. With few exceptions, all types of organizations tended to focus largely on weaknesses in implementation rather than strengths. The advice offered to those trying to establish similar organizations includes: 1) collaborate with other organizations; 2) establish strong links with policymakers and stakeholders; 3) be independent and manage conflicts of interest; 4) build capacity; 5) use good methods and be transparent; 6) start small and address important questions; and 7) be attentive to implementation considerations. The advice offered to the World Health Organization (WHO) and other international organizations and networks was to foster collaborations across organizations. The findings from our interview study, the most broadly based of its kind, extend to both CPG-producing organizations and GSUs the applicability of the messages arising from previous interview studies of HTA agencies, such as to collaborate with other organizations and to be attentive to implementation considerations. 
Our interview study also provides a rich description of organizations supporting the use of research evidence, which can be drawn upon by those establishing or leading similar organizations in low- and middle-income countries (LMICs). As we argued in the introductory article in the series, a review of the experiences of such organizations, especially those based in LMICs and that are in some way successful or innovative, can reduce the need to 'reinvent the wheel' and inform decisions about how best to organize support for evidence-informed health policy development processes in LMICs. We identified CPG-producing organizations, health technology assessment (HTA) agencies, and organizations that directly support the use of research evidence in developing health policy on an international, national, and state or provincial level, and selected for interview those that were: 1) able to provide rich descriptions of their processes and lessons learned; 2) particularly successful or innovative in one or more of the seven domains covered in the questionnaire; and 3) influential over time within their own jurisdiction in supporting the use of research evidence, or influential in the establishment or evolution of similar organizations in other jurisdictions. The first criterion was applied by one member of the study team (RM) based on his reading of the completed questionnaires. The second and third criteria were applied by three members of the study team based on their knowledge of and experience with these types of organizations. We developed the first draft of the semi-structured interview guide in parallel with the questionnaire as a mechanism to augment questions that could not, or could only partially, be addressed in the questionnaire. The guide contained 18 core questions, which were followed by organization-specific questions that arose based on responses provided in the questionnaire and by cross-cutting questions that addressed particular themes or hypotheses that emerged from the survey or earlier interviews. 
One member of the study team (RM) piloted the interview guide with four organizations, at least one of which was from each of the three categories. No significant changes were made after piloting. See the 'Additional file' for the interview guide. Detailed summaries of each interview were prepared by one member of the study team (RM) using both the audio tapes and notes taken during the interviews, and these detailed summaries were subsequently analyzed independently by two members of the study team. The detailed summaries were organized by question. During the analysis, the detailed summaries were first read separately and supplemented, where necessary, by listening to part or all of the corresponding audio tapes. Binary or categorical responses to more structured questions were counted when possible. Themes were identified from among responses to semi-structured questions using a constant comparative method of analysis. Then question- and theme-specific groupings of the detailed summaries were developed and read, and the themes were modified or amplified. Illustrative quotations were identified to supplement the narrative description of the themes. The principal investigator for the overall project (AO), who is based in Norway, confirmed that, in accordance with the country's Act on ethics and integrity in research, this study did not require ethics approval from one of the country's four regional committees for medical and health research ethics. We obtained verbal consent to participate in an interview. The nature of our request to participate in an interview made clear that we would be profiling particular organizations. We did not in any way indicate that we would treat interview data as confidential or that we would safeguard participants' anonymity. Nevertheless, we take care to ensure that no comments can be attributed to a single individual even if the organization about which an individual is speaking has been identified. 
We shared a report on our findings with participants, and none of them requested any changes to how we present the data. The director (or his or her nominee) was interviewed in 25 organizations, including five organizations that produce CPGs, three that produce HTAs, five that produce both CPGs and HTAs, and 12 GSUs. Six organizations were in Western Europe, five in North America, four in Asia, three in Latin America, two each in Africa, Eastern Europe, and the Middle East, and one in Australia. The organizations varied in size from a few people to 50. No organizations declined to participate in the interviews. See the additional file. The organizations employed a mix of models for producing outputs, with some undertaking some or all of the work internally and others commissioning some or all of the work externally. Seven organizations that produce CPGs, HTAs, or both commissioned little or no work, five commissioned some work (up to 25%), and one commissioned most of its work. Six GSUs commissioned little or no work, four commissioned some work, and the other two commissioned about half their work. There was substantial variation in the number and type of activities in which the organizations were involved. All but one of the CPG-producing organizations were involved only in producing CPGs, and the remaining organization was involved in the education of both physicians and consumers as well. Most (5 of 7) of the organizations that produce HTAs, or both CPGs and HTAs, reported producing systematic reviews as their major activity, while three reported undertaking economic analyses and dissemination activities as well. Other activities undertaken by organizations that produce HTAs, or both CPGs and HTAs, included horizon scanning, preparing policy papers, and conducting evaluations (one each). 
GSUs reported involvement in a variety of activities, including producing systematic reviews (n = 3), conducting policy analyses (n = 3), training and capacity building (n = 3), producing CPGs (n = 2), conducting evaluations (n = 2), conducting economic analyses (n = 2), conducting health systems research (n = 2), and undertaking consultations and communication activities (n = 2). All but one of the organizations producing CPGs, HTAs, or both used informal methods for setting priorities, whereas GSUs were more likely to respond to direct government requests. The exception among organizations producing CPGs, HTAs, or both used a scoring system; however, the organization's director added: 'Finally we ask: Is the technology compelling or not compelling? We find most decisions about prioritising are actually intuitive, so we have rolled this in. So, despite the scoring sheet, the most important decision-making about priorities for us is intuitive.' Among the organizations producing CPGs, HTAs, or both, one organization reported responding to government requests and four reported consulting with stakeholders. Other criteria that were considered include the frequency and severity of the problem, potential for improvement and cost of achieving the improvement, and avoiding duplication. About half of these organizations reported making decisions internally, and about the same proportion reported having a board or advisory group that sets priorities. Turning now to the GSUs, more than half of them (7 of 12) reported responding to requests for applications, two reported responding to perceived policy needs, and one reported making decisions through consultations involving staff and the Minister of Health. One had a board and one made five-year plans based on an external review. Organizations producing CPGs, HTAs, or both tended to conduct or use systematic reviews (12 of 13) and to have a manual that described the methods they use (11 of 13). 
Far fewer convened groups to develop CPGs or HTAs (5 of 13), took equity considerations into account (1 of 13), or had established a process for addressing conflicts of interest (1 of 13). Two organizations described primarily using secondary sources rather than conducting their own systematic reviews (see A.F.2). Only one of the five organizations that convened groups reported using a formal consensus method (the RAND method), and two of the other organizations described using some kind of interactive process with either clinicians or policymakers (see A.F.2). GSUs were less likely to conduct or use systematic reviews (3 of 12) or to have a manual describing their methods (4 of 12), and were more likely to report using non-systematic methods to review the literature (3 of 12). Several GSUs reported conducting economic analyses and using a variety of methods, including surveys, epidemiological studies, and qualitative studies. One GSU reported working with ethicists and addressing issues of equity (see A.F.2). Using rigorous methods that are systematic and transparent (sometimes shortened to 'being evidence-based') was the most commonly cited strength among all organizations. Several organizations that produce CPGs, HTAs, or both referred specifically to using 'Cochrane methods,' one noted their use of a hierarchy of outcomes, and another noted their use of the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system (see A.F.2). The weaknesses noted by most of these types of organizations were inadequate resources, more specifically insufficient numbers of skilled staff and time, together with using labour- and time-intensive processes that limit the number and quality of CPGs and HTAs that can be produced and updated. 
The GSUs, on the other hand, identified a range of different types of research or evaluation methods as additional strengths, including systematic reviews, measurement of health system performance, economic analyses, and surveys. Other strengths noted by GSUs included: having a small organization that can respond quickly, publishing drafts for public comment, maintaining close links with policymakers, and having independence and financial stability. The weaknesses that were identified by GSUs tended to be limitations of the methods used or how the methods were employed, including: not usually providing an exhaustive literature search or critical appraisal, 'just a systematic review often not being exactly what the audience wants,' use of casual 'vote counting' instead of a more rigorous approach to synthesizing research evidence, inaccuracies in long-term forecasting, and limitations in how health system performance is measured. GSUs also identified inadequate human resources and time as weaknesses.There was a great deal of variability both within and across CPG-producing organizations, HTA-producing organizations, and GSUs in who makes recommendations or policy decisions related to their products and the processes they use. For example, organizations producing CPGs, HTAs, or both in some jurisdictions have full responsibility for making policy decisions, whereas in other jurisdictions these decisions are made at the highest levels in the Ministry of Health. Two GSUs based outside of government acknowledged having little understanding of how policy decisions are made. Other GSUs based outside of government complained about the limited role of research evidence in policy decisions (see A.F.2). In contrast, none of the directors based in government spoke of the limited role of research evidence (see A.F.2). There was also variability in the perceived strengths and weaknesses of the processes that are used to make recommendations or policy decisions. 
Several directors referred to the explicit use of research evidence as a strength of the process and the time or capacity needed to produce recommendations as a weakness (see A.F.2). GSUs more consistently described their close links with policymakers as a strength, particularly those GSUs based in government, whereas organizations producing CPGs, HTAs, or both had conflicting viewpoints about such close links (see A.F.2). Two directors from organizations producing CPGs, HTAs, or both referred to the split between synthesizing the evidence and making a decision as a strength, whereas another director identified the involvement of stakeholders as a strength (see A.F.2). Another organization identified involvement of stakeholders as a weakness as well as a strength (see A.F.2). Two organizations producing CPGs, HTAs, or both noted their lack of influence as a weakness (see A.F.2). A lack of understanding of evidence-informed decision-making and the need for more education of and communication with policymakers were also noted (see A.F.2). Organizations sometimes mentioned the media as both a strength and a weakness in how recommendations or policy decisions related to their products are made (see A.F.2). Barriers to implementation that were cited included the lack of financial incentives for guideline adherence, practical difficulties in engaging health professionals (particularly those in rural areas), and the lack of funds to pay for effective (but expensive) technologies (see A.F.2). Most organizations argued that it is the clients who requested a CPG or HTA (the minister of health or, more generally, the department of health) who are responsible for implementing recommendations or policy decisions related to their products (see A.F.2). Nearly all GSUs viewed policy implementation as the government's responsibility, although a couple of directors suggested that individual physicians also have some responsibility. 
Some organizations noted that responsibility for implementation is frequently spread among several organizations or that it is not clear who is responsible for implementing policy decisions. All types of organizations tended to focus largely on weaknesses in implementation, with few exceptions (see A.F.2). One reason that was frequently cited for this shortfall was the existence of multiple actors and multiple decision-makers in implementation processes that can be quite decentralized (see A.F.2). Other reasons that were cited for inadequate implementation included the general lack of formal processes for implementation and the specific challenges associated with guideline implementation. For organizations producing CPGs, HTAs, or both, informal relationships with other HTA organizations (n = 3), the Cochrane Collaboration (n = 2), the International Network of Agencies for Health Technology Assessment (n = 1), opinion leaders (n = 1), the health services (n = 1), and the public (n = 1) were identified less frequently as important to their organization. For GSUs, informal relationships with academics (n = 6) and health professionals (n = 3) were identified less frequently as important to their organization than relationships with policymakers, and informal relationships with advocacy organizations, non-governmental organisations (NGOs), funders, industry, an HTA organization, and the World Health Organisation (WHO) (one each) were identified even less frequently as important to their organization. Two organizations reported only having formal organizational relationships, and occasionally personal relationships, but no informal organizational relationships. While nearly all of the organizations reported using personal communications with decision-makers, a few organizations reported having only ad hoc communication, communication through policy advisors only, or only informal or indirect communication. A few of the organizations considered themselves to be decision-makers, and several others were located within government. 
Many of the organizations based within government viewed their close links with policymakers as a strength (see A.F.2). Organizations based outside of government also viewed their close relationships with policymakers as a strength (see A.F.2). While informal relationships with policymakers were identified more frequently as important by GSUs (8 of 12) than by organizations producing CPGs, HTAs, or both (4 of 13), nearly all of the organizations reported using personal communications with decision-makers, particularly policymakers. For organizations producing CPGs, HTAs, or both, informal relationships with health professionals (8 of 13) and academics (5 of 13) were identified more frequently as important to their organization than relationships with policymakers, and informal relationships with other HTA organizations were identified less frequently. Health professionals, particularly those involved in the organizations' activities, were often identified as strong advocates. However, physicians, particularly older physicians, specialists, and experts could also be among the most vocal critics (see A.F.2). The department of health, as well as other regulatory bodies, health insurers, and local health authorities or managers were also frequently identified as strong advocates, both by people working inside government and by those working in organizations based outside of government (see A.F.2). Other strong advocates that were identified included satisfied clients, the mass media, speciality societies, and other researchers. The last three were also seen as critics in some jurisdictions or in some circumstances. The most commonly identified critics were drug companies, particularly when their products were not recommended, and more generally 'groups who don't like our findings; for example, manufacturers or pharmaceuticals' (see A.F.2). Both other stakeholders and competitors were also frequently cited as critics. Stakeholders were generally perceived as critics when a new technology was not recommended (see A.F.2). 
Several organizations also identified as critics those who thought the processes took too long and cost too much and those with different methodological viewpoints (see A.F.2). Most of the examples of success among organizations producing CPGs, HTAs, or both were occasions where there was a perception that clinicians adhered to the organization's recommendations or policymakers based their decisions (at least in part) on the work of the organization. Only one organization producing CPGs, HTAs, or both could not identify an example of success, but on the other hand only one organization cited data from an audit to support the perception that clinicians adhered to the organization's recommendations. In three of the examples of policymakers acting on the work of an organization, an intervention was recommended, and policymakers' subsequent support for the intervention was perceived as a success (see A.F.2). In another three of the examples of policymakers acting on the work of an organization, an intervention was not recommended and policymakers' subsequent lack of support for the intervention was perceived as a success. One director cited a Minister's decision not to start a screening program, and a second cited a Minister's decision not to fund an expensive new technology, despite lobbying. A third director cited the example of a decision not to fund a drug and argued that this decision had saved lives and money (see A.F.2). Two examples of success were drawn from the field of public health: one addressed smoking cessation, where success was attributed to good timing; the other addressed lowering the legal blood alcohol level for drivers. The examples of success among GSUs were more diverse and the pathway from research evidence to policy more complex. 
Several organizations did not identify any examples of success or failure, noting that their role is only to report the research evidence, and the decision about whether and how to act on the research evidence is best left to others. The examples of success again tended to represent occasions where policymakers based their decisions (at least in part) on the work of the organization. One director cited examples of savings and improved accessibility to effective drugs from using generic drugs and supporting local producers. Another director cited savings from the discounts that could be negotiated based on drug class reviews. Other domains where success had been achieved included evaluations of a national health reform, healthcare financing policies, implementation of a human resources policy leading to re-categorising health professionals, provision of funds by a donor agency to support local coordination of HIV programs, and a housing policy.The so-called failures typically involved the perception that clinicians were not adhering to the organization's recommendations, or policymakers were not basing their decisions (at least in part) on the work of the organization. Reasons ranged from insufficient awareness-raising among decision-makers to political lobbying by the patient groups, specialists, and companies directly affected by the decision. Often, the failures involved a technology not being recommended, but policymakers deciding to fund it anyway. However, one failure involved a technology being recommended but not being funded by government. Among the four examples of failures that pertained to broader health system policies, two recommendations were complex and a clear explanation was not offered as to why they were not acted upon (even though one would have saved the government money), one recommendation was likely not acted on because it was too broad, and one was likely not acted on due to political opposition. 
Several other 'problems' were noted as well, such as insufficient research evidence, use of an intervention beyond its recommended uses, and inadequate monitoring of adherence to guidelines through audit (see A.F.2). When asked about any other strengths and weaknesses in how the organizations are organized, directors repeated many of the same strengths that were described previously (e.g., independence, particularly from the pharmaceutical industry, close links to decision-makers, well trained and committed staff, use of rigorous methods, an interdisciplinary, collaborative approach, stakeholder involvement, and international collaboration), as well as many of the same weaknesses. The relatively small size of the organizations was viewed by many organizations either as a strength or as both a strength and a weakness (see A.F.2). The relatively small size of the organizations and the relatively low pay of those working in the organizations were viewed by some organizations as a weakness (see A.F.2). The advice offered to those trying to establish similar organizations can be grouped into seven main recommendations. Most directors emphasised collaboration as important both in establishing an organization and in the ongoing work of an organization (see A.F.2). Many directors, particularly those working in GSUs, strongly recommended that organizations 'establish links to policymakers' (see A.F.2). A number of directors from across all types of organizations also stressed the importance of involving stakeholders (see A.F.2). While many directors argued for establishing strong links with policymakers and involving stakeholders in the organization's work, a number of them highlighted the importance of being independent and managing conflicts of interest (see A.F.2). Many directors emphasised the challenge and the importance of recruiting or training multidisciplinary staff (see A.F.2). 
A couple of directors noted the importance of having a multidisciplinary team and, specifically in LMICs, thinking internationally (see A.F.2). Several directors, particularly those working in GSUs, emphasised the importance of leadership capacity (see A.F.2).Many directors stressed the importance of using good methods and being transparent (see A.F.2).A number of directors stressed the magnitude of the work involved, and hence the importance of starting small, having a clear audience and scope, and addressing important questions (see A.F.2). And while several directors pointed out the need to address important questions, no consistent advice emerged about how to approach the selection of questions (see A.F.2).Several directors noted the importance of implementation (see A.F.2). A number of directors who did not comment on implementation had made clear that implementation is not part of their organizations' work; however, some of these directors indicated that implementation considerations still inform their work even if responsibility for implementation lies elsewhere.Only a small number of directors provided comments about WHO's potential role. However, these comments almost always pertained to the role that WHO is or could be playing in fostering collaborations across organizations (see A.F.2).The organizations employed a mix of models for producing outputs \u2013 with some undertaking some or all of the work internally and others commissioning some or all of the work externally \u2013 and there was substantial variation in the number and type of activities in which the organizations were involved. All but one of the organizations producing CPGs, HTAs, or both used informal methods for setting priorities, whereas GSUs were more likely to respond directly to government requests. Organizations producing CPGs, HTAs, or both were much more likely than GSUs to conduct or use systematic reviews and to have a manual that described the methods they use. 
Using rigorous methods that are systematic and transparent (sometimes shortened to 'being evidence-based') was the most commonly cited strength among all organizations, whereas organizations producing CPGs, HTAs, or both noted inadequate resources coupled with labour- and time-intensive processes as weaknesses, and GSUs noted limitations of the methods used, or of how the methods were employed, as weaknesses. Organizations could attract both advocates and critics on account of their overall approach (e.g., seen as threatening professional freedom, diminishing the role of expertise, creating funding pressures, or enhancing accountability), their specific approach, and their specific recommendations on any given topic. There was a great deal of variability in who makes recommendations or policy decisions related to the organizations' products, the processes they use, and the perceived strengths and weaknesses in these processes. Several organizations referred to the explicit use of research evidence as a strength of the processes, and the time or capacity needed to produce recommendations as a weakness. GSUs more consistently described their close links with policymakers as a strength, particularly those GSUs based in government, whereas organizations producing CPGs, HTAs, or both had conflicting viewpoints about such close links. Most organizations argued that it is the clients who requested a CPG or HTA (the minister of health or, more generally, the department of health) who are responsible for implementing recommendations or policy decisions related to their products. With few exceptions, all types of organizations tended to focus largely on weaknesses in implementation, rather than strengths. While informal relationships with policymakers were identified more frequently as important by GSUs than by organizations producing CPGs, HTAs, or both, nearly all of the organizations reported using personal communications with decision-makers, particularly policymakers, and many of the organizations viewed their close links with policymakers as a strength. 
While health professionals (particularly those involved in the organizations' activities) and policymakers were often identified as advocates, and drug companies, patient groups, and competitors were often identified as critics, particular sub-groups could be supportive or critical depending on their perception of the organizations' general focus and work. The examples of so-called success among GSUs were more diverse, and the pathway from research evidence to policy more complex. The so-called failures typically involved the perception that clinicians were not adhering to the organization's recommendations, or policymakers were not basing their decisions (at least in part) on the work of the organization. Reasons ranged from insufficient awareness-raising among decision-makers to political lobbying by the patient groups, specialists, and companies directly affected by the decision. The advice offered to those trying to establish similar organizations can be grouped into seven main recommendations: 1) collaborate with other organizations; 2) establish strong links with policymakers and involve stakeholders in the work; 3) be independent and manage conflicts of interest among those involved in the work; 4) build capacity among those working in the organization; 5) use good methods and be transparent in the work; 6) start small, have a clear audience and scope, and address important questions; and 7) be attentive to implementation considerations even if implementation is not part of the organization's remit. 
Only a small number of directors provided comments about WHO's potential role; however, these comments almost always pertained to the role that WHO is or could be playing in fostering collaborations across organizations. The interviews have three main strengths: 1) we drew on a regionally diverse project reference group to ensure that our draft protocol and interview guide were fit for purpose; 2) we interviewed roughly equal numbers of CPG- and HTA-producing organizations and GSUs; and 3) no organization declined to participate in the interviews. The interviews have three main weaknesses: 1) despite significant efforts to identify organizations in low- and middle-income countries, just under one-half (48%) of the organizations we interviewed were drawn from high-income countries; 2) despite efforts to ask questions in neutral ways, many organizations may have been motivated by a desire to tell us what they thought we wanted to hear (i.e., there may be a social desirability bias in their responses); and 3) given the nature of many of the structured questions posed and responses given, the analysis relied heavily on counting and hence could have missed subtleties in emphasis and inadvertent omissions of select points. Our findings on systematic reviews suggest that other international networks, such as the Cochrane Collaboration, have important roles to play both in conducting and keeping up-to-date systematic reviews that address important high-priority policy questions and in building capacity to undertake systematic reviews that address such questions. Given that timing or timeliness emerged as one of two factors that increase the prospects for research use in policymaking, and that the labour- and time-intensiveness of the processes used was a commonly cited weakness, research is needed to develop methods and organizational structures to respond rapidly to policymakers' questions. The authors declare that they have no financial competing interests. 
The study reported herein, which is the second phase of a larger three-phase study, is in turn part of a broader suite of projects undertaken to support the work of the World Health Organization (WHO) Advisory Committee on Health Research (ACHR). Both JL and AO are members of the ACHR. JL is also President of the ACHR for the Pan American Health Organization (PAHO). The Chair of the WHO ACHR, a member of the PAHO ACHR, and several WHO staff members were members of the project reference group and, as such, played an advisory role in study design. Two of these individuals provided feedback on the penultimate draft of the report on which the article is based. The authors had complete independence, however, in all final decisions about study design, in data collection, analysis and interpretation, in writing and revising the article, and in the decision to submit the manuscript for publication. JL participated in the design of the study, participated in analyzing the qualitative data, and drafted the article and the report on which it is based. AO conceived of the study, led its design and coordination, participated in analyzing the qualitative data, and contributed to drafting the article. RM participated in the design of the study, led the data collection and the analysis of the qualitative data, and contributed to drafting the article. EP contributed to data collection. All authors read and approved the final manuscript. Additional files: Interview guide for units participating in the telephone interviews. Qualitative data from the interviews."}
+{"text": "The pioneering ancestor of land plants that conquered terrestrial habitats around 500 million years ago had to face dramatic stresses including UV radiation, desiccation, and microbial attack. This drove a number of adaptations, among which the emergence of the phenylpropanoid pathway was crucial, leading to essential compounds such as flavonoids and lignin. However, the origin of this specific land plant secondary metabolism has not been clarified. We have performed an extensive analysis of the taxonomic distribution and phylogeny of phenylalanine ammonia lyase (PAL), which catalyses the first and essential step of the general phenylpropanoid pathway, leading from phenylalanine to p-coumaric acid and p-coumaroyl-CoA, the entry points of the flavonoid and lignin routes. We obtained robust evidence that the ancestor of land plants acquired a PAL via horizontal gene transfer (HGT) during symbioses with soil bacteria and fungi that are known to have been established very early during the first steps of land colonization. This horizontally acquired PAL then represented the basis for further development of the phenylpropanoid pathway and plant radiation in terrestrial environments. Our results highlight a possible crucial role of HGT from soil bacteria in the path leading to land colonization by plants and their subsequent evolution. The few functional characterizations of sediment/soil bacterial PAL suggest that the initial advantage of this horizontally acquired PAL in the ancestor of land plants might have been either defense against an already developed microbial community and/or protection against UV. This article was reviewed by Purificaci\u00f3n L\u00f3pez-Garc\u00eda, Janet Siefert, and Eugene Koonin. The appearance of land plants was a key step towards the development of modern terrestrial ecosystems. 
Fossil data indicate that the first land plants appeared around 500 million years ago, from a pioneer green algal ancestor probably related to Charales ,2. Early terrestrial environments were harsh. The ancestor of land plants that conquered emerged lands had to face important stresses including desiccation, UV radiation (no longer shielded by water), as well as attack by already diversified microbial soil communities ,3. The initial physiological advantage of phenolic compounds is not clear. In fact, flavonoids are not thought to have been immediately effective as UV protection before the emergence of complex structures allowing for their accumulation in large quantities, and it has been proposed that they were initially used as internal signaling molecules . Lignin-like compounds have been reported in the green alga Nitella and in bryophytes , early branching lineages of land plants that do not harbor a developed vascular system such as that found in Tracheophytes ,10. PAL enzymes have also been characterized in fungi such as Rhodotorula, but also Ascomycetes such as Aspergillus and Neurospora, where they participate in the catabolism of phenylalanine as a source of carbon and nitrogen. The phenylpropanoid pathway likely evolved progressively in land plants by the recruitment of enzymes from the primary metabolism (for a recent review see 4). However, the origin of PAL was a key event, since it provided the initial step from which the rest of the pathway was assembled. 
Indeed, PAL is a key regulator of the phenylpropanoid pathway -14. PAL has also been characterized in a few bacteria, for example in Streptomyces maritimus (Actinobacteria), where PAL is required to supply cinnamic acid for the production of benzoyl-CoA, the starter molecule for the biosynthesis of the bacteriostatic agent enterocin , in Photorhabdus luminescens (\u03b3-Proteobacteria), where PAL is essential for the production of a powerful stilbene antibiotic through yet unknown intermediate steps ,17, in Saccharotrix espanaensis, where PAL and TAL are used to produce the antibiotic saccharomicin , and in Rhodobacter, where they are involved in the synthesis of the chromophore of their photoactive yellow protein photoreceptor . The PAL of some plants and fungi also harbors a tyrosine ammonia lyase (TAL) activity that is responsible for the synthesis of p-coumaric acid directly from tyrosine, which in turn leads to the production of p-coumaroyl-CoA (Figure ). PAL is homologous to histidine ammonia lyase (HAL), which is involved in the catabolism of histidine and is widespread in prokaryotes and eukaryotes ,22. Given the clear importance of PAL in the emergence of the phenylpropanoid pathway and the adaptation of plants to land, we sought to get more insight into the origin of this enzyme by carrying out an extensive search of PAL/TAL/HAL homologues in current sequence databases and by analyzing their phylogeny. Based on preliminary exhaustive phylogenetic analyses, 160 representative sequences were chosen for final tree construction. Eukaryotic HAL homologues were found in Amoebozoa (Dictyostelium discoideum), Haptophytes (Emiliania huxleyi), Heterokonts, Excavates, and Metazoans . D. discoideum harbors two additional homologues: one is very divergent and could not be included in the analysis, while the other lies outside of the eukaryotic HAL cluster and close to the characterized PAL of P. 
luminescens ,24, indicating a distinct origin . In contrast to the wide distribution of eukaryotic HAL orthologues, the eukaryotic PAL cluster contains exclusively orthologues from plants and fungi but no other eukaryotic lineage, and these form two well-supported monophyletic sister groups (Figure ). PAL activity has been functionally characterized in several fungi (Amanita, Rhodotorula, Aspergillus) and it is thus likely that the other orthologues also have PAL activity, although more functional data are needed to verify this. We found PAL orthologues in all complete fungal genomes that are currently available (i.e. exclusively from Dikarya), with the exception of the late emerging lineages Saccharomycotina, Schizosaccharomycetes, and Cryptococcus . This may indicate that PAL was exchanged via HGT between these two phyla . Among the closest bacterial homologues are Nostoc (Cyanobacteria) and Methylobacterium sp. (\u03b1-Proteobacteria), a facultative methylotrophic pink pigmented relative of Rhizobiales, which harbors a bona fide HAL as well -34. The initial selective advantage of a PAL acquired via HGT from a bacterium is not entirely clear. PAL is a cytoplasmic enzyme and is not targeted to the chloroplast . Its direct products might have provided protection against UV radiation, for example being the precursors of a light capturing pigment such as in modern purple bacteria. Moreover, cinnamate and p-coumarate are the precursors of benzoic acid and salicylic acid, which are known defense compounds ,37. Finally, it would be interesting to know if fungi also use PAL for these purposes, and what are the corresponding mechanisms for UV shielding and antimicrobial defense in the green algae that are known to colonize soil habitats. 
To answer these questions, it will be important to investigate further the distribution of PAL enzymes in both bacteria and fungi, which may be more widespread than currently thought, as well as their role in still largely unexplored secondary metabolisms. Exhaustive Blast searches were carried out by using different HAL and PAL sequences as seeds on the non-redundant sequence database and on the EST database at NCBI, and on ongoing genome projects, including the Cyanidioschyzon merolae Genome Project web service . Based on exhaustive preliminary phylogenetic analyses, 160 representative taxa were chosen for final tree construction. From the global alignment, 369 unambiguously aligned amino acid positions were selected for analysis. Tree reconstruction was performed using the bayesian method implemented in MrBayes . Abbreviations: PAL: phenylalanine ammonia lyase; TAL: tyrosine ammonia lyase; HAL: histidine ammonia lyase; HGT: horizontal gene transfer; C4H: cinnamic acid 4-hydroxylase; 4CL: p-coumaroyl:CoA ligase. The authors declare that they have no competing interests. GE, MF, RF conceived the study, GE, MF and SG performed the analyses and all authors drafted the manuscript. All authors read and approved the final manuscript. This article presents an extensive molecular phylogenetic analysis of phenylalanine ammonia lyase (PAL), the enzyme catalyzing the first step of the phenylpropanoid pathway leading, in plants and some fungi, to the synthesis of flavonoid secondary metabolites and lignin monomers. The study also includes the related enzyme histidine ammonia lyase (HAL), widespread in the three domains of life. Since land plant and dikaryotic fungi PALs form two sister monophyletic clades clearly distinct from eukaryotic HAL and from their prokaryotic homologues, it is proposed that PAL was transferred horizontally from bacteria to land plants or to fungi and, subsequently, from land plants to fungi or vice versa. 
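The column-selection step described in the Methods above (369 unambiguously aligned positions retained from the global alignment) is typically automated as an alignment-column filter. The sketch below is a minimal, hypothetical gap-fraction filter; `select_columns` and its 0.2 default threshold are illustrative assumptions, not the authors' actual curation criterion.

```python
# Illustrative sketch (not the authors' procedure): keep only alignment
# columns whose fraction of gap characters is at or below a threshold.
def select_columns(alignment, max_gap_frac=0.2):
    # alignment: list of equal-length aligned sequences (strings, '-' = gap)
    n_seqs = len(alignment)
    length = len(alignment[0])
    kept = [j for j in range(length)
            if sum(seq[j] == '-' for seq in alignment) / n_seqs <= max_gap_frac]
    # Return the filtered alignment and the indices of the retained columns.
    return [''.join(seq[j] for j in kept) for seq in alignment], kept

# Toy 4-sequence alignment: column 2 (gap fraction 0.5) is discarded.
aln = ['MK-LV', 'MKALV', 'M--LV', 'MKGLV']
filtered, cols = select_columns(aln, max_gap_frac=0.25)
```

Real pipelines use dedicated tools for this step (and usually also score column conservation, not just gaps), but the principle is the same: positions whose homology is ambiguous are excluded before tree reconstruction.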
This is an interesting observation, well supported by the phylogenetic analysis presented, that leads the authors to hypothesize a key role of this enzyme for the adaptation of plants to land. I have two major comments. First, the hypothesis that a horizontal gene transfer of PAL to the land plant ancestor is at the origin of the phenylpropanoid metabolism and of their adaptation to terrestrial ecosystems is appealing. However, a single enzyme does not make a pathway and, in the absence of data about the remaining genes involved in phenylpropanoid metabolism, this idea remains hypothetical. In this sense, the title of the article (A horizontal gene transfer at the origin of phenylpropanoid metabolism: a key adaptation of plants to land) appears too conclusive. Have the authors tried to make preliminary phylogenetic analyses for other genes in the pathway or, at least, do they have an idea about their phylogenetic distribution? It would be interesting to compare the distribution of enzymes involved in flavonoid and lignin monomer biosynthesis with that of PAL. AU: Preliminary analyses indicate that these are large gene families that do not appear to show a pattern similar to PAL, supporting the idea that they were recruited from preexisting pathways and strengthening the importance of the HGT of PAL. My second comment relates to the primary selective advantage attributed to the acquisition of PAL from bacteria, which might have been the production of antimicrobial or pigmented metabolites that would allow the successful competition of land plants/fungi in soils or protection against UV light. 
Again, the idea is attractive but proving it would require, as a preliminary step, showing that the whole flavonoid biosynthesis pathway emerged prior to that of lignin monomer biosynthesis. If the latter appeared first, one could propose instead that the advantage of acquiring this pathway was to increase stiffness and develop the ability to construct rigid structures, an essential property of land plants and some stages of many fungal life cycles. Perhaps the authors can consider this possibility or discuss why they think it is unlikely. In addition, green algae, which also colonize soil surfaces, also have to compete with other members of the microbial community and to protect themselves from UV. They might have preferred to keep their own, non PAL-derived, protective systems against microbes and UV light. AU: We now clarify in the text that early branching land plant lineages harbor the first enzymes of the two main branches of the pathway leading to lignin monomers and flavonoids. Unfortunately, the unavailability of genomic data from earlier lineages prevents understanding, for the time being, which of the two branches emerged first. We now discuss briefly the production of lignin-like monomers in non-vascular early emerging land plants, where these are likely used as defense against either UV or microorganism attack. To our knowledge fungi consume lignin but do not produce it; they construct rigid structures by using chitin. We speculate that the initial selective advantage of PAL that would have led to the fixation of the HGT may have had to do with the use of its direct products, cinnamic acid and p-coumarate, both involved in antimicrobial or anti-UV functions in bacteria and possibly fungi. Moreover, cinnamate and p-coumarate are the precursors of benzoic acid and salicylic acid, which are known defense compounds. The remark on green algae colonizing soil habitats is very interesting. 
We now discuss it in the text. Alternatively, the authors might wish to consider the possibility that flavonoid synthesis did not confer a particularly efficient protection against microorganisms, but against metazoan grazers, which constitute indeed the major threat for land plants. AU: An interesting point. Although we do not address specifically the origin of flavonoid production (see above), coumarins have appetite-suppressing properties, suggesting that their widespread occurrence in plants, especially grasses, is due to their effect of reducing the impact of grazing animals. Thus an immediate advantage of PAL (TAL) might also have been defense against grazers. We now mention it in the text. This straightforward phylogenetic study of Phenylalanine Ammonia Lyase (PAL), the first committed enzyme of the phenylpropanoid pathway, reveals the monophyly of PALs from land plants and dikaryal fungi, with this eukaryotic branch embedded within a highly diverse bacterial tree. The interpretation of this result favored by the authors is that the ancestor of dikaryal fungi acquired the PAL gene from a soil bacterium and passed the gene to the ancestor of land plants. This conclusion implies a key role of HGT in the land colonization by plants. I think this study highlights both the huge advantages and the considerable headaches that are associated with having numerous genome sequences from all walks of life. The conclusion made by the authors is, of course, interesting and plausible but it is by no means the only one that is possible to make from the tree shown in Figure . AU: It is indeed hard to tell, but we think that one HGT is a much more parsimonious scenario than massive independent losses in all eukaryotes apart from land plants and fungi. 
We now explain it more clearly in the text. ii) HGT from the chloroplast to the common ancestor of all Plantae, with subsequent loss in algae, followed by HGT to dikaryal fungi; in the manuscript, this scenario is also dismissed as a highly unlikely one but, in this case, I am not sure I agree as the bacterial sister group of the eukaryotic PALs does include some cyanobacteria, and a loss of the gene in 2\u20133 algal lineages is not unlikely; AU: At least two reasons make us think that this scenario is unlikely. First, even if Nostoc is considered the extant cyanobacterium most similar to the first photosynthetic endosymbiont, only three cyanobacteria out of 36 complete genomes harbor PAL/HAL homologues. No chloroplastic genomes harbor a PAL or a HAL homologue. Furthermore, PAL is a cytoplasmic enzyme and is not targeted to the chloroplast . Second, the ancestor of the phylum Plantae likely preceded the ancestor of land plants by many millions of years. If a PAL was transferred by Endosymbiotic Gene Transfer (EGT) from the cyanobacterial symbiont to the host nucleus in the ancestor of the phylum Plantae, it is not clear why it would have been lost multiple times independently in 2\u20133 algal lineages, indicating a lack of selective advantage, whereas it would have been maintained only in the algal line leading to land plants up to around 500 million years ago. We therefore think that a PAL EGT from the cyanobacterial endosymbiont to the host nucleus, although it cannot be excluded a priori, is not a scenario better supported than the one that we propose. iii) independent HGTs from related (soil) bacteria to plants and fungi \u2013 a possibility that is not discussed in the manuscript but that, as far as I see, cannot be ruled out. AU: We included this possibility in the text. The above alternatives to the authors' conclusion do not invalidate the work but it must be admitted that, e.g., the chloroplast scenario is less surprising than the one presented by the authors, so much 
so that the advisability of dedicating a special paper to the origin of PAL in plants and fungi could be questioned. My disappointment with the manuscript is that the authors do not investigate the phylogeny of other enzymes of the phenylpropanoid pathway. Had this been done and had a coherent pattern been discovered, the conclusions could be much more convincing and exciting. If, on the other hand, such a coherent pattern does not exist, this also would be notable, indicating that, like many other systems, this key pathway is a patchwork of genes of different origins. I understand, of course, that such a complete phylogenetic analysis requires a considerable amount of extra work, so the authors might prefer to highlight the PAL analysis separately, but I still think that a more comprehensive paper would be of greater value. AU: As we explained in our answer to referee 1, we have now clarified in the text that in this report we wished to focus on the very first step in the origin of the pathway, which was key to its further assembly. How the pathway was then assembled is surely an interesting question, but we feel it is not directly relevant to our hypothesis. Since without the acquisition of PAL the pathway could not have been assembled, in particular because of the absence of a preexisting HAL homologue from which a PAL may have been derived, we reckon that our analysis is not incomplete. Indeed, as the referee points out, it would be more exciting not to see the same pattern for the other genes, and this is what appears from preliminary analysis (see answer to referee 1). At a more technical level, I think that it is highly desirable to also include results from a maximum likelihood analysis to buttress those obtained with the superoptimistic MrBayes. 
With just one family to analyze, this will not take too much effort. AU: This analysis was in fact already done and gave very similar results and statistical support; we now mention it in the text and include the tree as supplementary material 2. It's a beginning insight into land plant colonization. I think that other reviewers might have some issues with the argument being based on this one enzyme. I have to admit I did wonder myself about other key enzymes in the phenylpropanoid pathway. I think to help your cause in this regard, you should give a definition of what you mean by the 'first committed step' when you are speaking of the PAL enzyme. The team does a reasonably good job of speculating why the ancestor to land plants might have acquired this gene and its beneficial use. AU: The referee is right; we added some clarifying comments in the figure legend. Supplementary material: unrooted bayesian tree of Figure 2 with full accession numbers and posterior probabilities; unrooted ML tree of the same dataset, where numbers at nodes represent non-parametric bootstrap values calculated by Phyml on 100 bootstrapped samples of the original alignment. For both trees, when no accession number is indicated, the corresponding sequence was retrieved from either the EST database at NCBI or from ongoing genome projects at JGI. EST_chimera indicates chimeric sequences obtained from two different EST sources of the same species."}
+{"text": "Sir, We read with great interest a recent article by et al (2.8%), suggesting that ErbB2 is overexpressed to a greater extent in SCCO. Of the six studies described above, two reported a statistically significant association of ErbB2 overexpression with poor prognosis. Studies in SCCO cell lines have shown that it does have an inhibitory effect on the growth of cells, either alone or in combination with conventional treatments. Although the methodologies used in these studies to detect ErbB2 expression are different, all of them clearly suggest that ErbB2 receptors are overexpressed in SCCO to a greater extent, as reported by Gibault . ErbB2-targeted therapies are still at an early stage of development in reference to SCCO, and at this stage we look forward to results evaluating their effects in other cancers, where these therapies are at relatively advanced stages of development. We hope that further research in this field will help determine the value of ErbB1 and ErbB2 targeted therapies in SCCO."}
+{"text": "Currently, microelectrode arrays (MEAs) offer new possibilities for CNS microstimulation. However, although focal CNS activation is of critical importance to achieve efficient stimulation strategies, the precise spatial extent of extracellular electrical stimulation (EES) remains poorly understood. The aim of the present work is twofold. First, we validate a finite element model to compute accurately the electrical potential field generated throughout the extracellular medium by an EES delivered with MEAs. This model uses Robin boundary conditions that take into account the surface conductance of electrode/medium interfaces. Using this model, we determine how the potential field is influenced by the stimulation and ground electrode impedances, and by the electrical conductivity of the neural tissue. We confirm that current-controlled stimulations should be preferred to voltage-controlled stimulations in order to control the amplitude of the potential field. Second, we evaluate the focality of the potential field and threshold-distance curves for different electrode configurations. We propose a new configuration to improve the focality, using a ground surface surrounding all the electrodes of the array. We show that the lower the impedance of this surface, the more focal the stimulation. In conclusion, this study proposes new boundary conditions for the design of precise computational models of extracellular stimulation, and a new electrode configuration that can be easily incorporated into future MEA devices, either in vitro or in vivo. More recently, microstimulation, which makes use of electrodes on the \u00b5m scale, is gaining increasing interest in both fundamental and clinical research, opening the possibility to stimulate small groups of neurons instead of large regions. 
In this perspective, microelectrode arrays (MEAs) are the focus of intensive developments: in vitro or in vivo microsystems increasingly benefit fundamental neuroscience aiming at understanding activity-dependent plasticity of neural networks, as well as clinical developments of efficient neural implants or prostheses . Electrical extracellular stimulation of the central nervous system has been used empirically for several decades by electrophysiologists to explore fundamental properties of neural networks. Currently, peripheral nerve, deep brain, and spinal cord stimulation paradigms are also used routinely for clinical restoration of lost motor function . As reported recently, the activation of single neurons may strongly impact the activity of a large neural network and even behavior . Achieving focal activation is thus of critical importance for in vitro applications, as well as in vivo neuroprosthetic devices requiring focal stimulations. Part of this work has been presented in abstract form . The aim of the present study is twofold: First, we validate a finite element model (FEM) for the realistic computation of the electrical potential field, and, second, we propose a new electrode configuration to achieve focal stimulations of neural networks using MEAs. This paper is thus divided into two parts. In the first part, we developed a FEM for the calculation of the potential field incorporating the surface conductance of the electrodes through Robin boundary conditions, which we validated on experimental recordings of the electrical potential field. In the second part of the paper, we used this model to evaluate the focality of MEA stimulations for different electrode configurations, in terms of both the potential field and the threshold-distance curves for a straight fiber and a reconstructed cortical neuron. In particular, we propose a variant of the monopolar configuration consisting in replacing the usual distant ground electrode by a ground surface surrounding all the electrodes of the array. 
We show that this new configuration improves the stimulation focality, and that this improvement is best when the interface conductance of this ground surface is high. This configuration can easily be incorporated into microelectrode arrays. Using microelectrode arrays (MEAs) dedicated to in vitro experiments, we recorded the electrical potential distribution induced by extracellular stimulation. Current-controlled stimulations were delivered either in the absence (Ringer only) or in the presence of neural tissue. Experimental protocols conformed to recommendations of the European Community Council and NIH Guidelines for care and use of laboratory animals. We used a microelectrode array to deliver electrical stimulations and to record the potential field induced in the MEA chamber. The array comprised 60 3D recording microelectrodes, 8 2D stimulation electrodes, and 4 integrated ground disk electrodes (diameter 1 mm), all made of Pt . The 4 integrated ground electrodes were disconnected and not used in this study. Instead, an external cylindrical Ag/AgCl ground electrode pellet was used . The array was surrounded by a cylindrical glass chamber, and the bottom part, including electrode leads, was insulated from the extracellular medium by a 5 \u00b5m thick SU-8 epoxy layer . We also performed stimulations in the presence of a whole embryonic mouse hindbrain-spinal cord preparation, which was dissected as described previously from mice (Charles River Laboratories, L'Arbresle, France) previously killed by cervical dislocation. The whole spinal cord and medulla were dissected in the Ringer solution (pH 7.5) gassed with carbogen , meninges were removed, and the preparation was then placed in the MEA cylindrical chamber. A plastic net with small holes (70\u00d770 \u00b5m2) was laid on the neural tissue, in order to achieve a tight and uniform contact with the microelectrodes. 
Experiments were performed at room temperature. Stimulations were first performed in a Ringer solution composed of (in mM): 113 NaCl, 4.5 KCl, 2 CaCl2\u00b72H2O, 1 MgCl2\u00b76H2O, 25 NaHCO3, 1 NaH2PO4\u00b7H2O and 11 D-Glucose. Current-controlled monopolar stimulations were performed between one 2D stimulation electrode of the array and the external ground electrode, within the safe charge injection limit of the electrodes (http://www.ayanda-biosys.com/Documents/safe_charge_injection_limit.pdf). Stimuli consisted of a train of 10 cathodic-first biphasic current pulses separated by 10 sec . They were delivered using the STG2008 stimulator controlled by the MC_Stimulus II v2.1.4 software . The electrical potential field was recorded on the 60 3D recording electrodes referenced to the Ag/AgCl ground electrode pellet, 1200\u00d7 amplified and low-pass filtered at 3 kHz (Multi Channel Systems MEA1060 filter amplifiers). Also, the voltage of the stimulation electrode was measured with a home-made follower circuit. It should be noted that no 3-electrode montage was needed here because we recorded the metal voltage of the stimulation electrode (stimV) with respect to the metal voltage of the ground electrode (zero by convention). In particular, we did not measure the junction potential at the stimulation electrode interface, which would have required a 3-electrode montage, and only considered the variations of the interface potential around the junction potential. Data were acquired at 15625 Hz using the Micro 1401 AD converter and the Spike2 v5.14 software from Cambridge Electronic Design . Examples of recordings of the stimulation electrode voltage stimV and of the potential in the medium V at a recording electrode are shown in Figure . The relationship between metalV and V is given by writing Ohm's law at the interface. Considering an elementary piece of surface \u03b4S, the elementary current \u03b4i flowing through \u03b4S is given by: \u03b4i = g (metalV \u2212 V) \u03b4S (Equation 3), where g is the surface conductance of the interface. Moreover, on the medium side, the current entering the medium through \u03b4S is given by: \u03b4i = \u03c3 (\u2207V \u00b7 n) \u03b4S (Equation 4), where n is the unit vector normal to the surface. 
From Equations 3 and 4, the natural BC that can be applied to the frontier of the medium in front of the electrode is thus the following Robin BC: \u03c3 (\u2207V \u00b7 n) = g (metalV \u2212 V) (Equation 5). In the case of an insulating interface, g\u200a=\u200a0 and this condition reduces to a homogeneous Neumann condition. In the opposite case of an infinitely conductive interface (g\u2192\u221e), this condition reduces to the classically used Dirichlet condition V\u200a=\u200ametalV, which forces the potential V on the medium side to be uniform in front of the electrode. Indeed, since electrodes are made of metals, the electrode voltage on the metal side, metalV, has to be uniform. However, the less conductive medium does not impose the potential V on the medium side to be uniform as well. We will see below that the new electrode configuration proposed in this paper actually takes advantage of this important property. When modeling stimulations, a conductive boundary condition should be used on the surfaces of the electrodes through which a current flows, namely the stimulation and ground electrodes. The type of BC used on these electrodes directly determines the calculation of the potential field. To obtain an accurate calculation, it is thus crucial to choose BCs that best reflect the electrode/medium interface. It should be noted that when a metal electrode is bathed in a conductive solution, a junction equilibrium potential establishes between both sides of the interface . We developed a 3D finite element model (FEM) in order to compute the electrical potential field generated in a conductive medium by an electrical stimulation. This model was tested to reproduce the potential field obtained experimentally. Simulations were run with the finite element simulation software FEMLAB\u00ae 3.1a interfaced with Matlab 6.2 , under Linux (Fedora 7). The 3D model geometry corresponded to the experimental MEA, including the chamber, the neural tissue, the recording and stimulation microelectrodes of the array, and the external ground electrode pellet (see Figure ). The measured conductivity of the Ringer solution was Ringer\u03c3\u200a=\u200a1.65 S/m at about 700 Hz and room temperature. 
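The interface relations above can be checked on a minimal one-dimensional analogue. The sketch below assumes a column of medium of conductivity sigma between a metal electrode (Robin interface with surface conductance g) and a perfect ground; all names, the geometry, and the numerical values are illustrative, not taken from the paper's FEM.

```python
# 1D illustration (assumed geometry): a metal electrode at x = 0 drives a
# current density j through a medium column of conductivity sigma, grounded at
# x = length. Ohm's law at the interface (the Robin condition) produces an
# interface drop metalV - V(0) = j / g, which vanishes as g -> infinity
# (Dirichlet limit) and blocks all current as g -> 0 (insulating limit).
def potential_profile(v_metal, g, sigma, length, n=11):
    # Interface (g) and column (sigma/length) conductances act in series.
    j = v_metal / (1.0 / g + length / sigma)   # current density, ground at 0 V
    v0 = v_metal - j / g                       # potential on the medium side
    xs = [i * length / (n - 1) for i in range(n)]
    # Potential decays linearly from v0 at the interface to 0 at the ground.
    return [(x, v0 * (1 - x / length)) for x in xs]

# Invented values of the order of those discussed in the text.
profile = potential_profile(v_metal=0.75, g=4e3, sigma=1.65, length=1e-3)
```

With these numbers the medium-side potential at the interface is about 0.53 V, i.e. clearly below the 0.75 V metal voltage, while taking g very large recovers the Dirichlet case V(0) = metalV.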
Possible variations of conductivity with respect to frequency were neglected. When tissue was present, its conductivity was one of the parameters that was estimated to fit the experimental recordings of the electrical potential field. The finite element model solved the homogeneous Poisson equation (Equation 1). The electrical conductivities of the Ringer solution and neural tissue were supposed homogeneous and isotropic in each region. When no tissue was considered, the conductivity of the tissue region was set to that of the Ringer solution, which was measured with a conductimeter and found to be 1.65 S/m. Insulating BCs (Equation 2) were assigned to the circumference of the chamber, the air-Ringer solution interface (top part of the chamber), the insulated floor of the chamber, the 7 unused (and disconnected) 2D stimulation electrodes, and also the 60 3D recording electrodes. Indeed, although current may enter and exit recording electrodes at different places on their surface, on average the global current flowing through these electrodes was negligibly small due to the very high amplifier input impedance (about 10^13 \u2126). It should be noted that this type of BC allows the potential in front of the recording electrodes to be non-uniform. Robin BCs (Equation 5) were used for the conductive elements (ground and stimulation electrodes). The metal voltage in Equation 5 was set to the measured value for the stimulation electrode and to zero for the ground electrode. The surface conductances of these electrodes (groundg and stimg) were optimized so that the modeled potential field best fitted the experimental one. The linear system was solved with the Direct (UMFPACK) solver. Using this mesh, one calculation of the extracellular potential field took about 21 seconds on a Pentium IV 2.4 GHz with 2 Gb RAM. 
The 3D geometry of the model was meshed with 63,214 tetrahedral Lagrange P2 elements, corresponding to 101,105 degrees of freedom . We verified that, with a finer mesh and the SSOR-preconditioned conjugated gradient algorithm solver, the potential on the recording electrodes differed by less than 0.1%. Once the potential V in the medium has been calculated, the metal voltage metal,jV of each recording electrode j has to be calculated. Using Robin BCs on recording electrodes, these values would be directly estimated under the constraint that no global current flows through recording electrodes, at the cost of estimating the metalV value of all electrodes (60 more parameters). Instead, we used homogeneous Neumann BCs (Equation 2), and calculated a posteriori the metal voltage of each recording electrode using Equation 7. We checked, on a single recording electrode model, that using this approach led to errors in the estimation of metalV of less than 0.1% compared to that obtained directly using a Robin BC on the recording electrode. The FEM was validated by comparing the experimental and the modeled data across all recording electrodes. The FEM solution depended on the following parameters: the conductivities of the Ringer solution (Ringer\u03c3) and the neural tissue (tissue\u03c3), and the surface conductances of the stimulation (stimg) and ground (groundg) electrodes. For all simulations, the conductivity of the Ringer solution was set to the measured value Ringer\u03c3\u200a=\u200a1.65 S/m. The other parameters were optimized to best fit experimental recordings of the potential field, using the Levenberg-Marquardt algorithm to minimize the following weighted least squares criterion: \u03c72 = \u03a3j (jVexp \u2212 jVmodel)2 / j\u03c32, where j\u03c32 was the measured variance of the experimental potential jVexp. When the ground surface configuration was considered, no other external ground electrode was used. 
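The weighted least-squares criterion above can be illustrated with a toy one-parameter model. Everything below (the 1/r model, the data, the variances, and the coarse grid search standing in for the Levenberg-Marquardt iterations) is invented for illustration and bears no relation to the actual FEM fit.

```python
# Toy illustration of the weighted least-squares criterion:
# chi2(theta) = sum_j (Vexp_j - Vmodel_j(theta))^2 / sigma_j^2.
# The 'model' is a deliberately simple V = a / r potential decay.
def chi2(a, radii, v_exp, sigma2):
    return sum((v - a / r) ** 2 / s2 for r, v, s2 in zip(radii, v_exp, sigma2))

radii = [1.0, 2.0, 4.0]      # electrode distances (arbitrary units, invented)
v_exp = [2.1, 0.95, 0.52]    # 'measured' potentials (invented)
sigma2 = [0.01, 0.01, 0.04]  # measurement variances (invented)

# Coarse grid search standing in for the Levenberg-Marquardt iterations:
# points with small variance (high weight) dominate the fitted value of a.
best_a = min((chi2(a / 100, radii, v_exp, sigma2), a / 100)
             for a in range(100, 300))[1]
```

For this linear-in-parameter toy model the weighted optimum can also be computed in closed form (about a = 2.06 here); Levenberg-Marquardt becomes necessary when, as in the paper, the model response is a nonlinear function of the parameters.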
The metal voltage of the GS was set to 0 V and several values of surface conductance were tested: GSg\u200a=\u200a400, 4000, 40000 S/m2 or infinite (homogeneous Dirichlet condition V\u200a=\u200a0). The volume conductivity was uniformly set to 1.95 S/m, corresponding to 37\u00b0C. The focality of the potential field was compared at z\u200a=\u200a50 \u00b5m above the stimulating disk electrode; for this purpose, the potential field was normalized. We also performed numerical simulations with compartmentalized neurons embedded in the extracellular potential fields to further assess the stimulation focality of the three configurations. A straight fiber model was used, while the layer IV stellate cell model was used as is (its axon was straight with a diameter of 0.6\u20130.8 \u00b5m). Temperature was set to 37\u00b0C for the calculation of voltage-dependent conductances. For each neuron and each stimulation configuration, the extracellular potential computed in the finite element model (without offset correction) was interpolated at the center of each compartment and assigned with the extracellular mechanism (http://www.neuron.yale.edu/neuron/docs/help/neuron/neuron/mech.html#extracellular). This approach has been used by others and in a previous study . Cathodic-first biphasic stimulations (phase duration: 200 \u00b5s) were used, the amplitude of which was increased until firing an action potential . A 10-\u00b5s time step was used, allowing a reduced error on the activation threshold estimation . The stimulation focality of the three configurations was assessed by moving both structures on a horizontal line passing over the stimulating electrode and determining the activation thresholds along the line. 
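The activation-threshold search described above (increasing the stimulus amplitude until an action potential is fired) can be sketched generically. In the sketch below, `fires` is a stand-in for a full compartmental simulation returning whether the neuron spiked, the bisection strategy is one common choice (the paper does not specify its search scheme), and all numbers are invented.

```python
# Hypothetical sketch of an activation-threshold search: find the smallest
# stimulation amplitude (within a tolerance) that makes the model neuron fire.
def find_threshold(fires, lo=0.0, hi=1000.0, tol=0.5):
    # fires(amplitude) -> bool; stands in for running one full simulation.
    if not fires(hi):
        raise ValueError('upper bound never activates the neuron')
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if fires(mid):
            hi = mid   # mid activates: threshold is at or below mid
        else:
            lo = mid   # mid is subthreshold: threshold is above mid
    return hi

# Stand-in activation rule: fires at or above 137 (arbitrary units, invented).
threshold = find_threshold(lambda amp: amp >= 137.0)
```

Repeating this search for each position of the fiber or cell along the horizontal line over the stimulating electrode yields the threshold-distance curves used to compare the focality of the configurations.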
These simulations were performed with the NEURON software, v6.1. Monopolar stimulations were applied in a Ringer solution between a 2D stimulation microelectrode of the array and an external cylindrical ground electrode, and the potential field was measured on the 60 recording electrodes of the array. The FEM solved the homogeneous Poisson equation (Equation 1) under given boundary conditions to reproduce these experimental data. We first tested the use of a standard Dirichlet BC (V_metal\u200a=\u200aV_medium\u200a=\u200aV_stim\u200a=\u200a754.4 mV on the stimulation electrode and V_metal\u200a=\u200aV_medium\u200a=\u200aV_ground\u200a=\u200a0 on the ground electrode) and found that the modeled potential field was two orders of magnitude higher than the experimentally recorded one. This large difference was due to the fact that the potential drops across the stimulation and ground electrode/electrolyte interfaces were not taken into account by this type of BC. To model this potential drop we used Robin BCs (Equation 5), taking into account the surface conductance of the metal/medium interface of the stimulation and ground electrodes. This BC depends on three parameters: the stimulation electrode voltage, which was set to the measured value, and the surface conductances g_ground and g_stim of the electrodes, which were estimated with a Levenberg-Marquardt algorithm so as to best reproduce experimental data. This model gave an excellent fit of the experimental recordings of the potential field. Moreover, we checked for a 1000-Hz sinusoidal stimulation that the fitted value of the surface conductance of the stimulation electrode led to the prediction of a theoretical electrode impedance (60.9 k\u03a9) close to the actually measured impedance (65 k\u03a9). In the following, we thus use the model equipped with Robin boundary conditions on the stimulation and ground electrodes. With such BCs, the potential is expected to be non-uniform in front of the electrode surface. 
We indeed found that the potential was not uniform on both the stimulation and ground electrodes . Over thgroundg and stimg, respectively) on the spatial distribution of the potential field. Each panel shows the influence of one parameter considered separately from the other, which were set to their values fitted in shift of the potential field towards smaller values (groundg\u2192\u221e). This global offset is due to the potential drop across the ground/electrolyte interface, which decreases when groundg increases due to Ohm's law at the interface. Because the potential field is defined relative to a constant (from Equation 1), the whole field is then shifted by the value of this drop. It should be noted that groundg actually influences the shape of the potential field in the close vicinity of the ground electrode. Indeed, as groundg increases, V becomes all the more uniform (and close to zero) in front of the ground electrode surface. However, this influence is not seen on the recording electrodes in the case of the classical monopolar configuration. By contrast, we will see below that groundg strongly influences the shape of the potential field when the ground electrode is replaced by a ground surface surrounding the electrodes of the array.First, increasing the surface conductance of the ground induces a global r values . The \u201clistimg) for both current-controlled stimulations and voltage-controlled stimulations . In the case of current-controlled stimulations, changing stimg has no influence on the potential field distribution being obtained for the standard Dirichlet BC (stimV\u200a=\u200a754.4 mV). 
These results mean that the field amplitude is entirely determined by the current injected through the stimulation electrode and not by its metal voltage.Second, we determined the influence of the surface conductance of the stimulation electrode are not seen on distant recording electrodes.It should be noted that, for current-controlled stimulations, the potential distribution in the medium is actually not uniform locally on the stimulation electrode and thisIn the tissue\u03c3\u00d7\u2207V) imposes greater variations of the potential for lower conductivities, hence greater values of the potential. By contrast, for distances beyond about 1000 \u00b5m from the stimulation electrode, the potential field is not affected by the conductivity of the neural tissue. This result can be explained by the fact that far away from the stimulation electrode, the potential is imposed by the ground electrode, which is always surrounded by the Ringer solution.First, we introduced a volume of neural tissue in the finite element model, and computed the extracellular potential field created by a 1-\u00b5A stimulus for different values of tissue conductivity ranging from 0.05 to 1.65 S/m . These sstimg, groundg, and tissue\u03c3) to fit these data. The regression slope and intercept were 0.987\u00b10.014 and 7.39\u00b16.66 \u00b5V, respectively . We estimated the following optimal parameters: groundg\u200a=\u200a799 S/m2, stimg\u200a=\u200a116 S/m2, and tissue\u03c3\u200a=\u200a0.057 S/m. Adjusting tissue\u03c3 thus provided an estimation of the conductivity of the neural tissue. The value of groundg was relatively close to the one fitted in the absence of tissue (groundg\u200a=\u200a975 S/m2), which was consistent with the fact that the solution in front of the ground electrode was unchanged in both cases. By contrast, the value of stimg was lower than that obtained without tissue (116 S/m2 vs. 
338 S/m2).Second, we recorded experimentally the potential field generated by a current-controlled command stimulation of 1 \u00b5A, in the presence of a whole embryonic mouse hindbrain-spinal cord preparation , and adjThe second major goal of this paper was to study the stimulation focality for different electrode configurations, and to propose a new configuration that improves the focality of both the potential field and the threshold-distance curves for neurons placed in this field. For this purpose, we used the model based on Robin BCs described and validated above. The way to improve the focality of a stimulation is to constrain the current to flow back through some location close to the stimulation electrode. In this respect, multipolar electrode configurations are generally considered In a first step, we assessed the focality of the normalized potential field for these three configurations . As expeGSg of the ground surface increased, the best focality being obtained in the limit case of an infinitely conductive ground/electrolyte interface (modeled with a homogeneous Dirichlet condition). Moreover, the ground surface approach leads to a reduction of the potential field amplitude with respect to monopolar stimulation by factors of only 1.10, 1.30, 2.22, and 8.30 for GSg\u200a=\u200a400, 4000, 40000 S/m2 and infinite, respectively.For these reasons, we tested the use of a ground surface laying on the substrate of the MEA around the electrodes. We found that this configuration increases the potential field focality (see GS plots and maps in gGS\u200a=\u200a400\u201340000 S/m2) compared to the monopolar case, while the CB approach requires 17\u201326 times higher currents , did not allow an accurate computation of the potential field (regression slope of 1.28\u00b10.005 and intercept of \u2212171\u00b13.36 \u00b5V). 
By contrast, taking into account the potential drop at both interfaces provides a good estimation of the potential field.In this paper, by comparing experimental recordings and modeling results, we first showed that accurate calculation of the extracellular potential created by an electrical stimulation can be achieved using a finite element model equipped with Robin BCs on stimulation and ground electrodes. In particular, we found that it is important to take into account the potential drop at the stimulation and ground electrode/medium interfaces. We verified (data not shown) that accounting for this drop at the stimulation electrode but not at the ground electrode , except a change on the overall offset remains however constant as stimg varies. The evolution of the metal voltage stimV as a function of stimg can be further described analytically by integrating the Robin BC (Equation 5) over the stimulation electrode:stimg increases, the stimulation electrode voltage stimV decreases so that the current and the average potential on the medium side, as well as the potential field away from the electrode, remain constant.When nchanged . Howeverm varies . We founstimg is small enough so that the potential drop at the interface is high, namely V\u226ametalV. In this case, the Robin BC becomes \u03c3\u2207V\u00b7n\u200a=\u200ametalg V\u2212g V\u2248metalg V. Integrating this expression over the electrode surface leads to metalg V\u200a=\u200aI/S, and thus to the non-homogeneous Neumann condition:I is the injected current, and S the electrode surface.The use of Robin BCs requires knowing the values of the electrodes' surface conductance, which depends on many factors, such as for example the electrode material or the stimulation frequency. 
This BC (Equation 5) can actually be simplified when the surface conductance stimV\u200a=\u200a754.4 mV) was much higher than the potential on the medium side (V ranging from 4.9 to 9.5 mV), meaning that the potential drop was nearly uniform. We then verified that the use of this simplified BC gave similar simulation results to those obtained with the Robin BC. This non-homogeneous Neumann BC would be very useful when surface conductances or electrode voltages are unknown, since it requires only the knowledge of the current and the area of the stimulation electrode. Nevertheless, it should be noted that this simplified BC is not valid for the ground electrode, over the surface of which V is highly non-uniform (groundV\u200a=\u200a0), the potential drop (and thus the current density) is also non-uniform. This may not induce large differences in the calculation of the potential when the ground electrode is located far from the region of interest. However, in the case of the ground surface configuration, which surrounds closely the stimulating electrodes, using this BC would not have allowed seeing the potent influence of the surface conductance of the ground surface on the focality of the stimulation .The electrode/medium interface has a complex frequency-dependent impedance that can be modeled with several capacitive and resistive elements in series and/or in parallel to each other We found that the model could also be used to predict the conductivity of the neural tissue laid on the microelectrode array. This is an interesting side-result of the present work, because authors usually take conductivity values from standard studies of the literature to compute the electrical potential generated in a tissue by an extracellular stimulation The second goal of this work was to estimate the focality of the potential field and of threshold-distance curves for different electrode configurations. 
Conventional bipolar configurations with two nearby electrodes actually focalize the stimulation, but create anisotropic potential fields. Here, we estimated the surface conductance of Pt and Ag/AgCl electrodes to be 338 S/m2 and 975 S/m2, respectively. With these materials, the proposed configuration already improves the focality compared to a monopolar configuration. However, better focality would be obtained with higher surface conductances integrated on the MEA substrate and surrounding the electrodes of the array. By contrast with classical multipolar configurations, where several electrodes must be addressed together to form a single stimulation site, this configuration enables each electrode to be used independently of the others as an individual \u201cstimulation pixel\u201d, in the same way as pixels of a computer screen are addressed separately. Interestingly, we found that the focality of the potential field and of the activation thresholds achieved with this configuration was greatest for the highest surface conductance of the ground surface. This can be explained intuitively by the fact that the current always takes the least \u201ccostly\u201d route back into the ground. For a low ground conductance, the cost to travel further through the extracellular space would be small compared to the effort required to enter the ground electrode, and the stimulation would not be focal. Conversely, for a high ground surface conductance, the main cost would be to flow through the extracellular space. In this case, the current would return through the ground electrode at a location close to the stimulation electrode, and the stimulation would be focal. In addition, it can be noted that the ground surface configuration generates a low stimulation artifact. This is an interesting property of the novel configuration, since extracellular recordings are often greatly contaminated by this artifact. By contrast, considerably higher currents are needed with the concentric bipolar configuration. 
This gain in current amplitude is important to reduce electrode deterioration and to design low-consumption implantable devices for which battery life is an important practical issue.in vitro to study the activity-dependent dynamics and plasticity of neural networks, and could also be adapted in vivo for the development of neural prostheses.In conclusion, a realistic model has been validated for the computation of the extracellular potential field generated by an electrical stimulation in a neural tissue, and a new electrode configuration has been proposed to achieve focal stimulations. Based on our simulation results, we encourage modelers to use Robin BCs instead of Dirichlet BCs on the conductive electrodes, and experimenters to prefer current-controlled stimulations to voltage-controlled stimulations, in order to better control the spatial extent of the stimulations. Finally, the new configuration proposed here could be advantageously used"}
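The weighted least-squares parameter fit described in this record (adjusting model parameters with the Levenberg-Marquardt algorithm so that modeled potentials match the recordings, weighted by the per-electrode variance) can be sketched as follows. This is an illustrative sketch only: the FEM forward model is replaced by a point-source approximation V(r) = I/(4*pi*sigma*r) + V0, and all numerical values except sigma_Ringer = 1.65 S/m are invented for the example.

```python
# Illustrative sketch (not the authors' code) of the weighted
# least-squares fit described in the text: parameters are adjusted with
# the Levenberg-Marquardt algorithm to minimize
# sum_j (V_exp,j - V_model,j)^2 / sigma_j^2 over the recording sites.
# The FEM forward model is replaced by a point-source approximation;
# distances, noise level, and the offset V0 are invented.
import numpy as np
from scipy.optimize import least_squares

I = 1e-6                            # 1-uA stimulation current (A)
r = np.linspace(1e-4, 2e-3, 60)     # distances of 60 recording sites (m)

def forward(params, r):
    sigma, v0 = params              # conductivity (S/m), global offset (V)
    return I / (4 * np.pi * sigma * r) + v0

# Synthetic "experimental" recordings generated from known parameters
rng = np.random.default_rng(0)
true_params = (1.65, 5e-4)          # sigma_Ringer = 1.65 S/m, as in the text
sigma_j = 1e-5 * np.ones_like(r)    # per-electrode measurement std (V)
v_exp = forward(true_params, r) + rng.normal(0.0, sigma_j)

def residuals(params):
    # Weighted residuals: squaring and summing these reproduces the
    # weighted least-squares criterion described in the text
    return (v_exp - forward(params, r)) / sigma_j

fit = least_squares(residuals, x0=(1.0, 0.0), method="lm")
sigma_hat, v0_hat = fit.x           # recovered conductivity and offset
```

With this toy forward model the fit recovers the known conductivity to within a few percent; in the paper the same criterion is minimized with the full finite element model as the forward solver, yielding g_stim, g_ground, and the tissue conductivity.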
+{"text": "Organotin compounds (OTCs) have been widely used as stabilizers in the production of plastic, agricultural pesticides, antifouling paints and wood preservation. Triphenyltin (TPT) compounds are known for their embryotoxic, neurotoxic, genotoxic and immunotoxic effects in mammals. The carcinogenicity of TPT is not well understood, and few studies have discussed the effects of OTCs on gap junctional intercellular communication (GJIC) of cells. In the present study, the effects of triphenyltin chloride (TPTC) on GJIC in WB-F344 rat liver epithelial cells were evaluated, using the scrape-loading dye transfer technique. TPTC inhibited GJIC after a 30-min exposure in a concentration- and time-dependent manner. Pre-incubation of cells with the protein kinase C (PKC) inhibitor did not modify the response, but the specific MEK 1 inhibitor PD98059 and the PI3K inhibitor LY294002 substantially decreased the inhibition of GJIC by TPTC. After WB-F344 cells were exposed to TPTC, phosphorylation of Cx43 increased, as seen in Western blot analysis. These results show that TPTC inhibits GJIC in WB-F344 rat liver epithelial cells by altering Cx43 protein expression through both the MAPK and PI3-kinase pathways. Organotin compounds have been widely used as agricultural biocides, antifouling agents in boat paint, wood preservatives, and stabilizers for polyvinylchloride polymers (PVC) in industry ,2. Connexins (Cxs) are a group of at least 20 highly conserved proteins that provide the basis for communication through the direct exchange of ions, nutrients, second messengers, electrical coupling, and small metabolites from one cell to its neighboring cells -20. The carcinogenicity of TPT remained unclear. The present work was undertaken to define the effects of TPTC on GJIC in WB-F344 rat liver epithelial cells. TPTC powder was supplied by Merck. Lucifer yellow, DMSO (dimethylsulfoxide), formaldehyde and MTT were supplied by Sigma-Aldrich. 
D medium and newborn calf serum were from Gibco, TRIzol was from Invitrogen Life Technologies, and 2 X SYBR green PCR master mix was from Applied Biosystems. The protein kinase C (PKC) inhibitor GF109203X, the extracellular signal-regulated protein kinase (ERK) inhibitor PD98059 and the PI3 kinase inhibitor LY294002 were from Sigma. Immobilon Western HRP Substrate Peroxide Solution and luminol reagent were supplied by Millipore Corporation. All chemicals used in the study were of the highest available purity. WB-F344 rat liver epithelial cells were cultured in a CO2 incubator before being used in the different experiments. Confluent cells, grown in plates, were exposed to various concentrations of TPTC. To prepare the TPTC stock solution, 0.01 g of TPTC powder was dissolved in 10 ml DMSO and then diluted to a final concentration of 1000 ppm. The effect of TPTC on the survival of WB-F344 cells was assessed using the MTT toxicity assay as described previously. Cells were seeded (10^4/well); on the following day, the experimental medium containing different TPTC concentrations was added, and then incubated for 30 and 60 minutes. Fifty \u03bcl of MTT solution (2 mg/ml in PBS) was added to each well and incubated for 6-8 hours. After careful removal of the medium, 150 \u03bcl of DMSO was added to each well, and then after careful shaking, the absorbance was read at 570 nm using an ELISA microplate reader. Cell viability was expressed as a percentage of control cells not treated with TPTC, which were designated as 100%. For Western blot analysis, blocking was performed for more than 1 h at room temperature. The protein was probed with antibodies against connexin 43 at 4\u00b0C overnight and this was followed by incubation with horseradish peroxidase-conjugated secondary antibodies. Protein visualization was carried out using an enhanced chemiluminescence kit (Pierce) according to the manufacturer's protocol. Immunofluorescence staining experiments were performed as previously described. 
Means \u00b1 SEM were calculated and the data are presented as a percentage of control. All data were analyzed with SigmaPlot 8.0 software. Repeated-measures ANOVA was performed to examine the effect of the independent variables, and tests for contrasts were carried out to compare the different levels of the independent variables. P values \u2264 0.05 were considered statistically significant. TPTC dissolved easily in DMSO but not in water. To exclude toxic effects of DMSO on cell viability and on the diffusion length of GJIC, tests involving exposure to DMSO were carried out. Results revealed that after exposure to 2% DMSO for 30 minutes, the diffusion length of GJIC did not decrease appreciably as compared with that of the control group (p > 0.05). Cytotoxicity evoked by TPTC in WB-F344 cells was tested with 0, 0.25, 0.5, 1, 2, 3, 4, and 5 ppm of TPTC using the MTT proliferation assay. After 30- and 60-min exposures to TPTC, cell viability decreased markedly with increasing concentration of TPTC, and the 60-min lethal concentration 50 (LC50) was calculated to be 5 ppm (Fig. ). Colony-forming efficiency in WB-F344 cells was evaluated using TPTC at 0, 3, 9, 12, 15, and 18 ppb. After 14 days of exposure, the colony-forming efficiency decreased significantly when the TPTC concentration exceeded 12 ppb (Fig. ). Inhibition of GJIC has been suggested to be an important activity of tumor promoters. The effects of TPTC on GJIC were therefore evaluated with cells exposed to TPTC for 15 min, 30 min, 45 min, and 60 min. After 15 min of exposure to 1.5 ppm of TPTC, the diffusion length was significantly decreased as compared with that of the control group (p < 0.05) (Fig. ). Studies of organotin compounds showed that inhibition through some kinase pathways is a possible mechanism involved in their apoptotic effects. Phosphatidylinositol 3'-kinase (PI3K) has been demonstrated to be critical in mediating several aspects of PDGF actions in various cells ,58-62. 
To study the involvement of protein kinase C (PKC) in the inhibition of GJIC by TPTC, an inhibitor of PKC, GF109203X, was utilized to block the activity of the enzyme before exposure to TPTC. GF109203X inhibits the PKC isozymes \u03b1, \u03b2I, \u03b2II, \u03b3, \u03b4, and \u03b5 ,64. Neither GF109203X, LY294002 nor PD98059 alone at the indicated concentrations had any notable effect on GJIC in these cells. One possible mechanism involved in the inhibition of GJIC is abnormal phosphorylation of connexins -67. The expression of Cx43 in WB-F344 cells stained with fluorescein isothiocyanate (FITC) and DAPI after 30-min exposure to 1.5 ppm TPTC, compared with the control group (A) treated with 1.5% DMSO, is shown in Fig. . Carcinogenesis is a multistep process, including \"initiation,\" \"promotion,\" and \"metastasis\" (\"progression\"). The inhibition of GJIC by TPTC was independent of PKC activity but clearly dependent upon the activation of both the MAPK and PI3-kinase pathways. The loss of GJIC has also been described in cancer cells ,78. Hence, there is no evidence of a causal cross-talk between the two modulatory pathways, MAPK and PI3K. However, both PD98059 and LY294002 completely abolished the TPTC-induced downregulation of Cx43, implicating both MAPK and PI3K signaling cascades in a common mechanism of Cx regulation. It is possible that MAPK and PI3K act through a common downstream pathway, such as GSK-3 activation -86. In conclusion, the present study shows that TPTC inhibits GJIC in WB-F344 rat liver epithelial cells by altering Cx43 protein expression through the MAPK and PI3-kinase pathways. However, proving the carcinogenicity of TPTC needs further study. 
This preliminary study provides a possible mechanism for further evaluation of the toxicity of TPTC. The authors declare that they have no competing interests. CHL participated in the study design, interpretation of results, analysis, and manuscript writing. IHC participated in the study design and analysis. CRL participated in the statistical analysis and manuscript writing. CHC participated in the study design and coordination. MCT participated in the study design and coordination. JLT carried out the immunoassays, the study design, analysis and manuscript writing. HFL participated in the study design, interpretation of results and manuscript preparation. All authors read and approved the final manuscript."}
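The MTT dose-response analysis described in this record (viability versus TPTC concentration, with an LC50 near 5 ppm at 60 min) can be sketched as a curve fit. The concentrations follow the text, but the viability percentages and the two-parameter log-logistic model are illustrative assumptions, not the authors' data or code.

```python
# Illustrative sketch (not the authors' analysis): estimating an LC50
# from MTT viability data by fitting a two-parameter log-logistic
# dose-response curve. The TPTC concentrations match those listed in
# the text; the viability percentages are made-up values chosen to
# mimic the reported pattern (LC50 near 5 ppm at 60 min).
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 4.0, 5.0])       # TPTC (ppm)
viab = np.array([98.0, 95.0, 88.0, 75.0, 65.0, 57.0, 50.0])  # % of control

def loglogistic(c, lc50, hill):
    # Viability (% of untreated control) as a function of concentration;
    # equals exactly 50% when c == lc50
    return 100.0 / (1.0 + (c / lc50) ** hill)

(lc50, hill), _ = curve_fit(loglogistic, conc, viab, p0=(4.0, 1.0))
```

On these synthetic points the fitted LC50 comes out close to 5 ppm, consistent with the value reported in the text; with the real absorbance data the same fit would also yield a confidence interval via the covariance matrix returned by `curve_fit`.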
+{"text": "PolyP is synthesized in bacterial cells by the actions of polyphosphate kinases (PPK1 and PPK2) and degraded by an exopolyphosphatase (PPX). Bacterial cells with polyP deficiencies are impaired in many structural and important cellular functions such as motility, quorum sensing, biofilm formation and virulence. Knockout mutants of the Pseudomonas sp. B4), we were able to eliminate most of the cellular polyP (>95%). Furthermore, the effect of overexpression of PPX1 resembled the functional defects found in motility and biofilm formation in a ppk1 mutant from Pseudomonas aeruginosa PAO1. The plasmids constructed were also successfully replicated in other bacteria such as Escherichia coli, Burkholderia and Salmonella.As an alternative method to construct polyP-deficient bacteria we developed constitutive and regulated broad-host-range vectors for depleting the cellular polyP content. This was achieved by the overexpression of yeast exopolyphosphatase (PPX1). Using this approach in a polyphosphate accumulating bacteria (ppk genes. It is of great importance to understand why polyP deficiency affects vital cellular processes in bacteria. The construction reported in this work will be of great relevance to study the role of polyP in microorganisms with non-sequenced genomes or those in which orthologs to ppk genes have not been identified.To deplete polyP contents in bacteria broad-host-range expression vectors can be used as an alternative and more efficient method compared with the deletion of Polyphosphate (polyP) is a ubiquitous linear polymer of hundreds of orthophosphate residues (Pi) linked by \"high-energy\" phosphoanhydride bonds. The best-known enzymes involved in the metabolism of polyP in bacteria are the polyphosphate kinases (PPKs) that catalyze the reversible conversion of the terminal phosphate of ATP (or GTP) into polyP and the exopolyphosphatase (PPX) that processively hydrolyzes the terminal residues of polyP to liberate Pi ,2.E. 
coli has only PPK1 and Pseudomonas aeruginosa PAO1 contains both. Interestingly, the enzyme in charge of polyP synthesis still remains unknown in several bacteria containing the biopolymer polyP750. Reactions were stopped after incubation of the mixtures for 60 min at 65\u00b0C. After this, 4 \u03bcl was taken from each reaction mixture and loaded on polyethyleneimine-cellulose plates (Merck). For TLC, samples of 4 \u03bcl were separated in 0.75 M KH2PO4 (pH 3.5). Radioactive spots were visualized and quantified by using a Phosphorimager . One unit of enzyme was defined as the amount releasing 1 pmol of phosphate from polyP min-1.PPX activity was determined as previously described , with thMore details about the Methods employed in this work were included in the Additional File The authors declare that they have no competing interests.FCH and CAJ conceived and designed the study. FCH performed the experiments and drafted the manuscript. CM carried out some experiments. CAJ participated in coordination and funding for the study, critical evaluation and amended the manuscript. All authors read and approved the final manuscript.Construction and characterization of constitutive and regulated expression vectors for generation of polyP-deficient bacteria. Methods and Results. The data provided the methods for the construction of constitutive and regulated expression vectors to study polyP deficiency in Gram-negative bacteria as exemplified in the overexpression of exopolyphosphatase from yeast in the genus Pseudomonas.Click here for file"}
+{"text": "This paper assesses the agreement between household-level income data and an area-based income measure, and whether or not discrepancies create meaningful differences when applied in regression equations estimating total household prescription drug expenditures.Using administrative data files for the population of BC, Canada, we calculate income deciles from both area-based census data and Canada Revenue Agency validated household-level data. These deciles are then compared for misclassification. Spearman's correlation, kappa coefficients and weighted kappa coefficients are all calculated. We then assess the validity of using the area-based income measure as a proxy for household income in regression equations explaining socio-economic inequalities in total prescription drug expenditures.The variability between household-level income and area-based income is large. Only 37% of households are classified by area-based measures to be within one decile of the classification based on household-level incomes. Statistical evidence of the disagreement between income measures also indicates substantial misclassification, with Spearman's correlations, kappa coefficients and weighted kappa coefficients all indicating little agreement. The regression results show that the size of the coefficients changes considerably when area-based measures are used instead of household-level measures, and that use of area-based measures smooths out important variation across the income distribution.These results suggest that, in some contexts, the choice of area-based versus household-level income can drive conclusions in an important way. Access to reliable household-level income/socio-economic data such as the tax-validated data used in this study would unambiguously improve health research and therefore the evidence on which health and social policy would ideally rest. Measures of income are often central to health and health policy research. 
Among many potential implications, income can be a non-medical determinant of health -3. Prior studies have investigated misclassification of income and other socio-economic variables by comparing individual versus area-level survey responses for small samples of the population ,9. Our primary datasets are administrative files for the provincially administered, universal public medical and hospital health insurance program, the Medical Services Plan (MSP) of BC. This program covers virtually all 4.2 million residents of BC, excluding only those residents covered by federal health insurance programs (collectively about 4% of the population). We restrict our attention to households for which one or more members resided in BC for at least 275 days per year from 2001 to 2004, inclusive. Household income was obtained from the 2004 registration files for the provincially administered, universal public pharmaceutical insurance program, BC PharmaCare. In addition to programs for social assistance recipients and other select populations, BC PharmaCare began offering income-based public drug coverage to all residents of the province in May 2003. Terms such as deductibles and co-insurance are based on household income, with more generous but still income-based coverage offered to senior citizens (residents aged 65 and older). For all households that registered to receive coverage, the BC Ministry of Health obtains net, pre-tax income information from the Canada Revenue Agency. Because of differences in coverage offered and average needs, 95% of households with one or more senior members were registered for Fair PharmaCare in 2004, whereas only 73% of non-senior households were registered. The area-based income variables used in this study are based on linking MSP registry postal codes to average household income in the area as recorded in the 2001 Census. 
Statistics Canada collates average household income and composition for over 7,000 Census Dissemination Areas comprised of 400 to 700 persons. For research purposes, these areas are sorted by income and aggregated into 1,000 strata. Income strata contain an average of 1,700 households, with some variation due to variations in populations by postal code. Both the household level and area-based income variables are based on the same income concept, gross income prior to any deductions.Total individual expenditures on prescription drugs were obtained from BC PharmaNet. BC PharmaNet is an administrative dataset in which every prescription dispensed in the province must be entered by law\u2013it is designed to support drug dispensing, drug monitoring and claims processing. These individual expenditures were aggregated at the household level according to registration files for the MSP program to create a variable indicating total household spending on prescription drugs.The research data were extracted for this study from the British Columbia Linked Health Database and the BC PharmaNet database with permission of the BC Ministry of Health and the College of Pharmacists of BC. Ethics approval was obtained from the Behavioural Research Ethics Board at the University of British Columbia.The household-specific and area-based income measures were each aggregated into deciles (ordered from lowest to highest income). We assess the discrepancy between the two measures using the CRA validated, household-specific incomes as the standard. We calculated the Spearman's rank correlations of the various income measures, and both the kappa and weighted kappa to measure the degree of non-random agreement and partial agreement between the measures.We proceed to examine whether the choice of income measure has an impact on how pharmaceutical expenditures are distributed by income status. 
We begin by examining the distribution of prescription drug expenditures by income deciles, where the deciles are defined first according to household-level income and then according to neighbourhood-level income. As measurement error is accommodated more easily in regression analysis than in descriptive analysis, we also include a series of dummy variables for both versions of the income variable in an OLS regression in order to determine whether area-based income and household income generate meaningfully different results when applied in a research context. We perform regressions of total drug expenditures on income, with and without covariates controlling for the presence of one or more seniors in the household as well as household size. Through the comparison of coefficients between household-level income variables and area-level income variables, one can reach some conclusions about the appropriateness of substituting an area-based measure for a missing household-level variable in a regression equation. By including regressions with and without covariates, we can determine whether multivariate models influence the discrepancy between area-based and household-level variables. A total of 1.74 million households were registered for MSP and had valid postal codes for linkage with area-based income strata. This cohort accounts for 95% of the total population in the province. Of these households, 1.36 million were registered for the Fair PharmaCare program. 
Cross-tabulations of the household-level and area-based income measures, together with statistical evidence of the disagreement between them, are presented in the tables. To examine whether these discrepancies result in any meaningful differences in an applied research context, we start by examining the distribution of total prescription drug expenditures by income deciles stratified by senior and non-senior households, first using household-level CRA-validated income and then using aggregate neighbourhood-level income (Table 4). We found a substantial level of discrepancy between the area-based and household-level income measures. Using validated household income as the standard, area-based measures misclassified the income decile for eighty-five percent or more of the households in the data. We also found that these discrepancies did affect the size of coefficients in regression analyses, suggesting that very different conclusions can be reached regarding the 'same' issue depending on which income variable we use. Thus, these results indicate that, at least in some contexts, the choice of neighbourhood versus household income can drive conclusions in an important way. Our results are consistent with a large body of work indicating substantial discrepancy between area-based and household SES measures [6,8,10]. There are, however, two important caveats. The first is that our study did not examine the inclusion of income as simply one of several control variables, but rather only looked at the difference between household-level and area-level income when applied as the primary variable of interest. Thus, results cannot be extended to the use of income as a control in much larger regression equations. Second, these results are not meant to suggest that the use of neighbourhood income is inferior in all contexts.
An author particularly concerned with measuring permanent income free of yearly fluctuations may find that neighbourhood income provides a better measure. When measuring access to health care, it might also be true that low-income families living in high-income neighbourhoods have better access to care than other similar low-income families simply because of where they live. Thus, an argument could be made for including both measures in this type of work. While the level of agreement between area-based and household-level SES measures has frequently been studied, our work adds to the knowledge base for several reasons. It encompasses a larger number of Canadians: a sample of 78% of all households in British Columbia, including 95% of all senior households. Also, other studies have tended to compare area-based measures to household-level survey data [8,9]. While many authors have argued that household-level income should be used whenever possible, census-based aggregate measures will continue to be necessary for health research until household-level data become more readily available. Two suggestions can be made based on these research results. The first is that researchers should be cautious when interpreting the results of studies using aggregate measures as proxies for individual and household income. Area-based measures are approximations that are best suited to investigating major differences in incomes or to studying the context in which someone lives rather than their specific income.
The second suggestion is perhaps obvious to researchers but important for governments and statistical agencies to fully understand: access to reliable individual-level income/socio-economic data, as well as to the neighbourhood-level income data that are currently available, would unambiguously improve health research and therefore the evidence on which health and social policy would ideally rest. The author(s) declare that they have no competing interests. GH participated in conception of the study and study design, performed the statistical analysis and drafted the manuscript. SM participated in conception of the study and study design and participated in drafting the manuscript. Both authors read and approved the final manuscript. The pre-publication history for this paper can be accessed here:"}
+{"text": "Around 20% of breast cancers (BC) show ERBB2 gene amplification and overexpression of the ERBB2 tyrosine kinase receptor. They are associated with a poor prognosis but can benefit from targeted therapy. A better knowledge of these BCs, genomically and biologically heterogeneous, may help understand their behavior and design new therapeutic strategies. We defined the high resolution genome and gene expression profiles of 54 ERBB2-amplified BCs using 244K oligonucleotide array-comparative genomic hybridization and whole-genome DNA microarrays. Expression of ERBB2, phosphorylated ERBB2, EGFR, IGF1R and FOXA1 proteins was assessed by immunohistochemistry to evaluate the functional ERBB2 status and identify co-expressions. First, we identified the ERBB2-C17orf37-GRB7 genomic segment as the minimal common 17q12-q21 amplicon, and CRKRS and IKZF3 as the most frequent centromeric and telomeric amplicon borders, respectively. Second, GISTIC analysis identified 17 other genome regions affected by copy number aberration (CNA). The expression of 37 genes in these regions was deregulated. Third, two types of heterogeneity were observed in ERBB2-amplified BCs. The genomic profiles of estrogen receptor-positive (ER+) and negative (ER-) ERBB2-amplified BCs were different. The WNT/\u03b2-catenin signaling pathway was involved in ER- ERBB2-amplified BCs, and PVT1 and TRPS1 were candidate oncogenes associated with ER+ ERBB2-amplified BCs. The size of the ERBB2 amplicon was different in inflammatory (IBC) and non-inflammatory BCs. ERBB2-amplified IBCs were characterized by the downregulated and upregulated mRNA expression of ten and two genes in proportion to CNA, respectively.
IHC results showed (i) a linear relationship between ERBB2 gene amplification and its gene and protein expression, with a good correlation between ERBB2 expression and phosphorylation status; (ii) a potential signaling cross-talk between EGFR or IGF1R and ERBB2, which could influence the response of ERBB2-positive BCs to inhibitors. FOXA1 was frequently coexpressed with ERBB2, but its expression did not impact the outcome of patients with ERBB2-amplified tumors. We have shown that ER+ and ER- ERBB2-amplified BCs are different, distinguished ERBB2 amplicons in IBC and non-IBC, and identified genomic features that may be useful in the design of alternative therapeutic strategies. ERBB2 encodes a transmembrane tyrosine kinase receptor of the ERBB/EGFR family. ERBB2 is amplified in around 20% of BCs. The receptor is overexpressed in most amplified cases and in some non-amplified cases as well. This alteration is associated with a poor clinical outcome. BCs with ERBB2 overexpression can benefit from a targeted therapy that uses the humanized monoclonal antibody trastuzumab or the ERBB kinase inhibitor lapatinib. Table S5 - Definition of the ERBB2 amplicon score. Table S6A - Significantly altered regions found in the 54 samples harboring the 17q12-q21 amplification (defined by the score index with a threshold of 10^-3). Table S6B - Gene expression deregulation frequencies of genes included in the ERBB2 amplicon. Table S7A - Integrated genome analysis of ER- and ER+ ERBB2-amplified tumors. Table S7B - Clinical and histological features of the two clustered ERBB2-amplified tumor groups defined using gene expression data. Table S7C - Genes with expression significantly different in ER- and ER+ ERBB2-amplified tumors. Table S7D - Canonical pathways associated with ER+ and ER- expression signatures in ERBB2-amplified BCs. Table S7E - Regions significantly altered by CNA in ERBB2-amplified IBC and NIBC.
Table S8 - Transversal analysis of ERBB2-amplified BCs. Table S9A - Clinical features and protein expression analysis of ERBB2-amplified BCs. Table S9B - Clinical features and protein expression analysis of ER+ and ER- ERBB2-amplified BCs. Supplementary Material. Figure S1: Genomic profiles of chromosome 17 in ERBB2-amplified primary breast tumors and breast cancer cell lines. A-C - Regional 17q12-q21 amplification centered on the ERBB2 locus observed in the 54 studied BCs. S1A and S1B-C show genomic profiles of chromosome 17 established with CGH analytics\u00ae software (Agilent Technologies) in IBC and NIBC samples, respectively. The 17q12-q21 amplification (log2 ratio >1) was found as a single abnormality or associated with various other copy number aberrations along chromosome 17. The arrow indicates the 17q12-q21 amplicon centered on the ERBB2 locus. D - Regional 17q12-q21 amplification centered on the ERBB2 locus observed in the 14 studied breast cancer cell lines. Genomic profiles of chromosome 17 were established as defined in the Additional file. Figure S2: Whole-genome expression profiling of ERBB2-amplified BCs. A - Hierarchical clustering of 51 samples and 13,114 genes/ESTs with significant variation in mRNA expression level across the samples. Each row of the data matrix represents a gene and each column represents a sample. Expression levels are depicted according to the color scale shown at the bottom. Red and green indicate expression levels respectively above and below the median. The magnitude of deviation from the median is represented by the color saturation. The dendrogram of samples (above matrices) represents overall similarities in gene expression profiles and is zoomed in B. B - Dendrograms of samples.
Top, two large groups of tissue samples (designated I and II) are evidenced by clustering and delimited by the orange solid vertical line, encoded by genes associated with the ER+/ER- ERBB2-amplified BC molecular signature (Additional file)."}
+{"text": "Out-of-frame stop codons (OSCs) occur naturally in the coding sequences of all organisms, providing a mechanism of early termination of translation in an incorrect reading frame so that the metabolic cost associated with frameshift events can be reduced. Given such functional significance, we expect statistically overrepresented OSCs in coding sequences as a result of widespread selection. Accordingly, we examined available prokaryotic genomes to look for evidence of this selection. The complete genome sequences of 990 prokaryotes were obtained from NCBI GenBank. We found that low G+C content coding sequences contain significantly more OSCs, and that G+C content at specific codon positions is the principal determinant of OSC usage bias in the different reading frames. To investigate if there is overrepresentation of OSCs, we modeled the trinucleotide and hexanucleotide biases of the coding sequences using Markov models, and calculated the expected OSC frequencies for each organism using a Monte Carlo approach. More than 93% of 342 phylogenetically representative prokaryotic genomes contain excess OSCs. Interestingly, the degree of OSC overrepresentation correlates positively with G+C content, which may represent a compensatory mechanism for the negative correlation of OSC frequency with G+C content. We extended the analysis using additional compositional bias models and showed that lower-order biases like codon usage and dipeptide bias could not explain the OSC overrepresentation. The degree of OSC overrepresentation was found to correlate negatively with the optimal growth temperature of the organism after correcting for the G+C% and AT skew of the coding sequence. The present study uses approaches with statistical rigor to show that OSC overrepresentation is a widespread phenomenon among prokaryotes.
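The Monte Carlo comparison of observed versus expected OSC counts can be sketched as follows. As a simplification, the null model here shuffles codons (which preserves codon usage exactly) rather than sampling from the fitted three-periodic Markov models the study actually used; the sequence and function names are illustrative.

```python
# Observed vs. expected out-of-frame stop codons (OSCs) under a
# codon-usage-preserving null model. Sketch only; the study used
# fitted three-periodic Markov models instead of codon shuffling.
import random

STOPS = {"TAA", "TAG", "TGA"}

def count_oscs(seq):
    """Stop triplets read in the +2 and +3 frames (offsets 1 and 2) of a CDS;
    in-frame (offset 0) stops are not counted."""
    return sum(seq[i:i + 3] in STOPS
               for f in (1, 2)
               for i in range(f, len(seq) - 2, 3))

def expected_oscs(seq, n_sim=200, seed=1):
    """Mean OSC count over sequences built from the same codons in
    shuffled order (a simple null preserving codon usage)."""
    rng = random.Random(seed)
    codons = [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]
    total = 0
    for _ in range(n_sim):
        rng.shuffle(codons)
        total += count_oscs("".join(codons))
    return total / n_sim

cds = "ATGGCTCGGTGGCTTGCAACCGGTTGA"  # toy CDS (terminal stop included)
excess = count_oscs(cds) - expected_oscs(cds)
```

A positive `excess`, aggregated over all genes of a genome and assessed against the spread of the simulated counts, corresponds to the OSC overrepresentation tested in the paper.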
Our results support the hypothesis that OSCs carry functional significance and have been selected in the course of genome evolution to act against unintended frameshift occurrences. Some results also hint that OSC overrepresentation is a compensatory mechanism to make up for the decrease in OSCs in high G+C organisms, thus revealing the interplay between two different determinants of OSC frequency. The biased codon usage in many genomes is generally believed to result from selection for maximizing translational speed and/or accuracy -3, although the ambush hypothesis proposes an additional force: OSCs could reduce the metabolic costs of accidental frameshifts, and a positive correlation between the usage of codons and the number of ways codons can be part of hidden stops is expected. OSC frequency also depends on the base composition (G+C and [AT] content) of coding sequences. In the +2 reading frame, OSCs correspond to the pattern TN[GA|AA|AG]NN in the protein coding frame. As the third codon position is the most variable codon position due to degeneracy of the genetic code, variation of G+C content at the third codon position (GC3) should have the greatest effect on OSC usage bias for frame +2. As expected, relative frequencies of TAA and TAG for frame +2 decreased with increasing GC3 while that of TGA increased. OSC usage in frame +3, in contrast, is determined not by the third codon position but by the first and second codon positions of the second half of the dicodon. Regression analysis confirmed that GC1 is the dominant independent regressor. The majority of genomes showed overrepresentation of OSCs in the +2 frame when compared to frequencies predicted by the second-order three-periodic Markov model. This model accounted for the codon position-specific trinucleotide bias, including codon usage bias. Only 3 genomes (0.9%) showed a statistically significant underrepresentation of OSCs in the +2 frame under the same model. When compared to the OSC frequencies predicted by the fifth-order three-periodic Markov model, 185 genomes (54.1%) still showed statistically significant overrepresentation of OSCs in the +2 frame, while 26 genomes (7.6%) showed underrepresentation of OSCs in the same frame.
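The codon-position G+C computation and the per-frame stop-codon tallies that feed this regression analysis can be sketched in a few lines; the helper names and the example sequence are illustrative, not from the study's pipeline.

```python
# GC at a given codon position and the TAA/TAG/TGA spectrum per alternate
# reading frame, for an in-frame coding sequence made of whole codons.

def gc_at_position(seq, pos):
    """G+C fraction at codon position pos (1, 2 or 3), e.g. pos=3 gives GC3."""
    bases = seq[pos - 1::3]                     # every 3rd base from offset
    return sum(b in "GC" for b in bases) / len(bases)

def osc_spectrum(seq, frame):
    """Counts of each stop triplet read in frame +2 or +3 (offset 1 or 2)."""
    counts = {"TAA": 0, "TAG": 0, "TGA": 0}
    for i in range(frame - 1, len(seq) - 2, 3):
        tri = seq[i:i + 3]
        if tri in counts:
            counts[tri] += 1
    return counts

cds = "ATGTTTGGTTACCCGTTGGCTACATAA"  # toy in-frame CDS
gc3 = gc_at_position(cds, 3)
plus2 = osc_spectrum(cds, 2)
```

Computing `gc3` and the relative TAA/TAG/TGA frequencies across many genomes, then regressing one on the other, reproduces the kind of dependence described above (TAA/TAG falling and TGA rising with GC3 in frame +2).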
For the +3 frame, 57 and 53 genomes (16.7% and 15.5%) showed overrepresentation and underrepresentation of OSCs, respectively, when compared to frequencies predicted by the second-order three-periodic Markov model. When examined under the fifth-order three-periodic Markov model, the number of genomes with OSC overrepresentation in the +3 frame greatly increased to 306 (89.5%), while only 4 (1.2%) showed underrepresentation. When both alternate reading frames were considered together, 339 genomes (99.1%) showed OSC overrepresentation under the second-order three-periodic Markov model, while 319 genomes (93.3%) showed OSC overrepresentation under the fifth-order three-periodic Markov model. The percentage deviations of the observed from mean expected OSCs were found to range from -0.0343% to +5.69% and -0.111% to +0.616% under the second- and fifth-order three-periodic Markov models, respectively. There are significant positive correlations between G+C content and the degrees of OSC overrepresentation under both models (Spearman's rank correlation coefficient = 0.838 and 0.630 for the second- and fifth-order three-periodic Markov models, respectively; Figures 7A and 7B). Mixed genome analysis was conducted with 8 artificial metagenomes with sizes ranging from 5.2 to 12.2 MB. OSC overrepresentation was found in all cases, ranging from 0.084 to 0.841%, under both the second- and fifth-order three-periodic Markov models. These results indirectly suggest that the phenomenon of OSC overrepresentation is robust to distant horizontal gene transfer, and should apply to presently uncharacterized genomes that may have arisen from extensive horizontal gene transfer, with significant sequence compositional diversity and phylogenetic incongruence ,29. The expected frequencies of OSC occurrences in the selected genomes under the different models are shown in the corresponding table. All the simpler Markov models chosen were nested within the more complex models.
The relationship between OSC overrepresentation and optimal growth temperature was also supported by stepwise variable selection on the multiple linear regression model using the Akaike information criterion. Prokaryotes included in our analysis were classified into one of the following 4 categories: psychrophiles, mesophiles, thermophiles and hyperthermophiles. The degree of OSC overrepresentation was found to correlate negatively with the optimal growth temperature of the organism after correcting for the G+C% and AT skew of the coding sequence. Supplementary figures: Figure S1, on the relative codon frequencies of off-frame stop codons (OSC); Figure S2, biplot of the PCA results; Figure S3, relationship of the Markov models used. Results from Monte Carlo simulations of OSC frequencies in prokaryotic genomes: Excel file containing simulated OSC frequencies under different Markov models from each Monte Carlo run."}
+{"text": "Despite decades of efforts to improve quality of health care, poor performance persists in many aspects of care. Less than 1% of the enormous national investment in medical research is focused on improving health care delivery. Furthermore, when effective innovations in clinical care are discovered, uptake of these innovations is often delayed and incomplete. In this paper, we build on the established principle of 'positive deviance' to propose an approach to identifying practices that improve health care quality. We synthesize existing literature on positive deviance, describe major alternative approaches, propose benefits and limitations of a positive deviance approach for research directed toward improving quality of health care, and describe an application of this approach in improving hospital care for patients with acute myocardial infarction. The steps of the approach are: identify 'positive deviants,' i.e., organizations that consistently demonstrate exceptionally high performance in the area of interest; study the organizations in-depth using qualitative methods to generate hypotheses about practices that allow organizations to achieve top performance; test hypotheses statistically in larger, representative samples of organizations; and work in partnership with key stakeholders, including potential adopters, to disseminate the evidence about newly characterized best practices. The approach is particularly appropriate in situations where organizations can be ranked reliably based on valid performance measures, where there is substantial natural variation in performance within an industry, where openness about practices to achieve exceptional performance exists, and where there is an engaged constituency to promote uptake of discovered practices. The positive deviance approach, as adapted for use in health care, presumes that the knowledge about 'what works' is available in existing organizations that demonstrate consistently exceptional performance.
The identification and examination of health care organizations that demonstrate positive deviance provide an opportunity to characterize and disseminate strategies for improving quality. Despite decades of efforts to improve quality of health care, poor performance persists in many aspects of care. Patients often do not receive guideline-recommended processes of care -3, and performance often falls short on measurable outcomes (e.g., survival rates, medication use, and timely emergency treatment). We describe an approach to quality of care research that identifies innovative strategies from 'positive deviants' in health care, those organizations that consistently demonstrate exceptionally high performance in an area of interest; studies the organizations in-depth using qualitative methods to generate hypotheses about practices that enable organizations to achieve top performance; tests hypotheses statistically in larger, representative samples of organizations; and works in partnership with key stakeholders, including potential adopters, to disseminate the evidence about newly characterized best practices. The positive deviance approach accomplishes two goals: the identification of practices that are associated with top performance, and the promotion of the uptake of these practices within an industry, using the steps outlined above (Figure). When should one consider using a positive deviance approach to identify and disseminate best practices in health care organizations? First, the approach requires concrete, widely endorsed, and accessible performance measures for organizations. For instance, in the case of hospital care, there are several specific, validated, and publicly-reported performance measures; therefore, hospitals can be ranked according to performance, and positive deviants within the industry can be identified.
In contrast, there are no publicly accessible data on performance measures for many health care conditions, such as treatment of children with fevers or hospital falls among the elderly. Positive deviance studies in these areas would therefore be difficult to accomplish. Second, the positive deviance approach works when there is variation in organizational performance and outcomes across the industry, with some organizations achieving marked and consistent top performance and other organizations not doing so, i.e., there are positive deviants. Additionally, the approach is effective when organizations are adequately open to sharing their strategies for exceptional performance. In cases where organizations are highly proprietary and resistant to sharing what might be viewed as competitive advantages or 'trade secrets,' the positive deviance approach is unlikely to produce meaningful results. Third, the approach is effective when hypotheses generated from the experience of top performing organizations can be tested in larger, representative samples. Evidence from statistical testing is particularly useful when disseminating findings to health care organizations because clinicians, whose support is often fundamental to successful changes in clinical processes ,22, are a key audience. Finally, for potential adopting organizations, the perceived importance of improvement on the selected performance measure can enhance effective dissemination. Involving potential adopters in the development and testing of a particular practice can also accelerate the pace and scope of uptake by increasing the fit of the practice with the organizational context. Studies using positive deviance begin with purposive sampling, with the goal of selecting organizations based on diversity of performance, with adequate representation of organizations with exceptional performance; sampling continues until thematic saturation, i.e., when successive sampling does not produce additional hypotheses.
As is standard in purposive sampling for qualitative studies, the sample size is determined by saturation. The sampling strategy for the next stage of a positive deviance study, in which one statistically tests hypotheses generated from the qualitative study, employs methods for quantitative investigation. The goal is to sample the universe of relevant organizations in order to attain a large, representative sample of the industry to which one is generalizing, thereby permitting valid and precise inferences from subsequent statistical analysis. Sample size is determined by considerations of statistical power and desired level of precision. The in-depth examinations of organizations require open-ended, qualitative data collection methods that explore both the specific strategies taken by organizations and the broader context in which such strategies are employed ,25. This permits integrating organizational context (e.g., concepts of organizational culture, norms of behavior, inter-group relations) into the understanding of 'what works' or best practices. This integration is often neglected in randomized controlled trials and difficult to measure in quantitative studies. Data collection may include observations, in-depth interviews and focus groups with staff, archival reviews of documents from the organization, or a combination of these methods, with the goal of developing a deep understanding of the organization and how it functions relative to the particular performance measures. A particular challenge is linking the qualitative findings (i.e., hypotheses) and the quantitative measures of those variables hypothesized to influence performance.
A core challenge and opportunity in positive deviance studies, and a benefit of the mixed methods approach, is the linking of the qualitative findings to quantitative measures that are statistically related to an outcome. Biomedical or epidemiologic outcomes research focuses on developing an evidence base through quantitative measurement and statistical examination of a variety of predictors or correlates of an identified outcome; such models ,36 can be applied in this step. The advantage of this approach to identifying best practices is that the production of statistical associations is often based on the experience of a large sample of organizations and, particularly for health care, produced in a language and with methods that are credible to physicians, whose involvement is often important for successful adoption and implementation of best practices by a health care organization. However, such research rarely examines how differences in organizational context (e.g., differences in monitoring functions, reward systems, leadership styles) might influence the impact of various practices on the outcome. Furthermore, such studies do not delve into the variation within the intervention or non-intervention arms of trials to understand how organizational context might influence the success of the intervention. As a result, while such trials produce useful data, they do not provide insight into how organizational features such as inter-group relations, leadership, and culture might influence the impact of the intervention on performance. Furthermore, the organizations in which such studies are conducted may be systematically different from most. Although generalizability is a concern for any type of research, organizations that participate in randomized and controlled trials may be particularly distinct (often large teaching or research facilities) from potential adopting organizations.
In summary, such studies can provide credible statistical evidence, particularly if they are integrated in the hypothesis testing step of positive deviance studies; however, used in isolation, such studies may oversimplify recommendations for best practices with inadequate attention to the subtleties of implementation, thereby slowing their translation into practice and widespread uptake. Conversely, a disadvantage of this approach is that it typically neglects the complexity of organizational context, which is problematic given that organizational factors can be important barriers to implementation of innovative practices or programs -39. Quality improvement and action research, as applied to organizations, both focus on developing best practices within focal organizations. The approaches recognize the importance of organizational context and share the goal of developing best practices for the selected organization. Quality improvement ,40 proceeds through iterative cycles of experimentation and data feedback within the organization. There are strengths to these approaches, which have been shown to improve targeted administrative and clinical performance measures in health care ,42. However, there are also important limitations to consider. The process of development of best practices in these approaches is informed typically by a very small sample of organizations, even a single organization or unit within an organization. Particularly for action research, solutions are developed within and for a selected organization; these solutions may not be amenable to widespread dissemination, thus limiting opportunities for large-scale change. In addition, these approaches neglect potential extant knowledge among other organizations that have previously attained top performance, which is not integrated into the quality improvement or action research efforts.
Finally, neither quality improvement nor action research has an explicit goal of disseminating the knowledge gained to the larger community or industry. The positive deviance approach integrates some of the strengths of each of these approaches by combining intensive organizational-level examination using qualitative methods with the broader-scale statistical analysis possible with a large sample of organizations. The positive deviance approach allows for the explicit integration of real-life implementation issues and organizational context because it seeks to characterize not just what processes and practices are present in top performing organizations but also the context (e.g., organizational culture, leadership support, norms of behavior) in which they are implemented. These practices are characterized by extracting common themes or hypotheses based on several, rather than single, organizational settings where the proof of concept exists. This attention to organizational context is particularly important for complex, adaptive organizations. For some aspects of context, it may be difficult to create valid, quantitative measures; in such cases, evidence may come solely from qualitative studies, which may not have credibility among certain individuals who are central to successful uptake and implementation. Furthermore, relative to quality improvement and action research efforts, the positive deviance approach focuses on organizations learning from external sources rather than internal process improvement efforts. Consequently, staff members of adopting organizations may not achieve the same level of learning and investment as they might if they were to develop best practices themselves.
Despite these strengths of the positive deviance approach, there are limitations relative to the other approaches. In some but not all cases, positive deviance studies may rely on self-reports of organizational practices rather than procedures of a controlled trial, which may result in reporting bias, although established survey methods can be used to limit measurement error -49. Nevertheless, even if the practice originates from outside the focal unit or organization, its adoption into a new organization typically requires adaptation to local circumstances in which staff must engage and hence learn. Finally, the positive deviance approach does not replace de novo discovery efforts that periodically can fully shift the paradigm of an industry in ways not possible through the study of only positive deviance. Ultimately, there are two major differences between the positive deviance approach and a quality improvement or action research approach. First, in positive deviance approaches, the best practices are assumed to already exist; they are not built de novo through a quality improvement or action research cycle of inquiry. Second, the source of best practices differs. Whereas quality improvement methods seek to discover through experimentation and data feedback within the organization, the positive deviance approach focuses on learning from exceptional examples of extant performance external to the focal unit or organization. Promoting wide dissemination of best practices, particularly among health care organizations, has been the subject of expansive theoretical inquiry. The positive deviance approach to identification and dissemination of best practices employs some of the key features thought in the theoretical literature ,45,52,53 to speed diffusion, or spread. We used a positive deviance approach in our recent efforts to improve hospital care for patients with acute myocardial infarction.
In the span of three years, the proportion of patients whose care met the targeted national guidelines for timeliness of care for ST-segment elevation myocardial infarction increased from about 50% to more than 75% of patients. The process reveals the potential of the positive deviance approach for identifying and disseminating best practices in order to accelerate whole-system change. Prompt treatment is critical for survival of patients with ST-segment elevation myocardial infarction -60. As of 2004 to 2005, less than one half of patients received care that met the national target of door-to-balloon times within 90 minutes. Furthermore, performance had remained stagnant for several years with little improvement, despite broad awareness of the problem. Nevertheless, some hospitals consistently achieved top performance, thus illustrating the premise of positive deviance. We used the National Registry of Myocardial Infarction, a patient-level registry, to identify hospitals with exceptional door-to-balloon time performance. We conducted in-depth site visits comprising tours and open-ended interviews with all staff identified by the hospital as being involved with door-to-balloon time improvement efforts. This varied by hospital but typically included cardiologists; emergency medicine physicians; nurses from the catheterization laboratory where PCI is performed; the emergency department; quality improvement units; technicians and technologists from various departments; emergency medical services staff, including ambulance staff; and senior and middle-level administrators. We interviewed a total of 122 staff members to understand their perspectives and experiences in improving door-to-balloon time at their hospitals. All qualitative data were coded by a team with diverse backgrounds (i.e., clinical medicine, nursing, quality improvement, health services research, and management), including the two people who were present on the site visit as well as two researchers who participated in analysis of all data. Coded data were organized and further analyzed for recurrent and unifying themes using NUD*IST 4 (Sage Publications Software, now replaced by NVivo 8). We identified a set of specific strategies associated with shorter door-to-balloon times.
Researchers with diverse clinical and non-clinical backgrounds conducted the interviews in teams of two. After appropriate consent and institutional review board approval, interviews were audio-taped and transcribed by a professional, external transcription service. Interview teams underwent a formal debriefing with an organizational psychologist, and these sessions also were tape-recorded and summarized to identify possible additions to subsequent interviews and insights pertinent to the particular visit. All qualitative data, including the transcriptions of interviews and notes from the visits, were analyzed using the constant comparative method of qualitative data analysis [69,70]. We identified a set of specific strategies and then discussed how best to disseminate the findings. The selected vehicle for dissemination was the door-to-balloon (D2B) Alliance campaign.

The D2B Alliance made available a change packet and toolkit, held webinars, published newsletters of success stories, facilitated workshops at the ACC and AHA annual meetings, and managed an online community. All of these activities were open regardless of enrollment status, although all hospitals that were formally enrolled completed a web-based survey at the time of enrollment and approximately one year later to evaluate their changes in strategies adopted and reported physician and management support for their quality improvement efforts.

Several features of the D2B Alliance were developed to be consistent with the theoretical literature on diffusion, or spread, of innovations. In terms of alignment with the external environment, the D2B Alliance efforts occurred in a broader environment that was also promoting improvements in door-to-balloon time. 
The Centers for Medicare & Medicaid Services was beginning to report hospital achievement of door-to-balloon times of 90 minutes or less and to include modest financial incentives for meeting performance targets; the professional organizations, responding to peer-reviewed literature on the clinical importance of door-to-balloon time, were supportive of improvement efforts; and physicians seeking re-certification through the American Board of Internal Medicine could use participation in the D2B Alliance activities as evidence of their quality improvement efforts.

Ultimately, approximately 1,000 of the 1,400 US hospitals that perform primary PCI enrolled with the D2B Alliance, a 70% penetration rate in the industry. Survey data indicate that there has been a significant increase since 2006 in the use of the recommended strategies among enrolled hospitals (unpublished data), and data from before and after the D2B Alliance show significant three-year improvement in door-to-balloon times.

The positive deviance approach holds much promise for improving practice. It takes advantage of natural variation in performance, develops an evidence base through detailed organizational analysis and statistical testing of hypotheses, and supports collaboration between researcher and practitioner in ways that identify feasible solutions and foster support for dissemination and uptake of recommendations. Practitioners and organizations can take advantage of positive deviance by identifying top performance within units of the organization or in other organizations, and by fostering examination and discussion of such performance in order to elevate performance in other areas. 
Barriers to its use may include competition between units within a single organization or between organizations, such that secrets of success are not readily shared; structural separation of units, so that information does not flow easily; or workforce issues, in that employees do not see others' experience as adequately relevant to their own.

The case study illustrates the key steps in applying positive deviance methodology to improving hospital care for myocardial infarction and also highlights circumstances in which the positive deviance method may be most useful. First, in the case of door-to-balloon time, there was a concrete and widely-endorsed indicator of organizational performance. Second, the indicator could be assessed reliably for multiple organizations using existing data from national registries of patients with acute myocardial infarction and the national public reporting system for hospital quality. Third, substantial variation in hospital performance was apparent, with some exceptional performers but many that did not meet national guidelines. Fourth, organizations were willing to share their experiences openly to help produce needed evidence for how to improve performance. Finally, there was substantial impetus from both clinical and management staff to reduce door-to-balloon time. Reducing door-to-balloon times both benefited patient survival and enhanced organizational standing in a competitive, profitable market for which hospital performance was publicly reported. Together, these features created an ideal opportunity for using the positive deviance approach to identify and disseminate innovations to improve quality of care.

The gap between what we know and what we do is well-documented [74].

The authors declare that they have no competing interests.

EHB is the lead author and the corresponding author of the paper. LAC, SR, LR, IMN, and HMK co-wrote the paper and approved the final draft of the manuscript."}
+{"text": "Alterations of receptor-type tyrosine kinases (RTK) are frequent in human cancers. They can result from translocation, mutation or amplification. The ERBB2 RTK is encoded by a gene that is amplified in about 20% of breast cancers. The question is: why is this RTK specifically subjected to this type of alteration? We propose that ERBB2 gene amplification is used to overcome repression of its expression by sequence-specific transcription factors.

Receptor-type tyrosine kinases (RTK) are major regulators of cellular processes. As such they are often mutated in human cancers. Several types of alterations have been characterized: translocations, amplifications and mutations affect RTK genes in various types of tumors. One of the earliest reports of RTK alteration in human cancer was issued more than twenty years ago. It described the amplification of the ERBB2 RTK gene in a good proportion of breast cancers, and it initiated a still ongoing search for RTK alterations in human tumors. This search has registered a recent success in neuroblastoma with the ALK RTK gene, which is also translocated and fused to various partner genes in lymphomas and non-small cell lung cancer. While RTK alterations are central to many malignant diseases such as thyroid, lung and breast cancers, a major question remains: what determines the mechanism of alteration of an RTK oncogene?

Translocation with fusion may be necessary to both activate the tyrosine kinase and express the oncogenic enzyme in a given tissue or cell. The partner gene will provide the appropriate promoter, dimerization motifs and protein subcellular localization. Mutation is an obvious way of constitutive activation of a kinase. Amplification of the mutated gene, as observed for ALK in neuroblastoma, enhances this effect. But why are some RTK genes such as ERBB2 amplified without mutation or rearrangement? We would like to propose an explanation.

A series of recent works has shed new light on the regulation of ERBB2 in mammary epithelial cells. Expression of this gene is apparently tightly controlled by a number of transcriptional repressors. FOXP3 represses ERBB2 expression and acts as a tumor suppressor when inactivated. Other sequence-specific DNA-binding proteins, including an ETS factor and a Y-box factor that represses ERBB2 in a cell-density-dependent manner, have been described so far. It remains to determine whether some, if not all, of these sequence-specific DNA-binding proteins share a common cofactor such as the CTBP corepressor, and how they act on the ERBB2 promoter.

Thus, ERBB2 by-default expression is quenched by strong repressors. We hypothesize that amplification of the ERBB2 gene and its cognate non-coding regulatory sequences titrates out these repressors, uncovering a permanent proliferative effect of the ectopically-expressed ERBB2 protein in the progenitors of ER-positive cells. ERBB2 overexpression could in turn shut down more or less tightly ER expression.

RTK genes can display different oncogenic alterations. Titration of sequence-specific repressors or corepressors could be the mechanism at stake in other cases of RTK amplification. It could also take place in cases of non-RTK gene amplification without mutation. However, in many of these cases (e.g. cyclin D1 or cyclin E in breast cancers), amplification may be the alteration of choice simply because the oncogenic product is the overexpressed normal protein and mutation will not do. 
Acquisition of a new promoter by translocation and gene fusion would also free an RTK oncogene from its natural repressors but has different effects from amplification; it could modify signaling pathways and/or target different cells; in addition, constitutive dimerization and activation could bypass other regulatory controls.

In the same line of reasoning, accumulation of gene copies could be a mechanism to escape negative control by microRNAs or any other type of inhibition. For example, amplification might also titrate out methylases to turn on the ERBB2 promoter.

The biology of transcriptional repressors will have clinical use. First, knowledge of repressor status may help prognosis assessment and selection of patients for appropriate treatment. Second, modulating ERBB2 gene expression with specific drugs could synergize with anti-receptor or anti-kinase therapy; such drugs should aim at restoring ERBB2 repression or inhibiting ERBB2 transcription. Chimeric proteins with ERBB2-downregulating activity have been tested in cell lines, and a clinical trial targeting ERBB2 expression has been launched in breast cancer. Transcription factors that regulate both ERBB2 and ER are emerging as promising therapeutic targets, and large-scale screens for modulators of ERBB2 expression and its interplay with ER could yield interesting molecules. A better knowledge of the ERBB2 promoter and associated transcription factors will probably help find new targets and design new strategies.

The mechanism of oncogenesis involving an RTK may give a clue as to what kind of cell is targeted. A mutated RTK may trigger oncogenic transformation in a cell where it is normally expressed, using the same signaling pathway but in a permanent fashion. An amplified RTK could trigger oncogenesis in a cell where it is normally repressed (in the case of ERBB2 it could be an ER-positive progenitor cell) if the reason for the amplification is to remove transcriptional repression, or in a cell where it is normally expressed if the reason is to raise the level of protein made. Amplification of other RTK genes such as EGFR, FGFR1, FGFR2 and IGF1R occurs in various subtypes of breast cancers. ERBB2 is associated with stem cell biology in the mammary gland and breast cancer.

Finally, the titration of ERBB2 repressors by ERBB2 promoter amplification may free other genes from repression by the same transcription factors. ERBB2 amplification would thus have consequences outside the activated signaling pathway of the receptor itself. Some of these \"liberated\" genes might be found upregulated in gene expression analyses of ERBB2-amplified tumors. This may help explain why ERBB2 amplification is associated with a bona fide breast cancer subtype.

The authors declare that they have no competing interests.

The hypothesis came into view during discussions between the three authors."}
+{"text": "The type I insulin-like growth factor receptor (IGF-IR) and ErbB2 (Her-2) are receptor tyrosine kinases implicated in human breast cancer. Both proteins are currently the subject of targeted therapeutics that are used in the treatment of breast cancer or which are in clinical trials. The focus of this study was to utilize our inducible model of IGF-IR overexpression to explore the interaction of these two potent oncogenes.

ErbB2 was overexpressed in our RM11A cell line, a murine tumor cell line that overexpresses human IGF-IR in an inducible manner. ErbB2 conferred an accelerated tumor onset and increased tumor incidence after injection of RM11A cells into the mammary glands of syngeneic wild type mice. This was associated with increased proliferation immediately after tumor cell colonization of the mammary gland; however, this effect was lost after tumor establishment. ErbB2 overexpression also impaired the regression of established RM11A tumors following IGF-IR downregulation and enhanced their metastatic potential.

This study has revealed that even in the presence of vast IGF-IR overexpression, a modest increase in ErbB2 can augment tumor establishment in vivo, mediate resistance to IGF-IR downregulation and facilitate metastasis. This supports the growing evidence suggesting a possible advantage of using IGF-IR and ErbB2-directed therapies concurrently in the treatment of breast cancer.

Receptor tyrosine kinases (RTKs) are transmembrane proteins with intracellular kinase domains that undergo phosphorylation in response to ligand binding. This group of proteins has a well established role in breast cancer, and thus many RTKs are currently the focus of directed therapeutics, with a significant number of these therapies in clinical trials. Two such proteins with validated roles in breast cancer are ErbB2, a member of the epidermal growth factor receptor family, and the type I insulin-like growth factor receptor (IGF-IR). 
A large amount of evidence implicating both in clinical breast cancer is emerging. In addition, both receptors have been validated as oncogenes through the generation and characterization of transgenic mouse models (reviewed in [2]).

The IGF-IR undergoes autophosphorylation on conserved intracellular tyrosine residues after binding its ligands IGF-I and IGF-II, which subsequently triggers signal cascades involved in many processes including proliferation and evasion of apoptosis.

There is a growing body of evidence suggesting an interaction between the IGF-IR and ErbB2 in clinical breast cancer. Different studies have shown a physical interaction between the two receptors through immunoprecipitation.

To examine the role of the IGF-IR in breast cancer, our lab has previously created a doxycycline-inducible transgenic mouse model (MTB-IGFIR). IGF-IR-induced transgenic animals develop multiple tumors with 100% penetrance and an average latency of approximately 50 d. In vivo, RM11A cells, derived from one of these tumors, were shown to form tumors upon injection into the mammary gland of syngeneic, wild type, FVB mice. Because of its inducible nature, our model can be used to mimic the effects of IGF-IR-directed therapies through the deactivation of the transgene, and therefore provides a unique opportunity to study the potential function of other known oncogenes during IGF-IR-mediated mammary tumorigenesis. As a number of IGF-IR inhibitory compounds are currently in clinical trials [44], this makes the model particularly relevant.

In this study we examined growth in vitro and in vivo, cell signaling, primary tumorigenesis, recurrence in the absence of IGF-IR transgene expression, and metastasis. It was determined that a modest increase in ErbB2 expression could accelerate primary tumor growth by enhancing proliferation immediately after cell colonization of the mammary gland. 
Overexpression of ErbB2 also impaired regression of tumors in the absence of IGF-IR transgene expression and facilitated metastasis. It has been observed that ErbB2 overexpression can alleviate the requirements of IGF and EGF for proliferation in a series of human normal and breast cancer cell lines.

A wild type rat ErbB2 (neu) expression construct was a gift. This ErbB2 expression plasmid was referred to as pEN1-ErbB2, while the empty vector was referred to as pEN1. Plasmid DNA was purified using a Qiagen mini-prep kit in accordance with the manufacturer's instructions.

Western blotting was performed as previously described. The anti-ErbB2 and anti-phospho-ErbB2 primary antibodies were both used at a concentration of 1:250; anti-phospho-Akt, anti-Akt, anti-phospho-Erk1/2 and anti-Erk1/2 were all used at a dilution of 1:1,000, as was anti-IGF-IR; and anti-\u03b2-actin was used at 1:2,000. The secondary antibody was anti-rabbit IgG, used at a dilution of 1:2,000. Densitometry of the bands was quantified using a FluorChem 9900 imaging system and AlphaEaseFC software version 3.1.2. Densitometry values were normalized to those of the loading control, \u03b2-actin, and these normalized numbers were expressed as values relative to the control.

RM11A cells, a cell line previously derived from a tumor from an MTB-IGF-IR mouse, were maintained in media as described previously.

One thousand RM11A+Dox or RM11A+Dox/ErbB2 cells/well were plated in triplicate in 96-well plates. Forty-eight hours after plating, the cells were incubated with MTT at a final concentration of 5 mg/mL for 1 h at 37\u00b0C. Cells were then lysed and the absorbance value at 570 nm was determined. Results represent the average of seven replicates.

H&E staining was performed as previously described.

Immunofluorescence was used to assess proliferation. Cells were plated on glass coverslips in 6-well plates at a density of 3 \u00d7 10^4 cells/well. Two days after plating, cells were fixed and stained as described previously.

Immunohistochemistry was performed as previously described, using an anti-Ki67 antibody.

All mice were housed and utilized following the guidelines established by the Animal Care Committee at the University of Guelph and the Canadian Council on Animal Care. Wild type FVB mice were purchased from Charles River. At approximately 4 weeks of age, animals were anesthetized and both 4th inguinal mammary glands were injected with 5 \u00d7 10^5 RM11A+Dox or RM11A+Dox/ErbB2 cells resuspended in 10 \u03bcL of PBS using a 25 \u03bcL Hamilton syringe. Tumor volume was calculated as volume = length \u00d7 width^2/2. To track tumor growth, two methods were used. The first method was calculating specific growth rate (SGR), an established method for this measurement. In addition, the slope of log10(tumor volume) versus time (d) was used to calculate tumor doubling time for validation. For the tumor regression studies, IGF-IR expression was downregulated when tumors reached 7-11 mm in length by removing doxycycline from the animals' diets. Subsequent regression and recurrence in the absence of IGF-IR transgene expression was monitored as above.

MTB-IGFIR double transgenic mice were used as previously described.

Tissue comprising the entire lung from each mouse harboring a 15-17 mm length tumor was collected and processed as described above. Approximately 25 serial sections were taken from the middle of each lung. H&E was performed on sections from the beginning and second half of the series. Slides were evaluated for the presence of metastases using light microscopy by two individuals in a blinded manner.

Data values are presented as the mean \u00b1 SE. Significance and p-values were obtained using a Student's t-test. Differences in metastasis were assessed using a Fisher's exact test. 
Significance between tumor regression data was calculated using a chi-squared test.

The ability of doxycycline to induce elevated IGF-IR levels in the RM11A cells, as well as their in vitro and in vivo growth characteristics, has previously been reported.

We hypothesized that ErbB2 overexpression would enhance cell survival and proliferation. Survival of RM11A+Dox and RM11A+Dox/ErbB2 cells was assessed by MTT assays, while proliferation was quantified using Ki67 immunofluorescence. ErbB2 overexpression did not significantly affect proliferation or survival in vitro (data not shown). Therefore, elevated ErbB2 expression cannot further enhance survival or proliferation beyond the effect demonstrated by IGF-IR overexpression.

To evaluate growth in vivo, RM11A+Dox and RM11A+Dox/ErbB2 cells were injected into the mammary fat pad of wild type syngeneic FVB mice and tumor onset and growth rates were evaluated. RM11A+Dox/ErbB2 cells produced palpable mammary tumors approximately 22 days post injection. This latency was significantly shorter than the time required for the RM11A+Dox cells to form palpable tumors (48 days).

IGF-IR transgene expression was suppressed by switching the animals from a doxycycline diet to a normal diet. Tumor length prior to IGF-IR downregulation did not significantly vary between the RM11A+Dox and RM11A+Dox/ErbB2 groups, as shown in Table . Previously it has been observed that most tumors formed after injection of RM11A+Dox cells into the mammary gland regress following IGF-IR downregulation, with most of these tumors recurring independent of IGF-IR transgene expression.

To determine whether ErbB2 overexpression altered the metastatic capacity of RM11A cells, lung tissue from mice harboring primary tumors (15-17 mm in length) or tumors that recurred following IGF-IR transgene downregulation (15-17 mm in length) was analyzed. 
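The group comparisons above rely in part on Fisher's exact test for small 2x2 tables. A minimal, dependency-free sketch follows; the counts and the helper name are hypothetical, chosen for illustration and not taken from the study's data tables:

```python
# Two-sided Fisher's exact test for a 2x2 table, using only the standard library.
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact p-value for the table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins whose probability does not exceed that of the observed table.
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2
    denom = comb(n, col1)

    def p_table(x):
        # probability of observing x in the top-left cell, margins fixed
        return comb(row1, x) * comb(row2, col1 - x) / denom

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs + 1e-12)

# Hypothetical counts: 6/13 tumors metastasized in one group, 0/9 in the other
p = fisher_exact_2x2(6, 7, 0, 9)
```

For larger tables or continuous outcomes, the chi-squared and t-tests named in the Methods would be used instead; this sketch only covers the small-sample 2x2 case.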
As shown in Table , lung metastasis was also examined in mice harboring RM11A+Dox and RM11A+Dox/ErbB2 tumors that grew following IGF-IR transgene downregulation. Six of 13 tumors expressing high levels of ErbB2 metastasized to the lung, while none of the tumors with basal ErbB2 expression metastasized to the lung (Figure ).

Metastasis to the lung has been observed in approximately 40% of MTB-IGFIR transgenic mice harboring tumors 15-17 mm in length, and these metastases range in size from microscopic lesions of approximately 50-100 \u03bcm in length to macroscopic tumors approximately 6-8 mm in length (unpublished observations). To determine whether ErbB2 is involved in metastasis of mammary tumors produced by MTB-IGFIR transgenic mice, immunohistochemistry for ErbB2 was performed on the aforementioned lung tissue. While a relatively high level of variability was observed in both primary tumors and microscopic lung metastases, there was a tendency for lung lesions to stain more intensely for ErbB2 than primary tumors (Figure ).

While the contribution of the IGF-IR to ErbB2 signaling and resistance to ErbB2-directed therapies in breast cancer has been studied in several systems, the reciprocal interaction remains almost completely unknown. To study the potential role of ErbB2 during IGF-IR-mediated mammary tumorigenesis we utilized our model of inducible IGF-IR overexpression. The importance of the IGF-axis in proliferation and transformation of a vast number of cells, including human mammary epithelial cells, is well documented. Our laboratory has shown that IGF-IR overexpression alone is capable of mediating an extremely rapid transformation of mouse mammary epithelial cells.

Selection of stable transfectants yielded RM11A cells with approximately 3-fold higher expression of ErbB2 (RM11A+Dox/ErbB2) than control RM11A cells (RM11A+Dox). 
Phosphorylated ErbB2 was also elevated approximately 3-fold in the ErbB2-overexpressing cells, thus indicating the receptor was active. This overexpression was monitored and consistently maintained throughout the duration of the study. Two additional bands were observed, determined to be approximately 95 and 70 kDa. A 95 kDa truncated N-terminal product of Her2, known as p95Her-2, has been previously described.

Downstream signaling pathways were studied to determine those potentially augmented by ErbB2 overexpression. The levels of phosphorylated Akt and Erk1/2 were similar in RM11A+Dox cells and RM11A+Dox/ErbB2 cells, suggesting that upregulation of ErbB2 was incapable of further activating PI-3K or MAPK pathways. Given the magnitude of IGF-IR overexpression and the fact that both of these pathways are known to be activated by this receptor, this observation is not surprising; it is anticipated that the high level of IGF-IR expression has already maximized signaling through the PI-3K and MAPK pathways.

In vivo, it was observed that ErbB2 conferred a more rapid tumor onset, and tumor incidence was also elevated as indicated by the number of mammary glands injected that actually developed tumors. To verify that ErbB2 shortened tumor latency, mammary glands were collected 14 d post-injection. Average tumor size at this time point was 4-fold greater in the RM11A+Dox/ErbB2 cells compared to RM11A+Dox cells. We then explored possible mechanisms through which ErbB2 augmented tumor growth. First we looked at survival and proliferation in vitro. Only a small, insignificant increase in survival was observed when RM11A+Dox/ErbB2 cells were compared to RM11A+Dox cells, and thus it was concluded that ErbB2 overexpression had a negligible effect on RM11A cell survival in vitro. Despite a minimal effect in vitro, overexpression of ErbB2 had a marked effect on tumorigenesis in vivo. 
Using Ki67 staining to examine proliferation, our data suggested that proliferation is only significantly affected by ErbB2 overexpression shortly after tumor cells colonize the mammary tissue (4 d post injection but not 14 d post injection). The lack of difference in proliferation in established tumors was corroborated by evaluating tumor growth rates using two independent methods. For the first method, log(tumor volume) was plotted against time and tumor doubling time was calculated from the resulting slope of the line. The second technique, specific growth rate, has been mathematically determined to be an accurate means of quantifying tumor growth rate and is less susceptible to negligible or negative changes in volume from one measurement to the next.

Tumor regression following IGF-IR transgene downregulation was studied to model the effects of ErbB2 overexpression during the use of IGF-IR-directed therapeutics. Here it was observed that ErbB2 overexpression impaired tumor regression following IGF-IR downregulation, thus suggesting that ErbB2 could potentially facilitate resistance to IGF-IR-directed therapies. These results are of obvious clinical importance, as ErbB2 status may become an important predictor of response to IGF-IR-directed therapies. In addition, subsequent mutations enhancing ErbB2 expression may render tumors unresponsive to these therapies. It is becoming clear that IGF-IR can mediate resistance to ErbB2-targeting treatments.

Metastasis was also studied for multiple reasons; first, ErbB2 expression is well known to correlate with distant metastasis in human clinical breast cancer.

ErbB2 overexpression did however facilitate metastasis following IGF-IR downregulation. It is possible that through the delayed process of partial regression and subsequent resumption of growth, metastatic lesions have time to grow to a size where they are detectable histologically. 
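The growth-rate quantities described above (caliper-based volume, specific growth rate, and the implied doubling time) can be sketched in a few lines. The measurements and function names below are hypothetical, for illustration only:

```python
import math

def tumor_volume(length_mm, width_mm):
    """Ellipsoid approximation used in the study: volume = length * width^2 / 2."""
    return length_mm * width_mm ** 2 / 2.0

def specific_growth_rate(v1, v2, days):
    """SGR per day: ln(V2/V1) divided by the elapsed time between measurements."""
    return math.log(v2 / v1) / days

def doubling_time(sgr):
    """Doubling time (days) implied by a given SGR: ln(2) / SGR."""
    return math.log(2) / sgr

# Hypothetical caliper measurements taken 10 days apart
v_start = tumor_volume(5.0, 4.0)   # 40.0 mm^3
v_end = tumor_volume(8.0, 6.0)     # 144.0 mm^3
sgr = specific_growth_rate(v_start, v_end, 10.0)
td = doubling_time(sgr)            # about 5.4 days
```

Because SGR works on the log of the volume ratio, a slightly negative or flat interval simply yields a small (or negative) rate rather than a distorted doubling-time estimate, which is the robustness property noted above.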
Furthermore, upregulation of ErbB2 was observed in metastatic primary tumors as well as many metastatic lesions from MTB-IGFIR mice compared to non-metastatic primary tumors. Based on the fact that metastasis is only observed in 40% of all MTB-IGFIR animals, it is apparent that other alterations must occur to confer metastatic competency. Our results suggest that upregulation of ErbB2 is one such mechanism through which tumor cells gain this capacity. These observations suggest that ErbB2 can compensate for the loss of IGF-IR signaling during mammary tumorigenesis and further support a potential advantage in combining ErbB2- and IGF-IR-directed therapies.

In conclusion, this study describes experiments providing information regarding the interaction between two potent oncogenes in mammary tumorigenesis. It has been previously postulated that targeting multiple signaling pathways such as IGF-IR and ErbB2 may be beneficial to the treatment of breast cancer.

The authors declare that they have no competing interests.

CC participated in design and coordination of the study as well as all of the experiments described in this study and drafting of the manuscript. JP participated in design of the study. RM coordinated the study and contributed to drafting of the manuscript. All authors have read and approved the final manuscript."}
+{"text": "Only rarely have surveys focused, at least in part, on units that directly support the use of research evidence in developing health policy at an international, national, or state or provincial level. We drew on a questionnaire developed by the AGREE collaboration and adapted one version of the questionnaire for organizations producing CPGs and HTAs, and another for GSUs. We sent the questionnaire by email to 176 organizations and followed up periodically with non-responders by email and telephone.

We received completed questionnaires from 152 (86%) organizations. More than one-half of the organizations (and particularly HTA agencies) reported that examples from other countries were helpful in establishing their organization. A higher proportion of GSUs than CPG- or HTA-producing organizations involved target users in the selection of topics or the services undertaken. Most organizations have few (five or fewer) full-time equivalent (FTE) staff. More than four-fifths of organizations reported providing panels with or using systematic reviews. GSUs tended to use a wide variety of explicit valuation processes for the research evidence, but none with the frequency that organizations producing CPGs, HTAs, or both prioritized evidence by its quality. Between one-half and two-thirds of organizations do not collect data systematically about uptake, and roughly the same proportions do not systematically evaluate their usefulness or impact in other ways.

The findings from our survey, the most broadly based of its kind, both extend and clarify the applicability of the messages arising from previous surveys and related documentary analyses, such as how the 'principles of evidence-based medicine dominate current guideline programs' and the importance of collaborating with other organizations. The survey also provides a description of the history, structure, processes, outputs, and perceived strengths and weaknesses of existing organizations from which those establishing or leading similar organizations can draw. 
Organizations that support the use of research evidence in developing health policy can do so in many ways. Some produce clinical practice guidelines (CPGs) or, more generally, guidance for clinicians and public health practitioners. Others undertake health technology assessments (HTAs) with a focus on informing managerial and policy decisions about purchasing, coverage, or reimbursement. Still others directly support the use of research evidence in developing health policy on an international, national, and state or provincial level. As we argued in the introductory article in the series, a review of the experiences of such organizations, especially those based in low- and middle-income countries (LMICs) and that are in some way successful or innovative, can reduce the need to 'reinvent the wheel' and inform decisions about how best to organize support for evidence-informed health policy development processes, particularly in LMICs.

We focus here on describing the methods and findings from the first phase of a three-phase, multi-method study. We drew on many people and organizations around the world, including our project reference group, to generate a list of organizations to survey.

Eligible CPG-producing organizations, HTA agencies, and GSUs had to perform at least one of the following functions (or a closely related function): 1) produce systematic reviews, HTAs, or other types of syntheses of research evidence in response to requests from decision-makers (i.e., clinicians, health system managers, and public policymakers); 2) identify and contextualise research evidence in response to requests from decision-makers; and/or 3) plan, commission, or carry out evaluations of health policies in response to requests from decision-makers. The GSUs could include units located within a health system, government or international organization, units hosted within a university or other research-intensive organization, and independent units with a mandate to directly support evidence-informed health policy. We excluded organizations that receive core funding from industry or that only produce or provide health or healthcare utilization data.

We surveyed three groups: 1) established CPG-producing organizations that are members of the Guidelines International Network (GIN) and select other organizations that are known to produce CPGs in particularly innovative or successful ways; 2) established HTA agencies that are members of the International Network of Agencies for Health Technology Assessment (INAHTA) and select other HTA agencies that are known to produce HTAs in particularly innovative or successful ways; and 3) any units that directly support the use of research evidence in developing health policy. We drew on members of both formal and informal international networks to identify particularly innovative or successful CPG-producing organizations and HTA agencies and to identify GSUs. The formal networks included the Appraisal of Guidelines for Research and Evaluation (AGREE) collaboration, the Cochrane Collaboration, GIN, the GRADE Working Group, the International Clinical Epidemiology Network (INCLEN) Knowledge Management Program, and INAHTA. 
The informal networks included our project reference group, staff at WHO headquarters and regional offices, and personal networks. We drew on a questionnaire developed and used by the AGREE collaboration, adapting one version for organizations producing CPGs and HTAs and another for GSUs. We sent the questionnaire by email to the director (or another appropriate person) of each eligible organization with three options for responding: by answering questions in the body of our email message and returning it; by answering questions in a Word version of our questionnaire attached to our e-mail message and returning it; or by printing a PDF version of our questionnaire, completing it by hand, and mailing it. We sent three reminders if we did not receive a response, each time offering to re-send the questionnaire upon request. We used additional mechanisms to increase the response rate, including an endorsement letter and personal contacts. Quantitative data were entered manually and summarized using simple descriptive statistics. Written comments were grouped by question, and one member of the team (RM) identified themes using a constant comparative method of analysis. The findings were then independently reviewed by two members of the research team (AO and JL). The principal investigator for the overall project (AO), who is based in Norway, confirmed that, in accordance with the country's act on ethics and integrity in research, this study did not require ethics approval from one of the country's four regional committees for medical and health research ethics. In keeping with usual conventions in survey research, we took the voluntary completion and return of the survey as indicating consent. We did not mention either treating participants' responses as confidential data or safeguarding participants' anonymity in our initial request to participate in the study or in the questionnaire itself. Nevertheless, we present only aggregated data and take care to ensure that no individuals or organizations can be identified. 
We shared a report on our findings with participants, and none of them requested any changes to how we present the data. One hundred fifty-two completed questionnaires were returned: 95 organizations produce CPGs, HTAs, or both, and 57 units support government policymaking (i.e., are what we call GSUs). Just over one-half of the organizations were from high-income countries, 13% (n = 19) from upper middle-income countries, 24% (n = 36) from lower middle-income countries, and 5% (n = 8) from low-income countries. Over one-half (54%) of the organizations that produced CPGs and HTAs were identified through GIN and INAHTA (51/95), and 68% (n = 65) were from high-income countries, compared to 35% (20/57) of GSUs. Although we aimed to identify organizations throughout the world, the included organizations were not spread evenly across different regions. Sixty-seven percent (64/95) of the organizations that produce CPGs and HTAs were located in Western Europe (n = 40), North America (n = 17), or Australia and New Zealand (n = 7), compared with 33% of GSUs (19/57). We identified few organizations in Eastern Europe (n = 1), India (n = 2), the Middle East (n = 3), or China (n = 4) that met our inclusion criteria, and only three international organizations were included. A high proportion of organizations that produce CPGs, HTAs, or both also support government policymaking in other ways, whereas the reverse (GSUs producing CPGs or HTAs) was much less common (Table). The organizations' ages, budgets, and production profiles varied dramatically (Table). Organizations producing CPGs were more often focused on health care (65\u201384%) than on public health (45%) or healthy public policy (26%), whereas GSUs were more focused on public health (88%) and, to a lesser extent, on primary healthcare (72%) and healthy public policy (67%) (Table). Organizations draw on a wide variety of types of information (Table). 
All or almost all organizations producing CPGs, HTAs, or both produced a full version of their final product with references, whereas only HTA agencies uniformly produced both the full version and an executive summary (Table). Between one-half and two-thirds of organizations do not collect data systematically about uptake, and roughly the same proportions do not systematically evaluate their usefulness or impact in other ways (Table). See additional file. Most organizations have few staff (e.g., five or fewer FTEs for CPG- and HTA-producing organizations). More than one-half of all organizations always involved an expert in information/library science, and more than two-thirds of CPG- and HTA-producing organizations always involved an expert in clinical epidemiology. More than four-fifths of organizations reported providing panels with or using systematic reviews. GSUs tended to use a wide variety of explicit valuation processes for the research evidence, but none as frequently as organizations producing CPGs, HTAs, or both prioritized evidence by its quality. Less than one-half of all organizations provided a summary of take-home messages as part of their products. Almost two-thirds of GSUs involved target users in an implementation group, whereas lower proportions of other types of organizations involved target users in implementation through this or another approach. Between one-half and two-thirds of organizations do not collect data systematically about uptake, and roughly the same proportions do not systematically evaluate their usefulness or impact in other ways. A high proportion of organizations that produce CPGs, HTAs, or both also support government policymaking in other ways, whereas the reverse (GSUs producing CPGs or HTAs) was much less common. More than one-half of the organizations (and particularly HTA agencies) reported that examples from other countries were helpful in establishing their organization. 
The organizations' ages, budgets, and production profiles varied dramatically. A higher proportion of GSUs than CPG- or HTA-producing organizations involved target users in the selection of topics or the services undertaken. Most organizations have a small number of FTE staff. For organizations producing CPGs, HTAs, or both: 1) when they were being established, many conducted a focused review of one particular organization that they then emulated, or a broad review of a variety of organizational models; 2) independence is by far the most commonly cited strength in how they are organized, and a lack of resources, both financial and human, the most commonly cited weakness; 3) an evidence-based approach is the most commonly cited strength of the methods they use, and their methods' time-consuming and labour-intensive nature the most commonly cited weakness; 4) the brand recognition that was perceived to flow from their evidence-based approach, and much less commonly from their strict conflict-of-interest guidelines, is the main strength of their outputs, while the most commonly cited weaknesses were the lack of dissemination and implementation strategies for the outputs and the lack of monitoring and evaluation of impact; 5) the individuals, groups, and organizations who have worked with them or who have benefited from their outputs are their strongest advocates, and the pharmaceutical industry and clinicians who are closely associated with them their strongest critics; and 6) a facilitating role in coordination efforts (in order to avoid duplication) and in local adaptation efforts are their most frequently offered suggestions for WHO and other international agencies and networks. For GSUs: 1) focusing on the need for secure funding when establishing a GSU was their most commonly offered advice; 2) working within national networks and, more generally, collaborating rather than competing with other bodies, was a commonly cited strength in how these units are organized; 3) government health departments are their strongest 
advocates; and 4) helping to adapt global evidence to local contexts, or at least supporting such processes, are their most frequently offered suggestions for WHO and other international agencies and networks. No themes emerged with any consistency among the diverse weaknesses identified in how the units were organized, the strengths and weaknesses identified in their methods and outputs, or the critics cited. The survey has four main strengths: 1) we surveyed the directors of three types of organizations that support evidence-informed policymaking, not just the two types of organizations that are usually studied (i.e., we surveyed GSUs as well as CPG- and HTA-producing organizations); 2) we adapted a widely used questionnaire; 3) we drew on a regionally diverse project reference group to ensure that our draft protocol, study population, and questionnaire were fit for purpose; and 4) we achieved a high response rate (86%). The study has two main weaknesses: 1) despite significant efforts to identify organizations in LMICs, just over one-half (54%) of the organizations we surveyed were drawn from high-income countries; and 2) despite efforts to ask questions in neutral ways, many organizations may have been motivated by a desire to tell us what they thought we wanted to hear. Our findings both extend or clarify the applicability of the messages arising from previous surveys and related documentary analyses and add several new messages. First, our findings concur with the conclusion of the most recent and comprehensive survey of CPG-producing organizations that 'principles of evidence-based medicine dominate current guideline programs'. Both policymakers and international organizations and networks can play an important facilitating role in coordination efforts (in order to avoid duplication) and in local adaptation efforts. They also have an important advocacy role to play in calling for coordination and local adaptation. 
International organizations and networks can play several additional facilitation roles, particularly in the areas of sharing robust methodologies and, where necessary, improving existing methodologies; collecting and analyzing 'global' research evidence and making it available as an input to 'local' processes; and engaging more organizations based in LMICs and providing training and support for their continued development. Select international organizations, such as the Alliance for Health Policy and Systems Research, may have a particular role to play in sponsoring the development of an international organization of GSUs, which can be difficult to identify, let alone support. The survey should be repeated in a few years on an augmented sample of organizations, including organizations that have self-identified as partners of the Alliance for Health Policy and Systems Research (many of which may be GSUs). Also, as suggested above, there is a need for improving some of the existing methodologies used by the organizations and for establishing a common framework for evaluations of their impact. The authors declare that they have no financial competing interests. The study reported herein, which is the first phase of a larger three-phase study, is in turn part of a broader suite of projects undertaken to support the work of the WHO Advisory Committee on Health Research (ACHR). Both JL and AO are members of the ACHR. JL is also President of the ACHR for the Pan American Health Organization (PAHO). The Chair of the WHO ACHR, a member of the PAHO ACHR, and several WHO staff members were members of the project reference group and, as such, played an advisory role in study design. Two of these individuals provided feedback on the penultimate draft of the report on which the article is based. 
The authors had complete independence, however, in all final decisions about study design, in data collection, analysis and interpretation, in writing and revising the article, and in the decision to submit the manuscript for publication. JL participated in the design of the study, participated in analyzing the qualitative data and deciding how to present the quantitative data, and drafted the article and the report on which it is based. AO conceived of the study, led its design and coordination, participated in analyzing the qualitative data, and contributed to drafting the article. RM participated in the design of the study, led the analysis of the qualitative data, and contributed to drafting the article. EP led the data collection for the study and led the analysis of the quantitative data. All authors read and approved the final manuscript. Questionnaire for units producing clinical practice guidelines or health technology assessments: this questionnaire is designed to be completed by units or departments that primarily produce clinical practice guidelines (CPGs) and/or health technology assessments (HTAs). Questionnaire for units supporting health policy: this questionnaire is designed to be completed by units or departments that primarily provide research evidence and other support for organisations or policymakers developing health policy. Qualitative data from the survey of organizations that support the use of research evidence."}
+{"text": "Despite endorsement by national organizations, the impact of screening for intimate partner violence (IPV) is understudied, particularly as it occurs in different clinical settings. We analyzed interviews of IPV survivors to understand the risks and benefits of disclosing IPV to clinicians across specialties.Participants were English-speaking female IPV survivors recruited through IPV programs in Massachusetts. In-depth interviews describing medical encounters related to abuse were analyzed for common themes using Grounded Theory qualitative research methods. Encounters with health care clinicians were categorized by outcome , attribute , and specialty (emergency department (ED), primary care (PC), obstetrics/gynecology (OB/GYN)).Of 27 participants aged 18\u201356, 5 were white, 10 Latina, and 12 black. Of 59 relevant health care encounters, 23 were in ED, 17 in OB/GYN, and 19 in PC. Seven of 9 ED disclosures were characterized as unhelpful; the majority of disclosures in PC and OB/GYN were characterized as beneficial. There were no harmful disclosures in any setting. Unhelpful disclosures resulted in emotional distress and alienation from health care. Regardless of whether disclosure occurred, beneficial encounters were characterized by familiarity with the clinician, acknowledgement of the abuse, respect and relevant referrals.While no harms resulted from IPV disclosure, survivor satisfaction with disclosure is shaped by the setting of the encounter. Clinicians should aim to build a therapeutic relationship with IPV survivors that empowers and educates patients and does not demand disclosure. The extensive physical and mental health burden of intimate partner violence (IPV) exposure has been documented in various settings -6. In reEvidence to support IPV screening interventions includes surveys of patients who report expectations that a clinician inquire about IPV and increased satisfaction with the visit after being asked regardless of disclosure ,9. 
The most recent guidelines of the United States Preventive Services Task Force found insufficient evidence for screening for family violence, due to a lack of studies showing that a primary care-based screening intervention helps reduce harmful outcomes. Previously, we reported results from a qualitative study of IPV survivors in which we examined those qualities of the patient-provider relationship that facilitate a safe and productive disclosure. In this paper we present the results of a re-analysis of participants' descriptions of patient-provider encounters to examine potential harms and benefits of IPV disclosure. We explored whether the specialty of care was related to the outcomes of disclosure, and identified a series of factors affecting these outcomes across primary care, obstetrics/gynecology, and emergency department specialties. Ethnographic interviewing elicited IPV survivors' experiences interacting with both physician and non-physician health care providers. Grounded theory, a method of qualitative analysis, was used. Twenty-seven IPV survivors were recruited from community-based domestic violence counseling or sheltering programs in eastern Massachusetts. They were recruited either through referral by local shelter staff or through a flier sent to all domestic violence programs in eastern Massachusetts. Eligible participants were female, ages 18 to 64, English-speaking, with a history of an abusive intimate partner relationship within the past 3 years. Each participant provided written informed consent and was compensated $25. After approval by the Boston University Medical Center Institutional Review Board, data were collected from October 1996 through November 2000. 
Open-ended, in-depth interviews, conducted by 1 of 2 authors, both primary care physicians, were audio-taped and lasted 1\u20132 hours. Using an interview guide, the interviewer asked participants to describe encounters with health care clinicians both related and unrelated to the abusive relationship after the onset of the abuse. While most participants described encounters related to the onset of the adult intimate partner violence, others spontaneously mentioned experiences with healthcare providers during adolescence or relating to childhood abuse. The participants were asked to provide information on perceived barriers to care and the abusive relationship over the past three years. Interviews were iterative; participants enrolled later in the data collection interval were questioned about themes revealed in previous interviews. Each audio-taped interview was transcribed verbatim by a professional transcriber, reviewed for accuracy, and de-identified. Authors independently reviewed transcripts to identify common themes, which were developed into a preliminary coding scheme with the first 10 interviews. An advisory group of domestic violence advocates and survivors helped revise this scheme and suggest new concepts. The authors then independently coded the interviews using this revised coding scheme. Coding was compared and differences of opinion resolved through examination of the text. Encounters, which could be composed of a single interaction or continued contact over a period of years, were first categorized into \"related to abuse\" or \"unrelated to abuse\". As we did reiterative coding and analysis to understand the specific effect of disclosing (or not disclosing) IPV, these unrelated encounters did not offer relevant material to allow categorization into a specific outcome and were thus dropped from the analyses. 
Using NUD*ST qualitative research software for data organization and coding, separate narratives representing a single patient-clinician relationship were identified and labeled as encounters. Each medical encounter related to abuse was then coded according to three characteristics: outcome, specialty, and attribute. The first of these, outcome, described three mutually exclusive types of encounters: disclosure, discovery, and non-disclosure. A disclosure occurred when a participant reported telling her clinician about IPV. When a participant perceived her clinician knew of the abuse when she had not made an explicit disclosure, the outcome was labeled discovery. To be labeled discovery, the participant had to make explicit reference to the provider discussing some aspect of IPV, such as counseling or referral, even without explicit disclosure of IPV. All other encounters that did not fall into disclosure or discovery were labeled non-disclosure. To qualify for non-disclosure, one of two circumstances had to apply. First, the provider asked but the participant purposely did not disclose. Second, the participant was in an actively abusive relationship but did not spontaneously disclose, such as during treatment for injury, or during medical or pregnancy-related care. Each encounter was also coded for its specialty: Emergency Department (ED), Obstetrical or Gynecological Care (OB/GYN), Primary Care (PC), or other. PC included pediatricians and family physicians identified as the primary care provider but who may have also provided obstetrical care. Encounters occurring in other specialties were excluded from this analysis because there were too few of any single type. The final category, attribute, described the participant's level of satisfaction with the encounter as a result of whether she perceived the interaction as beneficial, harmful, or unhelpful. 
For example, if an unpleasant interaction ended in the participant accepting help or receiving information that she found useful, we labeled it beneficial. Harmful interactions were ones resulting in injury to self or child, or direct worsening of abuse. We classified negative reports not resulting in actual harm as unhelpful. When we were unable to categorize attribute due to a lack of information or contradictory descriptions, we excluded that encounter from analysis. Finally, we conducted a comparative analysis to explore the characteristics of encounters across outcomes, specialties, and attributes. We interviewed 27 women; 12 were black, 10 Latina, and 5 white. Fourteen were recruited by domestic violence staff, and thirteen contacted the authors in response to the informational flier. Sixteen were living in a residential program at the time of the interview. Participant ages ranged from 18\u201356 years; median age was 31 years. Twenty-three participants had at least one child. A total of 185 health care encounters were described. The number of encounters per participant ranged from 3\u201312; the median number of encounters was 7. Although it was frequently difficult to determine the professional designation of an individual provider, specialty was clear in 175 encounters. The thirty-one mental health encounters were excluded because most were visits specifically related to the IPV. Twenty-two were nurses from different treatment settings. Of the twenty-nine other encounters, there were too few (<5) of any single type, and they could not be easily combined into categories, such as radiology technicians, surgeons, ambulance drivers, physical therapists, child protective service workers, medical subspecialists, etc. Thirty-one were excluded because they were unrelated to abuse and did not contribute to the analysis presented in this paper, the impact of IPV disclosure. 
Another three were unable to be classified by attribute, leaving a sample pool of 59 encounters representing 25 participants. Thirty-five (59%) of these encounters involved IPV disclosure to the clinician, 7 (12%) ended in discovery, and 17 (29%) in non-disclosure. Of the disclosures, 25 (71%) were beneficial. Among discoveries, 4 (57%) were beneficial, while among non-disclosures, 6 (35%) were beneficial. Setting of care was associated with reported satisfaction from disclosure. In the ED, 2 (22%) disclosures were beneficial. Of OB/GYN disclosures, 9 (75%) were beneficial. In primary care, all 14 disclosures were beneficial. Beneficial disclosures were characterized by: 1) explicit acknowledgement of the content of the disclosure, 2) demonstration of a caring attitude after disclosure (most cases), and 3) specific referral to other resources (some cases). For example, one participant said an ED clinician explicitly acknowledged her abuse and demonstrated concern: \"He said, well, 'I hear you're in a battered women's shelter. What's the deal? I take a special interest in domestic violence and what happens,' and he sat and talked to me. I felt comfortable in talking to him because he was showing this special interest in what was going on with me.\" Also of note, in all but two beneficial disclosures the participant reported familiarity with the clinician. In primary care, these relationships involved getting to know the clinician through a variety of contacts both related and unrelated to the IPV. In OB/GYN, these relationships generally formed during prenatal care, or in the peri-partum period when the participant had daily contact with hospital clinicians. Such familiarity can also occur in the ED setting, as in one case where the participant accepted advice from a nurse who had treated her a few weeks earlier for IPV-related injuries. 
When the participant returned to the ED with more injuries, the nurse recognized her: \"And I started crying, and she's like, 'Two weeks ago you was here, now you're back here again today and it's for the same thing. Your face isn't all bruised up like it was two weeks ago, but you're hurtin'. What's goin' on?' I broke down and told her...She was like, 'Well, you don't need to be in a relationship like that.'\" The participant acted on referrals and left her abusive partner as a result of this encounter. The common thread to benefits and problems without verbal disclosure by the participant was explicit clinician acknowledgement of potential abuse (or the lack thereof). In particular, participants reported being upset by health care providers who they felt should have recognized IPV but did not acknowledge it. This, in turn, led to avoidance of healthcare. One participant reported that healthcare personnel failed to bring up IPV even after her husband yelled at her in the ED during two separate visits. She interpreted this lack of acknowledgement as an indication that clinicians did not care to get more involved. Another participant was particularly disappointed that her primary care clinician did not address the abuse with her, given that she had received counseling about it from his nursing staff: \"He never gave me any type of indication...he didn't talk to me about it. That's why I left him...because he wasn't really direct with me.\" Several participants reported benefit when the clinician spoke openly with the participant about IPV but did not insist upon disclosure. Furthermore, clinicians in these encounters used verbal and non-verbal cues to convey concern, and offered options for intervention while not forcing the participant to take action. The aftermath of acute injury was a particularly vulnerable time, as survivors were emotionally and physically exhausted as well as fearful of more injury: \"They asked me, 'How did it happen?' 'What happened to you?' 
'Who did that?' I was in so much pain that I really didn't want to talk about it.\" A critical component of beneficial non-disclosure experiences was consideration of the patient's safety, as in this ED visit: \"She realized that I had other bruises on me. I thought he might hear her and I was like, 'No. Let's just drop the conversation. Let's just get me stitched up.' My husband came in so there was no more talk about it. When I left, she called me apart, and she [said]: 'You could call here in an emergency and we could get you some help.'\" Another example included an ED staff suggestion that a participant treated for acute injuries continue care in PC: \"And they gave me a choice, 'would you rather go to your doctor and tell them what happened?'\" As a result of that referral, she revealed the abuse to her primary care clinician. Narratives of intimate partner violence survivors reveal that no actual harms occurred as a result of disclosure of abuse to health care clinicians. However, some negative disclosure experiences did impair subsequent interactions with the health care provider as well as increase emotional distress. The benefits included immediate changes (e.g., filing a restraining order), improvement in self-esteem to facilitate long-term changes, and relationship building with health care clinicians. The setting of care appeared to influence these outcomes, shaped strongly by patient familiarity with the clinician. This study reinforces insights from prior studies that asking about IPV in longitudinal care specialties offers the greatest opportunity for disclosure. In all specialties, participants were more likely to disclose IPV and find disclosure beneficial if clinicians respectfully addressed the abuse, ensured participants' physical safety after an assault, assured participants of confidentiality regarding disclosed information, gave patients choices for action, and demonstrated emotional support. 
Indeed, our study demonstrates that inquiry and discussion of IPV in the right setting can be a powerful tool for change. Despite the increased potential to identify and refer a victim of IPV in the aftermath of an acute injury, disclosures in the ED were the least often experienced as beneficial. IPV case-finding may satisfy the need for a quantifiable, appropriate quality improvement measure. However, measuring case-finding alone may obscure whether the inquiry is occurring in an empowering and safe manner that benefits survivors. In settings such as the ED or even inpatient hospital care, where the risks of disclosure may be higher, other measures of quality could include surveys of patients at high risk for IPV to assess whether they received any education about resources or options for IPV. Future studies of interventions for IPV could consider measuring empowerment and trust around IPV disclosure in the health care setting. Outcome measures often determine the emphasis of clinical care. There are several limitations to this study. First, we were not always able to determine the exact nature of the visit or specialty. Furthermore, participants were not directly asked to compare their experiences; differences were gleaned from the stories they told. This is typical of qualitative research studies, in which unexpected themes emerge from close examination of the data. Self-report is subject to recall bias, which may be particularly affected by any post-traumatic stress disorder related to abuse. The interviews occurred almost 10 years ago, and clinician response might have improved since then, given the educational efforts with medical students and residents. However, this has not been demonstrated in more recent studies. Our results reveal that whether or not disclosure of abuse is achieved, clinician conversations with survivors about IPV have a powerful impact on both positive and negative outcomes. 
When these conversations occur in the context of a supportive relationship with that clinician, positive outcomes are more likely. Although these findings will need to be replicated in other settings, this study suggests a need to tailor interventions for women who experience IPV to the nature of the clinical specialty, particularly treatment of acute injury. Our findings indicate that it is not enough for health care providers to simply ask about abuse. Clinicians should aim for a therapeutic relationship with IPV survivors that does not demand disclosure or action, but instead empowers and educates the patient.The authors declare that they have no competing interests.JL designed the study. JL and TB conducted the interviews. All coauthors helped analyze data and reviewed the manuscript drafted by JL for important intellectual content. All coauthors approved the final draft.The pre-publication history for this paper can be accessed here:"}
+{"text": "Here, we combine two well-established techniques, fine-needle aspiration (FNA) and fluorescence in situ hybridization (FISH), to detect c-erbB2/neu amplification in patients who were candidates for primary chemotherapy and, in part, previously analysed for c-erbB2/neu overexpression. Sixty smears from FNA were used to simultaneously detect c-erbB2/neu and the chromosome 17 centromere. FISH was successful in 58 cases and detected 24 amplified cases, three of which were negative by immunophenotyping; 28 negative cases, with evidence of two normal c-erbB2/neu signals; two cases with deletion of c-erbB2/neu; and four cases with polysomy, thus providing more reliable and informative results than ICC. This study underlines the advantages offered by the combination of FNA and FISH, two rapid, reliable, simple and informative techniques, for analysing one of the most important genetic markers for predicting prognosis and planning chemotherapy for breast carcinoma, particularly in light of the recently proposed trials of primary chemotherapy. \u00a9 1999 Cancer Research Campaign. The detection of specific genetic alterations in breast cancer is useful for diagnosing, predicting prognosis and planning preoperative treatment."}
+{"text": "Crohn's disease (CD) and ulcerative colitis (UC) are characterized by a dysregulated inflammatory response to normal constituents of the intestinal flora in the genetically predisposed host. Heme oxygenase-1 (HO-1/HMOX1) is a powerful anti-inflammatory and anti-oxidant enzyme, whereas the pro-inflammatory interleukin 1\u03b2 (IL-1\u03b2/IL1B) and anti-inflammatory interleukin 10 (IL-10/IL10) are key modulators for the initiation and maintenance of inflammation. We investigated whether single nucleotide polymorphisms (SNPs) in the IL-1\u03b2, IL-10, and HO-1 genes, together with smoking, were associated with risk of CD and UC. Allele frequencies of the IL-1\u03b2 T-31C (rs1143627), IL-10 rs3024505, G-1082A (rs1800896), C-819T (rs1800871), and C-592A (rs1800872), and HO-1 A-413T (rs2071746) SNPs were assessed using a case-control design in a Danish cohort of 336 CD and 498 UC patients and 779 healthy controls. Odds ratios (OR) and 95% confidence intervals (95% CI) were estimated by logistic regression models. Carriers of rs3024505, a marker polymorphism flanking the IL-10 gene, were at increased risk of CD and UC and, furthermore, at increased risk of a diagnosis of CD and UC at young age (OR = 1.47, 95% CI: 1.10-1.96 and OR = 1.35, 95% CI: 1.04-1.76, respectively). No association was found between the IL-1\u03b2 T-31C, IL-10 G-1082A, C-819T, and C-592A, and HO-1 gene polymorphisms and CD or UC. 
No consistent interactions between smoking status and CD or UC genotypes were demonstrated. The rs3024505 marker polymorphism flanking the IL-10 gene was significantly associated with risk of UC and CD, whereas no association was found between IL-1\u03b2 or HO-1 gene polymorphisms and risk of CD and UC in this Danish study, suggesting that IL-10, but not IL-1\u03b2 or HO-1, has a role in IBD etiology in this population. The chronic inflammatory bowel diseases (IBD), ulcerative colitis (UC) and Crohn's disease (CD), are complex diseases caused by an interplay between genetic and environmental factors. Recent years have brought much progress regarding the genetics of IBD, and the number of confirmed IBD-associated loci and genes, including CARD15, has risen dramatically. The emerging picture of IBD pathogenesis is focused on the sequential occurrence of pivotal events leading to the initiation and subsequent perpetuation of inflammation. Activation of the pro-inflammatory IL-1\u03b2 leads to production of prostaglandin E2 (PGE2) and nitric oxide (NO) via the induction of cyclo-oxygenase 2 (COX-2) and inducible nitric oxide synthase (iNOS), among others. IL-1\u03b2 knock-out mice have no spontaneous abnormalities; however, on challenge with LPS, a less pronounced acute phase response is observed, suggesting that IL-1\u03b2 is required for an adequate immune response. The IL-1\u03b2 promoter polymorphisms IL-1\u03b2 T-31C and IL-1\u03b2 C-511T have been found to be in almost complete linkage disequilibrium, and the IL-1\u03b2 T-31C variant conferred higher transcription of IL-1\u03b2 compared to the wild type haplotype. The role of IL-1\u03b2 polymorphisms in IBD has been explored in several studies; however, the studies were rather small. 
IL-10 is an anti-inflammatory cytokine, which leads to dampening of the activated immune system. IL-10 knock-out mice develop colitis if they are not kept in a germ-free environment, and the anti-inflammatory effect of IL-10 has also been demonstrated in vitro. Recently, rs3024505, a marker polymorphism flanking the IL-10 gene, was associated with disease in a case-control study of paediatric onset CD. The IL-10 promoter is polymorphic and genetic variation may account for different levels of cytokine production. The IL-10 promoter polymorphisms G-1082A, C-819T, and C-592A have been most extensively studied. They are in tight linkage disequilibrium, and the haplotypes have been associated with high and low IL-10 production in vitro, probably because the IL-10 promoter polymorphism C-592A leads to the formation of a binding site for the ETS family of transcription factors. Studies of IL-10 promoter polymorphisms and IBD susceptibility have been inconsistent. Heme oxygenase-1 (HO-1) is involved in the degradation of heme, thereby reducing oxidative stress and protecting against acute and chronic inflammation. HO-1 expression and protein levels have been reported to be increased in inflamed colon compared to normal mucosa from patients with UC. Studies of the promoter polymorphism HO-1 A-413T indicated that the A allele promoter had significantly higher activity than the T allele promoter. Another polymorphism, the HO-1 (GT)N dinucleotide repeat, was not associated with risk of inflammatory bowel disease. In this study we wanted to assess the role of polymorphisms in IL-1\u03b2, IL-10, and HO-1, together with smoking, in relation to risk of developing IBD in a Danish case-control study of 336 CD, 498 UC and 779 healthy controls, respectively. Patients with CD (n = 373) or UC (n = 541), and healthy controls (n = 796) were included. All information was available for 336 CD cases, 498 UC cases and 779 healthy controls. 
Diagnosis of CD or UC was based on clinical, radiological, endoscopic and histological examinations. DNA was extracted from EDTA-stabilized peripheral blood samples from all patients and healthy controls by using either a PureGene or Wizard Genomic DNA purification kit, according to the manufacturers' recommendations. SNPs were chosen from literature studies. IL-1\u03b2 T-31C (rs1143627) and IL-10 C-592A (rs1800872) were genotyped as previously described. Rs3024505, IL-10 G-1082A (rs1800896), C-819T (rs1800871), and HO-1 A-413T were genotyped by TaqMan real-time PCR on an ABI7900HT (Applied Biosystems), using Allelic Discrimination. Twenty ng of DNA was genotyped in 5 \u03bcl containing 1 \u00d7 Mastermix, 100 nM probes, and 900 nM primers, or as recommended by the manufacturer for predesigned assays. Controls of known genotypes were included in each run, and repeated genotyping of a random 10% subset yielded 100% identical genotypes. Laboratory personnel were blinded to the case/control status of the study group. We used logistic regression to analyse the relationship between the six polymorphisms and disease. The statistical analyses included only subjects where all information was available. Age was entered as linear in the model after checking for linearity using a linear spline. We used the Genetic Power Calculator for case-control discrete traits for power analyses. All subjects received written and oral information and gave written informed consent. 
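The genotype-disease analysis described above (logistic regression of case/control status on carrier status, reported as odds ratios) can be sketched as follows. This is a minimal illustration with made-up counts, not the study's data or code; a real analysis would also include the age, gender and smoking covariates.

```python
import math

def fit_logistic(X, y, lr=0.1, iters=5000):
    """Plain gradient-ascent logistic regression; beta[0] is the intercept."""
    n, p = len(X), len(X[0])
    beta = [0.0] * (p + 1)
    for _ in range(iters):
        grad = [0.0] * (p + 1)
        for xi, yi in zip(X, y):
            eta = beta[0] + sum(b * x for b, x in zip(beta[1:], xi))
            mu = 1.0 / (1.0 + math.exp(-eta))  # predicted case probability
            grad[0] += yi - mu
            for j, x in enumerate(xi):
                grad[j + 1] += (yi - mu) * x
        beta = [b + lr * g / n for b, g in zip(beta, grad)]
    return beta

# Hypothetical counts: 60/100 cases carry the variant vs 40/100 controls.
X = [[1]] * 60 + [[0]] * 40 + [[1]] * 40 + [[0]] * 60
y = [1] * 100 + [0] * 100
beta = fit_logistic(X, y)
odds_ratio = math.exp(beta[1])  # carrier vs non-carrier OR, ~2.25 here
```

With a single binary covariate the fitted OR reduces to the classic cross-product ratio of the 2x2 table, which is a useful sanity check on the model.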
The study was conducted in accordance with the Declaration of Helsinki and approved by the local Scientific Ethical Committees at Viborg and Aalborg County (VN 2003/5). The variant allele frequencies of IL-1\u03b2 T-31C, IL-10 rs3024505, G-1082A, C-819T, C-592A and HO-1 A-413T were 0.35, 0.18, 0.45, 0.21, 0.21, and 0.42, respectively, in the control group. Characteristics of the Danish IBD patients and controls are shown in Table . Carriers of the variant allele of rs3024505, flanking the IL-10 gene, were at increased risk of both CD and UC. Homozygous variant allele carriers were at 2.48-fold (95% CI: 1.27-4.84) increased risk of CD and heterozygous carriers were at 1.31-fold (95% CI: 0.98-1.75) increased risk of CD after adjusting for age, gender and smoking status. No significant difference in the genotype distribution between CD and UC was found (data not shown). When combining UC and CD data to increase the statistical power, there were still no associations between the IL-1\u03b2, the three IL-10 promoter, and HO-1 polymorphisms and age at diagnosis or disease localisation. Subgroup analyses showed that variant allele carriers of rs3024505 were at 1.47-fold (95% CI: 1.10-1.96) and 1.35-fold (95% CI: 1.04-1.76) higher risk of a diagnosis of CD and UC, respectively, before the age of 40 years than the homozygous wildtype carriers (results not shown). No associations between rs3024505 genotype and disease localisation were found. The effect of smoking habits at diagnosis on the genotype associations was investigated for CD and UC, respectively. Our results replicate the findings by Franke et al. (Table 4). Previous studies were unable to find association between IBD and the IL-10 promoter polymorphisms. 
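Odds ratios with 95% confidence intervals of the kind quoted above can be computed from a 2x2 carrier-by-status table with the standard Woolf (log) method; a sketch with illustrative counts (not the study's tables) is:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Woolf 95% CI for a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) is the Woolf formula.
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Illustrative counts only:
or_, lo, hi = odds_ratio_ci(30, 70, 20, 80)
```

An interval that crosses 1 (as in this toy example) corresponds to a non-significant association at the 5% level.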
Our results are in accordance with previous studies which were unable to find association between IBD and the IL-1\u03b2 T-31C, TaqI, or HO-1 (GT)N polymorphisms, even though IL-1\u03b2 T-31C and HO-1 A-413T have been shown to have biological effects. We found no consistent interactions between the studied polymorphisms and smoking in relation to risk of CD or UC, since the polymorphisms had no effect among present smokers. Although both smoking and nicotine administration lower the exaggerated IL-1\u03b2 response in IBD patients, cigarette smoke has been reported to act differentially on inflammation in the small and large intestine, thus worsening small intestinal inflammation, but ameliorating colitis. It is important to stress the strengths and limitations of the study. The present study included 1600 participants and power analyses showed that this study has more than 80% power to detect a dominant effect with an OR of 1.5 in relation to either CD or UC, or 1.4 if CD and UC were combined. Moreover, genetic determinants may be stronger among patients with extensive disease and ileal disease. In conclusion, the rs3024505 marker polymorphism flanking the IL-10 gene was associated with risk of UC and CD in the present Danish case-control study, and, furthermore, with risk of a diagnosis of CD and UC at young age. None of the polymorphisms IL-1\u03b2 T-31C, IL-10 G-1082A, C-819T, C-592A, or HO-1 A-413T were associated with risk of CD or UC. No consistent interactions between smoking status and genotypes were found. 
The study suggests that IL-10, but not IL-1\u03b2 or HO-1, plays a role in IBD etiology. CD: Crohn's disease; CI: confidence interval; CO: carbon monoxide; COX-2: cyclooxygenase 2; HO-1: heme oxygenase 1; IBD: inflammatory bowel disease; IL-1\u03b2: interleukin 1\u03b2; IL-10: interleukin 10; iNOS: inducible nitric oxide synthase; NO: nitric oxide; OR: odds ratio; PGE2: prostaglandin E2; RQ-PCR: real-time quantitative RT-PCR; SNP: single nucleotide polymorphism; UC: ulcerative colitis. The authors declare that they have no competing interests. UV and AE carried out the genotyping. VA, HK, AE, M\u00d8, BAJ established the cohort and/or participated in sample preparation and collection. JC and AT performed the statistical analyses. VA and UV conceived the genotyping study, and its design and coordination, and wrote the manuscript. All authors read and approved the final manuscript. The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2350/11/82/prepub Additional files: Interaction between the studied polymorphisms and smoking status in relation to risk of Crohn's disease (Table). Interaction between the studied polymorphisms and smoking status in relation to risk of ulcerative colitis (Table)."}
+{"text": "Information on arthritis and other musculoskeletal disorders among Aboriginal people is sparse. Survey data show that arthritis and rheumatism are among the most commonly reported chronic conditions and their prevalence is higher than among non-Aboriginal people.To describe the burden of arthritis among Aboriginal people in northern Canada and demonstrate the public health significance and social impact of the disease.Using cross-sectional data from more than 29 000 Aboriginal people aged 15 years and over who participated in the Aboriginal Peoples Survey 2006, we assessed regional differences in the prevalence of arthritis and its association with other risk factors, co-morbidity and health care use.The prevalence of arthritis in the three northern territories (\"North\") is 12.7% compared to 20.1% in the provinces (\"South\") and is higher among females than males in both the North and South. The prevalence among Inuit is lower than among other Aboriginal groups. Individuals with arthritis are more likely to smoke, be obese, have concurrent chronic diseases, and are less likely to be employed. Aboriginal people with arthritis utilized the health care system more often than those without the disease. Aboriginal-specific findings on arthritis and other chronic diseases as well as recognition of regional differences between North and South will enhance program planning and help identify new priorities in health promotion.arthritis, Aboriginal people, Northern Canada, Inuit, First Nations, M\u00e9tis, North American Indians, Aboriginal Peoples Survey. Several national surveys\u2014the First Nations Regional Longitudinal Health Survey (RHS), the Canadian Community Health Survey (CCHS) and the Aboriginal Peoples Survey (APS)\u2014have provided some data on the prevalence of arthritis, rheumatism and other musculoskeletal conditions, such as back pain, among adults in the Aboriginal population. 
These surveys generally show that arthritis and rheumatism are among the most commonly reported chronic conditions, that prevalence is higher than among non-Aboriginal people in Canada, and that prevalence is increasing; for example, the crude prevalence was 15% in 1991 and 19% in 2001 according to the APS, while the age-adjusted prevalence was 22% in 1997 and 25% in 2002/03 according to the RHS. Arthritis also contributes to more than half of the self-reported disability among First Nations people in Canada.Information on arthritis and other musculoskeletal disorders among Aboriginal people is sparse and geographically limited\u2014mainly to Alaska, British Columbia and Manitoba.Disability resulting from arthritis can be exacerbated in the north of Canada by severe weather, inadequate infrastructure and unreliable transportation. Arthritis compromises the ability of Aboriginal people to pursue traditional activities, such as harvesting country foods, and traditional crafts. The geographical isolation of many communities reduces access to specialist services. Cultural context is an additional dimension and requires region-specific directions and broad partnerships to plan and implement culturally appropriate health services and support systems; specific considerations include, but are not limited to, access to traditional healers and medicines, languages spoken, and the design of support services in communities.This paper describes the burden of arthritis among Aboriginal people in Yukon, Northwest Territories and Nunavut\u2014the three northern territories of Canada. 
We assess regional differences in the prevalence of arthritis and its association with other risk factors, co-morbidity and health care use between these three northern territories (the \"North\") and the ten provinces of southern Canada (the \"South\") using data from the recently released APS 2006. We used cross-sectional data from more than 29 000 Aboriginal respondents aged 15 years and over who participated in the APS 2006. The APS 2006 asked respondents whether a doctor, nurse or other health professional had ever told them that they have arthritis or rheumatism. We separately examined associations between arthritis and various demographic, socioeconomic, behavioural and health care correlates for the three territories and the 10 provinces, not to test specific etiological hypotheses but to demonstrate the public health significance and social impact of the burden of arthritis. Obesity, defined as body mass index (BMI) of 30 kg/m2 or higher, and smoking are well-established risk factors for arthritis, and the APS 2006 asked respondents about their height, weight and smoking experience and habits. Arthritis is also associated with reduced employment and work limitations among adults; the APS asked respondents, \"Last week, did you work for pay or in self-employment?\" We determined prevalence of arthritis for three separate groups based on the question \"Do any of your ancestors belong to the following Aboriginal groups? (Can check more than one): North American Indian, M\u00e9tis or Inuit.\" Individuals who checked only \"North American Indian\" constitute the \"First Nations\" group, individuals who checked only \"Inuit\" constitute the \"Inuit\" group, and all others, including M\u00e9tis and those who checked multiple Aboriginal groups, were combined into an \"Other\" category, as each of these groups has a small sample size in the North. 
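The obesity definition used above (BMI of 30 kg/m2 or higher, computed from self-reported height and weight) is simple to reproduce; a small sketch, with hypothetical measurements:

```python
def bmi(weight_kg, height_m):
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def is_obese(weight_kg, height_m, cutoff=30.0):
    """Obesity as defined in the text: BMI of 30 kg/m^2 or higher."""
    return bmi(weight_kg, height_m) >= cutoff

# Example: 95 kg at 1.75 m gives a BMI of about 31, classified as obese.
```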
We report only crude prevalence proportions; we did not compute age-adjusted prevalence as the dataset did not include non-Aboriginal people for comparison, and comparing this study with published age-adjusted rates is difficult due to the different standard populations that have been used; further, the crude prevalence more accurately reflects the burden of disease needed to plan public health programs. We used age- and sex-adjusted logistic regression analyses to assess associations between arthritis and various correlates. We performed all analyses using SAS version 9.2. Since the APS 2006 was based on a complex survey design, we used survey weights in all analyses and calculated variance estimates using the bootstrap technique with the 1000 bootstrap weights provided by Statistics Canada. We determined all proportions in accordance with rounding guidelines suggested by Statistics Canada and calculated confidence intervals (CIs) from unrounded components. Detailed survey methodology is available from Statistics Canada. The crude prevalence of arthritis or rheumatism for the three combined Aboriginal groups in the territories is 12.7% (95% CI: 12.5-13.0) compared to 20.1% in the provinces (95% CI: 19.9-20.3). Arthritis is more prevalent among females than males in both the North and South. The prevalence among Inuit is lower than among First Nations and other Aboriginal groups. As expected, prevalence increases with age. We compared the proportions of respondents with and without arthritis who are daily smokers, are obese or have co-morbid conditions. Smoking is more prevalent among Aboriginal people in the North than in the South. In the South, there is an association between daily smoking and arthritis, but daily smoking is not a significant factor in the North. 
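The variance estimation described above (a survey-weighted proportion re-evaluated under each of the bootstrap replicate weights supplied with the survey) can be sketched as follows; the weights here are toy values, not APS data:

```python
def weighted_proportion(flags, weights):
    """Survey-weighted proportion: weighted cases over total weight."""
    cases = sum(w for f, w in zip(flags, weights) if f)
    return cases / sum(weights)

def bootstrap_variance(flags, main_weights, replicate_weights):
    """Variance of the weighted proportion across bootstrap replicate
    weight sets (the APS supplies 1000 such sets)."""
    estimate = weighted_proportion(flags, main_weights)
    reps = [weighted_proportion(flags, rw) for rw in replicate_weights]
    return sum((r - estimate) ** 2 for r in reps) / len(reps)

# Toy data: 3 respondents, 2 replicate weight sets.
flags = [True, False, True]          # has arthritis?
main_w = [10.0, 20.0, 30.0]          # main survey weights
reps = [[12.0, 18.0, 30.0], [8.0, 22.0, 30.0]]
var = bootstrap_variance(flags, main_w, reps)
```

The square root of this variance gives the standard error from which the confidence intervals around the prevalence estimates are formed.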
Obesity is more prevalent among individuals with arthritis, and the association between obesity and arthritis is stronger among Aboriginal people in the South than in the North. In both the South and the North, a higher proportion of individuals with arthritis than those without report having at least one other chronic condition such as diabetes, heart disease, hypertension, stroke, asthma, chronic bronchitis, emphysema or cancer. The proportion of individuals who report consulting a health professional (primary care physician or nurse) or traditional healer is higher among those with arthritis. A lower proportion of individuals with arthritis report being employed in the week before the survey (either self-employed or otherwise working for pay) compared to those without arthritis. The association was stronger in the South than in the North. In Tjepkema's analyses of CCHS 2000/01, the prevalence for Aboriginal people in the North is 10% while that in the South is 19% for rural residents and 20% for urban residents; Lix et al. obtain a prevalence of 12% in the North and 20% in the South from the CCHS 2005/06. Both these studies also show that the prevalence among Aboriginal people is higher than among non-Aboriginal people in the South but not in the North. Note that both the CCHS and APS cover the same Aboriginal groups\u2014off-reserve First Nations, Inuit and M\u00e9tis. 
The lower prevalence estimates among Aboriginal people in the North compared to those in the South obtained from the APS 2006 are comparable to those from other surveys. Although less access to specialist care may be responsible for the lower detection rate of arthritis in the North, the prevalence of arthritis is based on self-report and not on clinically verified diagnoses by rheumatologists; further, as a chronic disease, arthritis is likely to have been diagnosed at some point over the long term even with limited specialist health care. In surveys such as the APS, CCHS and RHS, self-reports under the rubric \"arthritis and rheumatism\" lack clinical accuracy. These self-reports are also limited by the inability to differentiate between different types of arthritides\u2014rheumatoid arthritis, osteoarthritis, etc. However, as a tool for assessing population health and the need for health care, such crude measures are nevertheless useful, particularly to describe the patterns in different population subgroups. A lower prevalence of arthritis among Inuit relative to other Aboriginal people has been shown nationally in APS 2001 and CCHS 2000/01. In this study we demonstrate that, within the North, the prevalence of arthritis among all Aboriginal groups\u2014Inuit, First Nations, and Other\u2014is also lower than in the corresponding group in the South. A recent study from Alaska that estimated the prevalence of self-reported and clinically undifferentiated arthritis showed that it is higher among Alaska Natives than the general U.S. population, but the Alaskan sample is a mix of Yupik and Native American tribes in the southeastern part of the state. It is unclear why Canadian Inuit have a lower prevalence of arthritis than First Nations people. 
The self-reported arthritis rubric is a mixed bag of clinical entities with different etiologies. A review of North American indigenous populations found that Inuit tend to have high rates of spondyloarthropathies whereas Native Americans have high rates of rheumatoid arthritis. Aboriginal people suffering from arthritis have unfavourable health profiles; they are more likely to be daily smokers, be obese and have concurrent chronic diseases, although the magnitude differs between the North and South, reflecting the background prevalence of these associated traits and conditions. Arthritis can limit the opportunity for employment, although this survey does not provide evidence that the lower employment rate is the direct result of the disease. As expected, Aboriginal people with arthritis are more likely to utilize the health care system, with higher proportions reporting visits to physicians, nurses and traditional healers. The pattern of use reflects the different systems in place in the North and South. We cannot, however, determine if the higher health service use is the direct result of arthritis, but it is a plausible explanation given the nature of the disease, the presence of other risk factors such as smoking and obesity, and co-morbidities. In the North, primary care is predominantly delivered by nurses in health centres in the communities, and individuals have only periodic contact with visiting physicians. For many, visits to specialists such as rheumatologists require air travel away from home. Further research is required to explore North-South disparities in the burden of arthritis in Aboriginal populations. Also needed are more refined diagnoses, including rheumatoid arthritis, osteoarthritis and other musculoskeletal disorders, as well as separate analyses of Inuit and First Nations samples, which are sufficiently large within the North. 
Aboriginal-specific findings on arthritis and other chronic diseases, as well as recognition of regional differences between North and South, will enhance program planning and help identify new priorities in health promotion. The creation and transmission of quality evidence to appropriate stakeholders to ensure uptake and application of study findings will help reduce health disparities."}
+{"text": "Kocuria species are gram-positive, non-pathogenic commensals. However, in immunocompromised patients such as transplant recipients, cancer patients, or patients with chronic medical conditions, they can cause opportunistic infections. We report the first case of descending necrotizing mediastinitis in a 58-year-old, relatively healthy woman caused by Kocuria rosea. Descending necrotizing mediastinitis due to Kocuria rosea can be successfully treated with prompt surgical drainage combined with antimicrobial therapy. Descending necrotizing mediastinitis (DNM) is an acute form of mediastinitis caused by odontogenic or deep cervical infections such as tonsillitis and pharyngitis that descend into the mediastinum and pleural space through the cervical fascial planes. Only a limited number of Kocuria infections are mentioned in the literature. Furthermore, to our knowledge, this is the first case reported in the English literature of Kocuria rosea associated with DNM. Kocuria rosea is an aerobic, gram-positive coccus that is generally considered a non-pathogenic commensal that colonizes the oropharynx, skin, and mucosa. Nonetheless, it can cause an opportunistic infection in immunocompromised patients. We report a case of DNM caused by K. rosea in a relatively healthy woman. A 58-year-old woman presented to her local hospital with fever, myalgia, and sore throat. Her medical history was significant for gout and hypertension controlled with medications. After a few days of treatment, although her condition improved, she still complained of nausea, neck discomfort, and difficulty swallowing. Endoscopy revealed a gastric ulcer but no esophageal lesions. Ultrasound showed fluid collection in the neck space and a diagnosis of DNM was made by cervicothoracic computed tomographic (CT) scan. At that point, she was transferred to our hospital. On admission, she was afebrile with swelling of the neck and associated discomfort. She denied any other specific symptoms. 
Laboratory testing showed an elevated erythrocyte sedimentation rate (120 mm/h) and C-reactive protein (75.77 mg/L). The albumin level was decreased (3.2 g/dL) and she had a normal white blood cell count. The remaining laboratory values were within normal limits. A CT scan showed a large retropharyngeal abscess extending from the surrounding piriform sinus to the bronchial bifurcation. No significant abnormality was seen in the pharynx and tonsils. Culture of the abscess was performed with sheep blood agar, MacConkey agar, and thioglycollate broth. The plates were incubated at 35\u00b0C for 48 h. Anaerobic culture was performed using chocolate (reduced) agar with an anaerobic pouch and thioglycollate broth and incubated at 35\u00b0C for 5 days. Anaerobic culture did not yield any microorganisms. The culture was positive for gram-positive cocci arranged in tetrads; these cocci were non-hemolytic, catalase positive, coagulase negative, and nonmotile. However, the alternative means of identification, 16S rRNA sequencing, was not performed. Additionally, antibiotic sensitivity tests were not performed. Descending necrotizing mediastinitis is caused by a deep cervical or oropharyngeal infection that descends into the mediastinum and pleural space through several cervical fascial planes such as the carotid, prevertebral, retropharyngeal, and retrovisceral spaces. Among these, the retrovisceral space is the most vulnerable pathway leading to the mediastinum. The primary origin of a deep cervical infection is mostly unknown, although it can be caused by an odontogenic infection, acute tonsillitis, a peritonsillar abscess, cervical lymphadenitis, sinusitis, or cervical trauma. Although Streptococcus has been reported to be the most common pathogen responsible for DNM, it is usually a polymicrobial infection involving both anaerobic and aerobic organisms. 
The main predisposing factor complicating DNM due to a deep neck infection is multiple space involvement. Kocuria spp. was originally classified as Micrococcus spp., then later reclassified into the new genus Kocuria by Stackebrandt and colleagues. K. kristinae is the most pathogenic organism of the Kocuria spp. Kocuria spp. has been most commonly responsible for infections in chronically ill or immunocompromised patients. Only limited cases have been reported where this organism has caused infection in an immunocompetent subject. K. rosea has been reported as a pathogen in infective endocarditis in an immunocompetent patient; however, DNM caused by K. rosea in an immunocompetent host has not yet been reported, making this the first such reported case. Misidentification of coagulase-negative staphylococci as Kocuria by using the Vitek 2 system with the ID-GPC card has been reported. K. rosea isolates have been reported to be susceptible to tetracycline, erythromycin, oleandomycin, novobiocin, methicillin, kanamycin, polymyxin, vancomycin, penicillin G, streptomycin, chloramphenicol, and neomycin. In our case, a third-generation cephalosporin and clindamycin were administered empirically prior to emergency surgical drainage. The same antibiotic regimen was then administered for 2 more weeks, after which time the patient was discharged. Because of the normal flora of the oropharynx and skin and the pattern of the abscess formation in this patient, we can assume that the abscess originated from an infection of the piriform sinus. Kocuria rosea comprises part of the normal flora of the oropharynx, skin, and mucosa. It generally causes infections only in immunocompromised patients. However, it can also be a causative pathogen of oropharyngeal and deep cervical infections in immunocompetent patients. Although K. rosea has a low pathogenicity and high susceptibility to a variety of antibiotics, prompt surgical drainage, debridement, and administration of broad spectrum antibiotics can give an excellent result in DNM caused by K. rosea. Written informed consent was obtained from the patient for publication of this case report and any accompanying images. A copy of the written consent is available for review by the Editor of this journal. The authors declare that they have no competing interests. DR and SC performed the operation. ML carried out the clinical study of the patient. DR drafted the manuscript. All authors read and approved the final manuscript. The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2334/13/475/prepub"}
+{"text": "Theoretical and experimental results indicated that the graphene monolayer was transparent to the electromagnetic field. This transparency led to significant substrate-induced plasmonic hybridization at the heterostructure interface. Combined with interparticle plasmonic coupling, the substrate-induced plasmonics concentrated light at the interface and enhanced the photo-excitation of dyes, thus improving the photoelectric conversion. Such a mechanistic understanding of interfacial plasmonic enhancement will further promote the development of efficient plasmon-enhanced solar cells and composite photocatalysts. Surface plasmon resonance (SPR) is widely used in light-trapping schemes in solar cells, because it can concentrate light fields surrounding metal nanostructures and realize light management at the nanoscale. SPR in photovoltaics generally occurs at metal/dielectric interfaces; a well-defined interface is therefore required to elucidate interfacial SPR processes. Here, we designed a photovoltaic device (PVD) with an atomically flat TiO2 dielectric/dye/graphene interface. The photovoltaic conversion of solar energy into electrical power is a promising way to provide sustainable clean energy, and to overcome the energy and environmental issues facing human society in the 21st century. Surface plasmon resonance involves the resonance of light waves with the collective oscillation of the electron gas inside a metal. It can produce a strong charge displacement in metallic nanostructures, and concentrate the light field into a small space surrounding the nanostructures. In the simulations, the TiO2 dielectric substrate (εs = 7.34) was separated from the SLG layer by a gap of 2 nm, and the global electromagnetic field distribution at the TiO2/SLG/NP interface under parallel-polarized light excitation was calculated. The strength of the image charge induced in a dielectric substrate scales as (εs − 1)/(εs + 1).
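As a plain-number check of the image-charge factor just quoted (the identification of (εs − 1)/(εs + 1) as the standard electrostatic mirrored-charge fraction is an assumption consistent with the (εs + 1) denominator appearing in the text):

```latex
\eta = \frac{\varepsilon_s - 1}{\varepsilon_s + 1}
     = \frac{7.34 - 1}{7.34 + 1} \approx 0.76
```

so the high-permittivity TiO2 substrate mirrors roughly 76% of the plasmonic charge displacement, versus only about 33% for a low-permittivity substrate with εs = 2, consistent with the strong substrate-induced hybridization described here.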
The large permittivity of TiO2 (εs = 7.34) therefore produces a strong image-charge effect at the TiO2/NP interface. FDTD simulations were used to predict the spatial distribution of the electromagnetic field in the PVD. The simulated model consisted of periodic spherical plasmonic NPs of radius 20 nm above the TiO2 dielectric, and a maximum field enhancement was found at the interface. SLG is a monolayer-thick Dirac material, possessing both dielectric and semi-metallic properties; it is essentially transparent to the electromagnetic field at the TiO2/graphene interface, and thus to the image-charge effect of the TiO2 substrate. This field transparency deflected and concentrated the SPR effect at the dye layer between the TiO2 and SLG. Energy-dispersive X-ray spectroscopy (EDX) characterization of individual NPs indicated that Ag and Au atoms were homogeneously distributed throughout the polycrystalline NPs. On top of the Ag/Au NPs was a layer of poly(methyl methacrylate) (PMMA). PMMA acted as a supporting layer during SLG/NP transfer, and as a protective layer for photovoltaic measurements. To illustrate the substrate-induced interfacial plasmonic enhancement, we fabricated a PVD containing a TiO2 (001) substrate, the dye Z907 as sensitizer, SLG, and Ag/Au NPs. We measured the current-voltage characteristics of a typical PVD containing Ag/Au NPs, and of a control device without Ag/Au NPs, in the dark and under broadband visible (>420 nm) irradiation (100 mW cm−2). In the dark, the I-V curves of both devices showed similar rectifying characteristics, indicating that a built-in electric field formed at the TiO2/dye/SLG interface. This was necessary to effectively separate photo-generated electrons and holes from Z907 for photovoltaic conversion. The working device had a short-circuit current density (JSC) of ~7.36 μA cm−2 under visible light irradiation, which was ~1.9 times larger than that of the control device (~3.83 μA cm−2). This was attributed to the SPR enhancement of the Ag/Au NPs, as discussed above.
The open-circuit photovoltage (VOC) of the working device (~0.743 V) was larger than that of the control device (~0.715 V), probably due to the p-doping effect of graphene by the Ag/Au NPs. Incident photon-to-current conversion efficiencies (IPCE) of the devices were measured as a function of wavelength to investigate the origin of the enhanced photoelectric conversion, and the IPCE values were consistent with the JSC values measured under the corresponding monochromatic irradiation. High-quality single-layer graphene (SLG) was grown by a low-pressure chemical vapor deposition (LPCVD) method with methane as the carbon-containing precursor under optimal conditions; methane was supplied for graphene growth for 15 min, after which the temperature was quickly dropped to room temperature under 10 sccm H2 protection. Ag/Au alloy thin films with the proper Ag/Au volume ratios and thicknesses were first uniformly thermally evaporated onto the surface of graphene on copper foil substrates at a rate of 0.1 Å/s under a pressure of 2 × 10−4 Pa. After that, the samples were annealed at 300 °C for 30 min with H2 (600 sccm) and Ar (600 sccm) as the protective gas atmosphere, which led to the formation of a uniform Ag/Au nanoparticle (NP) layer on the surface of the graphene. One-side mechanically polished rutile TiO2 (001) single crystals with atomically flat surfaces were obtained from MTI Corporation. 100 nm of In and 100 nm of Ag were thermally evaporated in turn onto the rough back side of the TiO2 crystals to serve as Ohmic-contact back electrodes.
Before dye assembly, the surface of the TiO2 was pretreated with aqueous HF solution for chemical polishing and was further etched with oxygen plasma for surface hydroxyl activation. After that, Z907 dye was assembled on the surface of the TiO2 by immersing the samples into a 0.3 mmol/L solution of Z907 in acetonitrile and t-butanol for 24 hours, followed by rinsing with copious amounts of acetonitrile. Finally, the In/Ag side of the samples was mounted with Ga/In eutectic onto a copper foil wire, which was fixed on a glass sheet with conductive tape; the back electrode was then sealed with epoxy (Epotek 377) and dried for one hour to solidify. For SLG/NP layer assembly, a layer of poly(methyl methacrylate) (PMMA) (MicroChem 950 PMMA A6), used as a protective and supporting layer, was spin-cast onto the surface of the prepared Cu/SLG/NP samples at 4000 rpm for 30 seconds and baked at 180 °C for 3 min. As both sides of the Cu foils were grown with graphene, the back side of the Cu/SLG/NP/PMMA samples was further etched by oxygen plasma to remove the residual graphene. Then, the copper layer in the samples was wet-etched in 1 M (NH4)2S2O8 aqueous solution for about 8 hours, producing an SLG/NP/PMMA film floating in the etchant. After that, the floating film was washed in turn with copious ultrapure water and isopropanol. Finally, the SLG/NP/PMMA film was transferred onto the surface of the TiO2/Z907 layers and baked at 80 °C for 5 min for close contact. In addition, the PMMA-supported graphene layer was further connected to the external circuit with copper foils. The crystal structure and elemental composition of the metal nanoparticles were analyzed with a high-resolution transmission electron microscope.
Atomic force microscopy (AFM) measurements were carried out with a tapping-mode AFM. The light-absorption properties of the samples were measured with a Lambda 35 ultraviolet-visible (UV-vis) spectrometer equipped with an integrating sphere. Raman measurements were carried out using a micro-Raman spectroscope (Renishaw 1000), with an excitation wavelength of 632.8 nm. The current-voltage characteristics of the photovoltaic devices were measured using a semiconductor characterization system (Agilent 4155C) in the dark and under AM 1.5 simulated solar light irradiation (Science Tech) through a UV light filter (420 nm cut-off wavelength), with the light intensity adjusted to 100 mW/cm2. For the incident photon-to-current efficiency (IPCE) spectroscopic measurements, a computer-controlled grating monochromator (Zolix Omni-λ150) with a 150 W Xe lamp was used to provide monochromatic illumination to the samples through an optical filter. The light power at each wavelength of the illuminating monochromatic light was measured with an OPT-2000 spectrophotometer. The corresponding photocurrent was measured with the Agilent 4155C semiconductor parameter analyzer in current-time mode. The IPCE spectra were calculated from the measured photocurrent density (J), the corresponding monochromatic light wavelength (λ) and the light power density (P) with the following formula: IPCE = (J × h × c)/(e × λ × P), where e is the elementary charge, h is the Planck constant and c is the speed of light. All the photoelectrical measurements were performed on at least three samples and each sample was tested at least three times. The light intensity-dependent performance of the optimized model photovoltaic devices was also characterized. For the electromagnetic simulations, the model unit cell consisted of a TiO2 substrate, a single layer of graphene on top, and Ag/Au nanoparticles arranged in a square lattice with lattice constant P; the cell was normally illuminated from above by a broadband plane-wave source.
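The IPCE formula given in the measurements section above can be sketched numerically as follows (the function name and the example numbers are illustrative assumptions, not values from the paper):

```python
# Physical constants (SI units)
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
E = 1.602176634e-19  # elementary charge, C

# IPCE (%) = 100 * (J * h * c) / (e * lambda * P), i.e. electrons collected
# per incident photon. j_photo and power must share the same area unit
# (e.g. A/cm^2 and W/cm^2); wavelength_nm is the monochromatic wavelength.
def ipce_percent(j_photo, wavelength_nm, power):
    wavelength_m = wavelength_nm * 1e-9
    return 100.0 * (j_photo * H * C) / (E * wavelength_m * power)

# Illustrative example: 1 mA/cm^2 at 620 nm under 100 mW/cm^2
# corresponds to an IPCE of about 2 percent.
```

The prefactor hc/e is about 1240 V·nm, which recovers the familiar shortcut IPCE(%) = 1240 × J/(λ × P) when J and P share units.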
This simulation was carried out on a single unit cell of the nanoparticle array, with periodic boundary conditions applied in the horizontal directions and PML (perfectly matched layer) boundary conditions in the vertical directions. For the calculations, we took the experimentally measured optical constants of Ag from the literature. The finite-difference time-domain (FDTD) simulations were carried out using a commercially available software package. How to cite this article: Li, X. et al. Substrate-induced interfacial plasmonics for photovoltaic conversion. Sci. Rep. 5, 14497; doi: 10.1038/srep14497 (2015)."}
+{"text": "The evolution of the main phenolic secoiridoid compounds throughout the different stages of the virgin olive oil making process (crushing, malaxation and liquid-solid separation) is studied here, with the goal of making possible the prediction of the partition and transformation that take place in the different steps of the process. The concentration of hydroxytyrosol secoiridoids produced under the different crushing conditions studied is reasonably proportional to the intensity of the milling stage, and strongly depends on the olive variety processed. During malaxation, the content of the main phenolic secoiridoids is reduced, especially in the case of the hydroxytyrosol derivatives, in which a variety-dependent behaviour is observed. The prediction of the concentration of phenolic secoiridoids finally transferred from the kneaded paste to the virgin olive oil is also feasible, and depends on the phenolic content and amount of water in the olive paste. The determination of the phenolic compounds in the olive fruit, olive paste and olive oil has been carried out by LC-MS (liquid chromatography-mass spectrometry). This improved knowledge could help in the use of more adequate processing conditions for the production of virgin olive oil with desired properties, for example a higher or lower phenolic content, as the amount of these minor components is directly related to its sensory, antioxidant and healthy properties. The extraction of virgin olive oil (VOO) is a critical process, as its operating conditions greatly affect the quality of the final product; moreover, it is not just a physical process that breaks the fruit tissues to free the oil droplets enclosed in the cell. Upon olive crushing, and during kneading, different enzymes involved in the generation and transformation of phenolics and volatile components are activated.
The appreciated sensory profile of this fruit juice, characterized by a unique aroma and taste (flavour), and its noticeable nutritional and biological properties, are due to the content of its minor components, mainly volatile and phenolic compounds. In fact, the fruity and green aroma of superior quality virgin olive oils is mainly produced by the volatile compounds generated by the lipoxygenase (LOX) pathway from polyunsaturated fatty acids, whereas the bitter and pungent notes of taste are mainly due to the phenolic compounds. The two main families of complex phenolic compounds found in VOO (named secoiridoids) are the derivatives of hydroxytyrosol and of tyrosol. Arbequina and Cornicabra olives were collected at a ripeness index of 2.8 and 4.5 respectively (since the Cornicabra cultivar is processed at a higher ripeness than Arbequina). Their oil content was 31.4% and 44.7% as dry weight, and the humidity was 50% and 35%, respectively. 800 g olive samples from batches of 40-50 kg were crushed using laboratory-scale hammer mills at 1500 rpm and 3000 rpm, each equipped with fixed grids with different hole diameters; a blade cutter was also used. The crushed pastes obtained from the different techniques were kneaded according to the Abencor procedure. The oils obtained were treated with anhydrous Na2SO4 as a drying agent, to preserve the samples from oxidation, and stored at 4 °C in darkness in topaz glass bottles without head space prior to analysis. The water and fat content of the olive fruit in both cultivars was assessed according to the UNE Spanish Standard method (AENOR). For the phenolic extraction, the hydromethanolic solution was recovered after centrifugation and filtered through a 0.45 μm nylon syringe filter. High-performance liquid chromatography equipped with an automatic injector, a column oven and a diode-array UV detector was used to analyse the phenolic fraction. A ZORBAX SB-C18 column, maintained at 30 °C, was used with an injection volume of 20 μL and a flow rate of 1.0 mL/min.
The mobile phase was water/acetic acid (95:5 v/v) (solvent A) and methanol (solvent B), with the following gradient: 95% A/5% B for 2 min, 75% A/25% B in 8 min, 60% A/40% B in 10 min, 50% A/50% B in 16 min, and 0% A/100% B in 14 min, held for 10 min, with a return to initial conditions in 13 min. The phenolic content of the fruit and olive paste was analysed in the following way: 4.0 ± 0.0001 g of sample was mixed with 4-hydroxyphenylacetic acid used as internal standard (2.0 ± 0.1 mg) in methanol/water (80:20 v/v) (40 mL) for 2 min with an Ultraturrax homogenizer. The suspension was shaken and then centrifuged. Phenolic compounds were identified by MS and UV-visible spectra and by the retention times of standard substances. An LCQ Deca XP Plus mass detector equipped with an electrospray ionization system was used, with nitrogen as nebulizing gas at a flow rate of 14 units, and 250 °C and 4.50 kV as the temperature and voltage of the capillary, respectively. Negative ionization mode was employed to acquire data, and fragmentation was carried out using helium with a collision energy between 30% and 40%. The quality indices (free acidity as oleic acid percentage, peroxide value in meq O2/kg, and the K232 and K270 extinction coefficients at 232 and 270 nm) were determined by the methods described in the European Union standard methods and subsequent amendments (European Community Regulation 2568/91). The phenolic compounds of the oils were determined according to a previously described method. Regarding the origin of the standards, reagents and solvents: oleuropein (>90% purity) was purchased from Extrasynthese; 4-hydroxyphenylacetic acid (98% purity), syringic acid (98% purity), and 4-methyl-2-pentanol (99%) were from Sigma-Aldrich; HPLC-grade methanol, acetonitrile and n-hexane were from Merck KGaA; ultra-purity water was produced using a Millipore Milli-Q system; all the other common reagents were of the appropriate purity from various suppliers. All experiments and analytical determinations were carried out at least in duplicate. An overview of the fate of the major polar phenolics throughout the different stages of the virgin olive oil (VOO)-making process (the crushing of the olive fruit, the kneading of the olive paste, and finally the liquid-solid separation by centrifugation of the oily phase to yield a virgin olive oil directly ready for consumption) is depicted in the accompanying figure. As recalled in the introduction, the major polar phenolic compounds found in VOO are the complex forms of hydroxytyrosol and tyrosol. These polar phenolics originate from the corresponding glucosidic forms, namely oleuropein and demethyl-oleuropein, which are enzymatically transformed during processing. The two olive cultivars chosen for this research show great differences in their minor component profiles. As stated in the introduction, the goal of this work was to analyse the feasibility of being able to understand and predict the evolution of the content of the main families of VOO polar phenolics during the making process. To this end, as explained in material and methods, batches of Cornicabra and Arbequina olives were processed using different intensities of crushing, like those used in the oil industry, with the purpose of producing a set of nine different olive pastes for each cultivar, each with a different olive paste phenolic content. As expected, the quality indices of the virgin olive oils (VOO) produced in this research were far below the limits established by European Commission Regulation 702/07. Crushing produces a profound transformation in the chemical composition of the phenolic compounds of the olive fruit. Indeed, oleuropein decreased considerably; i.e., from 7740 down to 127-794 mg/kg in the case of Cornicabra, depending on the crushing conditions. The stronger the conditions of crushing, the higher the concentration of phenolics in the olive paste in both varieties.
If a scale establishing the intensity of milling is defined by the percentage yield of the transformation of oleuropein and demethyl-oleuropein into the tyrosol secoiridoids (since this family is more stable than that of hydroxytyrosol), then it can be observed that the concentrations of hydroxytyrosol secoiridoids produced under the different crushing conditions studied are reasonably linear with respect to this milling intensity for both olive varieties studied (r2 = 0.844, p < 0.01 for Cornicabra and r2 = 0.843, p < 0.01 for Arbequina), as is the corresponding relationship for the tyrosol secoiridoids (r2 = 0.833, p < 0.01 for Cornicabra and r2 = 0.887, p < 0.01 for Arbequina). The transformation yields differ between cultivars, meaning that the effect of crushing is not simply proportional to the initial content of phenolic precursors in the olive fruit, but strongly depends on the olive cultivar processed. This relevant difference in the transformation rate may be due to the different enzyme levels in each olive variety, which significantly affect the amount of secoiridoids produced during milling. The kneading also produces an important effect on the concentration of phenolics in the olive paste, mainly due to the activity of oxidative enzymes such as polyphenol oxidase, peroxidase and lipoxygenase during malaxation. For the tyrosol secoiridoids (TyrSec), a good linear relationship between the contents before and after kneading is observed (r2 = 0.953; slope = 0.98; intercept practically 0), showing that only a small reduction occurs when kneading, this family being, as known, more stable than the hydroxytyrosol secoiridoids (HtyrSec). On the other hand, in the case of the Htyr derivatives, an apparent exponential relationship is observed. However, if the values of each olive variety are analysed separately, a different behaviour emerges, and an almost linear relationship is observed in each case, with a slope of 1.20 for Cornicabra and 0.30 for Arbequina; moreover, a great difference is observed between the two olive varieties studied.
If the relationship between the polar phenolic contents in the olive paste at the end of kneading and in the corresponding olive oil is analysed directly, an unclear relationship is observed. However, a better correlation emerges if the partition of phenolics between the oily and water phases is taken into account, assuming that all the polar phenolic compounds are solubilized in the water phase present in the olive paste (constituted by the olive fruit humidity plus the water possibly added to the malaxer during processing) and that the content of these compounds in the oily phase depends only on their partition coefficient. Average partition coefficients of 0.045 for total polar phenolics, 0.047 for HtyrSec and 0.064 for TyrSec were observed for the 18 olive pastes studied. This means that it is apparently feasible to predict the concentration of the phenolic secoiridoids that are transferred from the kneaded paste to the virgin olive oil, depending on the phenolic concentrations in the olive paste and the amount of water present: the humidity of the olive paste (35% and 50% for Cornicabra and Arbequina, respectively, in this case) and the water possibly added during malaxation. The present study shows that it is apparently feasible to predict the amount and type of phenolic compounds which evolve and are transferred along each of the different stages of the virgin olive oil making process: crushing, malaxation and liquid-solid separation. This improved knowledge could help in the use of adequate processing conditions for the production of VOO with desired properties (higher or lower phenolic content), as the amount of these minor components is directly related to the sensory, antioxidant and health-related properties of this product, highly appreciated by consumers.
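The water-phase partition model described above can be sketched as follows (the function name and the 500 mg/kg paste content are illustrative assumptions; the 50% humidity and the 0.045 coefficient are the Arbequina humidity and the average total-phenolic partition coefficient reported here):

```python
# Predicted phenolic content of the oil (mg per kg of oil), assuming, as in
# the text, that all polar phenolics dissolve in the water phase of the paste
# and reach the oil according to a constant oil/water partition coefficient.
def oil_phenolics_mg_kg(paste_phenolics_mg_kg, water_fraction, k_oil_water):
    water_phase_conc = paste_phenolics_mg_kg / water_fraction  # mg per kg of water
    return k_oil_water * water_phase_conc

# Illustrative example: a paste with 500 mg/kg total phenolics and 50% water,
# with k = 0.045, predicts about 45 mg/kg in the oil. Adding water to the
# malaxer raises water_fraction and thus lowers the predicted oil content.
```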
This research has been carried out at laboratory scale for better control of both processing conditions and sampling; however, further investigation is required at an industrial scale, employing several olive varieties and crop seasons, in order to check and validate the approach and model proposed."}
+{"text": "Other structurally unrelated agents have been reported to activate the P2X7R via a poorly understood mechanism of action: (a) the antibiotic polymyxin B, possibly a positive allosteric P2X7R modulator, (b) the bactericidal peptide LL-37, (c) the amyloidogenic β peptide, and (d) serum amyloid A. Some agents, such as Alu-RNA, have been suggested to activate the P2X7R acting on the intracellular N- or C-terminal domains. The mode of P2X7R activation by these non-nucleotide ligands is as yet unknown; however, these observations raise the intriguing question of how these different non-nucleotide ligands may co-operate with ATP at inflammatory or tumor sites. New information obtained from the cloning and characterization of the P2X7R from exotic mammalian species and data from recent patch-clamp studies are strongly accelerating our understanding of the P2X7R mode of operation, and may provide hints to the mechanism of activation of the P2X7R by non-nucleotide ligands. The P2X7 receptor (P2X7R) is a ligand-gated plasma membrane ion channel belonging to the P2X receptor subfamily activated by extracellular nucleotides. General consensus holds that the physiological (and maybe the only) agonist is ATP. However, scattered evidence generated over the last several years suggests that ATP might not be the only agonist, especially at inflammatory sites; solid data show that NAD+ can also gate the receptor. It is also worth mentioning that, to our knowledge, no analysis of P2X7R expression and function was ever repeated in the same experimental model used by Gomperts, i.e., mast cells obtained by peritoneal lavage of rats pre-immunized with ovalbumin or with antigens from the helminth parasite Nippostrongylus brasiliensis. The P2X7R belongs to the ionotropic P2X receptor subfamily and was originally known as the “P2Z receptor”.
The human P2RX7 gene is located on chromosome 12q24.31, in the proximity of the P2RX4 gene located at 12q24.32. Mouse P2rx7 and P2rx4 genes are both located on chromosome 5. The close proximity of P2RX7 and P2RX4 may suggest an origin by gene duplication (Table 1). Ten splice variants, or nine according to some authors, of the human P2X7R have been described, together with several single-nucleotide polymorphisms (SNPs) (Table 1). The most relevant mouse SNP is the P451L missense mutation that changes a proline to a leucine at position 451. The P2X7R from Ailuropoda melanoleuca (giant panda) has been crystalized, allowing 3-D reconstruction of the trimeric receptor and identification of the ATP-binding pocket and of allosteric sites. A solid dogma in this field is that the carboxyl-terminal extension of the P2X7 subunit is absolutely needed to support “macropore” formation (Figure 1). Interest in P2X7R stimulants alternative to nucleotides originated from the finding that some agents (see below) strongly synergize with ATP to stimulate P2X7R-mediated uptake of low-MW fluorescent dyes (such as ethidium bromide). Some of the agents shown to potentiate ATP-mediated P2X7R activation, as well as some widely used inhibitors, are thought to be allosteric modulators. The ATP-binding site is contributed by two adjacent subunits: structural analysis revealed three equivalent ATP-binding sites at the interface of each of the three couples of adjacent subunit contact surfaces (Figure 2). Occupancy of an allosteric site prevents the conformational changes associated with P2X7R activation and therefore might hinder the movements of P2X7R subunits necessary to allow opening of the ion-conducting pathway. Recent patch-clamp data indicate that no channel dilatation occurs even during prolonged (30 min) stimulation with ATP; this might be explained, at least in part, by the recording technique, which might perturb phospholipid mobility in the vicinity of the P2X7R.
However, in the absence of experimental proof of these hypotheses, we must stick to the hard data highlighting a discrepancy between the descriptions of P2X7R permeability features provided by electrophysiology and by cell biology. Electrophysiology and cell biology evidence might be reconciled by assuming that the “macropore” is a separate entity from the P2X7R, i.e., an accessory molecule recruited upon P2X7R activation. This accessory molecule has long been searched for, and general consensus now points to pannexin-1 as the most likely candidate. NAD+ acts via an ecto-enzyme that catalyzes the transfer of an ADP-ribose moiety from NAD+ to arginine 125, close to the ATP-binding pocket of the P2X7R; extracellular NAD+ is also controlled by the NAD+-degrading enzyme ecto-NAD+-glycohydrolase (CD38). It is not entirely clear whether NAD+ is a true P2X7R agonist or whether it lowers the activation threshold for ATP, thus sensitizing the P2X7R to autocrine/paracrine-released ATP. In any case, an increased NAD+ content has been shown at inflammatory sites. The amyloidogenic β peptide and serum amyloid A have also been suggested to directly stimulate the P2X7R, an effect associated with Ca2+ influx and ethidium bromide uptake in HEK293 cells transfected with the human P2X7R. More interesting is the activity of the bactericidal peptide cathelicidin LL-37. Cathelicidins are a family of endogenous antimicrobial peptides found in mammals, where they are either constitutively expressed or induced following injury and inflammation. LL-37 stimulates the human P2X7R, as it promotes responses typical of P2X7R activation. Polymyxin B, an antibiotic produced by Bacillus polymyxa, also acts as a positive allosteric P2X7R modulator. The inflammatory response to lipopolysaccharide (LPS), the paradigmatic bacterial endotoxin, involves ATP release and P2X7R activation (Figure 4). The activity of the P2X7R macropore can also be affected by changing the splice variants expressed; as mentioned, ten (or nine according to some authors) human P2X7R splice variants are known. Ever since its molecular cloning and functional characterization, it was assumed that the only physiologically relevant agonist of the P2X7R was extracellular ATP.
Accruing evidence from various laboratories now shows that other factors may gate this receptor, thus revealing an entirely novel and exciting scenario in which multiple agents produced during inflammation may converge on this receptor to trigger the release of pro-inflammatory factors and even cytotoxic reactions. Furthermore, novel data suggest that permeability through the P2X7R can also be modulated from the inside of the cell, albeit the mechanism involved is as yet unknown. Finally, resolution of the 3-D structure of the full-length receptor, i.e., COOH tail included, will certainly bring novel exciting information on the mechanism underlying P2X7R permeability changes. FDV delineated the outlines, wrote a section, and edited the whole review. ALG wrote a section of the review and contributed to others. VV-P wrote a section of the review. SF wrote a section of the review. ACS wrote a section, contributed to the overall writing of the review, and took responsibility for iconography. FDV is a member of the Scientific Advisory Board of Biosceptre, Ltd., a UK-based biotech company involved in the development of P2X7R-targeted therapeutics. The other authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
+{"text": "The role of the vitamin D receptor (VDR) in immune responses has been broadly studied, and it has been shown that activated VDR alters the levels of some interleukins (ILs). In this study, we studied the opposite, i.e. whether 13 selected pro-inflammatory and anti-inflammatory ILs influence the transcriptional activity of human VDR. The experimental models of choice were two human stably transfected gene reporter cell lines, IZ-VDRE and IZ-CYP24, which were designed to evaluate the transcriptional activity of VDR. The gene reporter assays revealed inhibition of calcitriol-induced luciferase activity by IL-4 and IL-13: 1 ng/mL of these two compounds decreased the effect of calcitriol down to 60% of the control value. Consistently, calcitriol-induced expression of CYP24A1 mRNA was also significantly decreased by IL-4 and IL-13. The expression of VDR and CYP27B1 mRNAs was not influenced by any of the 13 tested ILs. These data suggest possible cross-talk between the VDR signalling pathway and IL-4- and IL-13-mediated cell signalling. Vitamin D receptor (VDR) is an essential regulator of calcium homeostasis and bone metabolism. It has been shown that calcitriol (one of the D vitamins) also plays crucial roles in other physiological processes, including the induction of cell differentiation, inhibition of cell proliferation, modulation of the immune system and control of other hormonal systems. Hence, any disturbance in the VDR signalling pathway may have a severe impact on human health.
Given its many roles, the identification of compounds, endogenous or synthetic, that alter the transcriptional activation of VDR is therefore highly relevant. Previous work has described modulation of CYP27B1 by IL-6 and tumour necrosis factor alpha (TNF-α) in the COGA-1A colon cancer cell line, implying that pro-inflammatory cytokines might impair VDR activation, thereby limiting its anti-inflammatory action. The IZ-VDRE reporter cell line contains three copies of the VDR response element VDRE-I from the human CYP24A1 promoter, whereas the IZ-CYP24 line contains a fragment (base pairs -326/-46) of the human CYP24A1 promoter. For the cytotoxicity assays, cells (2×10^4 per well) were seeded in 96-well plates in DMEM supplemented with FBS, and incubated for 24 h. After incubation with various concentrations of the ILs (1 pg/mL to 100 ng/mL) for 24 h, the culture medium was replaced with medium containing 10% of MTT solution (final concentration 0.3 mg/mL) and incubated for an additional 30 min. The absorbance was measured spectrophotometrically at 540 nm using a Tecan Infinite M2000 plate luminometer. For the gene reporter assays, cells (2×10^4 per well) were seeded in 96-well plates in DMEM supplemented with FBS and incubated for 24 h. Then, the cells were treated with different concentrations of the test ILs (1 pg/mL to 100 ng/mL) in the presence or absence of 50 nM calcitriol. After 24 hours of incubation, the cells were lysed, and luciferase activity was measured using the Tecan Infinite M2000 plate luminometer. To rule out direct interference of the tested compounds with the luciferase reaction, cells were incubated for 24 h; after incubation, the cells were lysed and cell lysates containing NanoLuciferase were collected. The lysates were mixed with the highest concentration of each tested compound, and luciferase activity was measured using the Tecan Infinite M2000 plate luminometer. A decrease in luciferase activity of less than 15% in the gene reporter assays was considered a non-effect. Total RNA was isolated using TRI Reagent. Then, cDNA was synthesized from total RNA (1000 ng) using the High-Capacity cDNA Reverse Transcription Kit. qRT-PCR was carried out using Power SYBR Green PCR Master Mix on a StepOne Plus Real-Time PCR System (Applied Biosystems).
VDR, CYP24A1, CYP27B1, and ribosomal protein lateral stalk subunit P0 (RPLP0) genes were detected using the primers listed in RPLP0 as a reference gene. Data were processed using the delta-delta CT method and calculated relative to untreated control cells.Total RNA was isolated using TRI Reagent . Then, cDNA was synthesized from total RNA (1000 ng) using the High-Capacity cDNA Reverse Transcription Kit . qRT-PCR was carried out using Power SYBRwww.graphpad.com). A p-value less than 0.05 was considered to be statistically significant.All experiments were repeated three times, in three independent consecutive cell passages, and all the measurements were performed in triplicates. Data shown in the graphs are expressed as the mean \u00b1 SD. The following statistical analyses were used: Student\u2019s paired t-test and one-way analysis of variance (ANOVA) followed by Dunnett\u2019s test . All analyses were performed using GraphPad Prism version 6.00 for Windows, GraphPad Software, La Jolla, CA, USA as the positive control. We observed no induction of luciferase activity by any of the tested ILs in either cell line in the absence of calcitriol . CalcitrVDR gene, as well as the VDR-controlled genes CYP24A1 and CYP27B1 in IZ-VDRE and IZ-CYP24 cells. The experimental design and cell incubation were identical to those in the gene reporter assays (vide supra), with two incubation times (6 h and 24 h).We next examined whether the tested ILs influence the expression of the CYP24A1 mRNA levels in both IZ-CYP24 and IZ-VDRE cells, yielding fold inductions from 1\u00d7105 to 3\u00d7105 compared to the levels in untreated cells after 6 h and 24 h of incubation. In contrast, the levels of CYP27B1 and VDR mRNA were not changed by calcitriol treatment at any of the incubation times. Basal mRNA expression of VDR was inhibited by IL-2 through a mechanism involving signal transducer and activator of transcription 5 (STAT5) . The IL- CYP24A1 . 
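The delta-delta CT normalization described above can be sketched as follows; this is a minimal illustration with hypothetical Ct values, with RPLP0 as the reference gene as stated in the text:

```python
# Minimal sketch of the delta-delta CT (2^-ddCt) method with hypothetical Ct values.
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    d_ct_treated = ct_target_treated - ct_ref_treated  # normalize to reference gene (e.g. RPLP0)
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control                # treated relative to untreated control
    return 2 ** (-dd_ct)

# Hypothetical example: the target amplifies 3 cycles earlier after treatment -> 8-fold induction.
print(fold_change(22.0, 18.0, 25.0, 18.0))  # 8.0
```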
In the present study, we examined the effects of 13 pro-inflammatory and anti-inflammatory ILs on the transcriptional activity of human VDR, using the stably transfected gene reporter cell lines IZ-CYP24 and IZ-VDRE. Gene reporter assays revealed that IL-4 and IL-13 inhibited calcitriol-induced luciferase activity by 40% in IZ-CYP24 cells but not in IZ-VDRE cells. Because IZ-VDRE cells respond primarily to VDR-mediated effects through the VDRE sequences in the promoter, whereas IZ-CYP24 cells contain the reporter gene under the control of a 180-bp sequence from the CYP24A1 promoter that contains binding sites for many other transcriptional factors in addition to VDR, it appears that the effects of IL-4 and IL-13 are independent of VDR. None of the other tested ILs had any significant effect on basal or calcitriol-induced luciferase activity. Consistent with the gene reporter assay results, calcitriol-induced expression of CYP24A1 mRNA was significantly decreased by IL-4 and IL-13 in both cell lines. In contrast, the expression of VDR and CYP27B1 mRNAs was not influenced by any of the tested ILs in either cell line, regardless of incubation time or the presence of calcitriol. These findings are consistent with our previous results from COGA-1A colon adenocarcinoma cells, where we observed no modulatory effect of IL-6 on CYP27B1 and CYP24A1 mRNA levels. While we observed the expected significant up-regulation of CYP24A1 mRNA by calcitriol, we did not observe any change in the expression of CYP27B1 or VDR mRNA in either of the cell lines when incubated with calcitriol. This was unexpected, as there are reports in the literature showing that calcitriol down-regulates CYP27B1 expression through trans-repression."}
+{"text": "Metallic, especially gold, nanostructures exhibit plasmonic behavior in the visible to near-infrared light range. In this study, we investigate optical enhancement and absorption of gold nanobars with different thicknesses for transverse and longitudinal polarizations using finite element method simulations. This study also reports on the discrepancy in the resonance wavelengths and optical enhancement of the sharp-corner and round-corner nanobars of constant length 100 nm and width 60 nm. The result shows that resonance amplitude and wavelength have strong dependences on the thickness of the nanostructure as well as the sharpness of the corners, which is significant since actual fabricated structure often have rounded corners. Primary resonance mode blue-shifts and broadens as the thickess increases due to decoupling of charge dipoles at the surface for both polarizations. The broadening effect is characterized by measuring the full width at half maximum of the spectra. We also present the surface charge distribution showing dipole mode oscillations at resonance frequency and multimode resonance indicating different oscillation directions of the surface charge based on the polarization direction of the field. Results of this work give insight for precisely tuning nanobar structures for sensing and other enhanced optical applications. Strong local-field enhancements, light absorption, and scattering all occur at a resonant incident wavelength, which can depend on the polarization of the light [When light illuminates metal nanostructures, the free electron gas density oscillates collectively. This collective oscillation is known as a he light \u20133. Due the light ; as a rehe light . 
Plasmonic effects have been exploited in nanorod-based devices, optical enhancement, and solar cell applications. Others have investigated, both computationally and experimentally, the plasmonic properties of various metal nanostructures. Computational analyses were performed on gold nanobars of constant length and width with different thicknesses using COMSOL simulations. The simulations are performed in three-dimensional space, where the length of the nanobar is 100 nm, the width is 60 nm, and the thickness varies from 8 nm to 60 nm. The geometries are chosen to represent nanobar structures that can be fabricated with electron beam lithography on a silicon substrate with a silicon dioxide layer. The substrate effect was approximated by an effective medium with neff = 1.25 around the nanobar. A normally incident plane wave excites the structure. The optical enhancement, defined as the ratio of the local electric field E to the incident electric field E0 squared (E^2/E0^2), was studied, since light intensity is proportional to the electric field squared. Around the sample, an integration space of radius 125 nm was defined for the near-field region where most of the enhancement occurs; this integration space was used to calculate the optical enhancement of the nanobar. The absorption spectra of the nanobars were calculated for each thickness variation and plotted for both polarizations. For the transverse polarization, the observed trend is the same, but the amplitude is reduced compared to the longitudinal polarization. For the same geometrical parameters, the resonance peak position differs between the two polarizations: the resonance wavelength for longitudinal polarization is larger than for transverse polarization, because the plasmonic response depends on the geometric length along which the electric field is polarized. 
The full width at half maximum was extracted from each spectrum to quantify this broadening. The electric field distribution (E/E0) at the resonant wavelength is shown for each thickness for the sharp-corner and round-corner rectangular gold nanobars. S2 File (XLSX)."}
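The FWHM extraction used above to characterize spectral broadening can be sketched as below; this is a minimal pure-Python sketch, and the synthetic Lorentzian test peak is illustrative, not simulation data:

```python
# Full width at half maximum of a single-peaked spectrum, with linear
# interpolation at the two half-maximum crossings.
def fwhm(x, y):
    half = max(y) / 2.0
    idx = [i for i, v in enumerate(y) if v >= half]
    i0, i1 = idx[0], idx[-1]

    def cross(a, b):
        # x-position where the curve crosses the half-maximum between samples a and b
        return x[a] + (half - y[a]) * (x[b] - x[a]) / (y[b] - y[a])

    left = cross(i0 - 1, i0) if i0 > 0 else x[0]
    right = cross(i1, i1 + 1) if i1 + 1 < len(y) else x[-1]
    return right - left

# Synthetic Lorentzian peak centered at 600 nm with half-width gamma = 25 nm,
# whose analytical FWHM is 2 * gamma = 50 nm.
gamma = 25.0
wl = [400 + 0.2 * i for i in range(2001)]
spec = [gamma**2 / ((w - 600.0)**2 + gamma**2) for w in wl]
print(round(fwhm(wl, spec), 2))  # 50.0
```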
+{"text": "To describe the process of cross-cultural adaptation of the Patient-Doctor Relationship Questionnaire (PDRQ-9), as well as compare the agreement between two different types of application.Mais M\u00e9dicos Program Evaluation Research, which is a cross-sectional study with a systematic sample of Primary Care Services in all regions of Brazil. We evaluated the semantic, conceptual, and item equivalence, as well as factor analysis and reliability. This is a cross-sectional study with 133 adult users of a Primary Health Service in Porto Alegre, State of Rio Grande do Sul, Brazil. The PDRQ-9 was answered by the participants as a self-administered questionnaire and in an interview. The instrument was also validated by interview, using data from 628 participants of the All items presented factor loading > 0.5 in the different methods of application and populations in the factor analysis. We found Cronbach\u2019s alpha of 0.94 in the self-administered method. We found Cronbach\u2019s alpha of 0.95 and 0.94 in the two different samples in the interview application. The use of PDRQ-9 with an interview or self-administered was considered equivalent. The cross-cultural adaptation of the PDRQ-9 in Brazil replicated the factorial structure found in the original study, with high internal consistency. The instrument can be used as a new dimension in the evaluation of the quality of health care in clinical research, in the evaluation of services and public health, in health management, and in professional training. Further studies can evaluate other properties of the instrument, as well as its behavior in different populations and contexts. 
It can be seen as the relationship of trust, therapeutic alliance, or empathy developed between physician and patient. In the context of Primary Health Care (PHC), the DPR is inserted within longitudinality, which is one of the essential attributes of PHC defined by Starfield. The DPR in the clinical setting is usually measured from the perception of patients. We found no instruments that evaluate the DPR in Brazil adapted to the scenario of outpatient medical practice. In addition, the Brazilian population of illiterates or functional illiterates reaches 17.6% of persons aged 15 years or more, and this value can reach 27.1% in the Northeast region. We performed a cross-cultural adaptation according to the recommendations of the Consensus-based Standards for the Selection of Health Measurement Instruments (COSMIN Initiative). The PDRQ-9 is a nine-item instrument. Each item of the instrument is a statement about different attributes of the DPR, which evaluate the relational and satisfaction aspects. The instrument was developed to be self-administered, and the patient should answer how much each statement is appropriate on a five-point Likert scale. In a population, the score of each item is calculated as the arithmetic mean of the answers to that item, and a general score is calculated as the arithmetic mean of the answers to the nine items. Two samples were used to evaluate the psychometric properties of the PDRQ-9. The sample of the main validation study (MVS) aimed to evaluate the instrument both when self-administered and when applied in an interview. The MVS was a cross-sectional study with 133 users of a Primary Care Service (PCS) in Porto Alegre, State of Rio Grande do Sul, Brazil. We used convenience sampling, stratified by sex and two age groups (18 to 59 years and \u2265 60 years). The strata were defined using data from a large PCS in Porto Alegre. Data collection took place between September and December 2016. 
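The scoring rule described above (item score = arithmetic mean across respondents; overall score = mean of the nine item scores) amounts to the following; the respondent data here are hypothetical, not study data:

```python
# PDRQ-9 scoring sketch: nine items, each answered on a 1-5 Likert scale.
def pdrq9_scores(responses):
    """responses: list of 9-item answer lists, one per respondent."""
    n = len(responses)
    item_means = [sum(r[i] for r in responses) / n for i in range(9)]  # per-item mean
    overall = sum(item_means) / 9                                      # general score
    return item_means, overall

# Two hypothetical respondents (illustrative only).
answers = [
    [5, 4, 5, 4, 5, 4, 5, 4, 5],
    [4, 4, 4, 4, 4, 4, 4, 4, 4],
]
item_means, overall = pdrq9_scores(answers)
print(round(overall, 2))  # 4.28
```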
Users were approached after medical consultation by trained interviewers. To be included, users should have four or more years of education and at least two appointments with that physician. They answered the self-administered PDRQ-9, deposited their answers in a ballot box, and then answered the same instrument in an interview. The patients did not know that they would have to answer the instrument again in an interview when invited to fill in the self-administered questionnaire. In order to assess the stability of the scale over time, participants received the instrument again by letter or e-mail after two weeks, in order to answer it at home. We calculated the sample size of the MVS to test the equivalence between two paired means, according to the Bland-Altman procedure. We also evaluated the instrument when applied in an interview, using a sample of participants of the Mais M\u00e9dicos Program Evaluation Research (PAPMM), which is a cross-sectional study with a systematic sample of Primary Care Services (PCS) throughout Brazil. The objective of this study was to evaluate the quality of the medical care offered to adult users of the Family Health Strategy (FHS) in Brazil. Cuban and Brazilian doctors of the Mais M\u00e9dicos Program (PMM) were compared to Brazilian doctors who did not work with the PMM. In each sampled PCS, approximately twelve adult users (\u2265 18 years) with at least two appointments with a previously selected physician were approached by consecutive selection after their appointment. These users answered several instruments administered by trained researchers, among them the PDRQ-9. Of the 6,200 users interviewed in the PAPMM, 10.0% of the participants were randomly selected for the evaluation of the properties of the PDRQ-9. This sub-sample was stratified by state, size of the city, number of FHS teams, and work category of the physician (part or not of the Mais M\u00e9dicos Program). We did not include data of patients cared for by Cuban doctors, since the purpose of the study was the cross-cultural adaptation to Brazil, including questions related to Brazilian Portuguese. The instrument was selected by one of the authors (LW) after reviewing the literature on the subject. The face and content validity of the scale was evaluated based on national recommendations. Two translations were made from English to Portuguese by two independent translators who were native English speakers. Back translation into English was performed by another pair of independent translators, who were Brazilians fluent in English. Four pre-tests were performed, with ten questionnaires each, among adult users of the same PCS of the MVS. The objective of the questionnaire was explained to the participants, who were asked whether they considered the statements comprehensible, and relevant results were discussed with the research team after each pre-test. Doubts were discussed with the author of the original instrument (CMVF). We used factor analysis extraction with principal axis factoring to evaluate the validity related to the construct. We selected the items with factor loading above 0.30. This research was approved by the Research Ethics Committee of the Hospital de Cl\u00ednicas of Porto Alegre in 2015 (CAAE 48653615.6.0000.5327) and by the Ethics Committees of all cities participating in the PAPMM that requested such approval. The information collected was kept confidential, and the names of the participants were not disclosed. The data were presented grouped, keeping the confidentiality of each individual. All interviewees received a clear explanation of the objectives of the study. The participants signed the informed consent. The expert committee considered the instrument appropriate in relation to face and content for use in the Brazilian context. Translations and back translations were compared to each other and to the original version, and the first version of the pre-test instrument was developed. The translation of the word \u201cappropriate\u201d into \u201cconcordo\u201d (\u201cagree\u201d) in the answer options of the instrument was suggested. This change was considered appropriate by the expert committee and approved by the author of the original instrument. In general, participants had a good understanding of the questionnaire. Different words and syntaxes were tested to improve understanding, keeping the original meaning of each item: item 6 \u2013 nature versus cause, symptoms versus medical symptoms; item 7 \u2013 speaking versus talking; item 8 \u2013 satisfied versus content; item 9 \u2013 have access versus easily accessible. At the end of the fourth pre-test, we reached the version used to test the psychometric properties. There were no missing data in any of the questionnaires used in the MVS and PAPMM. Seventeen physicians were responsible for the care of the participants of the MVS. The mean age of physicians in this sample was 32 years and 70.6% were women; 29.4% were specialized in family medicine. The mean time of medical practice was 4.7 years, and they had worked in the PCS for 2.3 years, on average. They had a mean weekly workload of 54 hours and cared for approximately 34 patients per week in the PCS. In the PAPMM, 52 physicians were responsible for the care of the participants, of whom two refused to provide their data. The mean age of physicians in this sample was 39 years and 50.0% were women; 72.0% were specialized in family medicine. The mean time of medical practice was 12.2 years, and they had worked in the PCS of the research for 3.6 years, on average. 
They had a mean weekly workload of 60 hours, caring for approximately 126 patients per week in the PCS of the research. Factor loading of the self-administered PDRQ-9 in the population of the MVS was > 0.30 for all items, and item-total correlation was > 0.50. We obtained an overall score of 4.45 (SD = 0.7) using the self-administered PDRQ-9. In the reliability assessment, we found a Cronbach\u2019s alpha of 0.94. The variance explained by the extracted factor was 65.3%. Factor loading of the PDRQ-9 applied in an interview in the populations of the MVS and PAPMM was > 0.30, and item-total correlation was > 0.50, for all items. When evaluating the reliability of the PDRQ-9 applied in an interview in the MVS, we found a general score of 4.43 (SD = 0.7), with a Cronbach\u2019s alpha of 0.95 and variance explained by the extracted factor of 70.2%. In the PAPMM, the overall score obtained was 3.23 (SD = 0.8), with a Cronbach\u2019s alpha of 0.94 and explained variance of 65.6%. Thirty-five participants of the MVS completed the retest questionnaire sent after two weeks. There were no differences related to sex, race, age, education level, number of appointments, or score of the instrument between respondents and non-respondents of the retest. We found an intraclass correlation coefficient (ICC) of 0.96 (95%CI 0.94\u20130.98) between the retest and the self-administered instrument. The Bland-Altman scatter plot used to evaluate the time stability of the PDRQ-9 suggested a homogeneous distribution, with greater agreement for extreme values. The upper limit of agreement can be considered slightly enlarged. The ICC was 0.94 (95%CI 0.93\u20130.95) in the agreement assessment between the self-administered and interview methods. The Bland-Altman scatter plot presented a homogeneous distribution, a difference of means very close to zero, and narrow limits of agreement. 
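The Bland-Altman agreement analysis referred to above can be sketched as follows; the paired scores here are illustrative, not the study's data:

```python
import statistics

# Bland-Altman bias and 95% limits of agreement for two measurement methods.
def bland_altman(a, b):
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)   # mean difference between methods
    sd = statistics.stdev(diffs)    # SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical PDRQ-9 overall scores for the same users under both methods.
self_administered = [4.2, 4.8, 3.9, 4.5, 5.0, 4.1]
interview = [4.3, 4.7, 4.0, 4.4, 5.0, 4.2]
bias, lower, upper = bland_altman(self_administered, interview)
print(round(bias, 3))  # -0.017 (difference of means close to zero)
```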
The cross-cultural adaptation of the PDRQ-9 replicated the one-dimensional structure observed in the original study. The high internal consistency verified in this study, whether by the self-administered or the interview method in the different populations (\u03b1 = 0.94\u20130.95), can also be observed in the other evaluations of this instrument, such as the Dutch version (\u03b1 = 0.94). The application of the PDRQ-9 in the PAPMM allowed its cross-cultural adaptation with a sample of participants from all regions of Brazil. These users were found in PHC services of the Brazilian Unified Health System, in their different types of organization and offer of care. In addition, we could include persons with great individual and social plurality. These factors add robustness to the presented results. Although originally designed to be self-administered, the PDRQ-9 has already been validated in Spain for use through interviews. To evaluate the stability of the scale over time, the response rate after two weeks was low (26.3%), which was also verified in the original PDRQ-9 validation study (33%). This study presents limitations. We did not evaluate the time needed to answer the instrument; therefore, we could not perform analyses related to learning bias or interference of factors such as education level. The participants of the MVS may have felt compelled to answer the instrument identically, as they had to answer the PDRQ-9 using two different methods in sequence, which may have underestimated the difference between the methods. The participants\u2019 lack of knowledge that they would answer the instrument a second time minimizes this effect. On the other hand, the use of the ballot box reinforced that the goal in answering the instrument for the second time was not to remember what was already answered, but to provide a new authentic answer. 
The application of the instrument in the health service can lead to socially acceptable answers and overestimate the judgment of the persons towards their physicians. As in other studies, this was minimized by using interviewers not tied to the service and by ensuring the anonymity of answers. The cross-cultural adaptation of the PDRQ-9 to the Brazilian context made available a concise and versatile instrument for the evaluation of the DPR, especially in the PHC scenario. It can be self-administered or applied in an interview. Further studies may evaluate other properties of the scale, as well as its behavior in different population strata and specific contexts. The use of the PDRQ-9 will allow the inclusion of a new dimension of the quality of health care in clinical research, in the evaluation of services, in health management, in pay for performance, and in professional training."}
+{"text": "Triticum aestivum L.) is one of the most abiotic stresses of the crop restricting forage and grain production in the Southern Plains of the United States. To map quantitative trait loci (QTLs) and identify single-nucleotide polymorphism (SNP) markers associated with seedling heat tolerance, a genome-wide association mapping study (GWAS) was conducted using 200 diverse representative lines of the hard red winter wheat association mapping panel, which was established by the Triticeae Coordinated Agricultural Project (TCAP) and genotyped with the wheat iSelect 90K SNP array. The plants were initially planted under optimal temperature conditions in two growth chambers. At the three-leaf stage, one chamber was set to 40/35\u00b0C day/night as heat stress treatment, while the other chamber was kept at optimal temperature (25/20\u00b0C day/night) as control for 14 days. Data were collected on leaf chlorophyll content, shoot length, number of leaves per seedling, and seedling recovery after removal of heat stress treatment. Phenotypic variability for seedling heat tolerance among wheat lines was observed in this study. Using the mixed linear model (MLM), we detected multiple significant QTLs for seedling heat tolerance on different chromosomes. Some of the QTLs were detected on chromosomes that were previously reported to harbor QTLs for heat tolerance during the flowering stage of wheat. These results suggest that some heat tolerance QTLs are effective from the seedling to reproductive stages in wheat. However, new QTLs that have never been reported at the reproductive stage were found responding to seedling heat stress in the present study. Candidate gene analysis revealed high sequence similarities of some significant loci with candidate genes involved in plant stress responses including heat, drought, and salt stress. This study provides valuable information about the genetic basis of seedling heat tolerance in wheat. 
To the best of our knowledge, this is the first GWAS to map QTLs associated with seedling heat tolerance targeting early planting of dual-purpose winter wheat. The SNP markers identified in this study will be used for marker-assisted selection (MAS) of seedling heat tolerance during dual-purpose wheat breeding.Heat stress during the seedling stage of early-planted winter wheat ( Triticum aestivum L.) is one of the most important feed and food crops in the world and it covers more cultivable land globally than any other crop. Moreover, it provides food for 36% of the world\u2019s population Moench] (Zea mays L.) (Oryza sativa L.) , and rictiva L.) . Heat sttiva L.) . The yietiva L.) . Heat toTo date, dissection of QTLs for heat tolerance in wheat has been mainly conducted during the grain filling stage using bi-parental mapping populations . These sMoreover, using a meta-analysis strategy, major QTLs associated with heat tolerance were detected on chromosomes 1B, 2B, 2D, 4A, 4D, 5A, and 7A . SimilarThe GWAS approach has been used to discover genes controlling both polygenic and monogenic traits. For example, QTLs associated with important traits such as disease resistance , yield, Although heat tolerance during the reproductive stage of wheat has been well characterized, heat stress during the seedling stage is not studied. Therefore, the objectives of this study were: (1) to map QTLs associated with seedling heat tolerance in wheat and (2) to identify SNP markers for MAS of seedling heat tolerance during dual-purpose wheat breeding in the southern Great Plains of the United States.1), was used in this study. The association mapping panel is composed of representative winter wheat lines across the Great Plains . Three measurements of leaf chlorophyll content were taken per line, and the average was used for statistical analysis. Shoot length was measured from the soil surface to the tip of the longest leaf. 
Leaf chlorophyll content and shoot length were measured 10 days after the heat stress treatment. Number of leaves per seedling was recorded as the average number of leaves counted from three seedlings, 14 days after the seedlings were exposed to heat stress. Seedling recovery was the percentage of seedlings that were able to recover 7 days after removal of the heat stress treatment. Heat stress response, referred to as trait relative difference (TRD), was calculated as the difference between trait performance at optimal and high temperatures, divided by performance at the optimal temperature. The experiment was repeated six times using the same two chambers. Analysis of variance of the phenotypic data was performed using the Statistical Analysis System (SAS) software V9.3. The wheat lines were genotyped using the wheat iSelect 90K SNP genotyping array. Population structure was assessed with a model-based Bayesian clustering approach, estimating the number of groups (K) and the membership coefficients, where the number of assumed groups was set from k = 1 to 10. During the STRUCTURE analysis, a Markov chain Monte Carlo (MCMC) of 15,000 burn-in replicates followed by 15,000 iterations was run and repeated five times using an admixture model. Because of the many admixtures in the panel, the STRUCTURE results were verified by comparison with other analyses. The optimal number of groups in this panel was determined based on the point where the posterior probability [LnP(D)] began to plateau in the STRUCTURE analysis. The kinship (K) analysis between lines was performed following the identity-by-state method, together with principal component analysis (PCA) and neighbor-joining (NJ) tree analysis. The STRUCTURE program version 2.3.4 and software version 5.2.28 were used for these analyses. Linkage disequilibrium (LD) was evaluated as the allele frequency correlation (r2) between SNP marker pairs. All SNP marker pairs with p-values of less than 0.001 were considered to be in significant LD. 
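The trait relative difference (TRD) described in the phenotyping methods above amounts to the following; the input values are the mean leaf chlorophyll contents reported later in the results:

```python
# TRD as described: (optimal - heat-stressed) / optimal.
def trait_relative_difference(optimal, stressed):
    return (optimal - stressed) / optimal

# Mean leaf chlorophyll content from the results: 38.3 (optimal) vs 26.7 (heat-stressed).
reduction_pct = 100 * trait_relative_difference(38.3, 26.7)
print(round(reduction_pct, 1))  # 30.3, matching the reported 30.3% reduction
```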
LD decay distance was estimated by plotting the scatterplot of LD r2 values between marker pairs against the genetic distance (in cM). Linkage disequilibrium among pairs of SNP markers was computed with the TASSEL software using 3,484 tag SNPs selected with the R package SNPRelate. To determine whether other significant SNPs were in LD with the most significant SNP hit, LD analysis was performed on every chromosome where significant QTLs were detected. Genome-wide association mapping was performed with the Genome Association and Prediction Integrated Tool (GAPIT). For the Q model, the following equation was used: Y = X\u03b2 + e, where Y is the vector of phenotypic values, X is the design matrix, \u03b2 is the vector consisting of SNP markers and population structure (PCs) included in the model as fixed effects, and e is the random error. For the K and MLM models, the following equation was used: Y = X\u03b2 + Z\u03bc + e, where Z is the design matrix and \u03bc is the vector comprising additive genetic effects considered as random. In the K-model, \u03b2 contains only markers and \u03bc contains the K-matrix, while in the MLM, \u03b2 has both markers and population structure (PCs), and \u03bc has the K matrix. Significant QTLs were initially tested based on a false discovery rate (FDR)-adjusted p-value of 0.05 following a step-wise procedure. A BLAST search was performed against the newly released wheat reference sequence hosted by URGI-INRA to identify candidate genes or related proteins with DNA sequences similar to the SNPs significantly associated with seedling heat tolerance-related traits detected in this study. Summary statistics for all traits are presented in Table 1, and frequency distributions of the lines for the investigated traits at optimal and heat-stressed growth conditions are presented in Figure 1. Mean leaf chlorophyll content at the optimal temperature was 38.3, with a range from 31.8 to 44.9, while for heat-stressed plants, mean leaf chlorophyll content was 26.7, ranging from 17.0 to 37.1. 
At the optimal temperature, mean shoot length was 44.9 cm, ranging from 35.0 to 56.5 cm, whereas under the heat-stressed growth condition, the mean value was 33.8 cm, with a range from 23.5 to 44.4 cm. Mean number of leaves per seedling was six at the optimal temperature compared with four under the heat-stressed growth condition. For the number of leaves per seedling, phenotypic variation among lines was very small, as shown in Figure 1, because almost all plants were at the three-leaf stage when the experiment started; as a result, variation in the number of leaves per seedling among lines remained small by the end of the 14-day temperature treatment. As for seedling recovery, on average 52.3% of seedlings were able to recover after the removal of the heat stress treatment (Table 1). Overall, heat stress reduced leaf chlorophyll content, shoot length, and number of leaves per seedling by 30.3, 25.0, and 32.2%, respectively. Phenotypic variation was observed among genotypes for all traits in both temperature regimes (Table 1). The PCA revealed that the population structure in this panel is very low, since the first three PCs collectively explained only about 19.4% of the total variance: the first principal component (PC1) explained about 9.4%, while the second (PC2) and the third (PC3) explained about 6.2 and 3.8% of the total variance, respectively. According to the NJ tree analysis, this panel can also be divided into four major groups, based mainly on geographic origins and pedigree information. For example, in the first main group (G1), the majority of the lines were from the Oklahoma State University and the Texas A&M University. Most lines with a common parent in their pedigree tended to cluster into the same group; for example, the majority of the lines assigned to G1 had \u201cJagger\u201d as one of the parents in their pedigree. 
The largest number of lines forming G2 originated from the University of Nebraska breeding program, followed by the Kansas State University and the Colorado State University. Group G3 was dominated by wheat lines from AgriPro Syngenta, followed by those from the University of Nebraska wheat breeding program. Finally, the largest number of lines in G4 came from the Texas A&M University, followed by those from the Oklahoma State University. Three different clustering methods, PCA, NJ tree analysis, and STRUCTURE analysis, were compared to assess their agreement in the pattern of structuring of this panel. PCA divided the panel into four main groups with lots of admixture. The lack of a distinct clustering pattern observed in this panel reflects the high degree of relatedness among lines included in this study, due to the sharing of genetic materials among wheat breeding programs. For GWAS analysis, we used the three PCs from the PCA as a fixed-effect covariate in the Q and MLM models to correct for population structure. The STRUCTURE program also stratified the panel into four groups, but with many admixtures. In the A genome, a substantial proportion of SNP marker pairs were in significant LD (p < 0.001), while in the B and D genomes, 32.8 and 14.0% of SNP marker pairs were in significant LD (Supplementary Table S1). The scatter plots of the allele frequency correlations (r2) between the SNP marker pairs and the genetic distance (in cM) within each of the three wheat genomes are presented in Supplementary Figure S2. The data showed that LD decayed to <0.1 at 9.7 cM in the A genome, 9.8 cM in the B genome, and 10.9 cM in the D genome. After filtering using the R package SNPRelate, 3,484 tag SNPs were retained. Quantile\u2013quantile (Q\u2013Q) plots of p-values comparing the uniform distribution of the expected \u2013log10(p) to the observed \u2013log10(p) of all evaluated traits are presented in Supplementary Figure S3. Genome-wide association mapping analysis results for all traits using the MLM are presented in Figures 6. 
The QTLs and the SNP markers significantly associated with seedling traits at optimal and heat-stressed growth conditions, as well as with the heat stress responses of all traits, are presented in Supplementary Table S2. Although no QTLs were declared significant at an FDR of 0.05, some SNPs were significant at an unadjusted p-value <0.001 under optimal and/or heat-stressed growth conditions. Compared to the Q and K models, the MLM has high statistical power while controlling false positives; therefore, in this study the MLM was chosen as the appropriate model for reporting QTL mapping results. For leaf chlorophyll content at the optimal temperature, five QTLs, represented by 15 SNPs, were significant at an unadjusted p-value <0.001 on chromosomes 1B, 2B, 3B, 5B, and 6B (Figure 3B and Supplementary Table S2). The first QTL region (QLCCOT.nri-1B) was represented by six SNPs mapped within 78–82 cM on chromosome 1B, which together accounted for 42.9% of the total phenotypic variation in leaf chlorophyll content at the optimal temperature. The second QTL region (QLCCOT.nri-2B), represented by four SNPs, was mapped at 119 cM on chromosome 2B; the four markers together explained 23.3% of the phenotypic variation. On chromosome 5B, one QTL (QLCCOT.nri-5B) was mapped at 171–184 cM and explained 18.6% of the phenotypic variation. On chromosomes 3B and 6B, two QTLs, QLCCOT.nri-3B (124 cM) and QLCCOT.nri-6B (121 cM), were detected that collectively accounted for 11.7% of the phenotypic variation. Overall, the most significant SNPs for the trait were IWB9175 (80 cM), IWB14950 (80 cM), and IWB27292 (78 cM) on chromosome 1B, which collectively explained about 23.8% of the total phenotypic variation in leaf chlorophyll content under the optimal growth temperature.
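The contrast drawn above between an FDR of 0.05 and an unadjusted p < 0.001 can be illustrated with the Benjamini–Hochberg step-up procedure, one common way an FDR threshold is applied (the paper does not state which FDR procedure was used, so this is an assumption, and the p-values below are invented):

```python
import numpy as np

def bh_significant(pvals, fdr=0.05):
    """Benjamini-Hochberg step-up: boolean mask of p-values significant at the given FDR."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresh = fdr * np.arange(1, m + 1) / m          # BH thresholds for ranks 1..m
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True                          # all ranks up to the largest passing rank
    return mask

pvals = [0.0001, 0.0005, 0.02, 0.4, 0.9]
print(bh_significant(pvals))
```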
For leaf chlorophyll content at the heat-stressed growth condition, six QTLs were detected on chromosomes 2B, 2D, 4A, and 4B. The first QTL (QLCCHS.nri-2B) was located on chromosome 2B and explained 12.1% of the phenotypic variation in the trait. On chromosome 2D, one QTL, QLCCHS.nri-2D (71–86 cM), was detected; it was represented by 37 SNPs, explaining 5.7–7.8% of the phenotypic variation in leaf chlorophyll content under heat stress. On chromosome 4A, one QTL (QLCCHS.nri-4A) was mapped at 9 cM and explained about 5.8% of the phenotypic variation. In addition, three QTLs were detected at 42, 60, and 76 cM on chromosome 4B; the phenotypic variation explained by these QTLs ranged from 5.8 to 17.5%. The most significant SNP markers associated with leaf chlorophyll content under heat stress were IWB28109 (71 cM) and IWB65632 (77 cM) on chromosome 2D, and IWB55435 (27 cM) on chromosome 2B (Supplementary Table S2); these three markers together accounted for 21.4% of the phenotypic variation. For heat stress response of leaf chlorophyll content, the QTLs were represented by 39 SNPs significantly associated with the response. A single QTL (QLCCHR.nri-2B) was detected at 27 cM on chromosome 2B and explained 6.8% of the phenotypic variation. On chromosome 2D, two QTLs, QLCCHR.nri-2D.1 (22 cM) and QLCCHR.nri-2D.2 (71–85 cM), were identified; together they were represented by 29 SNPs, accounting for 5.8–7.1% of the total phenotypic variation in heat stress response of leaf chlorophyll content. Furthermore, one QTL mapped at 9 cM on chromosome 4A accounted for 6.6% of the phenotypic variation in the response. On chromosome 4B, two QTLs (QLCCHR.nri-4B.1 and QLCCHR.nri-4B.2) were detected at genetic positions of 40 and 76 cM, respectively.
Similarly, on chromosome 5B, one QTL (QLCCHR.nri-5B) was mapped at 182–189 cM; it was represented by four SNPs, which together explained about 25.1% of the total phenotypic variation in heat stress response of the trait. The most significant SNPs were IWB28109 at 71 cM on 2D, IWB55435 at 27 cM on 2B, and IWB48055 at 40 cM on 4B; these markers accounted for 6.6–7.1% of the phenotypic variation in heat stress response of leaf chlorophyll content. In total, for heat stress response of leaf chlorophyll content, i.e., the relative difference of the trait under the two growth temperatures, seven QTLs were identified on chromosomes 2B, 2D, 4A, 4B, and 5B (Table 2). Overall, the data suggest that the leaf chlorophyll content QTLs associated with heat stress or heat response are located on chromosomes 2B, 2D, 4A, 4B, and 5B, based on the QTLs detected for heat response of the trait or the QTLs detected under heat-stressed but not optimal conditions. For shoot length at the optimal growth temperature, two QTLs, represented by four SNPs, were significant at an unadjusted p-value <0.001 (Figure 4B and Supplementary Table S2). The first QTL (QSLOT.nri-4B), represented by three SNPs, was mapped at 57–63 cM on chromosome 4B and explained 17.4% of the phenotypic variation in shoot length at the optimal growth temperature. The other QTL (QSLOT.nri-7B) was mapped at 54 cM on chromosome 7B and explained about 5.3% of the phenotypic variation in shoot length. Under the heat-stressed growth condition, the QTL on chromosome 4B (QSLHS.nri-4B) was mapped at 57–60 cM and explained 12.8% of the phenotypic variation in shoot length, while the QTL on chromosome 7B (QSLHS.nri-7B) was represented by two SNPs mapped within 54–58 cM; together, the two markers explained 10% of the phenotypic variation in shoot length under heat stress.
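Heat stress response is defined here as the relative difference of a trait between the two growth temperatures. A one-line sketch of that calculation, applied to the mean shoot lengths reported earlier (44.9 cm at the optimal temperature and 33.8 cm under heat stress):

```python
def heat_stress_response(optimal, stressed):
    """Relative reduction (%) of a trait under heat stress versus optimal growth."""
    return (optimal - stressed) / optimal * 100.0

# Mean shoot lengths (cm) reported in the text; the per-trait percentages in the
# paper were presumably averaged over lines, so small rounding differences remain.
print(round(heat_stress_response(44.9, 33.8), 1))
```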
The most significant markers were the same as those detected at the optimal growth condition, located on chromosomes 4B and 7B, indicating that the detected shoot length QTLs are expressed under both optimal and heat-stressed growth conditions and thus are not necessarily related to heat stress. Accordingly, at the heat-stressed growth condition the same two QTLs for shoot length were found on chromosomes 4B and 7B. For heat stress response of shoot length, three QTLs were detected on chromosomes 3B and 7D (Table 2). On chromosome 3B, two QTLs (QSLHR.nri-3B.1 and QSLHR.nri-3B.2) were found, one mapped at 10 cM and the other at 67 cM, together explaining 11.8% of the phenotypic variation in heat stress response of shoot length. The third QTL (QSLHR.nri-7D) was located at 27 cM on chromosome 7D; it was represented by two SNPs, which collectively explained 12.8% of the phenotypic variation in the response. In short, because the same QTLs were detected under optimal and heat-stressed growth conditions, shoot length QTLs responding to heat stress were only found, on chromosomes 3B and 7D, by mapping the heat stress response of the trait (Table 2). At the optimal growth condition, four QTLs associated with the number of leaves per seedling were detected at genetic positions of 56, 77–78, 177–181, and 68–72 cM on chromosomes 1B, 2A, 3A, and 4B, respectively. The SNP markers representing these QTLs explained 5.8–8.9% of the total phenotypic variation in the number of leaves per seedling at the optimal growth condition. The two most significant SNP markers (IWB40186 and IWB25267) were co-localized at 78 cM on chromosome 2A and explained 17.2% of the phenotypic variation in the number of leaves per seedling; the third most significant SNP was mapped at 68 cM on chromosome 4B and accounted for 7.4% of the phenotypic variation. The first QTL detected under the heat-stressed growth condition (QLNHS.nri-1B) was mapped at 112 cM on chromosome 1B and explained about 6.2% of the phenotypic variation.
At the heat-stressed growth condition, four QTLs significantly associated with the number of leaves per seedling were detected; the second (QLNHS.nri-3B), third (QLNHS.nri-4B), and fourth (QLNHS.nri-5A) QTLs were located at 66, 64, and 115 cM on chromosomes 3B, 4B, and 5A, respectively, and collectively explained 24.2% of the phenotypic variation in the number of leaves per seedling under heat stress. For heat stress response of the number of leaves per seedling, seven QTLs, represented by 26 significant SNPs, were detected on chromosomes 2A, 3A, 4B, 5B, and 7B; the phenotypic variation explained by these SNPs varied from 6.5 to 8.5%. On chromosome 2A, two QTLs were found: one (QLNHR.nri-2A.1) mapped at 77–78 cM and the other (QLNHR.nri-2A.2) at 150 cM. The QTL on 3A (QLNHR.nri-3A) was located at 177 cM, while the one on 4B (QLNHR.nri-4B) was mapped at 68–71 cM. Furthermore, two QTLs, QLNHR.nri-5B.1 and QLNHR.nri-5B.2, were located at 49 and 144 cM, respectively, on chromosome 5B, while one QTL, QLNHR.nri-7B, was found at 145 cM on chromosome 7B. The most significant SNP markers were IWB40186 and IWB25267, co-localized at 78 cM on chromosome 2A, and IWB61157, mapped at 150 cM on the same chromosome. The two markers at 78 cM together explained 15.2% of the phenotypic variation, while the marker at 150 cM accounted for 8.3% of the phenotypic variation in heat stress response of the number of leaves per seedling. Overall, the data suggest that heat stress or heat response QTLs associated with the number of leaves per seedling are located on chromosomes 2A, 3A, 3B, 4B, 5A, 5B, and 7B, according to the QTLs detected for heat stress response of the trait or by comparing the QTLs detected under heat-stressed vs. optimal conditions (Table 2). For seedling recovery after heat stress, one QTL (QSLHS.nri-2A) was located at a genetic position of 96 cM on chromosome 2A.
This QTL was represented by five SNPs, which collectively explained 33% of the phenotypic variation in seedling recovery after heat stress. The second QTL (QSLHS.nri-2B) was found at 19 cM on chromosome 2B and accounted for 6.5% of the phenotypic variation. Another QTL (QSLHS.nri-2D) was found at 26 cM on chromosome 2D; it was represented by three SNP markers, which together explained 18.6% of the phenotypic variation. On chromosome 3A, one QTL (QSLHS.nri-3A) was detected at 123–129 cM and explained 12.4% of the phenotypic variation in seedling recovery after removal of the heat stress treatment. In addition, one QTL (QSLHS.nri-7A) was identified at 42–43 cM on chromosome 7A, while another (QSLHS.nri-7B) was found at 90 cM on chromosome 7B; the QTLs on 7A and 7B accounted for 13.4 and 23.3% of the phenotypic variation in seedling recovery, respectively. In total, for seedling recovery after removal of the heat stress treatment, six QTLs, represented by 16 SNPs, were detected on chromosomes 2A, 2B, 2D, 3A, 7A, and 7B. The LD decay distance in the D genome (10.9 cM) was longer than in the A (9.7 cM) and B (9.8 cM) genomes. Because only 200 representative lines were selected from the original panel in the current study, the LD distances changed compared to a previous study involving the same panel. In this study, three statistical models were compared to assess their ability to map QTLs and identify SNPs associated with seedling heat tolerance, because previous studies have shown that the best model can vary depending on the trait. We believe that the chromosomes listed in Table 2 are the true chromosomes harboring leaf chlorophyll content QTLs responding to heat stress, since these QTLs were detected only under the heat-stressed temperature and/or were mapped using the heat response of the trait. Previous studies also identified QTLs for heat stress tolerance traits, specifically at the grain filling stage of wheat, on chromosomes 2B, 2D, and 4A.
The results showed that some of the significant SNP markers have high sequence similarity with candidate genes known to be involved in plant stress responses in different crops, including wheat. For example, on chromosome 2D, the significant SNP IWB28728 for leaf chlorophyll content responding to heat stress has 89% sequence similarity with putative plastid-lipid-associated protein 13. This protein has been reported to play an important role in improving plant performance under stress conditions; in addition, it actively participates in thylakoid function from biogenesis to senescence, suggesting that it is a precursor of the chloroplast thylakoid membranes. Another candidate is related to potassium (K+), a major osmoticum of plant cells: the accumulation of K+ in the plant vacuole is important for plants under high-salt stress conditions. For shoot length, the same significant QTLs were detected at both optimal and heat-stressed growth conditions. Generally, QTLs associated with a trait under optimal conditions also control the trait under stressed conditions. On the other hand, shoot length QTLs were detected for heat response on chromosomes 3B and 7D, which were also previously reported to harbor QTLs for heat tolerance traits at the vegetative and grain filling stages of wheat. Although some of the markers associated with shoot length were significant at both growth conditions, a BLAST search revealed that some of the identified SNPs have high sequence similarity with candidate genes known for plant stress response. For example, the DNA sequence of SNP IWB35611 on chromosome 4B has high sequence similarity with a serine/threonine protein kinase STE20-like gene, which has been reported to play an important role in salt tolerance in plants.
Similarly, for the number of leaves per seedling and seedling recovery, some of the QTLs detected in this study were located on the same chromosomes reported in other heat stress studies at various adult plant stages. In summary, some QTLs for seedling heat tolerance-related traits identified in this study were found on the same chromosomes previously reported to harbor QTLs for heat tolerance, although the growth stages examined in the previous studies differ from the growth stage investigated in the present study. Our results suggest that some heat tolerance QTLs detected during the seedling and flowering stages of wheat may be co-localized. In addition, other QTLs identified at the seedling stage in the present study have not been reported in studies conducted at the flowering or grain filling stages. Moreover, a BLAST search using the DNA sequences of some of the significant loci found in this study revealed candidate genes known to be involved in plant stress responses in wheat and other crop species. To the best of our knowledge, this is the first GWAS to map QTLs and identify SNP markers significantly associated with seedling heat tolerance-related traits targeting early planting of dual-purpose winter wheat. The significant SNP markers identified in this study will be used for MAS of seedling heat tolerance to facilitate selection of the trait during wheat breeding. FM phenotyped the association mapping panel, analyzed both phenotypic and genotypic data, and drafted the manuscript. HA helped in the candidate gene search and review of the manuscript. JA assisted in experiment implementation and review of the manuscript. TK helped in data collection and review of the manuscript. WH helped in review of the manuscript. X-FM supervised the study and finalized the manuscript.
All authors read and approved the manuscript.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
+{"text": "Sonoluminescence, the emission of light from the acoustically-induced collapse of air bubbles in water, is an efficient means of generating UV-C light. However, because a spherical bubble collapsing in the bulk of water creates isotropic radiation, the generated UV-C light fluence is insufficient for disinfection. Here we show, based on detailed theoretical modelling and rigorous simulations, that it should be possible to create a UV light beam from aspherical air bubble collapse near a gallium-based liquid-metal microparticle. The beam is perpendicular to the metal surface and is caused by the interaction of sonoluminescence light with UV plasmon modes of the metal. We estimate that such beams can generate fluences exceeding 10 mJ/cm2. UV-C light inactivates pathogens through absorption of radiation energy by their cellular RNA and DNA, prompting the formation of new bonds between adjacent nucleotides. This results in photochemical damage that renders pathogens incapable of reproducing and infecting1. The ability of UV-C light (200\u2013280 nm) to inactivate bacteria, viruses and protozoa is widely used as an environmentally-friendly, chemical-free and highly effective means of disinfecting and safeguarding water against pathogens responsible for cholera, polio, typhoid, hepatitis and other bacterial, viral and parasitic diseases1. For example, the fluence must be 5 and 10 mJ/cm2, respectively, to inactivate 99% and 99.9% of Giardia and Cryptosporidium pathogens2. These specifications are for water purified from solid particles larger than 5\u201310 \u03bcm [turbidity of 5 Nephelometric Turbidity Units (NTU)]2. Otherwise, particles can shield pathogens from the UV light, thereby allowing many pathogens to recover and infect. Pathogens can also recover from photochemical damage when the initial UV dosage (fluence) is not sufficiently high2.
Moreover, filtered water, dissolved iron, organic salts and the pathogen population itself absorb UV-C light. Therefore, a 50% UV radiation loss has been accepted as suitable for practical use1. The filtration of natural water presents significant challenges for remote communities and developing nations5. Air bubbles suitable for sonoluminescence are often present in natural water7 and their concentration can be increased, for example, by using bubble diffusers8. We show that such bubbles could act as compact sources of germicidal radiation located several optical wavelengths away from pathogens. This means that shielding of pathogens by particles suspended in water would be greatly reduced, while the small distances travelled by UV-C light between the source and the pathogens would result in negligible absorption losses. To enable UV disinfection of turbid water, we suggest using the effect of sonoluminescence, the emission of broadband UV light in the acoustically-induced collapse of air bubbles in water4. The fluence of light generated by any single collapse event is, however, low compared to that required for UV germicidal irradiation. As a representative example, the peak temperature max[T(t)] of the bubble and the FWHM of the light pulse depend on the acoustic frequency f: max[T(t)] varies slowly when f > f0, remains approximately constant for f \u2248 f0, and increases quickly at f < f0, when the collapse becomes more violent. At the collapse stage, a water jet develops through the centre of the bubble toward the surface18. When the bubble oscillates near a solid surface, it flattens near the surface18, and the collapse near a solid surface also results in stable sonoluminescence19. The latter property is important for the application envisioned in this work because the stability of sonoluminescence implies that pathogens would be continuously exposed to germicidal UV radiation.
In the course of multiple expansion-collapse cycles some bubbles stop producing UV light, but others, which were not previously involved in the process, start contributing to sonoluminescence. Thus, on average, the UV radiation fluence delivered to pathogens should be stable over a complete disinfection cycle. We analyse an aspherical bubble collapse near spherical gallium-alloy particles of 50\u2013100 \u03bcm radius, which can be fabricated by means of a self-breakup of a liquid metal jet9. We assume that an initially spherical bubble with R0 = 1 \u03bcm nearly touches the particle: the distance between the bubble centre and the metal surface is 1.1 \u03bcm. Because R0 is 50\u2013100 times smaller than the radius of the particle, we simplify our model by considering a planar 100 \u03bcm-thick liquid-metal layer. We use a high-order numerical method20 for solving the fully-compressible multiphase inviscid flow equations \u2202\u03b1i/\u2202t + u\u00b7\u2207\u03b1i = 0, \u2202\u03c1/\u2202t + \u2207\u00b7(\u03c1u) = 0, \u2202(\u03c1u)/\u2202t + \u2207\u00b7(\u03c1u\u2297u + pI) = 0, and \u2202E/\u2202t + \u2207\u00b7[(E + p)u] = 0, where i = 1, 2, 3, \u03b1i are the volume fractions of bubble gas, water, and liquid metal (\u2211i\u03b1i = 1), \u03c1 = \u2211i\u03b1i\u03c1i is the density, u is the velocity vector, p is the pressure, E = \u2211i\u03b1iEi is the total energy, and I is the identity matrix. We neglect viscous forces because they are significantly smaller than the pressure forces driving the collapse. The influence of surface tension is also neglected because the primary force driving the collapse is the difference between the pressure inside the bubble and the acoustic pressure20. The density of the liquid metal is 6360 kg/m3 and the parameters for the EOS are taken from21.
These equations, combined with an equation of state (EOS) for the mixture of bubble gas, water, and liquid metal, define the compressible-multiphase system. The presence of the 1\u20133 nm-thick gallium oxide layer11 is neglected because it does not cause qualitative changes in the fluid-mechanical and optical properties of the metal. The f = 1.5 MHz sinusoidal acoustic wave triggers the expansion-collapse cycles of the bubble and the deformation of the liquid-metal surface. A toroidal bubble is formed at the end of the collapse stage, which is also the instant when the sonoluminescence light is emitted19. This behaviour closely resembles that of a bubble located near a solid metal surface, except that the solid metal is not deformed. The formation of a toroidal bubble is accompanied by a water micro-jet impinging on the metal surface18. We calculate the pressure developed by the micro-jet near the solid metal to be ~7 MPa, which agrees with experimental data18. When modelling the motion of the liquid-metal surface, we assume that, due to the periodicity of the bubble expansion-collapse process, it regains its initial shape at the beginning of each cycle. We predict the jet pressure at the liquid-metal surface to be ~26 MPa. This value may be somewhat overestimated because of a numerical instability arising at the end of the collapse stage, but experimental data suitable for validation of this prediction are currently not available. It is also known that prolonged irradiation of macroscopic liquid gallium alloy particles with MPa-level pressure ultrasound may break them into smaller particles26. However, for this to occur, irradiation times should be of the order of several minutes26, which is much longer than the existence time of the water jet that, in turn, constitutes a small fraction of the period of oscillation of the acoustic wave driving the collapse.
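The frequency dependence of a spherical bubble's dynamics is obtained in this work from the Rayleigh-Plesset equation solved with a fourth-order Runge-Kutta scheme (see Methods). The sketch below integrates a simplified Rayleigh-Plesset model with viscosity and surface tension neglected, consistent with the assumptions above; the acoustic pressure amplitude, polytropic exponent and time step are illustrative assumptions, not the paper's values:

```python
import math

# Illustrative parameters (not the paper's exact values)
rho = 998.0       # water density, kg/m^3
p0 = 101325.0     # ambient pressure, Pa
R0 = 1e-6         # equilibrium bubble radius, m (matches the text)
gamma = 1.4       # polytropic exponent of the bubble gas (assumed)
pa = 70e3         # acoustic pressure amplitude, Pa (assumed)
f = 1.5e6         # driving frequency, Hz (matches the text)

def accel(t, R, Rdot):
    """R-double-dot from the inviscid Rayleigh-Plesset equation."""
    p_gas = p0 * (R0 / R) ** (3 * gamma)                 # adiabatic gas pressure
    p_inf = p0 - pa * math.sin(2 * math.pi * f * t)      # driving far-field pressure
    return ((p_gas - p_inf) / rho - 1.5 * Rdot**2) / R

def rk4_step(t, R, Rdot, dt):
    """One classical fourth-order Runge-Kutta step for the state (R, Rdot)."""
    k1r, k1v = Rdot, accel(t, R, Rdot)
    k2r, k2v = Rdot + dt/2*k1v, accel(t + dt/2, R + dt/2*k1r, Rdot + dt/2*k1v)
    k3r, k3v = Rdot + dt/2*k2v, accel(t + dt/2, R + dt/2*k2r, Rdot + dt/2*k2v)
    k4r, k4v = Rdot + dt*k3v, accel(t + dt, R + dt*k3r, Rdot + dt*k3v)
    return (R + dt/6 * (k1r + 2*k2r + 2*k3r + k4r),
            Rdot + dt/6 * (k1v + 2*k2v + 2*k3v + k4v))

t, R, Rdot, dt = 0.0, R0, 0.0, 1e-11
Rmax = R0
for _ in range(int(2 / (f * dt))):   # integrate two acoustic periods
    R, Rdot = rk4_step(t, R, Rdot, dt)
    t += dt
    Rmax = max(Rmax, R)
print(Rmax > R0)   # the bubble expands beyond its equilibrium radius
```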
Moreover, smaller liquid-metal particles are known to quickly coalesce into macroscopic particles unless their surface is modified or special measures are taken to remove the surface oxide layer26; such special conditions are not present in our scenario. It can be shown that the development of a water jet would not cause significant permanent damage to the liquid-metal surface, by analogy with the solid-metal surface, which is warranted because of the similarity between the respective bubble dynamics. The collapse becomes more violent as f decreases; thus, we follow the theoretical prediction27 of consistent aspherical collapse shapes at f < f0 \u2248 3.26 MHz. Similar to the radius R(t) of spherical bubbles, the bubble volume exhibits larger excursions from V0 followed by steeper collapses when f is decreased27. This enables us to use the frequency dependence of the temperature of spherical bubbles, modelling the emission as blackbody radiation with the spectral radiance L\u03bb = (2hc2/\u03bb5)/[exp(hc/\u03bbkBT) - 1], with h and kB the Planck and Boltzmann constants. However, this does not allow for a quantitative control of the emitted power, which means that output energy quantities are expressed in arbitrary units. To calculate the fluence in real physical units, we exploit the linearity of Maxwell's equations and first simulate the spatial pattern of the emitted light. This pattern corresponds to the profile of radiant emittance, a radiometric term that is equivalent to intensity in optics, expressed in arbitrary units. Then, we semi-analytically calculate the total radiant power \u03a6(t) by integrating L\u03bb over the UV-C band between \u03bb1 = 200 nm and \u03bb2 = 280 nm. The time integration of \u03a6(t) gives the optical energy in Joules, while the spatial pattern of the emitted light allows us to define the area through which the radiation passes.
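In the blackbody picture used here, the UV-C output follows from integrating the Planck spectral radiance over the 200\u2013280 nm band. A numerical sketch of that band integral (trapezoid rule; the two temperatures are chosen only to illustrate how strongly the UV-C radiance grows with collapse temperature):

```python
import numpy as np

h = 6.626e-34    # Planck constant, J s
c = 2.998e8      # speed of light, m/s
kB = 1.381e-23   # Boltzmann constant, J/K

def planck_radiance(lam, T):
    """Blackbody spectral radiance L_lambda (W m^-3 sr^-1)."""
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))

def band_radiance(T, lam1=200e-9, lam2=280e-9, n=2000):
    """Radiance integrated over the UV-C band by the trapezoid rule."""
    lam = np.linspace(lam1, lam2, n)
    y = planck_radiance(lam, T)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(lam)) / 2)

# Hotter collapses emit disproportionately more UV-C radiance
print(band_radiance(20000.0) / band_radiance(8000.0))
```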
This allows us to calculate the fluence as the energy delivered per unit area. The FDTD method can readily simulate wide-spectrum signals such as sonoluminescence light pulses. The spatial resolution of the FDTD mesh is 5 nm. In the UV-C band, the liquid and solid-state gallium alloys share the same complex dielectric permittivity, described by a Drude model11, which means that liquid and solid metal surfaces with identical profiles would equally affect UV-C light. The refractive index of water is 1.36629. We simulate the emission L\u03bb for \u03bb = 200\u2013280 nm at f = 1.5 MHz; the radius of the spherical bubble is 500 nm. A spherical bubble produces isotropic radiation, while its aspherical counterpart produces a beam near both liquid and solid metal surfaces. The beam arising near a deformed liquid-metal surface acting as a concave mirror is more intense than that near a flat solid surface. The liquid-metal surface has excellent optical properties in a broad spectral range covering the UV, visible and near-infrared wavelengths33. Significantly, in the UV range, both liquid and solid-state gallium alloy structures support strong plasmon resonances, leading to enhanced focusing of light produced by sources located near the metal surface34; this property is used in the current work. The energetics of sonoluminescence produced by spherical and aspherical bubbles are similar19, which in the framework of the blackbody model requires T(t) to follow the same trend. Hence, we use the T(t) obtained for a spherical bubble with R0 = 1 \u03bcm. For the aspherical bubble, we take the radius (~75 nm) and the cross-sectional area of the beam from the simulated spatial profile. This behaviour was noticed in early experiments on cavitation and was extensively used in the literature to model the dynamics of cavitation near solid boundaries38.
This further justifies our assumption of similar energetics of sonoluminescence produced by spherical and aspherical bubbles. The above approximation serves the purpose of this work, demonstrating the feasibility of UV sonoluminescence beams. Indeed, we consider a bubble that initially nearly touches the liquid-metal surface and then develops and maintains a quasi-hemispherical shape over a significant portion of its expansion-collapse cycle. The improvement of the germicidal effectiveness at lower acoustic frequencies, down to f < 100 kHz, is due to the sharp increase in temperature T as f decreases; however, only the formation of a directed beam near the metal microparticles results in the irreversible inactivation of pathogens. The wavelength of peak blackbody emission follows Wien's displacement law \u03bbp = b/T with b \u2248 2.9 \u00d7 10\u22123 K m, which implies that \u03bbp shifts to the UV-C band when f is decreased. This has two important implications. Firstly, from the point of view of potential applications, fluences above 40 mJ/cm2 are required to inactivate various viruses and destroy bacterial spores2; hence, by varying the acoustic frequency one should be able to control the fluence level, targeting specific pathogens present in water. Secondly, even though the current use of a simple blackbody model in our analysis may result in an overestimation of the fluence delivered to pathogens at a given ultrasound frequency (this can be checked by employing a more accurate volume emitter model4), the increase of fluence due to lowering the frequency is expected to be sufficient to compensate for such a model overestimation, so that a realistic output of 40 mJ/cm2 is maintained. We also note that in practice pathogens would be exposed to higher fluences produced by an ensemble of bubbles.
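The statement that \u03bbp shifts into the UV-C band as f decreases follows directly from Wien's displacement law with the constant quoted above. A minimal numerical check (the temperature values are illustrative):

```python
B_WIEN = 2.898e-3  # Wien displacement constant, K*m (approx. the 2.9e-3 quoted in the text)

def peak_wavelength_nm(T):
    """Wavelength (nm) of peak blackbody emission at temperature T (K)."""
    return B_WIEN / T * 1e9

# As the collapse temperature rises, the emission peak moves into the UV-C band (200-280 nm)
print(peak_wavelength_nm(10000.0), peak_wavelength_nm(12000.0))
```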
Therefore, the maximum fluence delivered to pathogens could be increased by increasing the concentration of bubbles in water.As shown in Fig.\u00a01, a setup based on our approach would be suitable for disinfecting turbid water. This is essential for applications in developing countries where the filtration of natural water presents significant challenges.We have suggested that the collapse of air bubbles and sonoluminescence near liquid-metal particles should result in the generation of UV light beams capable of inactivating pathogens contaminating drinking water. In contrast to conventional UV light water disinfection systems1. A similar shortcoming is inherent to the solar UV light disinfection method2, which also requires a prolonged exposure of water to sun light ranging from 6 hours to several days depending on weather conditions. The recovery of pathogens is less likely to occur in the approach suggested in this work, where air bubbles collapsing near liquid-metal particles dispersed in the bulk of water act as compact sources of germicidal radiation located only several optical wavelengths away from pathogens, which greatly reduces the shielding of pathogens by turbidity particles suspended in water.For example, conventional UV light disinfection systems are inefficient when water is not properly purified from microscopic solid particles: such particles can shield pathogens from the UV light thereby allowing many of them to recover and infect1. This doubles energy consumption in UV light generation. The cost of that is normally acceptable in developed countries but could be a limiting factor for economically developing nations. 
The water disinfection system proposed here should be more affordable for remote communities and developing countries, also benefiting developed nations, because equipment required for a its practical realisation would include simple, reliable and inexpensive devices such as a generator of microbubbles, ultrasound transducers and power supplies. This equipment could also be combined with water treatment systems using gas bubble injection8 and ultrasounic radiation39.In addition to a shielding effect, conventional UV light disinfection methods suffer from another shortcoming. Iron and organic salts dissolved in water as well as the pathogen population itself absorb up to 50% of UV light in conventional UV water disinfection systems2, which is sufficient to irreversibly inactivate most common pathogens in water with the turbidity of more than 5\u00a0Nephelometric Turbidity Units. We expect that a water disinfection system based on our approach would be suitable for the treatment of small batches and/or low flows of water at the local community level.We calculate that the UV beams produced by the aspherical air bubble collapse can generate fluences exceeding 10\u00a0mJ/cm42. One could also convert them into porous filters for further improvement of water quality10.Liquid-metal particles remaining in disinfected water can be removed and reused in further disinfection cycles by using mechanical, chemical or electrochemical methods9 and also tin could be used as the constituent material of the liquid-metal particles. These materials are approved by the Food and Drug Administration and similar organisations, they are non-toxic and environmentally-friendly and are used, for example, in mercury-free analog clinical thermometers that are inexpensive and reliable43. 
The actual cost of the liquid metal is much lower and it could be decreased by using a mass-production technique such as self-breakup of a liquid metal jet9.Metal alloys based on the eutectic mixture of gallium and indium33. This concerns not only the sonoluminescence light, but also light emitted by external sources such as lasers. In particular, lasers are used to generate bubbles suitable for experiments on sonoluminescence46 and it has also been shown that the generation of such bubbles can be controlled by using plasmonic particles47. Furthermore, the resonant plasmonic enhancement of the intensity of light opens up an opportunity to compensate for a decrease in the intensity of sonoluminescence light caused by the effect of non-sphericity of the collapse of laser-induced bubbles46.The plasmonic properties of gallium alloys can also be used to resonantly enhance the intensity of light by about two orders of magnitude48 or water chlorination49 and ozonation50. For example, reliable disinfection of water is possible after a minimum of 20 minutes of continuous boiling. Moreover, before the boiled water can be used it needs sufficient time to cool, during which it can be recontaminated, since in a tropical environment bacteria proliferate at a fast rate. Handling of boiling water can also lead to serious injuries such as burns.Finally, the proposed water disinfection scheme should also be more energy- and cost-efficient than water boiling49. Disinfection systems using ozone are also expensive and complex50. In contrast, a system based on our approach should be safe, inexpensive as well as easy to install and operate.The use of chlorine and ozone also poses a significant challenge for remote communities and developing nations. For example, all forms of chlorine are highly corrosive and toxic. 
Therefore, storage, shipping, and handling of chlorine pose a significant risk and require special safety arrangements14.Standard fourth-order Runge-Kutta scheme20 and finite-difference time-domain (FDTD) methods51 were used, respectively, to solve the Rayleigh-Plesset equation and Maxwell\u2019s equations with appropriate initial and boundary conditions. In the FDTD simulations, the standard Drude model was employed to fit experimental values of the dielectric permittivity of the liquid metal11.The dynamics of the acoustically-driven bubble was modelled by using a high-order, fully-compressible, multiphase flow model20. An immersed, moving, reflective boundary was used to simulate the oscillations of the active face of an ultrasound transducer resulting in the acoustic field20. The model presented in ref. 15 was modified to capture the bubble growth prior to the bubble collapse and extended to represent a three-fluid system including air, water, and liquid metal"}
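The fourth-order Runge-Kutta integration of the Rayleigh-Plesset equation described above can be sketched as follows. The bubble and fluid parameters are illustrative assumptions for an air bubble in water, not the values used in the article:

```python
# Illustrative parameters (assumed for this sketch, not taken from the
# article): a 10-micron air bubble in water at atmospheric pressure.
RHO = 998.0       # water density, kg/m^3
SIGMA = 0.072     # surface tension of water, N/m
MU = 1.0e-3       # dynamic viscosity of water, Pa*s
P0 = 101325.0     # ambient static pressure, Pa
KAPPA = 1.4       # polytropic exponent of the gas
R0 = 10e-6        # equilibrium bubble radius, m

def rp_rhs(R, Rdot, p_inf):
    """Rayleigh-Plesset equation written as a first-order system:

    R*Rddot + 1.5*Rdot^2 = (p_gas - p_inf - 2*sigma/R - 4*mu*Rdot/R) / rho
    """
    p_gas = (P0 + 2.0 * SIGMA / R0) * (R0 / R) ** (3.0 * KAPPA)
    Rddot = ((p_gas - p_inf - 2.0 * SIGMA / R - 4.0 * MU * Rdot / R) / RHO
             - 1.5 * Rdot ** 2) / R
    return Rdot, Rddot

def rk4_step(R, Rdot, dt, p_inf):
    """One classical fourth-order Runge-Kutta step for the state (R, Rdot)."""
    k1 = rp_rhs(R, Rdot, p_inf)
    k2 = rp_rhs(R + 0.5 * dt * k1[0], Rdot + 0.5 * dt * k1[1], p_inf)
    k3 = rp_rhs(R + 0.5 * dt * k2[0], Rdot + 0.5 * dt * k2[1], p_inf)
    k4 = rp_rhs(R + dt * k3[0], Rdot + dt * k3[1], p_inf)
    R_new = R + dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    Rdot_new = Rdot + dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return R_new, Rdot_new

def simulate(R_init, t_end=10e-6, dt=1e-9, p_inf=P0):
    """Integrate the bubble radius starting from rest; returns its history."""
    R, Rdot = R_init, 0.0
    radii = [R]
    for _ in range(int(t_end / dt)):
        R, Rdot = rk4_step(R, Rdot, dt, p_inf)
        radii.append(R)
    return radii
```

Starting the bubble slightly away from equilibrium (e.g. `simulate(1.1 * R0)`) yields a damped radial oscillation about `R0`; replacing the constant `p_inf` with an acoustic driving term is what produces the violent, aspherical collapses exploited for sonoluminescence in the article.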
+{"text": "Prevalence of infertility in sub-Saharan Africa is high, yet fertility care, its development, and access are limited in resource-poor countries like Nigeria, so infertile women resort to different forms of treatment. This study aimed to determine the use and pattern of previous treatments.This was a descriptive cross-sectional study conducted at a tertiary hospital in North-Western Nigeria. Interviewer-administered, pretested questionnaires were given to 236 consenting clients seen at their first visit to the gynaecology clinic with complaints of inability to conceive, between January 2016 and March 2018. We collected information on demographic and reproductive characteristics, previous fertility treatment and other data relevant to infertility. Descriptive analysis was done using SPSS software version 22.p value <\u20090.05).Two hundred and thirty-six clients participated in the study, and the majority were aged 20\u201329\u2009years (44.5%), with a mean age of 31.5\u2009\u00b1\u20097.4, while the mean age of their husbands was 41\u2009\u00b1\u20098.0. Most clients were educated to secondary level or above (80.9%), with more Muslims (65%) than Christians. All clients except one were married; most had been married for 5\u2009years or more, 18.2% were in their second order of marriage and 28% were in polygamous marriages. Many of the clients were homemakers (46.6%) and earned an average monthly income of less than fifty thousand naira. About 59.3% of clients presented with primary infertility, with 15.7% being infertile for a duration of more than 10\u00a0years. One hundred and forty-six respondents (61.9%) had received previous hospital treatments before presentation to our facility, 37% had visited more than three hospitals, 70% did not have adequate investigations done, treatment was successful in 15%, while 40.7% received traditional treatments. 
Husbands of women receiving previous treatment were slightly older (The majority of women have multiple and unnecessary visits to several hospitals for infertility care with few positive results despite time and resources spent. Quality of infertility care needs to be improved. Infertility is prevalent worldwide, affecting about 5\u20138% of couples . PrevaleBeing able to get pregnant is a big part of the marriage institution, especially in the African cultural context. Hence infertility is associated with many negative psychosocial and other consequences such as stigma, deprivation and neglect, violence, marital problems and mental health issues , 7.Despite this large burden, very few infertility-management programs exist . FertiliTo satisfy their needs and end their suffering, infertile women may resort to different forms of treatment. This study aimed to determine what prior treatments had been received, and if infertility care was adequate. Very few studies, if any, have looked into this aspect of infertility care in the study setting. One Nigerian study by Ola et al. noted thFindings of this study will contribute to the literature on the pattern and quality, or lack thereof, of infertility services women access in low-resource settings. It will also have implications for recommendations to improve infertility assessment and management.Ethical approval for the study was received from the Barau Dikko Teaching Hospital (BDTH) Health and Research Ethics Committee and verbal informed consent obtained from participants.This was a descriptive cross-sectional study done between January 2016 and March 2018.The study was conducted at the BDTH, Kaduna, North-Western Nigeria. The hospital serves as a major referral facility for the metropolis and its environs. 
The gynaecology clinic is run twice a week with an average of 50 new clients seen weekly, half of whom are infertility clients.The study participants were women presenting for the first time to the gynaecology clinic with complaints of inability to conceive, who consented to participate in the study. Women were eligible to participate irrespective of their age or duration of complaint.The minimum sample size was determined using the formula by Lemeshow et al. , and prep value of <\u20090.05 was considered to be statistically significant.Descriptive analysis with frequencies and percentages was done using SPSS computer software version 22. Chi square was used to compare proportions between groups and a p value <\u20090.05).As shown in Table\u00a0One hundred and forty-six respondents (61.9%) had received previous hospital treatments before presentation to our facility. Only ninety respondents responded on where they received such treatments, mostly from both public and private hospitals, and 37% had visited more than three hospitals .The mean age of the sample population was 31.5\u2009years, while their husbands had a mean age of 41\u2009years. It is important that any previous or current infertility care received is of good quality, as time is of the essence and fertility is known to decline with increasing age. The age factor is important, as the day-specific probability of conception declines with age. Dunson et al. found thLess than half of clients had been married for 5\u2009years or more, and 38% have been infertile for a duration of 5 or more years. This means they may have been exposed to the adverse psychological effects of infertility for long periods . If womeRespondents visited both public and private hospitals. Public hospitals are more affordable, and public tertiary centres are perceived to have better-trained personnel but have long waiting times. 
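The minimum sample size mentioned above follows the standard single-proportion formula of Lemeshow et al. A minimal sketch of the calculation; the prevalence and precision values shown are illustrative assumptions, not the study's actual inputs:

```python
import math

def minimum_sample_size(p, d, z=1.96):
    """Minimum sample size for estimating a single proportion.

    n = z^2 * p * (1 - p) / d^2, rounded up (Lemeshow et al.).
    p: anticipated prevalence, d: absolute precision,
    z: standard normal quantile for the confidence level (1.96 for 95%).
    """
    return math.ceil(z ** 2 * p * (1.0 - p) / d ** 2)

# With the conventional worst-case prevalence of 50% and 5% absolute
# precision at 95% confidence:
n = minimum_sample_size(0.5, 0.05)  # 385 participants
```

A lower anticipated prevalence shrinks the required sample, which is why studies without a good prior estimate often default to p = 0.5.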
However, more private centres than the Nigerian public sector provide artificial reproductive techniques (ART) should they be required.About 37% had visited more than three hospitals. This is similar to other studies in Indonesia and Iran where the mean number of specialists visited for fertility consultation was three , 17, forBasic investigations for infertility should assess both male and female factors. A high proportion of respondents (70%) who had previous hospital treatments did not have these basic tests done. The quality of fertility care and completeness of investigation may depend on the type, level and experience of the health worker seen, but this was not fully explored in our study. One study in Indonesia found thOmitting tests is sometimes done to save costs, but this is no excuse for missing out basic tests. Laparoscopy, for example, can be omitted from the infertility work-up to save cost when the hysterosalpingography is normal and there is no abnormal contributing history , and it The FIGO Fertility Tool Box simplifiPrevious hospital treatment was successful in only 15% of cases. This is similar to another Nigerian study . This isThe commonest treatment offered was ovulation induction with clomiphene citrate. Women already know this drug from multiple consultations, and abuse it or self-medicate despite potentially serious side effects .Only six women had hydrotubation in our study. The procedure for hydrotubation is not standardized and usually involves an attempt to flush a large amount of fluid transcervically in the hope that it might correct some tubal blockage. The manner in which it was done in these women could not be ascertained. Though hydrotubation is discouraged by ART specialists, its use is actually more widespread in Nigeria than reported, because women are unable to afford ART. 
There are reports that with careful selection, hydrotubation may be useful in resource-poor countries, especially in patients with incomplete tubal occlusion .Apart from hospital treatments, 40.7% of respondents also received traditional treatments. This is similar to a study in Freetown, where 36.5% of 167 women used herbal medicine for infertility treatment . AnotherThis study was a questionnaire-based survey, so it is subject to significant recall bias. Some bias may also have been introduced because women with unsuccessful treatments are more likely to have visited the clinic than those with successful treatment, who may have no need to visit the clinic.The study did not delve into great detail on some information that would also have been useful, e.g., causes of male infertility and treatments, specific types of herbs used by women and their ingredients, and reasons why women visit multiple hospitals.Overall, the study raises many interesting observations but is limited by its small size and hospital setting, which may make conclusions difficult to extrapolate to the whole population. A qualitative method may have explored reasons for multiple visits more deeply. It would also have been interesting to note the situation at different levels of care, and between rural and urban milieux, which was not done in this particular study.p value <\u20090.05). An emerging pattern seen is that the majority of women studied have multiple and unnecessary visits to several hospitals for infertility care, which may be of low quality since inadequate investigations were done. It is therefore not surprising that there are few positive results despite time and resources spent. This may also lead to delays in accessing ART and prospects of reduced success with increasing age. Many women still resort to the use of traditional medicines, mainly herbs, which were not as effective as conventional treatment. 
Vast herbal resources remain unexplored, and studies need to be conducted to see if they have any potential for infertility treatment, and to ensure proper regulation, safety and non-exploitation of desperate women. Most importantly, the quality of infertility care needs to be improved by better education of women, training of health workers, early referrals to fertility and ART specialists as required, and innovative funding options to widen the scope of fertility care offered, and access to it. There should be wider dissemination of the FIGO fertility tool box, customised to context and used at all levels of care.Sociodemographic characteristics were similar between women who had and had not received previous treatments, but husbands of women receiving previous treatment were slightly older ("}
+{"text": "The current trend in hand surgery has streamlined the treatment of acute hand trauma to the modern-day surgery unit. As the volume of hand trauma caseloads continues to increase, it is becoming increasingly difficult to schedule patients for theater\u00a0on the day of injury. It, therefore, becomes paramount to adequately triage patients in accordance with best clinical evidence and predictors of poor clinical outcomes.Animal models suggest that the earlier flexor tendons are repaired, the better the patient functional outcome. The largest study to date examining the relationship of timing of injury to functional post-operative outcome also recognizes that the faster these injuries are repaired, the better the patient outcome. Age-related changes to tendon biomechanics and structure are well-documented. However, no conclusive evidence exists specific to the degenerative changes and mechanical properties of flexor tendons in humans. The animal model strongly suggests that increasing age is associated with local architectural and biological changes that directly affect the tendon repair functional outcome. Although retrospective analyses to date suggest\u00a0that smoking is a negative outcome predictor for functional tendon outcome, no prospective large-scale studies exist.A large, single-center prospective study specifically examining the positive and negative outcome predictors of flexor tendon repairs and functional post-operative outcome is warranted. The negative predictive model of patient care may enable us to further counsel patients preoperatively and stratify patients according to clinical need. The core of hand surgery research is focused on improved suture techniques and postoperative rehabilitation protocols . DespitePredictors of poor outcomes after flexor tendon repairs have long been debated -8. Age, Timing of surgeryThe effect of the timing of repair and the functional outcome of flexor tendons remains controversial and underreported. 
The solitary animal model data on timing of surgery in flexor tendon repairs are decades old. Tang et al. report\u00a0a chicken model of flexor tendons repaired at intervals of one, four, eight, 14, and 20 days after injury . Eight tendons were zone I, 14 were zone II, and seven were zone III repairs. Tendon function was assessed using a Likert scale. As with Rigo et al.\u2019s cohort, there was no statistically significant association between TAM and delay of surgery. Half of the immediate tendon repairs received \u201cExcellent\u201d or \u201cGood\u201d outcomes, with half having \u201cFair\u201d or \u201cPoor\u201d outcomes. Again, this study suffers from a small patient cohort, with a smaller number of candidates in the delayed tendon repair category. Owing to the retrospective study design, the significant variation in follow-up timescales greatly distorts the TAM assessment. As the tendon repairs were not limited to one zone of injury, it is difficult to compare any results with accuracy. In zone II of the thumb, there is a pulley system and a local avascular region of the tendon \u00a0and is tAgeThe effects of aging on the biomechanical properties during homeostasis, healing potential, and repair rupture rate have been well-documented in the rotator cuff tendon -16, pateThere is a paucity of data strictly examining age as a predictor for flexor tendon repair outcomes in the human model. Kasashima et al.\u2019s examination of 29 flexor pollicis longus tendons stratified the cohort into 10-20 years, 21-30 years, 31-40 years, and more than 41 years. The authors hypothesized an age of 20 or less as a potential safe factor\u00a0and an age of over 20 as a potential risk factor. However, for any combination of clinical state, including the timing of surgery, zone of injury, vascular injury, and postoperative management, age was not a predictor of outcome.Contrary to Kasashima et al.\u2019s findings, Rigo et al. 
report\u00a0that increasing age may be a significant negative predictor at eight weeks for the postoperative active range of flexor tendon repairs . This isSmokingSmoking has been well-documented to have negative clinical effects on the musculoskeletal system, including increased rates of tendon rupture, soft tissue infection, wound-healing complications, and a negative influence on clinical outcomes . SmokingTo the author's knowledge, there have been no animal model studies on the effects of nicotine specific to flexor tendon healing. Galatz et al. examine\u00a0the effects of nicotine on rotator cuff injury and repair with the use of the rodent model . In thisZone of injuryZone II injuries are well-established risk factors for poorer functional outcomes -8. Zone The functional outcome of zone II injuries has been well-documented. Rigo et al. report\u00a0that failure to preserve the tendon sheath or pulley was a direct negative predictor of the postoperative range of motion, with a loss of up to 15 degrees at eight weeks. These poorer outcomes are directly related to the zone\u2019s difficult anatomical presentation; both the flexor digitorum profundus and flexor digitorum superficialis run within its fibro-osseous digital sheath [Current trendsThere are many biological, technical, and surgical problems and challenges with flexor tendon repairs. These challenges that surgeons face\u00a0have yielded little change to the paradigm of clinical research on flexor tendon repair and clinical outcomes in the past decade. Global rupture rates, tenolysis rates, and complication rates remain static . PostopeThe current trend in hand surgery has streamlined the treatment of acute hand trauma to the modern-day surgery unit. As the volume of hand trauma caseload continues to increase, it is becoming increasingly difficult to schedule patients for theater\u00a0on the day of injury . 
The BriAge-related changes to tendon biomechanics and structure are well-documented. However, no conclusive evidence exists specific to the degenerative changes and mechanical properties of flexor tendons in humans. The animal model strongly suggests that increasing age is associated with local architectural and biological changes that directly affect tendon repair functional outcome . ConflicIn the modern trend toward\u00a0the day surgery trauma unit, patient prioritization is paramount. The negative predictive model of patient care may enable us to further counsel patients preoperatively\u00a0and stratify patients according to clinical need. The scheduling of surgeries may be more appropriately stratified according to strong negative predictors of outcome, including age, smoking status, and zone of injury. A large, single-center\u00a0prospective study specifically examining the positive and negative outcome predictors of flexor tendon repairs and functional postoperative outcome is warranted."}
+{"text": "Daily recombinant human GH (rhGH) is currently approved for use in children and adults with GH deficiency (GHD) in many countries with relatively few side-effects. Nevertheless, daily injections can be painful and distressing for some patients, often resulting in non-adherence and reduction of treatment outcomes. This has prompted the development of numerous long-acting GH (LAGH) analogs that allow for decreased injection frequency, ranging from weekly, bi-weekly to monthly. These LAGH analogs are attractive as they may theoretically offer increased patient acceptance, tolerability, and therapeutic flexibility. Conversely, there may also be pitfalls to these LAGH analogs, including an unphysiological GH profile and differing molecular structures that pose potential clinical issues in terms of dose initiation, therapeutic monitoring, incidence and duration of side-effects, and long-term safety. Furthermore, fluctuations of peak and trough serum GH and IGF-I levels and variations in therapeutic efficacy may depend on the technology used to prolong GH action. Previous studies of some LAGH analogs have demonstrated non-inferiority compared to daily rhGH in terms of increased growth velocity and improved body composition in children and adults with GHD, respectively, with no significant unanticipated adverse events. Currently, two LAGH analogs are marketed in Asia, one recently approved in the United States, another previously approved but not marketed in Europe, and several others proceeding through various stages of clinical development. Nevertheless, several practical questions still remain, including possible differences in dose initiation between na\u00efve and switch-over patients, methodology of dose adjustment/s, timing of measuring serum IGF-I levels, safety, durability of efficacy and cost-effectiveness. Long-term surveillance of safety and efficacy of LAGH analogs are needed to answer these important questions. 
The long-term safety and efficacy of daily recombinant human growth hormone (rhGH) therapy in children with GH deficiency (GHD) are well-studied \u20133. HowevTo this end, many pharmaceutical companies have spent a significant amount of money developing LAGH analogs using several different yet novel technologies to prolong GH action that may allow for weekly \u201318, bi-wTreatment with rhGH in children with GHD has been well-established for over 35\u00a0years in inducing linear growth and attaining adult height appropriate for genetic potential . In earlThe first studies assessing the effects of rhGH replacement in adults with GHD were performed in 1989 , 29. TheThe main indication for the development of LAGH analogs in children and adults with GHD is to improve patient adherence and to ease the burden of chronic daily injections. While many early LAGH analogs were not shown to be effective or practical , two LAGvs 10.70\u00a0cm for those receiving daily rhGH) . Based oly rhGH) . Other Lly rhGH) , 74, NNCly rhGH) , 74, PHAly rhGH) , TV-1106ly rhGH) . On the ly rhGH) .\u00ae, Ascendis Pharma A/S), a sustained-release inactive prodrug of unmodified GH transiently bound to an inert carrier molecule designed to release fully active GH over one week, was granted Orphan Drug Designation by the FDA, after previously receiving Orphan Designation for the treatment of GHD in Europe from the European Commission in October 2019 (https://www.globenewswire.com/news-release/2020/04/15/2016859/0/en/Ascendis-Pharma-A-S-Receives-Orphan-Drug-Designation-for-TransCon-hGH-as-Treatment-for-Growth-Hormone-Deficiency-in-the-United-States.html). In a Phase 1 randomized trial, 44 healthy subjects were treated with 4 different doses of weekly TransCon GH and 2 different doses of daily rhGH. These investigators discovered that TransCon GH was well-tolerated, with no binding antibody formation, and comparable levels of serum GH and IGF-I were obtained (vs 10.3\u00a0cm). 
The preliminary data of the Phase 3 fliGHt trial (NCT03305016) on children with GHD who switched over from daily rhGH injections to once-weekly TransCon injections were presented at the 2020 Endocrine Society Annual Meeting (ENDO 2020) . In thisDO 2020) .vs 9.8\u00a0cm). This trial also demonstrated that children receiving Somatrogon hGH-CTP reported good tolerability with lower treatment burden than Genotropin. Based on these data, Pfizer Inc. is expected to file for FDA approval in early 2021 , a long-acting derivative of rhGH modified by the addition of three C-terminal peptide segments from human chorionic gonadotropin to allow for once-weekly delivery, in children with GHD NCT02968004) were presented at ENDO 2020 . Previou04 were p\u00ae, Novo Nordisk A/S, Denmark) was approved by the FDA for treatment of adult GHD (https://www.fda.gov/drugs/drug-safety-and-availability/fdaapproves-weekly-therapy-adult-growth-hormone-deficiency). Somapacitan is a long-acting human GH derivative to which a small noncovalent albumin-binding moiety is attached to facilitate reversible binding to endogenous albumin, delaying its elimination, and thereby extending its duration of action with little to no accumulation of the drug when administered once-weekly , safety monitoring, and whether LAGH analogs would be as effective and safe compared to daily rhGH because of the differences in pharmacokinetics and pharmacodynamics, as they are not physiologic. Furthermore, because the therapeutic response to daily rhGH injections can be highly variable among patients and may be influenced by multiple factors , it is lIt is also anticipated that LAGH analogs will share many, if not all, of the known side-effects of daily rhGH. However, because of the mechanism by which GH action is prolonged and the duration of prolongation, additional safety risks may be present. 
New safety concerns may include the formation of neutralizing anti-drug antibodies, and growth and metabolic effects related to the altered profile of serum GH and IGF-I levels during therapy. Furthermore, in those drugs where modifications of the GH molecule have been made, there may be a risk of anti-GH antibodies developing. Anti-GH antibodies formed against rhGH given as a daily injection have not been previously shown to be clinically relevant, except in individuals with GH gene deletions , 86. If Another potential pitfall of LAGH analogs is the impact of prolonged elevated serum GH levels after an injection of a LAGH analog resulting in the relative lack of daily GH nocturnal peak and daytime trough profile, unlike the profile with daily rhGH injections at bedtime. This may cause long-term metabolic aberrations since GH is closely involved in the regulation of fat and glucose metabolism, and body composition , 87, 88.The profile of the IGF-I response to each LAGH analog that differs from daily rhGH injections may present with some unique safety concerns. Early epidemiological studies have demonstrated associations of elevated and high normal serum IGF-I levels with increased risk of cancers . A speciWhen new LAGH analogs become commercially available, their use in clinical practice will be determined by coverage through insurance programs or government health policies. In countries with a single payer program, the coverage of LAGH analogs will be assessed not only for safety and efficacy, but also for cost-effectiveness compared with daily rhGH injections. 
It is possible that insurance carriers and governmental health policies may decide against covering LAGH analogs simply on the grounds that LAGH analogs are \u201cconvenient\u201d because of the lower frequency of administration, especially if the costs are higher than daily rhGH injections.Finally, post-marketing surveillance registries are recommended to enable surveillance of LAGH analogs for efficacy, safety, tolerability, cost-effectiveness, and therapeutic durability. Since each individual LAGH analog is unique in its formulation and molecular structure, further studies are needed for each individual LAGH molecule to better understand its pharmacokinetic and pharmacodynamic properties. It would be even more beneficial to set up a combined registry of all LAGH analogs used for treatment of children and adults with GHD in an independent data repository supported by the manufacturers of these compounds. This would enable manufacturers to fulfil their obligatory safety reporting requirements from governmental agencies, facilitate collaborative \u201creal-world\u201d studies, and increase the power of the studies. A global registry would also be an ideal platform to capture the data on the impact of patients being initiated or switched from daily rhGH to LAGH analogs and from one LAGH analog to another.vs daily rhGH injections is another key question that requires resolution. Perhaps the key overarching question is: will LAGH analogs increase treatment adherence, and improve treatment efficacy and long-term outcomes without sacrificing patient safety? Though it seems plausible that this presumption might hold true in certain patient populations, this question to date has not been proven and needs to be prospectively tested further in well-designed clinical trials, with the answer likely to depend on multiple external and individual factors. 
Clearly, there is still much to be learned moving forward in the coming years, but for now, the available data seem to suggest that LAGH analogs are a useful addition to currently available daily rhGH injections, especially for patients who are not coping with the rigors of daily rhGH injections but want to continue because they are obtaining clear benefits from this therapy. Finally, we recommend starting surveillance registries once LAGH analogs are approved and become commercially available so that data on efficacy, safety, tolerability, and cost-effectiveness can be collected in large numbers to improve our understanding of the effects of prolonged exposure to these analogs.The major usefulness of LAGH analogs when compared with current rhGH formulations is that the former require significantly fewer injections than the latter. However, given the unphysiologic profile of LAGH analogs, new safety concerns have been raised. Prolonged elevated GH levels might induce supra-physiologic serum IGF-I levels and cause iatrogenic acromegaly, neoplasia and glucose intolerance. Nevertheless, these concerns have reassuringly not been substantiated by any robust evidence in numerous published clinical trials thus far. Because each individual LAGH analog has its own unique pharmacokinetic and pharmacodynamic features, safety issues, dose titrations and therapeutic monitoring need to be individually addressed. Pitfalls of LAGH analogs include whether there are pathophysiological long-term implications of prolonged supra-physiologic elevations of serum GH and IGF-I levels, differences in tissue distribution and tissue sensitivity to modified GH molecules, development of anti-drug antibodies, and differences in the side-effect profile compared with daily rhGH injections. 
The cost-effectiveness of LAGH analogs All\u00a0authors\u00a0listed have made a substantial, direct, and intellectual contribution\u00a0to the work and approved it for publication.KY is an investigator on research grants from Pfizer, Novo Nordisk, and OPKO Biologics, and has consulted for Pfizer, Novo Nordisk, Sandoz, and Ascendis. BM is an investigator on research grants from Alexion, Abbvie, Amgen, Ascendis, Novo Nordisk, OPKO Biologics, Protalix, Sangamo, Sanofi Genzyme, Tolmar, and Takeda and has consulted for Abbvie, Ascendis, BioMarin, Bluebird Bio, Novo Nordisk, Pfizer, Sandoz, Sanofi Genzyme, Tolmar, and Vertice. AH is supported by the Biomedical Research Service of the Department of Veterans Affairs and has consulted for Ascendis, GeneScience, Genexine, Novo Nordisk, Pfizer, and Versartis.The remaining author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
+{"text": "Although both disaster management and disaster medicine have been used for decades, their efficiency and effectiveness have been far from perfect. One reason could be the lack of systematic utilization of modern technologies, such as eHealth, in their operations. To address this issue, researchers\u2019 efforts have led to the emergence of the disaster eHealth (DEH) field. DEH\u2019s main objective is to systematically integrate eHealth technologies for health care purposes within the disaster management cycle (DMC).This study aims to identify, map, and define the scope of DEH as a new area of research at the intersection of disaster management, emergency medicine, and eHealth.An extensive scoping review using published materials was carried out in the areas of disaster management, disaster medicine, and eHealth to identify the scope of DEH. This review procedure was iterative and conducted in multiple scientific databases in 2 rounds, one using controlled indexed terms and the other using similar uncontrolled terms. In both rounds, the publications ranged from 1990 to 2016, and all the appropriate research studies discovered were considered, regardless of their research design, methodology, and quality. Information extracted from both rounds was thematically analyzed to define the DEH scope, and the results were evaluated by the field experts through a Delphi method.In both rounds of the research, searching for eHealth applications within DMC yielded 404 relevant studies that showed eHealth applications in different disaster types and disaster phases. These applications varied with respect to the eHealth technology types, functions, services, and stakeholders. The results led to the identification of the scope of DEH, including eHealth technologies and their applications, services, and future developments that are applicable to disasters as well as to related stakeholders. 
Reference to the elements of the DEH scope indicates what, when, and how current eHealth technologies can be used in the DMC. Comprehensive data gathering from multiple databases offered a grounded method to define the DEH scope. This scope comprises concepts related to DEH and the boundaries that define it. The scope identifies the eHealth technologies relevant to DEH and the functions and services that these technologies can provide. In addition, the scope tells us which groups can use the provided services and functions, and in which disaster types or phases. DEH approaches could potentially improve the response to health care demands before, during, and after disasters. DEH takes advantage of eHealth technologies to facilitate DMC tasks and activities, enhance their efficiency and effectiveness, and improve health care delivery, providing higher-quality health care services to the wider population regardless of geographical location or even disaster type and phase. Disasters are destructive events that threaten public health and the environment and disrupt and/or impede normal operations. They also impose considerable pressure on health care systems. The source of disasters can be natural or the result of human actions. Emergency management and the health sectors are natural allies that have, seemingly, only recently begun to recognize each other. According to the reviewed literature, some major contributions to this issue are as follows: Disaster management and disaster medicine are complementary disciplines that can significantly reduce the harmful effects of disasters. Disaster management conveniently encompasses 4 phases: mitigation, preparedness, response, and recovery. Disaster management and disaster medicine have different roots, development, and priorities.
Although both areas emerged to work side by side, they sometimes fail to share their tools and personnel and have not collaborated smoothly in preparing for and responding to mass emergencies. Neither disaster medicine nor disaster management routinely uses information or modern eHealth technologies. Therefore, there is a pressing need for efficient disaster management and emergency medicine to mitigate human pain and suffering and the overall impact of disasters. Despite the growth of information technology capacities and services, DEH is an emerging field that was introduced earlier in the study by Norris et al. DEH can be seen as a model telling us what, when, and how current eHealth technologies can be used in the DMC. These technologies include not only those used in established eHealth practices but also those recently made available by the rapid development of mobile and sensor technologies. Disaster eHealth: the application of information and eHealth technologies in a disaster situation to restore and maintain the health of individuals to their predisaster levels. Disaster management: the coordination and integration of all activities necessary to build, sustain, and improve the capabilities to prepare for, respond to, recover from, or mitigate against threatened or actual disasters or emergencies, regardless of cause. Disaster medicine: a system of study and medical practice associated primarily with the disciplines of emergency medicine and public health. eHealth: the cost-effective and secure use of information and communications technology in support of health and health-related fields, including health care services, health surveillance, health literature, and health education, knowledge, and research. To undertake this research, a rigorous scoping study was conducted based on the framework of Arksey and O\u2019Malley. This research is limited to publications from 1990 to 2016.
This scoping study was undertaken in 2 complementary rounds: an uncontrolled and a controlled search. The uncontrolled search was commenced in multiple databases using free-text terms rather than indexed terms. This approach uses a search engine to identify documents of interest based on terms occurring in the papers\u2019 titles, abstracts, or main bodies. This allowed us to extensively and fully explore the area and extract a broad range of articles to define the scope of DEH. However, to improve the accuracy of the free-text search and to decrease potential searching bias or missing data, a controlled search was also employed. Both search procedures were iterative and captured relevant articles regardless of their research design, methodology, and quality, continuing to the point of technology saturation. The rationale for technology selection in the Pareto analysis is rooted in the rapid technological advancement in the field of computation. In the uncontrolled search, to facilitate the preliminary eligibility examination phase, Pareto analysis was used to exclude a larger number of articles in a shorter time without affecting the quality of the results. Pareto analysis is a well-established statistical technique in the business and management field known as the 80/20 rule; that is, 20% of the major tasks and activities can generate 80% of the benefit of doing the entire job. By following the Pareto formula, only a subset of the retrieved articles required in-depth review. After selecting the studies for the in-depth review, in the controlled and uncontrolled rounds, their full text was added to EndNote (Web of Science Group) and NVivo. On each theme, a conventional content analysis was performed to interpret the findings of the theme. The analyses identify the opinions and general trends in eHealth adoption and application within the disaster management and disaster medicine fields. Finally, the Delphi method was conducted in 2 rounds in which field experts evaluated the initial DEH scope.
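The 80/20 triage described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' actual procedure; the category names and article counts are hypothetical.

```python
# Illustrative Pareto (80/20) triage: keep the categories that together
# account for ~80% of retrieved articles. Names and counts are hypothetical.
def pareto_select(counts, threshold=0.8):
    """Return the categories that cumulatively cover `threshold` of all
    retrieved articles, largest first."""
    total = sum(counts.values())
    selected, cumulative = [], 0
    for name, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
        if cumulative / total >= threshold:
            break
        selected.append(name)
        cumulative += n
    return selected

hits = {"telehealth": 120, "information systems": 95, "mHealth": 60,
        "decision support": 40, "robotics": 8, "augmented reality": 5}
print(pareto_select(hits))  # -> ['telehealth', 'information systems', 'mHealth']
```

The point of the technique is only to shrink the screening workload: the few largest categories carry most of the evidence, so they are reviewed in depth first.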
The results of this evaluation are reflected in the reported DEH scope in this paper. Identification of disaster type is necessary to select appropriate approaches to DEH for particular cases. The research results highlighted a high diversity in the literature with regard to disaster types. A comprehensive list of disaster types was extracted based on the CRED database. Comparison of these disaster classifications with the research findings reveals that DEH can cover almost all types of disasters. Because we searched for eHealth applications within the DMC, this may be interpreted to mean that eHealth can be used to support disaster management and disaster medicine activities across a wide range of disaster types, regardless of their sources. The detailed findings of the identified disaster types are presented in the review. On the basis of the frequency of disaster types found in the scoping results, eHealth technologies\u2019 usage is distributed across a broad range of disaster types. Nevertheless, the use of eHealth in epidemics, terrorist incidents, hurricanes, and earthquakes is discussed more than in other disaster contexts in the literature. This may mean that these areas are more researched, either because of researcher interest or because of the frequency of their occurrence. To identify the disaster phases in which eHealth technologies can be used within the DEH scope, we referred to the four-phase DMC: mitigation, preparedness, response, and recovery. The term internet was the most suitable thesaurus term for eHealth, as the rest are mostly related to health care subjects. The scoping analysis indicates extensive variability in the list of identified technologies from different domains, most commonly related to information systems and telecommunication, but extending to areas such as artificial intelligence and robotics.
To reduce this complexity, technologies were demonstrated in a hierarchical representation mapped to an existing hierarchical taxonomy of eHealth technologies. Among the consulted databases, PubMed, CINAHL, and EBSCO (Elton B. Stephens Company) Health have taxonomies of eHealth, from which PubMed was chosen because of its comprehensiveness, quality, and equality of depth and breadth of the field. In CINAHL, database subject headings for eHealth are slightly different and at a higher level. In the EBSCO Health database, only information science appears, with medical informatics as a subcategory. On the basis of the PubMed taxonomies, the identified eHealth technologies related to DEH were mapped to these top-level classifications and then further classified into low-level technologies for a number of categories, such as teleradiology and radiology information systems. In contrast, there are some technologies that are not designed for health care environments; however, based on their positive outcomes in other areas, the health care sector has started using them for the same or other clinical purposes. Technologies such as auto identification, decision support systems (DSS), and big data are among these. DEH embraces a wide range of technologies to support health care activities in different disaster types and phases. A number of these technologies are specifically designed for health care environments; information systems and their subtechnologies, for example, present prevailing functions and applications in the DMC. Furthermore, technologies range from well-established fields, such as DSSs, telehealth, and information systems, to newly emergent fields, such as the Internet of Things (IoT), augmented reality, and big data. There is a vast range of eHealth technologies in the DEH scope. The identified eHealth technologies could serve specific functions within the DMC and the DEH scope.
Our thematic analysis identified these functions, highlighted the major ones, and classified them based on their features and applications. The identified technologies and their applications support different attributes that could be important in disaster management or disaster medicine activities. On the basis of the thematic and content analysis results, the following features were identified: accessibility, accountability, accuracy, availability, awareness, collaboration, completeness, computerization, confidentiality, consistency, continuity, control, cooperation, coordination, effectiveness, efficiency, immediacy, integration, interoperability, localization, optimization, protection, quality, readiness, real time, recognition, relevancy, reliability, responsibility, robustness, safety, scalability, security, sustainability, telemetry, usability, and web based. DEH purposes can be divided into two groups: clinical and nonclinical. Nonclinical purposes can be further divided into administrative, education, training, and research. These DEH purposes can be defined as follows: Clinical purposes: all the tasks and activities whose objectives are rooted in providing or expanding disaster health care services for the population. Administrative purposes: all the tasks and activities whose objectives directly or indirectly facilitate providing or expanding disaster health care services for the wider population; these can cover health care administrative procedures from admitting patients to discharging them, or making patient information transfer possible. Education and training purposes: activities whose main purpose is to train and prepare citizens or special groups, such as responders, for different disaster phases. Research purposes: cases whose aims are directly or indirectly related to investigation and research.
They intend to improve quality, cost-effectiveness, and equity of access to health care services within the DMC. On the basis of these definitions, a number of example activities are shown for each group of disaster phases. The international level refers to international organizations that work in any area that can directly or indirectly play a role in any disaster phase, regardless of its nature; their branches and support cover almost all countries. Examples include the World Health Organization and the Red Cross and Red Crescent. In contrast, national-level organizations operate within the boundaries of each country, and their rules could vary from country to country. At the bottom of the hierarchy, local organizations are located at the subnational level; they work under national or governmental organizations and follow their rules. These organizations are responsible for the needs and demands of specific regions or areas; they are not supposed to make any rules, are simply executors of governmental rules, and need to report to the national organizations. Police and fire bureaus and regional health care environments are considered at this level. The review revealed a wide range of DEH stakeholders, targets, or users of the specified technologies and functions at local, national, and international levels; on the basis of their level, we can set out the categories and subcategories in terms of the different parties. On the basis of the different parties and their needs in each phase of a disaster, and the supported technologies and functions, DEH covers a diverse spectrum of services and purposes within the DMC. These services, from a high-level view, can be categorized into 3 levels: Operational-level services: The function of this group of services is to support or control operations against rules and standards; they encompass day-to-day decisions.
Although diversity in these services can create islands of automation, they make operations more efficient. A large number of services in the DMC and DEH can be categorized under this level, such as rapid victim identification, victim tracking, damage assessment, and critical resource distribution. Tactical-level services: The purpose of this range of services is to support management and provide interconnection among different parties or organizations through diverse information management tools. These services take care of medium-term planning and are used in creating procedures. Categorized under this level are DSSs as well as capacity assessment databases, health information exchange, and appropriate allocation of resources. Strategic-level services: The purpose of this level of services is to support the system as a whole; their output is mostly policies and overall structural decisions, either within an organization\u2019s functions or among other organizations. For this group of services, information planning tools are used, and integrated infrastructure and global compatibility are essential. Above all, few services are available at this level, not only in DEH but also in conventional disaster management and medicine. These services cover long-term, complex, and nonroutine planning in the DMC and DEH, such as planning for vulnerable population needs and safety, long-term care, or cloud-based coordination. To be useful, strategy must translate into tactics and delivery so that services defined at this level will have related examples at both the tactical and operational levels. According to this classification, we can place these services for each disaster phase within the DEH scope; some examples are presented in the review. eHealth can facilitate health care data exchange and dissemination, improving communication, support, and education among communities, health care professionals, and their patients.
In this study, the initial scope of the DEH is defined. DEH tries to maximize health care engagement in and integration into the DMC, because an effective and successful response is almost unachievable without appropriate levels of readiness in the different sectors, including health care. By integrating health care into the predisaster phases, health care can be shifted from a reactive to a proactive system when disasters occur. In this regard, the variety of eHealth technologies within the DEH scope offers a broad range of functions and applications to facilitate health care management and delivery. For example, as education and communication play a vital role, telehealth and social networks could be useful in raising public awareness or providing remote and specialized clinical education for physicians. The technologies themselves can also be integrated to optimize the outcome. For example, data collected through the IoT could be aggregated by cloud computing and then analyzed with big data analytics tools to support strategic disaster management planning or possibly scenario prediction. Such a framework for technology integration was proposed by Madanian and Parry. More recently, the IoT, big data, and cloud computing have attracted exponential interest in automatic data gathering, integration, and analysis for data sharing and decision support applications. The usage of specific eHealth technologies, such as EHR and telehealth, is currently limited in disaster settings. However, they have recently attracted significant interest in responding to COVID-19, although most applications are in developed countries, and most low-resource countries are still suffering. In our research, we explicitly identified DEH stakeholders as technology users or targets who can benefit from DEH.
These groups could be involved in the DMC for a variety of purposes and in different positions. From a broader perspective, we can distinguish the following groups: DEH seeks to enhance clinical and nonclinical personnel\u2019s disaster-related awareness, education, elements, standards, and procedures, mainly in the disaster mitigation and preparedness phases. This could result in a better response and in meeting wider health care demands, as health care teams become familiar with the very concept of disaster. For disaster management and disaster medicine professionals, DEH may facilitate technology adoption in their fields, one possible consequence of which is rapid communication and data sharing among the parties involved in the DMC. This enhances access to precise information in a timely manner, which, in turn, may improve the quality of decisions while decreasing decision-making time. DEH may also appeal to the general population, who may be affected by different types of disasters. DEH, based on its defined goal, raises disaster awareness among all people, especially communities in disaster-prone areas. Population empowerment can therefore be enhanced, and the population can access and use information, become familiar with disaster consequences, and be prepared for them. This will increase preparedness against disasters and, if a disaster strikes their area, people will be able to take care of their basic health care requirements until disaster responders arrive. Then, as disaster responders have proper training and are equipped with different types of eHealth technologies, they are able to transfer timely and accurate information from disaster sites to top authorities so that appropriate decisions can be made. This, in turn, provides better health care services to disaster casualties. Preparedness and planning to reduce the harmful effects of disasters are becoming one of the highest priorities of governments.
These activities are features of disaster management and disaster medicine, disciplines that, despite their long standing, still generate many debates about their effectiveness and capability to respond properly to health care demands in major disasters. This research builds on other studies, such as those by Sieben et al and Norris et al. The DEH domain has been introduced mainly to help address the current challenges within disaster management and disaster medicine that hinder their operations and have created many debates regarding their efficiency and effectiveness. The emergence of DEH contributes to the design of a systematic model for eHealth technologies that are currently used in nondisaster circumstances but have the potential to be used in disaster situations, along with those technologies that were previously used in the DMC and had a significant impact on DMC operations. DEH takes advantage of eHealth technologies to facilitate DMC tasks and activities, enhance their efficiency and effectiveness, and improve health care delivery to a wider population regardless of disaster types and phases. In this research, we extensively reviewed the academic literature to define the DEH scope. We built the scope mostly on available international hierarchies to make it easier to embed DEH into the disaster management, disaster medicine, and eHealth fields. Among the international hierarchies we referred to are the disaster types offered by CRED and the PubMed Medical Informatics taxonomies. However, this work is mostly limited to academic and scientific publications, and gray literature is not extensively reviewed. eHealth technologies are developing rapidly, and the COVID-19 pandemic has revealed some of eHealth\u2019s potential in practice for addressing health care issues. Therefore, we will continue to work on the DEH model and add the most recent applications, such as contact tracing, to the model in the near future."}
+{"text": "The pathogenesis of severe COVID-19 remains poorly understood. While several studies suggest that immune dysregulation plays a central role, the key mediators of this process are yet to be defined. Here, we demonstrate that plasma from a high proportion (77%) of critically ill COVID-19 patients, but not healthy controls, contains broadly auto-reactive immunoglobulin M (IgM), and only infrequently auto-reactive IgG or IgA. Importantly, these auto-IgM preferentially recognize primary human lung cells in vitro, including pulmonary endothelial and epithelial cells. By using a combination of flow cytometry, LDH-release assays, and analytical proteome microarray technology, we identified high-affinity, complement-fixing, auto-reactive IgM directed against 263 candidate auto-antigens, including numerous molecules preferentially expressed on cellular membranes in pulmonary, vascular, gastrointestinal, and renal tissues. These findings suggest that broad IgM-mediated autoimmune reactivity may be involved in the pathogenesis of severe COVID-19, thereby identifying a potential target for novel therapeutic interventions. Indeed, dysregulated coagulopathy and systemic inflammation are hallmark characteristics of severe COVID-19, which involves acute respiratory distress syndrome (ARDS) as well as alterations of other organs. The pathogenic mechanisms responsible for the most severe clinical progression of COVID-19 are as yet poorly understood, although they appear to be multifactorial in nature. In this context, a relatively underexplored mechanistic pathway relates to autoimmunity. Autoantibodies that neutralize type-1 interferons have been described in severe adult COVID-19, as have autoantibodies against self-antigens associated with systemic lupus erythematosus and Sjogren\u2019s disease in severe pediatric COVID-19.
Additional reports of antiphospholipid autoantibodies have been associated with thrombotic events, thereby linking immune dysregulation with thrombosis in severe COVID-19. These observations underscore the urgent need to closely examine the intersection of immunopathology and severe COVID-19, particularly in pulmonary and vascular sites. SARS-CoV-2, the etiological agent of COVID-19, is initially and preferentially tropic for respiratory cellular targets. In this study, we first sought to detect auto-reactive antibodies in patient plasma using a comprehensive screening approach incorporating diverse and relevant cell types. Plasma samples were obtained from 64 patients hospitalized for COVID-19, including 55 patients with critical illness admitted to the intensive care unit and 9 patients with less severe disease admitted to the regular hospital floor (COVID non-ICU). Plasma was also obtained from 13 critically ill patients without SARS-CoV-2 infection (non-COVID ICU), 9 outpatients with hypergammaglobulinemia (Hyper-\u03b3), and 12 healthy donors. Cells were analyzed using conventional and imaging flow cytometry. We next sought to understand which auto-antigens are targeted by these circulating auto-reactive IgM in COVID-19 patients. Plasma samples from COVID ICU patients with strong auto-reactive IgM titers (n=5), non-COVID ICU patients (n=3), and healthy controls (n=4) were surveyed on analytical human proteome microarrays (HuProt v4 array). The array expresses over 21,000 intact proteins, therefore allowing for a thorough and comprehensive investigation of potential binding targets for auto-reactive IgM antibodies. For stringency, a potential binding target was considered for any protein that had a fluorescence signal at least 4 standard deviations (Z-score>4) above the array mean. Additionally, the target had to possess a fluorescence signal at least 2 Z-scores above the same target across all healthy controls.
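Read literally, the two selection rules above amount to a simple per-protein filter. The sketch below is our reading of those criteria, not the authors' analysis pipeline; the fluorescence values are synthetic, and "TARGET" plays the role of a candidate autoantigen.

```python
# Sketch of the two-step autoantigen filter (our reading of the stated
# criteria, not the authors' code). Fluorescence values are synthetic.
import statistics

def zscores(sample):
    """Per-protein Z-scores against the sample's own array mean and SD."""
    mean = statistics.mean(sample.values())
    sd = statistics.pstdev(sample.values())
    return {protein: (v - mean) / sd for protein, v in sample.items()}

def candidate_autoantigens(patient, controls):
    pz = zscores(patient)
    cz = [zscores(c) for c in controls]
    return {
        protein
        for protein, z in pz.items()
        if z > 4                                    # rule 1: Z-score > 4 vs. array mean
        and all(z - c[protein] >= 2 for c in cz)    # rule 2: >= 2 Z above every healthy control
    }

# Synthetic example: one bright spot on a 20-protein array.
patient = {f"P{i}": 10 for i in range(1, 20)}
patient["TARGET"] = 200
control = {f"P{i}": 9 + (i % 3) for i in range(1, 20)}  # background noise only
control["TARGET"] = 10
print(candidate_autoantigens(patient, [control]))  # -> {'TARGET'}
```

Normalizing each sample against its own array mean before comparing to controls is what makes the two thresholds comparable across slides.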
This strict approach resulted in the identification of 260 candidate autoantigens that were uniquely linked to COVID ICU patients. We next investigated whether these proteins shared similar motifs. Although N-linked glycosylation was predicted in 11 candidate autoantigens, heterogeneity in the amino acid sequences flanking predicted N-linked glycosylated residues indicated minimal influence of N-linked glycosylation on potential IgM binding motifs. Importantly, we identified 16 autoantigens associated with the human plasma membrane proteome. To explore the in vivo relationship between auto-reactive IgM and COVID-19 pathophysiology, we first examined post-mortem pulmonary tissue to determine IgM distribution and presence. Immunohistochemical staining of paraffin-embedded lung tissue revealed vastly greater IgM binding to alveolar septa and luminal surfaces of three COVID-19 non-survivors, compared to three COVID-19 negative control patients for whom lung tissue was available from cancer-related resection. It should be noted that some modest IgM deposition in the COVID-19 negative patient controls was expected, as auto-reactive IgMs can develop during lung cancer progression and/or following radiation therapy. While we cannot formally rule out that the IgM detected in COVID-19+ lung tissue are reactive against SARS-CoV-2 surface antigens, the observed staining patterns are not consistent with the distribution patterns observed for SARS-CoV-2 antigens such as the Spike protein. Importantly, the extensive IgM staining is at levels at least three times higher than in COVID-19 negative controls, and is not described for other causes of acute respiratory distress. Further histological analysis revealed, in the lungs of severe COVID-19 patients, significant alveolar damage and patchy hemorrhage, alongside extensive inflammatory infiltrate breaching the alveolar lumen.
Previous studies have linked alveolar damage to dysregulated cytokine release and neutrophil extracellular traps seeded by resident macrophages. Yet, these observations could also be linked to auto-reactive IgM, through the capacity of these immunoglobulins to fix complement and induce cytotoxicity. Indeed, staining for complement component 4 (C4d), a marker of complement activation, showed a two-fold increase in COVID-19 patients compared to negative controls, indicating frequent in vivo complement fixation. The concomitant observations of auto-reactive IgM potentially targeting O-linked glycosylated motifs and high expression of candidate autoantigens in pulmonary sites led us to hypothesize that auto-reactive IgM are a significant contributor to severe COVID-19 disease. Additionally, as considerable pulmonary microangiopathy is observed in severe COVID-19 patients, it is conceivable that CDC can precede or even cause the damage to the pulmonary endothelium. Given the observed IgM and C4d binding to pulmonary targets, and to confirm that the auto-reactive IgM can mediate CDC, we next tested plasma samples from severe COVID-19 patients for their capacity to fix complement and induce cytotoxicity in vitro. To this end, we investigated patient plasma samples that showed greater than 10% binding to the respective cell type in the screening assay. Interestingly, we consistently observed higher rates of CDC in cells of pulmonary origin. In addition, while non-COVID-19 ICU patient plasma samples induced limited or no cell death, most COVID-19 ICU patient plasma samples induced cell death at frequencies proportional to their measured level of cell binding.
Collectively, these data indicate that auto-reactive IgM present in plasma from severe COVID-19 patients can fix complement and induce cytotoxicity. Complement-dependent cytotoxicity (CDC) and complement deregulation have been proposed to play a role in the pathogenesis of ARDS. The identification of auto-reactive IgM as a potential contributing factor to the pathogenesis of severe COVID-19 has two immediate implications. First, this observation may explain why COVID-19 is disproportionately more serious in the elderly, who typically manifest higher plasma levels of circulating auto-reactive antibodies. This phenomenon would be exacerbated by decreases in functional T follicular helper cells that promote antibody class switching, a process associated with better disease outcomes. Given that IgM levels peak within a week of the clinical onset of COVID-19 and persist at similar levels for weeks thereafter, the elderly face a protracted period of steadfast secretion of auto-reactive IgM that maintains relatively low affinity for the same epitopes without either switching to alternate antibody classes or undergoing somatic hypermutation and affinity maturation. In this perspective, the elderly may be more prone to severe COVID-19 due to more protracted exposure to the cytopathic effects of auto-reactive IgM. Interventions targeting this axis could consequently protect against mortality and/or reduce the need for invasive mechanical ventilation. In the long term, preservation of lung integrity may prevent pathogenic sequelae such as pulmonary fibrosis, which diminishes lung function post-recovery. These therapeutic goals could be implemented through the use of immunosuppressants, such as dexamethasone, that can attenuate the production of auto-reactive IgM; plasma exchange to remove auto-reactive IgM once formed; or agents that synergize with and supplement proposed anti-fibrotic therapies.
Secondly, it is conceivable that this type of immunopathology can be limited by therapeutic interventions that inhibit the IgM-complement axis. In the immediate term, this approach could mitigate SARS-CoV-2-associated alveolar damage and ARDS. Alternatively, the complement cascade can be directly inhibited through conestat alfa or eculizumab, and indeed, both drugs are presently undergoing evaluation in clinical trials to determine efficacy. Optimistically, our findings cast support for interventions that can be readily and swiftly implemented in the clinic to alleviate or prevent serious COVID-19 complications. In summary, we found that broadly auto-reactive IgM are common in the plasma of patients with severe COVID-19. These auto-reactive antibodies bind pulmonary epithelial and endothelial targets, at which point they can be potent mediators of cytopathicity through the recruitment of complement. Future studies will investigate the relationship between SARS-CoV-2 infection and the emergence of auto-reactive antibodies, and determine whether immunosuppressive therapy can reduce the levels of auto-reactive IgM in plasma and consequently attenuate the clinical severity of COVID-19. Plasma samples were obtained from discarded clinical specimens at Emory University Hospital or from healthy donors in accordance with protocols approved by Emory\u2019s Institutional Review Board. Patient demographics and characteristics were obtained by electronic chart review, as summarized in the accompanying table. Cells were maintained between 50\u201380% confluence. Primary cells were grown in cell culture flasks coated with gelatin and used between 3\u20137 passages. HULEC-5a cells were obtained from the American Type Culture Collection (ATCC) and maintained in MCDB131 Medium supplemented with 10ng/ml epidermal growth factor (Thermo Fisher), 1\u00b5g/ml hydrocortisone (Sigma Aldrich), 10mM L-glutamine (Thermo Fisher), and 10% (v/v) FCS (GeminiBio).
Primary human small airway epithelial cells (HSAEC) were purchased (Lifeline Cell Technology) and maintained in BronchiaLife Medium. Primary human alveolar epithelial cells (HAEC) and primary human kidney glomerular endothelial cells (HKGEC) were purchased (CellBiologics) and maintained in Complete Human Epithelial Cell Medium and Complete Human Endothelial Cell Medium, respectively. Primary human small intestine microvascular endothelial cells were purchased (Neuromics) and maintained in ENDO-Growth Medium. All cells were kept at 37\u00b0C in a humidified incubator supplemented with 5% CO2. For plasma-binding assays, cells were suspended at 5\u00d7105 cells/ml, and 100\u00b5l of each cell suspension was added to 96-well U-bottom plates, after which 50\u00b5l of patient or healthy donor plasma was added and gently mixed. An IgG positive control was performed by adding human anti-CD98 IgG to one well. Plates were transferred to 4\u00b0C for one hour, after which cells were washed with cold DPBS and then incubated with an antibody cocktail containing a viability dye, anti-CD62E BV605, anti-CD54 BV711, anti-CD144 BV786, anti-CD31 PE, anti-human IgG DyLight 650, anti-human IgA FITC, and anti-human IgM BV650. No-anti-Ig fluorescence minus one controls were also prepared. After one hour at 4\u00b0C, cells were washed twice with FACS buffer and then fixed with 1% PFA before analysis on a BD LSRFortessa flow cytometer. For imaging flow cytometry, cells were stained only with anti-IgM BV650 following plasma incubation. Nuclei were stained with NucSpot Live 488. Cells were then fixed in 2% PFA and analyzed on a Luminex Amnis ImageStreamX Mark II flow cytometer. Plasma aliquots were stored at \u221280\u00b0C and then thawed at 4\u00b0C for use in assays. 
Cells were detached from culture flasks using TrypLE Express reagent (Thermo Fisher) and resuspended in DPBS at a concentration of 5\u00d7105 cells/ml. Plasma levels of IL-6 were quantified using a Human IL-6 ELISA kit and following the manufacturer\u2019s instructions. Five-micrometer sections of formalin-fixed, paraffin-embedded lung tissue were tested for IgM expression using a rabbit anti-IgM polyclonal antibody at 1:400 dilution and for C4d expression using a rabbit anti-C4d polyclonal antibody at 1:100 dilution. IgM staining was performed on a Dako Link48 Autostainer with the EnVision FLEX dual-link system after heat-induced epitope retrieval in citrate buffer for 30 minutes. C4d staining was performed on a Leica Bond III automated stainer with the Bond Polymer Refine Detection Kit after on-board epitope retrieval using Bond epitope retrieval solution 1 (ER1) for 20 minutes. Images were analyzed in ImageJ using the IHC Image Analysis Toolbox for the enumeration of nuclei, and to identify stained regions. The Color Pixel Counter plugin was further used to quantify the extent of staining in each image. 50\u00b5l of the cell suspension was transferred to wells of a 96-well V-bottom plate. 50\u00b5l of plasma was added to each well and plates were incubated at 4\u00b0C for one hour. 2 non-COVID (ICU) and 2 healthy donor plasma samples without IgM reactivity were selected as controls. Cells were washed with cold DPBS twice and resuspended in 100\u00b5l DPBS. 11\u00b5l of reconstituted rabbit complement was added to each well. To one well, 0.1% Triton X-100 was added to induce cell lysis. Plates were then transferred to a 37\u00b0C incubator for two hours. Plates were then centrifuged at 500g for 5 minutes to pellet cells. 50\u00b5l of the supernatant was transferred to a flat-bottom 96-well plate in duplicate. 
50\u00b5l of reconstituted lactate dehydrogenase assay reagent was then added to each well, and the plate was subsequently protected from light and left at ambient temperature for 30 minutes, after which 50\u00b5l of the included stop solution was added. Absorbance was read at 490nm and 680nm. Absorbance values at 680nm were subtracted from absorbances at 490nm and duplicate values averaged. Percentage cytotoxicity was calculated by comparing the absorbance values against the lysed-cell and healthy-donor controls. Target cells were dissociated from culture flasks by TrypLE Express reagent (Gibco) and resuspended in PBS at a concentration of 1\u00d7106 cells/ml. 5 COVID-19 (ICU) and 3 non-COVID-19 (ICU) samples characterized as enriched with auto-IgM by the flow cytometry assay described above were submitted alongside 4 randomly chosen healthy control samples to CDI laboratories for antigen-specificity screening across >21,000 full-length recombinant human protein targets (HuProt v4.0 proteome microarray). Subcellular localization data provided by the Human Protein Atlas15 guided the identification of plasma membrane proteins. For all analyses, plasma membrane proteins were those defined as \u2018Enhanced\u2019 or \u2018Supported\u2019 for plasma membrane localization. Tissue-level transcription profiles were based on the Transcript TPMs dataset provided by the GTEx Portal. Visualizations and heatmaps were generated with GraphPad Prism (v9.0) and RStudio Desktop (1.3.959). Predictions of N- and O-linked glycosylated sites were respectively provided by NetNGlyc48 and YinOYang servers16, and only high-cutoff sites were chosen for further analysis. Amino acid probability graphs were generated with WebLogo 3. GraphPad Prism (v9.0) was used to calculate statistical significances and correlations. Corresponding statistical tests are noted in figure legends."}
+{"text": "Dynamic treatment regimens (DTRs) formalise the multi-stage and dynamic decision problems that clinicians often face when treating chronic or progressive medical conditions. Compared to randomised controlled trials, using observational data to optimise DTRs may allow a wider range of treatments to be evaluated at a lower cost. This review aimed to provide an overview of how DTRs are optimised with observational data in practice. Using the PubMed database, a scoping review of studies in which DTRs were optimised using observational data was performed in October 2020. Data extracted from eligible articles included target medical condition, source and type of data, statistical methods, and translational relevance of the included studies. From 209 PubMed abstracts, 37 full-text articles were identified, and a further 26 were screened from the reference lists, totalling 63 articles for inclusion in a narrative data synthesis. Observational DTR models are a recent development and their application has been concentrated in a few medical areas, primarily HIV/AIDS, followed by cancer, and diabetes. There was substantial variation in the scope, intent, complexity, and quality between the included studies. Statistical methods that were used included inverse-probability weighting, the parametric G-formula, Q-learning, G-estimation, targeted maximum likelihood/minimum loss-based estimation, regret regression, and other less common approaches. Notably, studies that were primarily intended to address real-world clinical questions tended to use inverse-probability weighting and the parametric G-formula, relatively well-established methods, along with a large amount of data. Studies focused on methodological developments tended to be more complicated and included a demonstrative real-world application only. As chronic and progressive conditions become more common, the need will grow for personalised treatments and methods to estimate the effects of DTRs. 
Observational DTR studies will be necessary, but so far their use to inform clinical practice has been limited. Focusing on simple DTRs, collecting large and rich clinical datasets, and fostering tight partnerships between content experts and data analysts may result in more clinically relevant observational DTR studies. The online version contains supplementary material available at 10.1186/s12874-021-01211-2. Dynamic treatment regimens (or regimes) (DTRs) formalise the multi-stage and dynamic decision problems clinicians often face when treating chronic or progressive conditions \u20135. The medical needs of patients with chronic or progressive conditions often evolve over time and the treatments administered to these patients need to be regularly reviewed. Treatment decisions may depend on the dynamics of a number of factors or require continual switching between different treatments. Therefore, making optimal treatment decisions requires information across many time intervals. A DTR can be defined using decision rules, functions that map each patient\u2019s accumulated clinical and treatment history to the subsequent treatment at each treatment decision point. These rules are typically derived from parametric models. An optimal decision rule is one that optimises the long-term value of the decision, for example, expected overall survival. The values of the decision rules are estimated using statistical methods that can account for time-varying treatment effect mediation and confounding. In order for the estimated treatment effects that inform the decision rules to have a causal interpretation, a number of conditions must be met, which are summarised in the next section. One real-world example of a decision problem that has been framed and optimised as a DTR is \u2018when to begin\u2019 antiretroviral treatment in patients with human immune-deficiency virus (HIV), which is often based on their CD4 count history , 7. 
Optimising DTRs relies on estimating the value of the decision rules using data from sequential multiple assignment randomised trials (SMARTs) , 10\u201312. A potentially less costly and more operationally feasible alternative is to emulate a \u2018target trial\u2019 using existing observational data , 13, 14. The effective use of observational data to evaluate dynamic treatment decisions has the potential to provide insight into the management of chronic or progressive conditions, yet it is unclear to what extent it is done in practice. This study provides a scoping review to systematically examine this practice, with the following objectives: \u25aa To summarise what medical areas, participant numbers, types of outcomes, and statistical methods have been used in real-world practice. \u25aa To describe whether key methodological aspects of the real-world applications were considered. \u25aa To ascertain whether the real-world application was designed more to inform statistical or clinical practice. The overarching aim was to identify whether any particular domains dominate the literature and why this may be so, in order to understand the potential for evidence regarding DTRs to be developed using observational data, and to identify existing gaps in the methodological quality of published studies. The remainder of this article proceeds as follows. We first provide terminology and describe a DTR using a simple two-stage example, selected modelling and estimation approaches for DTR-based decision rules, and the necessary conditions for causal inference. We follow by describing the methods and results of the scoping review to explore the context, methods, and reporting of studies which have modelled DTRs using observational data. 
We conclude with a summary of the results and a general discussion of the key concepts. A simple two-stage, two-treatment scenario that can be formalised using DTRs can be described by the following notation: Ok describes the set of prognostic factors available for treatment decision Ak, Y denotes the terminal outcome, and k\u2009\u2208\u2009K\u2009=\u2009{1,\u20092} indexes the first and second treatment stages. For simplicity, Ok and Ak are binary variables. The accumulated history, Hk, includes all covariates and treatments preceding Ak; therefore, in our simple example, H1\u2009=\u2009O1 and H2\u2009=\u2009{O1,\u2009A1,\u2009O2}. We follow standard convention and denote random variables and their observed values using upper- and lower-case letters, respectively. DTR models define decision rules dk as functions that map a patient\u2019s history (Hk) to a certain course of action (Ak): dk(Hk)\u2009\u2192\u2009Ak. Note that a DTR can be generalised to more than two stages and treatments, multiple covariates with different data types, and different outcome types. The causal relationships in this scenario can be depicted using a directed acyclic graph (DAG): \u25aa The effect of A1 on Y can be decomposed into direct and indirect effects. If O2 is a \u2018child\u2019 of A1, including O2 (a treatment-outcome confounder) as a model covariate blocks the indirect effect of A1 on Y, as seen in Fig.\u00a0. In the language of causal inference, we say that O2 mediates the effect of A1 on\u00a0Y. \u25aa Even if O2 were not a mediator of A1, or a treatment (A2)-outcome (Y) confounder, including O2 as a model covariate could induce collider stratification bias in the presence of unmeasured covariate (O2)-outcome (Y) confounders, as seen in Fig.\u00a0. A suboptimal approach to estimating the value of the dynamic treatment decisions in the example two-stage scenario might be to specify an \u2018all-at-once\u2019 regression model for the outcome that includes all treatments and covariates (a1 and o2). 
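The decision-rule notation above maps directly onto code. The following is a minimal illustrative sketch (the history structures, rule forms, and thresholds here are invented for illustration, not taken from the review):

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical encoding of the two-stage notation: binary covariates o1, o2
# and binary treatments a1, a2, as in the example above.

@dataclass
class History1:
    o1: int                      # H1 = {O1}

@dataclass
class History2:
    o1: int
    a1: int
    o2: int                      # H2 = {O1, A1, O2}

# A decision rule d_k maps a patient's accumulated history to a treatment.
Rule1 = Callable[[History1], int]
Rule2 = Callable[[History2], int]

# Example regimen (invented): treat at stage 1 if o1 == 1; at stage 2,
# continue treatment only if stage-1 treatment was given and o2 is favourable.
d1: Rule1 = lambda h: 1 if h.o1 == 1 else 0
d2: Rule2 = lambda h: 1 if (h.a1 == 1 and h.o2 == 1) else 0

print(d1(History1(o1=1)))               # -> 1
print(d2(History2(o1=1, a1=1, o2=0)))   # -> 0
```

Optimising a DTR then amounts to choosing, among all such candidate rule pairs (d1, d2), the pair that maximises the expected terminal outcome Y.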
Because standard regression methods fail to account for the complexities inherent in DTRs, more sophisticated statistical methods are required. The exact methodology employed often depends on, and is tailored to, the clinical question of interest. The typical approach is to specify and estimate either a dynamic conditional model or a dynamic marginal structural model (MSM). A dynamic conditional model defines the average effects of treatments conditional on patient history as target parameters for estimation. The estimated effects can therefore be considered to be personalised in that they are defined only for patients who have the same histories. To account for the effect mediation and biases depicted in Figs.\u00a0, these models can be estimated using Q-learning , 20, the parametric G-formula , 14, 21, or G-estimation , 22. A dynamic MSM defines the average treatment effects of following different regimens as the target parameters for estimation. Key to this approach is identifying that many individuals will have histories that are, at least in part, compatible with several regimens. Approaches that use dynamic MSMs rely on creating, for each candidate regimen, replicates of the original data where individuals are artificially censored if they no longer follow the candidate regimen and aim to estimate the treatment effect of the candidate regimen while balancing prognostic factors among the treatment groups using inverse probability weighting (IPW) , 23, 24. Although estimation methods such as Q-learning or IPW typically use relatively simple generalised linear models (for example linear and logistic regression), other estimation methods using the parametric G-formula or G-estimation may require complex estimating equations and/or large sets of models. In all cases, estimation performance can be sensitive to model misspecification, particularly when using the parametric G-formula, which tends to use many interrelated models.
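As a concrete illustration of the Q-learning approach to a dynamic conditional model, the following is a minimal two-stage sketch on simulated data (the data-generating model, coefficient values, and variable names are invented for illustration and are not drawn from any included study; a real analysis would also need the causal conditions described in this review):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated two-stage data: o1, o2 are covariates, a1, a2 are binary
# treatments, y is the terminal outcome; o2 mediates part of a1's effect.
o1 = rng.normal(size=n)
a1 = rng.integers(0, 2, size=n)
o2 = 0.5 * o1 + 0.4 * a1 + rng.normal(size=n)
a2 = rng.integers(0, 2, size=n)
y = o1 + o2 + 0.3 * a1 + a2 * (1.0 - 0.8 * o2) + rng.normal(size=n)

def fit_ols(X, y):
    """Least-squares fit with an intercept column prepended."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

def predict(beta, X):
    return np.column_stack([np.ones(len(X)), X]) @ beta

# Stage 2 (Q2): model E[Y | h2, a2] with an a2-by-o2 interaction.
beta2 = fit_ols(np.column_stack([o1, a1, o2, a2, a2 * o2]), y)

def q2(a):
    a_vec = np.full(n, float(a))
    return predict(beta2, np.column_stack([o1, a1, o2, a_vec, a_vec * o2]))

# Optimal stage-2 rule: treat when Q2(h2, 1) > Q2(h2, 0).
d2 = (q2(1) > q2(0)).astype(int)

# Pseudo-outcome: the value attained under the optimal stage-2 action.
v2 = np.maximum(q2(0), q2(1))

# Stage 1 (Q1): regress the pseudo-outcome on h1 = {o1} and a1.
beta1 = fit_ols(np.column_stack([o1, a1, a1 * o1]), v2)

# Sanity check: the fitted rule should recover the simulated optimal rule,
# which treats at stage 2 whenever 1 - 0.8 * o2 > 0.
truth = (1.0 - 0.8 * o2 > 0).astype(int)
print("stage-2 rule agreement:", (d2 == truth).mean())
```

The stage-2 model is fitted first and its fitted maximum forms the pseudo-outcome for the stage-1 regression; this backward-induction step is what distinguishes Q-learning from the suboptimal 'all-at-once' regression described earlier.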
Several conditions must be met for the estimated DTR effects to have a causal interpretation , 14. These conditions require that there are no unmeasured confounders (exchangeability), well-defined treatments (consistency), and that the probability of receiving each treatment regimen of interest is greater than zero for each patient included in the analysis (positivity). A complete and rigorous description of these assumptions is beyond the scope of this review; however, Hern\u00e1n and Robins provide one. The review protocol was developed by RKM and JAS in consultation with the co-authors. The original version of the protocol, along with the changes to the protocol, is available as an additional file. Observational data were defined as any non-simulated data where the treatments of interest were not randomly allocated. No restriction was placed on study time period, publication type, statistical method, outcome types, sample size, country of origin, or participant characteristics. To be included in the review, studies must have used statistical methods to estimate the value of DTR decision rules from observational data, either as a demonstration of the methodology or to provide real-world evidence to support specific treatment policies. 
Studies were excluded from this review if they met any of the following criteria: \u25aa only analysed data from experimental studies where the treatment/s were randomised, \u25aa analysed simulated data or provided theoretical discussion only, \u25aa provided a commentary, review, opinion, protocol, or description only, \u25aa either the abstract or full-text were not available, \u25aa analysed data from non-human subjects only, \u25aa studies were not available in the English language, or \u25aa did not use statistical methods to evaluate a DTR using observational data, for example provided only a graphical or textual description of the data. To identify potentially relevant studies the electronic bibliographic database PubMed was searched on 8 October 2020. The reference lists of the included articles identified from the PubMed database were manually screened to identify additional relevant studies. Grey literature, unindexed journals, and trial registries were not searched. The search strategy was developed by RKM and JAS, with input from all co-authors, and in consultation with the University of Melbourne Library. The electronic PubMed search strategy is described in Table\u00a0. RKM performed the search of the PubMed database, screened the titles and abstracts returned by the search, and reviewed the full text of all potentially eligible studies that satisfied the selection criteria for eligibility. Excluded studies were categorised by primary reason for exclusion. Titles and abstracts from each bibliography item of the included PubMed articles were also screened, and all studies that satisfied the selection criteria for eligibility were included in the data synthesis. The data extracted from each article included reference details, study characteristics, data type, statistical methods, and whether the study was primarily intended to inform statistical or clinical practice. 
All data management and analysis was performed using the R programming language. The initial search returned 209 studies. Of these, 156 (75%) were excluded following screening of titles and abstracts. Upon reviewing the full-texts for eligibility, 37 studies were included from the PubMed database and a further 26 studies were identified from the PubMed article reference lists. In total, 63 studies were included in the data synthesis , 26\u201381. The estimation of optimal DTRs using observational data is a recent development and has been most concentrated in the area of HIV/AIDS, followed by cancer, and diabetes. All but three of the included studies were published after 2005, with all but nine in the last decade and almost half in the last 5 years. Outcome types, participant numbers, and funding sources varied considerably between the included studies. Time-to-event outcomes were most commonly investigated. The median number of participants was 3882 with an interquartile range (IQR) between 1420 and 23,602, and the total range between 133 and 218,217. Studies were funded mostly through public sources, with some studies acknowledging non-profit sources. Ten (16%) studies did not report on funding sources. All of the common statistical approaches that we have described were implemented, yet there was a lack of transparency regarding some of the specific methodological approaches used across many studies. IPW-related methods were the most commonly used, followed by parametric G-formula related methods, Q-learning related methods, G-estimation, targeted maximum likelihood/minimum loss-based estimation, regret regression, and other less common approaches. Many studies did not clearly and explicitly describe the methods that they employed for either missing data, model evaluation, model selection, or model sensitivity, and only eight studies described all four methodological approaches. 
The studies that published statistical software code relevant to their analyses provided it for either R or SAS only. Eighteen (29%) studies had a clear primary focus of informing clinical practice. The remaining 45 (71%) studies used observational data only to illustrate the application of statistical methodology , 79. This review provided a summary of how DTRs can be modelled and an overview of how observational data have been used to estimate optimal DTRs. There was substantial variation in the scope, intent, complexity, quality, and statistical methodology between the 63 included studies. DTR models are often necessary when formalising decisions about how best to treat chronic or progressive conditions to properly account for time-varying treatment confounding and mediation. A number of different statistical approaches can be used\u2014including IPW, Q-learning, the parametric G-formula, G-estimation, or targeted maximum likelihood/minimum-loss based estimation\u2014depending on the DTR model used and the nature of the research question. Almost all clinical studies used either IPW or the parametric G-formula methods, possibly because these methods are relatively well-established, less complex, and suited to simpler decision problems such as those encountered in HIV/AIDS treatment. Unsurprisingly, the included methodological studies were more diverse in the methods that they used and tended to detail model selection and sensitivity analyses less often. Encouragingly, this review found that many included studies dealt with clinically relevant but complicated time-to-event outcomes. Evaluation of dynamic treatment regimens was first described in 1987 by Robins , but most applications have appeared only in recent decades. Compared to randomised controlled trials, using observational data to estimate DTRs may allow researchers to both take advantage of the economics of using existing data and also evaluate a wider range of treatments. 
Despite this, the majority of included studies did not have a clinical focus. Of the clinical studies, most focused on HIV/AIDS, and analysed large datasets using either IPW or parametric G-formula methods to answer relatively simple questions. This result provides insight into the type and scale of resources, and research questions, that may give rise to feasible observational DTR studies. The majority of studies were methodological investigations and typically included a simplified real-world application only. Many of the included methodological articles involved methods and results that were based on complex estimating equations and/or Monte Carlo simulations which, although no doubt critical for the advancement of the DTR methodology, may be difficult for clinical readers to interpret. It is likely that user-friendly software would make implementing the complex methods easier for clinicians and methodologists alike. Although almost half of the methodological studies included some form of statistical software code related to their methods, which may encourage the application of complex DTR methods, in general this software is not readily usable by non-experts. Furthermore, many studies did not describe the real-world applications or the statistical methods and corresponding assumptions in detail, which may limit how the DTR methods and results are translated in practice. We posit that the limited number of clinically relevant examples of optimised DTRs using observational data is because of the need to satisfy three conditions necessary for estimating causal treatment effects: exchangeability, positivity, and consistency. These conditions, required for valid causal inference, cannot be verified from the data alone and require judgement on biological plausibility. To meet the exchangeability condition, explicit causal relationships must be considered by content experts to identify confounders and the confounder data must be available. 
Developing a causal model requires both clinical expertise and statistical knowledge to codify such expertise using the causal inference framework. Although the use of DAGs can streamline this process, it still requires substantial investment in learning and collaboration by both content experts and data analysts, particularly if multiple plausible causal models are developed to assess sensitivity of conclusions. Even when it is feasible to fully develop a plausible set of causal models, it is not guaranteed that confounder information will be available, particularly when working with retrospectively collected data or electronic health records, which are often designed around clinical practice rather than for research purposes. It is worth noting that fewer than 50% of studies described the model selection process in any way. To ensure causal effects can be estimated, the positivity condition must be met. This requires that all regimens of interest are followed by at least some patients for each potential combination of predictors and outcomes. Large clinical databases, and questions about non-rare medical conditions, are likely to be required for there to be sufficient numbers such that the positivity condition holds. We note that many of the clinical studies that we identified in this review used either very large EHR databases or data from large multinational collaborations, and focused on a relatively prevalent medical condition. Even with large clinical databases, structural factors such as clinical, regulatory, or reimbursement guidelines may completely prevent treatment sequences of interest (not to mention relevant patient histories) from being observed. The consistency condition requires that treatments, and therefore potential outcomes under treatments, are sufficiently well-defined, which may be a difficult condition to meet for conditions where there are many different treatment modalities. 
A related point is that in clinical areas with rapid and continual treatment innovation the clinical paradigm may change so rapidly that DTRs modelled using data from observational cohort studies or EHRs, with patient treatment histories over a long time period, are less relevant to informing clinical practice. For example, management of many cancers often involves several consecutive lines of treatment following disease progression and determining the optimal sequence of treatments is an open area of research in modern oncology. But new cancer treatments and changing clinical paradigms often dramatically change the treatment landscape, which results in substantial variation in clinical practice. Over time, treatments become less well-defined, and it becomes difficult to satisfy the consistency condition.Although we are satisfied that our scoping review provides a representative sample of the literature there are some limitations worth noting. Our exclusive focus on the PubMed database excludes any studies not indexed therein. We made this choice early on in the design process on the basis of our broad aims, the \u2018scoping\u2019 nature of our review, and also to simplify the review and make it as reproducible and transparent as possible. We note that searching the reference lists of the included PubMed articles served as a practical workaround of the limitation arising from using a single database. Further, the search strategy included only common phrases, and their variants, to capture both DTRs and observational data. There may be variants that we have missed, or there may be ad hoc implementations that use entirely different naming conventions or combinations thereof, although we note that the nomenclature concerning dynamic treatment regimens is relatively well-established in the literature.Using observational data to model DTRs is a modern and methodologically principled approach to evaluating dynamic treatment decisions. 
There is great potential in using DTR models with existing observational data to support dynamic treatment decisions that improve patient outcomes, particularly where the relevant clinical trial is not feasible. Yet the use of observational DTR studies to inform clinical practice has been relatively limited, primarily because the underlying conditions that are necessary for causal inference are difficult to satisfy. Developing new methods that enable these conditions to be satisfied may more broadly enable additional and more diverse observational DTR studies. Our review suggests that the currently available methods are most likely to find feasible applications for relatively simple dynamic clinical decisions, either for simple treatment sequences or \u2018when to treat\u2019 type questions, where there are numerous and rich clinical data, where treatments can be well-defined, in clinical areas with slowly evolving treatment paradigms, and where content experts and data analysts work in tight partnership. Additional file 1. Original scoping review protocol. Additional file 2. Extracted data for individual studies."}
+{"text": "Studies have shown that microRNA-133 (miR-133) plays a positive role in the growth of cardiac myocytes, the maintenance of cardiac homeostasis, and the recovery of cardiac function, which is of great significance for recovery from acute myocardial infarction. However, the delivery of miRNA to the site of action remains a challenge at present. The purpose of this study was to design an ideal carrier to facilitate the delivery of miR-133 to the infarct lesion for cardiac protection. A disease model was constructed by ligating the left anterior descending coronary artery of rats, and polyethylene glycol (PEG)-polylactic acid (PLA) nanoparticles modified with arginine-glycine-aspartic acid tripeptide (RGD) carrying miR-133 were injected via the tail vein. The effects of miR-133 were evaluated from multiple perspectives, including cardiac function, blood indexes, histopathology, and myocardial cell apoptosis. The results showed that RGD-PEG-PLA maintained a high level of distribution in the hearts of model rats, indicating the role of the carrier in targeting the heart infarction lesions. RGD-PEG-PLA/miR-133 alleviated cardiac histopathological changes, reduced the apoptosis of cardiomyocytes, and reduced the levels of factors associated with myocardial injury. Studies on the mechanism of miR-133 by immunohistochemistry and polymerase chain reaction demonstrated that the expression level of Sirtuin3 (SIRT3) was increased and that the expression of adenosine monophosphate-activated protein kinase (AMPK) decreased in myocardial tissue. In summary, the delivery of miR-133 by the RGD-PEG-PLA carrier can achieve cardiac lesion accumulation, thereby attenuating the damage to cardiac function and reducing the myocardial infarction area. The inhibition of cardiomyocyte apoptosis, inflammation, and oxidative stress plays a protective role in the heart. The mechanism may be related to the regulation of the SIRT3/AMPK pathway. 
The high morbidity and mortality of cardiovascular diseases pose a serious threat to the health of elderly people. Researchers have gradually turned their attention to another novel treatment, that is, repairing myocardial cells in the dying state of the marginal area after infarction with special noncoding RNA and proteins to reduce the scope of infarction. In the treatment of cardiovascular diseases, nanomaterials have shown attractive promise. Nanoparticles act as delivery systems, making it possible to provide treatments with drugs that are difficult to stabilize in body fluids, have poor solubility, or have short half-lives. Nanoparticles made of nontoxic, biodegradable polymers show promise for drug delivery, offering multiple advantages, including sustained release, controlled release, long action, targeting, higher loading, and high compliance. A Terminal-deoxynucleotidyl Transferase-Mediated Nick End Labeling (TUNEL) in situ detection kit was obtained from Abcam. A bicinchoninic acid (BCA) protein determination kit and sodium dodecyl sulphate-polyacrylamide gel (SDS-PAGE) kit were purchased from Solarbio Science & Technology Co., LTD. SIRT3 antibody was obtained from Proteintech. AMPK antibody was obtained from Bioworld. Pentobarbital sodium and Horseradish peroxidase (HRP)-labeled goat anti-rabbit immunoglobulin G (IgG) were obtained from Zs-Bio Co., LTD. Radioimmunoprecipitation assay (RIPA) lysis buffer was obtained from Beyotime Biotechnology Co., LTD. A Quant-iT\u2122 RiboGreen kit was obtained from Invitrogen Biotechnology Co., LTD. NHS-PEG3400-PLA2000 polymer was obtained from Xinqiao Biotechnology Co., LTD. MPEG2000-PLA2000 polymer was obtained from Daigang Biotechnology Co., LTD. 
Moreover, all other chemicals used in this study were of molecular biology grade and commercially available. Ninety male Wistar rats with body masses of 220\u2013260 g were supplied by the animal experiment center of the College of Pharmacy, Jilin University. The rats were provided with food and water under standard feeding conditions, including suitable humidity, good ventilation, and 12 h light/dark cycles. The Animal Ethics Committee of Jilin University approved all animal experiments described in this study. The nanoparticles were prepared by the water-in-oil-in-water (w/o/w) double-emulsion method. In brief, the polymers (w:w) were dissolved in 2 mL of dichloromethane (DCM), and then the spermine/miR-133 complex was added dropwise to the DCM mixture. The mixture was placed in an ice-water bath and sonicated for 60 s to emulsify it. Then, 20 mL of a 2.5% (w:v) polyvinyl alcohol aqueous solution was added and stirred, and the organic solvent was removed by rotary evaporation. The nanoparticles were centrifuged at 21,000\u00d7 g for 45 min and then washed three times with ultrapure water. To generate DIR-labeled nanoparticles, 1,1-dioctadecyl-3,3,3,3-tetramethylindotricarbocyanine iodide (DIR) was added to the oil phase. The particle size distribution and surface zeta potential of PEG-PLA/miRNA and RGD-PEG-PLA/miRNA were measured by dynamic light scattering in distilled water. Nanoparticles were imaged by transmission electron microscopy. The encapsulation efficiency of miRNA loading was tested with a Quant-iT\u2122 RiboGreen kit. The synthesis of the RGD-PEG-PLA conjugate was based on methods reported by other researchers. The AMI model was established by the permanent ligation of the left coronary artery. 
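Encapsulation efficiency in RiboGreen-type assays is typically computed from the total miRNA added versus the unencapsulated fraction detected in the supernatant. The sketch below illustrates that calculation; the function name and all quantities are illustrative, not the study's data.

```python
# Illustrative encapsulation-efficiency calculation for a RiboGreen-type
# assay; the numbers below are hypothetical, not the study's data.

def encapsulation_efficiency(total_mirna_ug, free_mirna_ug):
    """EE% = (total - unencapsulated) / total * 100."""
    if total_mirna_ug <= 0:
        raise ValueError("total miRNA must be positive")
    return (total_mirna_ug - free_mirna_ug) / total_mirna_ug * 100.0

# Example: 50 ug miRNA added to the formulation, 8 ug detected free
# in the supernatant after centrifugation.
ee = encapsulation_efficiency(50.0, 8.0)
print(f"EE = {ee:.1f}%")  # EE = 84.0%
```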
The rats were randomly divided into the following six groups (n = 6 per group): the sham group, the model group, the positive drug group, the miR-133 group, the PEG-PLA/miR-133 group, and the RGD-PEG-PLA/miR-133 group. For the biodistribution study, the spleen, liver, kidneys, heart, lungs, small intestine, large intestine, stomach, brain, fat tissue, and muscle were dissected. All samples were frozen at \u221280 \u00b0C until further processing. Using a previously reported method, deionized water (w/v) was added to each sample. All the tissues (except the brain) were homogenized at maximum speed for 10 s in a magnetic bead homogenizer. Then 50 \u03bcL of each tissue homogenate was added to a 96-well plate containing 10 \u03bcL of dimethylsulfoxide. The fluorescence intensity was read on a plate spectrophotometer at 750 nm excitation/780 nm emission wavelengths. Control samples were constructed by adding known quantities of DIR and vehicle to the tissue homogenates, and a calibration curve was plotted for each tissue type. All samples were measured in triplicate, and the readings were converted to ng DIR/g tissue according to each calibration curve. Rats were intravenously injected with RGD-PEG-PLA and PEG-PLA nanoparticles loaded with the near-infrared dye DIR. At 0.5, 4, 12, and 36 h after injection, the rats were anesthetized and euthanized. The electrocardiogram (ECG) of the rats was recorded on the seventh day after AMI, and the pathological Q wave numbers of leads I, aVL, and V1\u2013V6 were observed and recorded. Rats were anesthetized for echocardiography on the seventh day after AMI, using a small animal ultrasound instrument. After the heart rate stabilized, a high-frequency probe was placed on the front of the left chest for positioning, and a short-axis parasternal image was obtained at the level of the mitral valve tendon. 
M-mode echocardiography was used to detect the left ventricular end-diastolic diameter (LVEDD), left ventricular end-systolic diameter (LVESD), left ventricular posterior wall thickness at end-diastole (LVPWD), left ventricular posterior wall thickness at end-systole (LVPWS), left ventricular ejection fraction (LVEF), and left ventricular fractional shortening (LVFS). All reported values are averages of three consecutive cardiac cycle measurements. Seven days after AMI, the rats were anesthetized and blood was drawn from the abdominal aorta. A portion of the samples was centrifuged in a high-speed refrigerated centrifuge at 4000 r/min for 10 min, and the serum was divided into aliquots and stored at \u221280 \u00b0C for later use. According to the instructions of the electrochemiluminescence kits, the levels of lactate dehydrogenase (LDH), creatine kinase isoenzyme (CK-MB), and cardiac troponin T (cTnT) were measured. The activity of superoxide dismutase (SOD) in the reserved serum was determined by the xanthine oxidase method, the content of glutathione peroxidase (GSH-Px) was determined by colorimetry, and the content of malondialdehyde (MDA) was determined by the thiobarbituric acid method. The three kits were purchased from Jingkang Biological Engineering Co., LTD. The remaining samples were centrifuged at 3000 r/min for 15 min, and the serum was collected and stored in a refrigerator at \u221280 \u00b0C. Enzyme-linked immunosorbent assay (ELISA) was used to detect the contents of tumor necrosis factor-\u03b1 (TNF-\u03b1), interleukin-6 (IL-6), and myeloperoxidase-1 (MPO-1) by colorimetry. ELISA was also used to detect the contents of nitric oxide (NO) and endothelin-1 (ET-1) in serum, and ultraviolet spectrophotometry was used to determine the activity of serum nitric oxide synthase (NOS). Seven days after modeling, the left ventricular myocardial tissue was taken immediately after blood was drawn from the abdominal aorta. 
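The fractional shortening and ejection fraction indexes described earlier can be derived from the M-mode diameters. The sketch below uses the standard fractional shortening formula and the common Teichholz volume estimate; the paper does not state which volume formula was used, and the diameter values are hypothetical.

```python
def lvfs_percent(lvedd_mm, lvesd_mm):
    """Left ventricular fractional shortening (%) from M-mode diameters."""
    return (lvedd_mm - lvesd_mm) / lvedd_mm * 100.0

def teichholz_volume_ml(diameter_mm):
    """Teichholz estimate of LV volume (mL) from an internal diameter."""
    d_cm = diameter_mm / 10.0
    return 7.0 / (2.4 + d_cm) * d_cm ** 3

def lvef_percent(lvedd_mm, lvesd_mm):
    """Ejection fraction (%) from Teichholz end-diastolic/systolic volumes."""
    edv = teichholz_volume_ml(lvedd_mm)
    esv = teichholz_volume_ml(lvesd_mm)
    return (edv - esv) / edv * 100.0

# Hypothetical rat M-mode diameters: LVEDD = 7.0 mm, LVESD = 4.0 mm
print(round(lvfs_percent(7.0, 4.0), 1))  # 42.9
print(round(lvef_percent(7.0, 4.0), 1))
```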
After the tissue was washed three times with phosphate-buffered saline (PBS), fixed with 4% paraformaldehyde (pH 7.4), embedded in paraffin, and sectioned, the sections were dewaxed with xylene, rehydrated through a graded ethanol series, and rinsed with water. Then, hematoxylin-eosin (H&E) staining, conventional dehydration, clearing, and resin sealing were performed to observe the pathological changes of the myocardium. For TUNEL staining, the sections were prepared as above; terminal deoxynucleotidyl transferase was used to incorporate fluorescein-labeled deoxyuridine triphosphate, and apoptosis was observed under a 400\u00d7 light microscope. Six different fields of vision were randomly selected for each slice to calculate the myocardial cell apoptosis rate. For immunohistochemical staining, the steps of H&E staining as outlined above were followed, and the tissues were blocked with goat serum at room temperature for 30 min and incubated with primary antibodies against SIRT3 and AMPK at 4 \u00b0C overnight. The next day, after being washed with PBS, the tissues were incubated with HRP-labeled secondary antibody in the dark at room temperature for 60 min. After diaminobenzidine staining, hematoxylin was used for counterstaining. The stained sections were observed under a light microscope, and the images were analyzed with ImageJ software (http://imagej.nih.gov/ij/). The primer sequences were as follows: \u03b2-actin (189 bp) upstream primer 5\u2032-CCACCATGTACCCAGGCATT-3\u2032, downstream primer 5\u2032-CGGACTCATCGTACTCCTGC-3\u2032; Bcl-2 (90 bp) upstream primer 5\u2032-GATTGTGGCCTTCTTTGAGT-3\u2032, downstream primer 5\u2032-CACAGAGCGATGTTGTCC-3\u2032; Bax (85 bp) upstream primer 5\u2032-TGAGCTGACCTTGGAGCA-3\u2032, downstream primer 5\u2032-GTCCAGTTCATCGCCAAT-3\u2032. The PCR program was as follows: 95 \u00b0C for 5 min, followed by 40 cycles of 95 \u00b0C for 10 s and 60 \u00b0C for 30 s. 
\u03b2-actin was used as an internal reference, and the relative expression of mRNA was calculated according to the 2\u2212\u0394\u0394Ct method. The total RNA was extracted with an extraction kit after the cardiac tissue at the apex was chopped and milled, and the purity and integrity of the RNA were assessed by absorbance measurements and agarose gel electrophoresis. Cardiac tissue cells were collected in RIPA buffer containing protease inhibitor. After centrifugation, the concentration of total protein in the supernatant was determined with a BCA protein detection kit. Total protein (10\u2013100 \u00b5g) was separated by 10% SDS-PAGE electrophoresis and then transferred to polyvinylidene fluoride (PVDF) membranes. After being blocked in PBS containing 5% skim milk powder, a primary antibody was added and incubated overnight at 4 \u00b0C. After incubation with the HRP-labeled secondary antibody for 2 h, the membranes were washed with PBS containing 0.1% Tween-20. Bands on the PVDF membranes were detected and quantified, and measurements were repeated three times to determine the average. SPSS 20.0 software was used for statistical analysis. The experimental results were expressed as the mean \u00b1 standard deviation of at least three independent samples. Based on the assumption that the data showed a normal distribution, a one-way analysis of variance was performed to compare the differences between the groups, and p < 0.05 was considered statistically significant. To verify whether RGD-modified nanoparticles can effectively deliver miRNA to the heart, we analyzed DIR delivery by injecting nanoparticles into the caudal vein of rats. At 0.5 h, significant DIR enrichment was first detected in the kidney, liver, lungs, and spleen of the rats in both the RGD-PEG-PLA and PEG-PLA groups. Significant changes in the ECG were observed when AMI occurred (p < 0.01). LVESD, LVEDD, LVPWD, and LVPWS are the indexes used to evaluate left ventricular morphology. 
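The 2^-ΔΔCt relative quantification described earlier in this section can be sketched as follows; the Ct values are hypothetical illustrations, not the study's data.

```python
# Illustrative sketch of the 2^-ddCt relative quantification with
# beta-actin as the internal reference; Ct values are hypothetical.

def ddct_relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Return fold change of target gene vs. control group (2^-ddCt)."""
    dct_sample = ct_target - ct_ref             # normalize within sample
    dct_control = ct_target_ctrl - ct_ref_ctrl  # normalize within control
    ddct = dct_sample - dct_control
    return 2 ** (-ddct)

# Example: target Ct 24.0 vs reference Ct 18.0 in a treated sample;
# target Ct 26.0 vs reference Ct 18.0 in the control group.
fold = ddct_relative_expression(24.0, 18.0, 26.0, 18.0)
print(fold)  # 4.0 -> target expressed ~4-fold higher than control
```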
Compared with those of the sham group, the LVESD, LVEDD, LVPWD, and LVPWS of the model group were significantly higher (p < 0.05), indicating that the cardiac ejection function of rats decreased after AMI and that the modeling was successful. Compared with those of the model group, the LVESD, LVEDD, and LVPWS of the administered groups were significantly lower; the LVPWD of the administered groups was slightly lower, but this difference was not significant. Compared with those in the miR-133 group, the LVESD, LVEDD, and LVPWS in the positive drug group, the PEG-PLA/miR-133 group, and the RGD-PEG-PLA/miR-133 group were significantly lower. LVEDD was significantly different between the PEG-PLA/miR-133 group and the RGD-PEG-PLA/miR-133 group (p < 0.01). LVEF and LVFS are the main indexes used to evaluate left ventricular function: LVEF reflects the shortening ability of the left ventricular myocardial fibers, while LVFS reflects the relationship between stress and shortening. Compared with those of the model group, the LVEF and LVFS of the administered groups were significantly higher. Compared with the miR-133 group, the PEG-PLA/miR-133 group, the RGD-PEG-PLA/miR-133 group, and the positive drug group had more significant effects on LVEF and LVFS. Compared with those in the sham group, the levels of CK-MB, LDH, and cTnT in the model group were significantly higher (p < 0.01). Compared with those in the model group, the levels of CK-MB, LDH, and cTnT in the administered groups were significantly lower. Compared with those in the miR-133 group, the levels of CK-MB and cTnT in the PEG-PLA/miR-133 group, the RGD-PEG-PLA/miR-133 group, and the positive drug group were significantly lower, especially in the RGD-PEG-PLA/miR-133 group (p < 0.01). 
The serum levels of myocardial injury markers in AMI rats were significantly increased, and the levels were significantly decreased after administration, suggesting that miR-133 could alleviate myocardial injury in AMI rats. Cardiac-specific markers, namely cTnT, CK-MB, and LDH, were used to evaluate the apoptosis of myocardial cells mediated by mitochondrial injury. Enhanced oxidative stress has been confirmed to be a key contributor to increased myocardial injury after acute myocardial infarction, and mitochondrial oxidative stress markers were used to evaluate oxidative-stress-induced cellular apoptosis. Compared with the model group, the administered groups had significantly higher SOD and GSH-Px activities and lower MDA levels. Compared with the miR-133 group, the PEG-PLA/miR-133 group, the RGD-PEG-PLA/miR-133 group, and the positive drug group showed significantly higher SOD and GSH-Px activities and lower MDA levels (p < 0.01). The results indicated that administration improved the redox status of the serum following myocardial infarction. The inflammatory reaction of myocardial cells was assessed by detecting changes in the serum contents of TNF-\u03b1, IL-6, and MPO-1 in AMI model rats. Compared with those in the model group, the contents of TNF-\u03b1, IL-6, and MPO-1 in the administered groups were significantly lower. Compared with those in the miR-133 group, the contents of TNF-\u03b1, IL-6, and MPO-1 in the PEG-PLA/miR-133 group, the RGD-PEG-PLA/miR-133 group, and the positive drug group were significantly lower (p < 0.01), especially in the RGD-PEG-PLA/miR-133 group. These results showed that miR-133 administration suppressed the inflammation following myocardial infarction. Compared with those in the model group, the expression levels of NO and NOS in the administered groups were significantly higher, and ET-1 expression was significantly lower. 
Compared with those in the miR-133 group, the expression levels of NO and NOS in the PEG-PLA/miR-133 group, the RGD-PEG-PLA/miR-133 group, and the positive drug group were significantly higher, and the expression of ET-1 was significantly lower (p < 0.01). This indicated that the increase in endothelial active substances induced by myocardial infarction could be inhibited by miR-133 administration. The degree of vascular endothelial cell damage after myocardial infarction can be assessed by measuring the changes in NO, NOS, and ET-1 serum levels in AMI model rats. In the sham group, the volume and quality of the heart were the same as those of a normal heart, with a rosy color and no adhesion. After administration, SIRT3 mRNA expression increased and AMPK mRNA expression decreased (p < 0.05, p < 0.01). Compared with that in the miR-133 group, SIRT3 mRNA expression increased and AMPK mRNA expression decreased most significantly in the PEG-PLA/miR-133 group, the RGD-PEG-PLA/miR-133 group, and the positive drug group (p < 0.01). In the experiment, the blood supply to the heart from the left coronary artery of the rat was completely blocked. When myocardial infarction occurred, the ST segment increased significantly and pathological Q waves appeared on the ECG; the success of the disease model construction could thus be identified qualitatively, and the consistency of the initial conditions of the models in each group could be evaluated. The occurrence and development of AMI is related to the inflammatory response, vascular endothelial injury, oxidative stress, metabolic disorders, mitochondrial dysfunction, and so on. Typical cardiac markers such as cTnT, CK-MB, and LDH have been clinically used for the early diagnosis and postmortem detection of AMI. Studies have shown that SIRT3 plays a protective role in cardiovascular diseases. 
The main pathogenesis of myocardial infarction in humans is coronary atherosclerosis, in which the accumulation of lipids in the inner wall of the blood vessel causes stenosis and even occlusion of the lumen and an insufficient blood supply to the myocardium, ultimately leading to the onset of AMI. In summary, the passive targeting of heart infarction lesions by the EPR effect of nanoscale carriers was exploited in this experiment, and the biocompatibility and safety of the nanomaterials were comprehensively investigated. Nanoparticles composed of PEG-PLA were used as the carrier and modified with RGD to target heart infarction lesions. This study is the first to combine active and passive targeting strategies for delivering miRNA to the heart. In vivo biodistribution studies have shown that RGD modification gives nanoparticles the ability to accumulate in cardiac infarction lesions, indicating that RGD-PEG-PLA can be an ideal carrier to target the heart. The delivery of miR-133 to the heart of AMI rats by the carrier can improve cardiac function damage, reduce the myocardial infarction area, and exert cardiac protection by inhibiting apoptosis, inflammation, and oxidative stress. These effects may be related to the regulation of the SIRT3/AMPK pathway. RGD was used to modify the carrier, and the combination of RGD targeting and the EPR effect of the nanoparticles leads to higher levels of miR-133 in the myocardial infarction area."}
+{"text": "Linear inverted pendulum model (LIPM) is an effective and widely used simplified model for biped robots. However, LIPM includes only the single support phase (SSP) and ignores the double support phase (DSP). In this situation, the acceleration of the center of mass (CoM) is discontinuous at the moment of leg exchange, leading to a negative impact on walking stability. If the DSP is added to the walking cycle, the acceleration of the CoM will be smoother and the walking stability of the biped will be improved. In this paper, a linear pendulum model (LPM) for the DSP is proposed, which is similar to LIPM for the SSP. LPM has similar characteristics to LIPM. The dynamic equation of LPM is also linear, and its analytical solution can be obtained. This study also proposes different trajectory-planning methods for different situations, such as periodic walking, adjusting walking speed, disturbed state recovery, and walking terrain-blind. These methods have less computation and can plan trajectory in real time. Simulation results verify the effectiveness of proposed methods and that the biped robot can walk stably and flexibly when combining LIPM and LPM. Compared with other types of robots, humanoid robots have good adaptability to the environment, stronger obstacle avoidance ability, and a smaller moving blind area, which has attracted the attention and in-depth research of scholars ,6,7,8,9.There are many methods for gait planning of biped robots. These methods could be divided into two classes. The first uses the accurate information of dynamical parameters to generate walking patterns. Joint angle trajectories ,12,13,14The other class is based on a simplified model to generate walking patterns. Inverted pendulum ,20 is wiFrom an application perspective, when the biped robot is walking outdoors, due to the unstructured ground environment, the robot is required to have the ability of real-time gait generation according to the current environment. 
However, the more accurate the model is, the more computation is needed, so real-time gait planning may become very difficult. Therefore, a simplified model is a feasible and very useful tool for real-time gait planning. At the other end of the spectrum, little attention has been paid to the DSP. Many gait-planning methods consider only the SSP and ignore the DSP, or assume that the DSP is instantaneous. In this situation, the center of pressure (CoP) or zero-moment point (ZMP) needs to move instantaneously from one foot to the other at leg exchange. To overcome the shortcomings of models without the DSP, some scholars, including Kajita et al. and Shibuya et al., introduced the DSP into gait planning. In our previous work, the two-point-foot walking model was proposed and a planar walking pattern was designed based on it. In order to meet the requirement of real-time trajectory generation in complex environments, a gait-planning method should provide both planning simplicity and walking stability. In this paper, LIPM and LPM are used to plan the trajectories of the SSP and the DSP, respectively. The dynamic equations of LIPM and LPM are linear, so they have analytic solutions, and trajectory planning needs only a small amount of computation. Through dynamic analysis of the two pendulum models and their ZMP, the stability of the gait can be guaranteed. Moreover, LPM is well-compatible with LIPM: not only does the trajectory of the CoM have an analytical solution, but the displacement of the CoM in the SSP and DSP also has a very intuitive geometric representation. The novelty of the paper is a trajectory-planning method using LIPM and LPM that achieves flexible walking for biped robots. With the proposed method, biped robots can generate trajectories online for several cases, such as periodic walking, changing walking speed, disturbance recovery, and walking on uneven ground. The main difference between this paper and other papers is the trajectory-planning method in the double support phase. 
In some papers, the trajectory of the CoM is planned by LIPM in the SSP and is designed as a polynomial in the DSP. In this paper, the trajectory of the CoM is planned by LIPM and LPM in the SSP and the DSP, respectively. According to the characteristics of LIPM and LPM, some geometric relations can ensure that the acceleration of the CoM is continuous at the switch between the SSP and DSP. Moreover, by regulating the DSP, the biped robot can walk flexibly. The main work of this paper is as follows: firstly, LIPM and LPM are introduced, their dynamic equations are analyzed, and their analytical solutions are given. Secondly, the walking stability of the robot during the SSP and DSP is analyzed. Then, several trajectory-planning methods are proposed for different situations. At last, simulations are carried out, and the results show that the biped robot can walk stably and flexibly based on the proposed gait-planning method, which validates its effectiveness. To simplify the biped model, we consider the biped robot as a concentrated mass with massless legs, and we only consider the motion of the sagittal plane. LIPM and LPM are very simple and their dynamic equations have analytical solutions. We use LIPM to plan the CoM's trajectory for the SSP and LPM for the DSP. In this section, the dynamic equations of LIPM and LPM and their analytical solutions are discussed. When the CoM of the robot moves along a horizontal straight line under the force of the massless leg, we call it a linear inverted pendulum. When the CoM moves along a horizontal straight line, its resultant force in the vertical direction is zero. Therefore, the dynamic differential equation of the CoM in the horizontal direction is x'' = (g/z_c) x (Equation (1)), where g is the gravitational acceleration and z_c is the constant height of the CoM. Its analytical solution is x(t) = x(0) cosh(t/T_c) + T_c x'(0) sinh(t/T_c) (Equation (2)) and x'(t) = (x(0)/T_c) sinh(t/T_c) + x'(0) cosh(t/T_c) (Equation (3)), where T_c = sqrt(z_c/g) is the time constant. Multiplying both sides of Equation (1) by x' and integrating yields the orbital energy E = x'^2/2 \u2212 (g/2z_c) x^2, a quantity that is conserved during the SSP. In many situations, the time in which the CoM moves from one point to another is required. 
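The LIPM relations discussed above can be checked numerically, assuming the textbook form x'' = (g/z_c)x (the paper's displayed equations are not reproduced in this extract); g, z_c, and the initial state below are illustrative values.

```python
import math

G = 9.81                 # gravitational acceleration (m/s^2)
ZC = 0.8                 # constant CoM height for LIPM (m), illustrative
TC = math.sqrt(ZC / G)   # LIPM time constant

def lipm_state(x0, v0, t):
    """Analytical LIPM solution: CoM position and velocity at time t."""
    x = x0 * math.cosh(t / TC) + TC * v0 * math.sinh(t / TC)
    v = (x0 / TC) * math.sinh(t / TC) + v0 * math.cosh(t / TC)
    return x, v

def orbital_energy(x, v):
    """Conserved quantity of LIPM: E = v^2/2 - (g / 2 z_c) x^2."""
    return 0.5 * v * v - G / (2.0 * ZC) * x * x

# The orbital energy is the same at every instant along the trajectory.
x0, v0 = -0.2, 0.9
e0 = orbital_energy(x0, v0)
x1, v1 = lipm_state(x0, v0, 0.3)
print(abs(orbital_energy(x1, v1) - e0) < 1e-9)  # True
```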
By doing some algebraic operations on Equations (2) and (3), we can get formulas for calculating the transfer time, for example t = T_c ln((x_f + T_c x'_f)/(x_0 + T_c x'_0)) (Equation (5)) and t = T_c ln((x_0 \u2212 T_c x'_0)/(x_f \u2212 T_c x'_f)) (Equation (6)). The results of Equations (5) and (6) are the same, unless one of them is singular because its numerator or denominator is zero. If the CoM moves along an oblique line, the dynamics of the CoM for LIPM in the X-direction can still be derived, and it is found that the resulting equation of motion (8) is independent of the slope of the constraint line when the CoM moves along the oblique line under the push force f. During the DSP, both feet of the robot are in contact with the ground, so it can be considered that two kick forces are applied to the CoM. In this situation, the robot is modeled as a pendulum; when the CoM of the pendulum moves along a constraint line, it becomes a Linear Pendulum Model. Now suppose that there is a virtual suspension point above the CoM of the robot. A massless prismatic joint is used to connect the virtual suspension point and the CoM, and the pull force of the prismatic joint on the CoM equals the resultant of the two contact forces. The local coordinate system is established with the virtual suspension point as the origin. The dynamic equation of LPM is x'' = \u2212(g/z_v) x (Equations (9) and (10)), where z_v is the distance from the virtual suspension point to the constraint line; it looks similar to Equation (1), but the sign of the right-hand side is opposite. The solution of the differential Equation (9) or (10) is x(t) = x(0) cos(wt) + (x'(0)/w) sin(wt), with w = sqrt(g/z_v) (Equation (11)). Similar to LIPM, LPM also maintains the conservation of orbital energy: multiplying both sides of (10) by x' and integrating yields E = x'^2/2 + (g/2z_v) x^2 (Equation (12)), a quantity that is conserved during the DSP. Sometimes, the time duration in which the CoM of the LPM moves from one point to another is required. Due to the periodic motion of the LPM, there are infinitely many solutions for the transfer time; we only want the time at which the CoM reaches the final state for the first time. 
By doing some algebraic operations on Equations (11) and (12), the transfer time can be obtained. If the CoM of the pendulum moves along an oblique line, the dynamics of the CoM for LPM in the X-direction can also be derived, and it is found that the resulting equation of motion (16) is likewise independent of the slope of the constraint line when the CoM moves along the constraint line under the pull force. This section discusses the walking stability of LIPM and LPM. The ZMP stability criterion is chosen in this paper: when the ZMP/CoP remains in the support polygon, it can be ensured that the robot's feet will not turn over. During the SSP, the CoM is pushed by the prismatic joint; since it is assumed that there is no torque at the pivot, the CoP is the support point. During the DSP, two forces act on the CoM through both feet. According to the LIPM and LPM introduced above, if a certain geometric relationship is satisfied at the moment of the switch between the SSP and DSP, the accelerations of the CoM of LIPM and LPM at the transition can be guaranteed to be equal. Suppose the robot walks from left to right using LIPM in the SSP and LPM in the DSP. Proposition 1. When the robot walks using LIPM and LPM in the SSP and DSP, respectively, the virtual suspension point of LPM and the support point of LIPM are symmetrical. The proposition can be proved by contradiction. Once the step length is determined, the distances that the CoM travels in the SSP and DSP are related to the height parameters of the two models. The durations of the SSP and DSP are related to the walking speeds, and the walking speed of the biped can be adjusted by changing the touchdown time of the swing foot. 
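The LPM relations above can likewise be checked numerically, assuming the pendulum form x'' = -(g/z_v)x (the displayed equations are not reproduced in this extract); z_v and the initial DSP state below are illustrative values.

```python
import math

G = 9.81                # gravitational acceleration (m/s^2)
ZV = 0.8                # suspension-point-to-constraint-line distance (m)
W = math.sqrt(G / ZV)   # LPM angular frequency

def lpm_state(x0, v0, t):
    """Analytical LPM solution: CoM position and velocity at time t."""
    x = x0 * math.cos(W * t) + (v0 / W) * math.sin(W * t)
    v = -x0 * W * math.sin(W * t) + v0 * math.cos(W * t)
    return x, v

def lpm_orbital_energy(x, v):
    """Conserved quantity of LPM: E = v^2/2 + (g / 2 z_v) x^2."""
    return 0.5 * v * v + G / (2.0 * ZV) * x * x

# Energy conservation along a DSP trajectory:
x0, v0 = -0.05, 0.4
e0 = lpm_orbital_energy(x0, v0)
x1, v1 = lpm_state(x0, v0, 0.1)
print(abs(lpm_orbital_energy(x1, v1) - e0) < 1e-9)  # True
```

Note the sign flip relative to LIPM: the LPM energy is a sum of kinetic and potential-like terms, so the DSP motion is oscillatory rather than hyperbolic.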
Now we quantitatively analyze the touchdown timing of the swing foot when the robot's walking speed is adjusted. The orbital energies of LIPM in two adjacent steps are defined in Equations (19) and (20). According to the conservation of orbital energy of LPM during the DSP, Equation (21) holds. Substituting (19) and (20) into (21), introducing auxiliary variables, and simplifying step by step (Equations (22)\u2013(27)) yields the touchdown time of the swing foot (Equation (28)). Once the step length L and the height parameters of the two models are specified, the touchdown time can be computed directly. As an example, let L = 0.4 m and let the current step begin at the start of the SSP for periodic walking with an apex velocity of 0.3 m/s, i.e., the velocity at which the CoM passes over the supporting point of LIPM. Through the adjustment of the DSP, the apex velocity of the SSP in the next step becomes 0.35 m/s. Sometimes, the swing foot lands on the ground at an unexpected time or in an unexpected foot placement. In these situations, the CoM enters the DSP with an unexpected state, and the trajectory of the CoM during the DSP should be adjusted in real time to restore the orbital energy of the next SSP; for example, with L = 0.4 m and an apex velocity of 0.3 m/s, the disturbed state can be recovered through the DSP. When a biped robot walks from the laboratory to an outdoor environment, the walking surface is no longer flat. If the biped is not equipped with a visual sensing system, it cannot obtain the height of a new landing point predictively; however, the height of the new landing point can be obtained from the joint sensors at the moment the robot's foot lands on the ground. In this situation, we also need to replan the trajectory of the CoM during the DSP. Auxiliary points are constructed geometrically: a point D is chosen over the landing point such that the velocities at point D and at the target point B are the same in the X-direction, and the CoM is steered from D to B. When the biped walks on uneven ground, its motion in the X-direction is the same as when walking on flat ground; the difference is the motion in the Z-direction. 
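The energy bookkeeping behind the speed change described above (apex velocity 0.3 m/s to 0.35 m/s) can be sketched as follows. This is only the apex-energy relation E = v_apex^2/2 at x = 0, not a reproduction of the paper's Equations (19) through (28).

```python
# Sketch of the orbital-energy bookkeeping behind the speed change
# described above (0.3 m/s -> 0.35 m/s apex velocity). At the apex the
# CoM is over the support point (x = 0), so E = v_apex^2 / 2.

def apex_orbital_energy(v_apex):
    """LIPM orbital energy (per unit mass) for an apex crossing at x = 0."""
    return 0.5 * v_apex ** 2

e_current = apex_orbital_energy(0.30)
e_next = apex_orbital_energy(0.35)

# The DSP must change the CoM's orbital energy by this amount:
delta_e = e_next - e_current
print(round(delta_e, 5))  # 0.01625
```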
For example, the biped robot may be supposed to walk on uneven ground. In this section, we have proposed several trajectory-planning methods for the CoM, i.e., periodic walking, adjusting walking speed, recovery from unexpected landing, and walking on uneven ground. In the following, simulation results of the proposed trajectory-planning method using LIPM and LPM are illustrated. To demonstrate the implementation of the research, simulations are produced in a physical scenario. We developed the simulation platform in Matlab Simscape. The simulation model is built according to the prototype developed by our laboratory: the robot's height is 1.34 m, its total mass is about 40 kg, and it has 23 degrees of freedom. In this paper, the contact model in the normal direction between the feet and the ground is assumed to be a distributed, nonlinear spring-damper model. In the simulation platform, a proportion-integration-differentiation (PID) controller is used for each joint. In order to reduce the adverse impact of collision, the velocity of the swing ankle is designed to be zero at landing, and the desired trajectory of the swing ankle is designed accordingly. In the first simulation, the biped starts to walk from standing still on level ground with step length L = 0.25 m. At t = 1.0 s, it starts to walk and increases the walking speed gradually over the next three steps. From the fourth step, it starts cyclic walking. Then, at the seventh step, its foot lands on the ground early with a step length of 0.2 m; it returns to the cyclic gait in the next step and goes ahead until the eleventh step. The second simulation is to verify the terrain-blind walking ability of the proposed method. 
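The zero-landing-velocity property of the swing ankle mentioned above can be illustrated with a generic cycloidal profile. The paper's actual ankle trajectory is not reproduced in this extract, so the profile, duration, and step length below are illustrative assumptions that merely share the stated property: zero velocity at lift-off and at touchdown.

```python
import math

# Generic cycloidal swing profile with zero start/end velocity; this is
# an illustrative stand-in, not the paper's actual ankle trajectory.

def cycloidal_swing_x(t, t_total, stride):
    """Horizontal ankle position: covers `stride` over [0, t_total]."""
    s = t / t_total
    return stride * (s - math.sin(2.0 * math.pi * s) / (2.0 * math.pi))

def cycloidal_swing_xdot(t, t_total, stride):
    """Time derivative of the profile above; zero at s = 0 and s = 1."""
    s = t / t_total
    return stride / t_total * (1.0 - math.cos(2.0 * math.pi * s))

T, L = 0.5, 0.25   # swing duration (s) and step length (m), illustrative
print(round(cycloidal_swing_xdot(T, T, L), 9))  # 0.0 -> zero landing velocity
```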
Similar to the first simulation, the biped robot stands still at the beginning and then speeds up to a cyclic walking gait. From the fifth to the eighth step, the ground is no longer flat; the height differences between two adjacent foot locations are 0.03 m, 0.02 m, 0.01 m, and 0.01 m, respectively. The results are similar to those of the first simulation. In this paper, we proposed a trajectory-planning method that uses LIPM for the SSP and LPM for the DSP. The dynamic equations of these two models have analytical solutions, so it is very easy to plan the CoM trajectory. The walking stability of LIPM and LPM is also discussed; through dynamics analysis, the walking stability of a planned trajectory can be ensured. Using the proposed method, the trajectory of a biped robot can be generated in real time. Several trajectory-planning methods are presented. A periodic trajectory can be generated if some walking parameters are specified, such as the step length, the height parameters for LIPM and LPM, and the apex velocity. We can also change the walking speed of the robot with a fixed step length by adjusting the virtual suspension point of LPM. Disturbed by interference, the swing leg of the biped may land at an unexpected time or on an unexpected foot placement; through the adjustment of the DSP, the state of the biped can return to its original planned trajectory. We also present a trajectory-planning method for walking on uneven ground with unknown height: the horizontal motion is similar to the flat-ground case, with a different motion in the vertical direction. At last, simulations are carried out. In both simulations, the biped robot can walk stably using our proposed method, which verifies its effectiveness."}
+{"text": "An increasing number of laryngotracheal complications in mechanically ventilated COVID-19 patients has been reported in the last few months. Many etiopathogenetic hypotheses were proposed but no clear explanation of these complications was identified. In this paper we evaluated the possibility that the tracheal mucosa could be a high viral replication site that could weaken the epithelium itself.Subjects for the COVID-19 group and the control group were selected retrospectively according to specific criteria. Patients\u2019 basic and clinical data were recorded and analyzed. Tracheal samples of both groups were collected during surgical tracheostomies and then analyzed from a histological and genetic-transcriptional point of view.Four COVID-19 patients were enrolled in this study and compared with four non-COVID-19 patients. No laryngotracheal complications were identified in both groups. The SARS-CoV-2 was detected in one out of four COVID-19 samples. A subepithelial inflammatory lymphomonocyte infiltrate was observed in all patients but two cases of the COVID-19 group showed vasculitis of small subepithelial vessels associated with foci of coagulative necrosis. Two gene sets (HALLMARK_INFLAMMATORY_RESPONSE and HALLMARK_ESTROGEN_RESPONSE_LATE) were significantly deregulated in COVID-19 patients compared to the control group.The altered inflammatory response of the COVID-19 patients could be another possible explanation of the increasing number of laryngotracheal complications. The coronavirus disease 2019 (COVID-19) outbreak has led to a significant and unprecedent increase in laryngotracheal complications and their potential life-threatening sequelae in patients subjected to invasive mechanical ventilation . Many etAnother possible cause of laryngotracheal lesions could be the high viral replication within the laryngotracheal mucosa which could weaken the epithelium itself. 
In fact, SARS-CoV-2 particles were observed in tracheal epithelial cells and within the extracellular mucus in the tracheal lumen of trachea samples taken during autopsies. In this work, we investigated this last etiopathogenetic hypothesis by performing a histological and genetic-transcriptional analysis of tracheal samples taken during surgical tracheostomies in critically ill COVID-19 patients subjected to invasive mechanical ventilation and comparing them with tracheal samples taken from non-COVID-19 patients. Subjects for the COVID-19 group and the control group (non-COVID-19 patients) were selected retrospectively according to the following criteria: age from 18 to 75\u2009years; admitted to the Intensive Care Units (ICU) of our tertiary referral hospital between November 1 and December 31, 2020 and requiring invasive mechanical ventilation for Acute Respiratory Distress Syndrome (ARDS) caused by SARS-CoV-2 (COVID-19 group) or for other pathologies (control group); SARS-CoV-2 detected in nasopharyngeal/oropharyngeal swabs (COVID-19 group) or not detected (control group); and subjected to open surgical tracheostomy, in which a small anterior portion of one or two tracheal rings is removed and submitted for analysis. Patients\u2019 basic and clinical data such as age, sex, COVID-19 status, comorbidities, duration of invasive mechanical ventilation with oro-tracheal tubes before open surgical tracheostomy, surgical complications and pharmacological treatments were recorded and analyzed. Data on comorbidities were collected using the Adult Comorbidity Evaluation 27 index (ACE-27). 
One paraffin-embedded inclusion was obtained from each biopsy; three-micrometer-thick sections were cut from each sample and stained with Hematoxylin\u2013Eosin. For immunohistochemistry, three-micrometer-thick sections were cut from each sample, dewaxed, pretreated with cell conditioner at 95\u00b0C for 32\u2009min with ULTRA CC1 ready-to-use solution, and thereafter incubated with anti-SARS Nucleocapsid Protein Rabbit Polyclonal antibody. The antibody\u2013antigen binding was detected using the OptiView DAB IHC Detection kit. Slides were then counterstained with Hematoxylin II and Bluing Reagent for 8\u2009min. Further three-micrometer-thick sections cut from each sample were stained with ready-to-use CONFIRM anti-CD3 (2GV6) Rabbit Monoclonal Primary Antibody, CONFIRM anti-CD20 (L26) Mouse Monoclonal Primary Antibody, CONFIRM anti-CD4 (SP35) Mouse Monoclonal Primary Antibody, CONFIRM anti-CD8 (SP57) Rabbit Monoclonal Primary Antibody, CONFIRM anti-CD68 (KP-1) Mouse Monoclonal Primary Antibody, and CONFIRM anti-CD34 (QBEnd/10) Mouse Monoclonal Primary Antibody. The antibody\u2013antigen binding was detected using the ultraView Universal DAB Detection Kit. Staining was done on an automated IHC/ISH slide staining system. Four unstained 10\u2009\u03bcm-thick formalin-fixed paraffin-embedded sections were used for RNA isolation using the RNeasy FFPE kit. RNA quality was tested by spectrophotometry. About 150\u2009ng of RNA were used in an RT-PCR assay to detect SARS-CoV-2 using the Easy SARS-CoV-2 WE kit. The assay is designed to target the viral nucleocapsid (N) and RNA-dependent RNA Polymerase (RdRp) genes. Viral assays were run in duplicate. A sample was deemed positive when at least one of the targets was amplified, as suggested by the manufacturer. For the gene expression assay, about 150\u2009ng of RNA were hybridized at 65\u00b0C for 21\u2009h with capture and reporter probes of the Human Host Response panel. 
All procedures were performed following the manufacturer\u2019s suggestions. Raw expression counts were normalized using the Advanced Analysis module of the nSolver software v.4.0. Low-count genes (raw numbers below 20 counts) were filtered out, and normalized gene expression levels were log2 transformed. Differentially expressed genes (DEG) between COVID-19 cases and controls were computed by a linear model using control samples as baseline, following the procedures of the Advanced Analysis module of the nSolver software. P values were adjusted with the Benjamini-Hochberg method, and false discovery rates (FDR) below 0.05 were considered significant. The ranked gene list was used for the gene set enrichment analysis (GSEA) following the procedures of the clusterProfiler Bioconductor package v.3.13. In detail, the Hallmark collection was used as the reference database, unless otherwise specified. From November 1 to December 31, 2020, 62 patients were admitted to the COVID-19-dedicated ICU of our hospital. Twenty patients were referred for tracheostomy and the \u201ctracheo-team\u201d decided to perform an open surgical tracheostomy in four of them. All four patients were enrolled in this study and all of them had a positive PCR test result for SARS-CoV-2 performed the day before the surgical procedure. Four control patients matched for age and sex were selected according to our criteria in the aforementioned time frame. Patients\u2019 basic and clinical data are reported. A predominantly subepithelial inflammatory lymphomonocyte infiltrate was observed in all COVID-19 and non-COVID-19 patients, associated with epithelial erosion. 
Two cases in the COVID-19 group (case #2 and #4) showed evident lymphocytic vasculitis of small subepithelial vessels associated with foci of coagulative necrosis. SARS-CoV-2 was detected in one out of four COVID-19 samples (case #4) by the RT-PCR assay, whereas all tracheal samples were negative at immunohistochemical detection with anti-SARS Nucleocapsid Protein. After filtering out low-count genes, 664 transcripts were considered for further analyses. Compared to the control group, COVID-19 samples showed marked gene expression changes with a trend toward gene down-regulation; in fact, a statistically significant difference was identified in 332 out of 664 genes. The GSEA showed that two gene sets were significantly deregulated in COVID-19 patients. In detail, the HALLMARK_INFLAMMATORY_RESPONSE was activated (enrichment score = 1.91) and the HALLMARK_ESTROGEN_RESPONSE_LATE was suppressed. Cell-type analysis indicated enrichment of M2 macrophages (p\u2009=\u20090.007), osteoclast-like cells (p\u2009=\u20090.01) and polymorphonuclear neutrophils, which is coherent with an early phase of immune response. It has been widely discussed that, both in patients who eventually died of Severe Adult Respiratory Syndrome (SARS) and in animal models, extensive lung damage is associated with high initial viral loads, increased inflammatory monocyte/macrophage accumulation in the lungs and elevated serum proinflammatory cytokines. Monocytes and macrophages can be infected via ACE2-independent pathways and through phagocytosis of virus-containing apoptotic bodies. SARS-CoV-2 can effectively suppress the anti-viral IFN response in monocytes and macrophages. Upon infection, monocytes migrate to tissues where they become infected resident macrophages, allowing viruses to spread through all organs and tissues. 
The SARS-CoV-2-infected monocytes and macrophages can produce large amounts of numerous types of pro-inflammatory cytokines and chemokines, which contribute to local tissue inflammation and to the dangerous systemic inflammatory response known as the cytokine storm. This study has some limitations. The first is the small number of open surgical tracheostomies performed, due to the fact that the percutaneous technique has several advantages, especially in terms of aerosolization. The second relates to its retrospective and single-center nature. Third, we were not able to identify the viral genome in three tracheal samples, for the aforementioned reasons. For all these issues we were not able to draw any definitive conclusions. Nevertheless, some insights can be taken from these data, especially regarding the gene expression alterations. Even without identification of the SARS-CoV-2 genome or prominent histological differences between the two groups, gene expression is clearly altered in the COVID-19 group, meaning that a different inflammatory response is taking place in these patients. However, it must be considered that the control group was not matched for the main pathology that had caused the ICU hospitalization. In particular, no patient in the control group suffered from ARDS, so they may not have had an ongoing inflammatory process in the respiratory tract. Certainly, other studies with larger and adequate sample sizes are needed to confirm these data, but this could be another piece in the composition of the difficult puzzle of laryngotracheal complications in COVID-19 patients. In conclusion, we cannot confirm that the trachea is a site of high viral replication. However, the tissue samples of the COVID-19 group showed a significant alteration of gene expression in two gene sets (activation of the HALLMARK_INFLAMMATORY_RESPONSE and suppression of the HALLMARK_ESTROGEN_RESPONSE_LATE) compared to the control group, meaning that the inflammatory response of the COVID-19 patients is markedly different. 
Further studies are warranted to investigate these aspects.The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding authors.This study was approved by the Local Ethics Committee on June 24, 2021. Written informed consent to collect deidentified data was obtained from all patients. This study followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline.All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by GiF, AP, AMP, and LB. The first draft of the manuscript was written by GiF and AP and all authors commented on previous versions of the manuscript. Review and editing were performed by GiF, AP, MP, ID, and LB. The methodology of study was supervised by GaF and FG. All authors read and approved the final manuscript.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."}
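The differential-expression workflow in the study above adjusts P values with the Benjamini-Hochberg method and calls FDR below 0.05 significant. A minimal sketch of that adjustment (the p-values are illustrative, not the study's data):

```python
def benjamini_hochberg(pvalues):
    """Benjamini-Hochberg adjusted p-values (false discovery rates).

    Each raw p-value p_(i) (rank i of m when sorted ascending) is scaled
    by m/i; a running minimum taken from the largest rank downward
    enforces monotonicity, and values are capped at 1.
    """
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):      # walk from the largest p to the smallest
        i = order[rank - 1]
        running_min = min(running_min, pvalues[i] * m / rank)
        adjusted[i] = min(running_min, 1.0)
    return adjusted

# Genes with adjusted p (FDR) below 0.05 would be called differentially expressed.
fdr = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.9])
significant = [p < 0.05 for p in fdr]
```

Note how three neighboring raw p-values (0.039, 0.041, 0.042) collapse to the same adjusted value because of the monotonicity step, and none of them survive the 0.05 FDR cutoff.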
+{"text": "Dear Editor, We sincerely thank Dr Guzzi for his interest in our case report and would like to take this opportunity to make some clarifications as follows: The main concern raised by Dr Guzzi was that the serum lead level alone does not adequately reflect the total body burden of lead, and that the whole-blood lead level in conjunction with its urinary levels is the primary measure of lead exposure in humans. We fully agree with Dr Guzzi based on the available literature on this subject. However, one month before the referral, our patient had been diagnosed with lead poisoning based only on the serum lead level and had undergone chelation therapy in another tertiary center by a neurologist. Shortly after the completion of treatment, he came to our neuro\u2013ophthalmology clinic for the evaluation of persistent blurred vision in both eyes. Therefore, we could only rely on his previous clinical and paraclinical documents. In the letter written by Dr Guzzi, the term \u201cpapilledema\" was used for the description of our patient's ocular condition. However, as we mentioned in our article, papilledema is defined as optic disc edema due to increased intracranial pressure (ICP) and should be differentiated from papillitis. The lumbar puncture in this patient showed that ICP was within normal limits, which ruled out papilledema. Therefore, we considered the condition of the patient to be bilateral hemorrhagic optic disc swelling. This is an open access journal, and articles are distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License, which allows others to remix, tweak, and build upon the work non-commercially, as long as appropriate credit is given and the new creations are licensed under the identical terms."}
+{"text": "Influenza vaccines have traditionally been tested in naive mice and ferrets. However, humans are first exposed to influenza viruses within the first few years of their lives. Therefore, there is a pressing need to test influenza virus vaccines in animal models that have been previously exposed to influenza viruses before being vaccinated. In this study, previously described H2 computationally optimized broadly reactive antigen (COBRA) hemagglutinin (HA) vaccines (Z1 and Z5) were tested in influenza virus \u201cpreimmune\u201d ferret models. Ferrets were infected with historical, seasonal influenza viruses to establish preimmunity. These preimmune ferrets were then vaccinated with either COBRA H2 HA recombinant proteins or wild-type H2 HA recombinant proteins in a prime-boost regimen. Sets of naive preimmune or nonpreimmune ferrets were also vaccinated to control for the effects of the multiple different preimmunities. All of the ferrets were then challenged with a swine H2N3 influenza virus. Preexisting immune responses in the ferrets influenced the recombinant H2 HA-elicited antibodies following vaccination, as measured by hemagglutination inhibition (HAI) and classical neutralization assays. Having both H3N2 and H1N1 immunological memory, regardless of the order of exposure, significantly decreased viral nasal wash titers and completely protected all ferrets from both morbidity and mortality, including the mock-vaccinated ferrets in the group. While the vast majority of the preimmune ferrets were protected from both morbidity and mortality across all of the different preimmunities, the Z1 COBRA HA-vaccinated ferrets had significantly higher antibody titers and recognized the highest number of H2 influenza viruses in a classical neutralization assay compared to the other H2 HA vaccines. IMPORTANCE H1N1 and H3N2 influenza viruses have cocirculated in the human population since 1977. 
Nearly every human alive today has antibodies and memory B and T cells against these two subtypes of influenza viruses. H2N2 influenza viruses caused the 1957 global pandemic, and people born after 1968 have never been exposed to H2 influenza viruses. It is quite likely that a future H2 influenza virus could transmit within the human population and start a new global pandemic, since the majority of people alive today are immunologically naive to viruses of this subtype. Therefore, an effective vaccine for H2 influenza viruses should be tested in an animal model with previous exposure to influenza viruses that have circulated in humans. Ferrets were infected with historical influenza A viruses to more accurately mimic the immune responses in people who have preexisting immune responses to seasonal influenza viruses. In this study, preimmune ferrets were vaccinated with wild-type (WT) and COBRA H2 recombinant HA proteins in order to examine the effects that preexisting immunity to seasonal human influenza viruses has on the elicitation of broadly cross-reactive antibodies from heterologous vaccination. The 1957 \u201cAsian Influenza\u201d pandemic was caused by an H2N2 influenza virus, resulting in an estimated one to two million deaths worldwide. H2 influenza viruses have not been as extensively studied as other influenza A virus subtypes, such as H1, H3, H5, or H7, although H2 influenza viruses have been isolated numerous times from wild avian species and domestic poultry. The goal of this study was to evaluate how memory immune responses to previous influenza virus infections affect broadly reactive HA-based vaccinations. To develop broadly reactive influenza virus vaccines, our group has used the methodology for enhanced antigen design, termed computationally optimized broadly reactive antigen (COBRA), to design hemagglutinin (HA) immunogens for the H1, H3, and H5 influenza subtypes. 
Humans have been infected with different types of influenza viruses throughout their lives. For this study, Fitch ferrets were infected with different combinations of human-isolated H1N1 and H3N2 influenza viruses. These two influenza virus subtypes are the only influenza A viruses that have circulated in the human population since 1968 and would therefore be reflective of the majority of individuals alive today. The H1N1 infections included both a seasonal (before 2009) and a pandemic H1N1 virus (2009 to the present), since individuals alive today who are over the age of 11 would have been exposed to both types of H1N1 influenza viruses. The H1N1 viruses used in this study were Singapore/6/1986 (Sing/86) and California/07/2009 (CA/09), respectively. The H3N2 influenza viruses used to establish preimmunity were either Sichuan/2/1987 (Sich/87) or Panama/2007/1999 (Pan/99). Additionally, the influenza virus preimmunity of individuals alive today would include individuals infected with H1N1 influenza viruses followed by H3N2 influenza viruses and vice versa. Finally, a \u201cnonpreimmune\u201d or \u201cnaive preimmune\u201d group was included as a control for the vaccines alone. An H2N2 preimmune group was also included as a pseudo \u201cpositive control\u201d group, since previous studies have shown that imprinting ferrets with a specific subtype of influenza virus followed by vaccination with another antigenically distinct influenza virus of the same subtype induces expansive intrasubtype antibodies. Two antigenically distinct H2N2 viruses were used to establish this preimmunity. After preimmunity was established, two H2 COBRA HA vaccines (Z1 and Z5) were used to vaccinate the ferrets. Protective immune responses elicited by the Z1 and Z5 COBRA HA vaccines were compared to the elicited response in preimmune ferrets vaccinated with wild-type H2 HA proteins. 
The Z1 COBRA HA-vaccinated preimmune ferrets showed more broadly cross-reactive antibody responses to a panel of H2 influenza viruses across each of the six preimmune groups compared to ferrets vaccinated with either of the two wild-type H2 HA vaccines. Therefore, the Z1 COBRA HA would be an ideal vaccine for use in individuals regardless of their previous exposure to influenza A viruses. Fitch ferrets (n\u2009=\u200920) were made preimmune with one of three influenza virus subtypes. The H2N2 preimmunity virus used for infection was either Chk/PA/04 or Qu/RI/16. The H3N2 virus used for infection was either Sich/87 or Pan/99. The H1N1 viruses used for infection were both Sing/86 and CA/09, to represent both seasonal and pandemic H1N1 influenza viruses. After each influenza virus infection, the ferrets were allowed to recover for at least 60 days. Approximately 60 days after the final infection, ferrets had seroconverted to the infection strains with an average HAI titer greater than 1:40. The H1N1-alone, H3N2-H1N1, and H1N1-H3N2 preimmune groups were then infected with their second virus and allowed to recover for an additional 60 days. The H1N1-H3N2 and the H3N2-H1N1 preimmune groups were then infected with their third virus and allowed to recover for an additional 60 days. Ferrets were then vaccinated with wild-type or COBRA (Z1 or Z5) H2 recombinant HA (rHA) proteins. Four weeks after the second vaccination, ferrets were infected with the H2N3 clade-3 Sw/MO/06 virus (1e\u2009+\u20096 PFU/ml). In the nonpreimmune group, the Z5- and mock-vaccinated ferrets showed significantly greater weight loss (P\u2009<\u20090.01 for mock and P\u2009<\u20090.05 for Z5). The mock- and Z5-vaccinated groups also had significantly more weight loss than the Mal/WI/08-vaccinated ferrets on day 5 (P\u2009<\u20090.05). These were the only statistically significant differences in weight loss between vaccination groups in any of the preimmune groups. The nonpreimmune ferrets did not have any influenza infection prior to vaccination. 
Both the Z5- and mock-vaccinated ferrets reached a peak average weight loss of >10% by day 4 postinfection. Only the nonpreimmune and H3N2 preimmune groups experienced mortality. In the nonpreimmune group, one of the Z5-vaccinated ferrets reached humane endpoint by day 6 postinfection. In the H2N2 preimmune group, one ferret in both the Z5- and mock-vaccinated groups had viral titers of \u223c1.0e\u2009+\u20093 PFU/ml on day 1. One ferret each of the Mal/NL/01-, Z5-, and mock-vaccinated groups had detectable viral titers in their nasal washes at day 3. There were no detectable viral titers in any ferret in the day 5 or day 7 nasal washes. None of the ferrets in the H1N1 preimmune group had detectable viral titers in their nasal washes on days 1, 5, or 7 postinfection. In the nonpreimmune group, multiple ferrets in the Mal/NL/01, Mal/WI/08, Z5, and mock vaccination groups had detectable viral titers in their day 1 nasal washes, with multiple ferrets in each of these vaccination groups having \u22653e\u2009+\u20091 viral titers. None of the ferrets in the Z1 vaccination group had detectable viral titers on day 1 postinfection (P value of 0.0078 using a one-way ANOVA plus Tukey\u2019s test). A three-way ANOVA examining the main effects of vaccine received, ferret preimmunity, and day of the nasal wash indicated that overall, when adjusting for preimmunity and day postinfection, the mean viral nasal wash titer of the Z1 COBRA group was significantly lower than that of the mock-vaccinated group by 0.322 log 10 viral titer (P adjusted\u2009<\u20090.001). Furthermore, the Z1 COBRA group also had a titer 0.219 log 10 lower after adjustment compared to the Z5 COBRA group (P adjusted\u2009=\u20090.039). Only the nonpreimmune ferrets were significantly different from the other preimmunities after controlling for vaccine received and day postinfection. All other preimmunities had nonsignificant differences in mean viral titers. When comparing days postinfection, day 1 and day 3 were not significantly different, but day 5 had lower viral titers compared to either day 3 or day 1. The HAI titers varied greatly between the preimmune groups. The H2N2 preimmune ferrets were the only preimmune group to have HAI titers to virus-like particles (VLPs) in the H2 panel on the day of prime vaccination. After the first vaccination, the H2N2 preimmune ferrets had a geometric mean HAI titer of \u22651:40 to all 12 of the VLPs in the panel, excluding the mock vaccination group. t test analyses compared the change in each vaccine group\u2019s titer between days 14 and 42 post-prime vaccination. Each of these four vaccination groups had HAI titers of \u22651:40 to seven or more VLPs in the panel. The Z1- and Z5-vaccinated ferrets had geometric mean HAI titers of \u22651:80 to 9 and 10 of the 12 VLPs in the panel, respectively. It appears more difficult for H3N2 (group 2)-imprinted ferrets to generate cross-reactive antibodies to H2 influenza viruses than is the case for the H1N1 or H2N2 (group 1) preimmune ferrets. Once the animals were administered a second vaccination, the H3N2 and H3N2-H1N1 preimmune groups had detectable HAI titers, but the titers were 2- to 4-fold lower on average than in group 1-imprinted ferrets. The group 2-imprinted ferrets likely have B cells that are highly specific to epitopes on the H3 HA. Not all protected ferrets had substantial HAI or neutralization titers to any of the H2 VLPs or influenza viruses tested in these assays. Given these results, it is likely that other immune mechanisms may be playing a role in protecting ferrets from mortality during the viral challenge. 
Without H2-specific neutralizing antibodies, it is possible that either nonneutralizing antibodies or T cells are contributing to protection against the Sw/MO/06 H2N3 virus infection. Across all of the preimmune groups, the Z1-vaccinated ferrets had significantly higher cross-reactive H2 antibody titers compared to the other vaccination groups. The Z1-vaccinated ferrets had the highest average HAI titers and recognized more H2 strains than the other vaccines across all of the different preimmunities. The Z1-vaccinated ferrets also had the highest average neutralization titers to more H2 influenza viruses regardless of the preimmune background. The COBRA H2 HA vaccines are likely outperforming the wild-type H2 HA vaccines because they present more diverse epitopes. A higher diversity of epitopes in the COBRA HA would be more likely to elicit B cells that cross-react across different antigenic sites on the H2 HA. A higher diversity of epitopes is beneficial for vaccinating people, who are nearly all preimmune to H1N1 and/or H3N2 influenza viruses. Vaccinating with a COBRA H2 HA antigen carrying highly diverse cross-reactive epitopes on a single antigen would increase the likelihood that multiple cross-reactive B cells will be retained in long-term immunological memory. The Z1 COBRA HA also outperformed the Z5 COBRA and the two wild-type vaccines. Z1 outperforming Z5 was somewhat surprising, since only four amino acids differ between the two HA sequences. However, these four amino acids are spread across three of the seven antigenic sites on the H2 HA molecule. 
The viruses A/Chicken/Potsdam/4705/1984 (Chk/Pots/84) (H2N2) (clade-1), A/Chicken/PA/298101-4/2004 (Chk/PA/04) (H2N2) (clade-1), A/Duck/Hong Kong/273/1978 (Duk/HK/78) (H2N2) (clade-2), A/Mallard/Minnesota/AI08-3437/2008 (H2N3) (clade-3), A/Swine/Missouri/4296424/2006 (Sw/MO/06) (H2N3) (clade-3), A/Formosa/313/1957 (For/57) (H2N2) (clade-2), and A/Taiwan/1/1964 (T/64) (H2N2) (clade-2) were obtained from either the United States Department of Agriculture (USDA) Diagnostic Virology Laboratory (DVL) in Ames, Iowa, from BEI Resources, or provided by the laboratory of S. Mark Tompkins. Each influenza virus was passaged using embryonated chicken eggs except for the Sw/MO/06 virus, which was passaged in MDCK cells. Each influenza virus was harvested from either the eggs or cells and aliquoted into tubes which were stored at \u221280\u00b0C. Each influenza virus was titered using a standard influenza plaque assay as described below. Recombinant HA (rHA) proteins were expressed using the pcDNA 3.1+ plasmid. Each HA gene was truncated by removing the transmembrane (TM) domain and the cytoplasmic tail at the 3\u2032 end of the gene (amino acids 527 to 562). The TM domain was determined using the TMHMM Server v. 2.0 website: http://www.cbs.dtu.dk/services/TMHMM/. The HA gene was truncated at the first amino acid prior to the TM domain, and a fold-on domain from T4 bacteriophage, an AviTag, and a hexahistidine tag were added. For virus-like particle (VLP) production, adherent human embryonic kidney 293T (HEK-293T) cells were grown in complete Dulbecco\u2019s modified Eagle\u2019s medium (DMEM); once confluent, these cells were transiently transfected for the creation of mammalian VLPs. Viral proteins were expressed from the pTR600 mammalian expression vectors and the total protein concentration was determined with the Micro BCA protein assay reagent kit. Hemagglutination activity of each preparation of VLP was determined by serially diluting volumes of VLPs and adding an equal volume of 0.8% turkey red blood cells (RBCs) suspended in PBS to a V-bottom 96-well plate with a 30\u2009min incubation at room temperature (RT). Prepared RBCs were stored at 4\u00b0C and used within 72 h. The highest dilution of VLP with full agglutination of RBCs was considered the endpoint HA titer. The H2 HA sequences used for VLPs were Mal/NL/01, Chk/Pots/84, Muskrat/Russia/63/2014 (Musk/Rus/14) (clade-1), Duck/Cambodia/419W12M3/2013 (Duk/Cam/13) (clade-2), Japan/305/1957 (J/57) (clade-2), Moscow/1019/1965 (Mosc/65) (clade-2), T/64, Duk/HK/78, Mal/WI/08, Sw/MO/06, Quail/Rhode Island/16-018622-1/2016 (Qu/RI/16) (clade-3), and Turkey/California/1797/2008 (Tk/CA/08) (clade-3). Fitch ferrets were purchased certified influenza-free and descented from Triple F Farms. Ferrets were pair-housed in stainless steel cages containing Sani-Chips laboratory animal bedding. Ferrets were provided with Teklad Global Ferret Diet and fresh water ad libitum. The University of Georgia Institutional Animal Care and Use Committee approved all experiments, which were conducted in accordance with the National Research Council\u2019s Guide for the Care and Use of Laboratory Animals, the Animal Welfare Act, and the CDC/NIH\u2019s Biosafety in Microbiological and Biomedical Laboratories guide. Ferrets (n\u2009=\u200920) were preinfected with H1N1 or H3N2 seasonal influenza viruses or H2N2 avian influenza viruses in different orders before vaccination. These influenza viruses included the H1N1 influenza viruses Singapore/6/1986 (Sing/86) and California/07/2009 (CA/09), the H3N2 influenza viruses Sichuan/2/1987 (Sich/87) or Panama/2007/1999 (Pan/99), and the H2N2 avian influenza viruses Chk/PA/04 or Qu/RI/16, all at an infectious dose of 1e\u2009+\u20096 PFU in 1\u2009ml intranasally. 
For the ferrets with multiple preimmune infections, 60 days were left between each infection and before the first vaccination. Clinical symptoms were scored as follows: weight loss (n\u2009=\u20093), lethargy (n\u2009=\u20091), sneezing, dyspnea (n\u2009=\u20092), and neurological symptoms (n\u2009=\u20093). Any ferret that reached a cumulative score (n) of three was euthanized per rules set by The University of Georgia Institutional Animal Care and Use Committee. In this study, every ferret that reached humane endpoints exhibited both lethargy (n\u2009=\u20091) and dyspnea (n\u2009=\u20092). After the establishment of preimmunity by viral infection, 60 days elapsed before ferrets were vaccinated with recombinant hemagglutinin (rHA) twice, with 4 weeks between vaccinations. The ferrets were vaccinated with a 1:1 ratio of rHA diluted with phosphate-buffered saline (PBS) (15.0\u2009\u03bcg rHA/ferret) and the emulsified oil-in-water adjuvant Addavax. The mock-vaccinated groups received only PBS and Addavax adjuvant at a 1:1 ratio with no rHA. Each vaccination was given intramuscularly. Before vaccination and 2 weeks after each of the vaccinations, ferrets were bled and serum was isolated from each of the samples. The blood was harvested from all anesthetized ferrets via the anterior vena cava at days 0, 14, and 42. Blood samples were incubated at room temperature for 1 h prior to centrifugation at 6,000\u2009rpm for 10 min. The separated serum was removed and frozen at \u221220\u00b0C. The ferrets were infected 4 to 6 weeks after the second vaccination with the H2N3 influenza virus Swine/Missouri/4296424/2006 (Sw/MO/06). Animals were monitored daily for 10\u2009days postinfection for clinical symptoms such as weight loss. The hemagglutination inhibition (HAI) assay was used to quantify HA-specific antibodies by measuring the inhibition of the agglutination of turkey erythrocytes. The protocol was adapted from the WHO laboratory influenza surveillance manual. 
The HAI titer was determined by the reciprocal dilution of the last well that contained nonagglutinated RBCs. Positive and negative serum controls were included on each plate. Seroprotection was defined as an HAI titer of\u2009\u22651:40 and seroconversion as a 4-fold increase in titer compared to baseline, as defined by the WHO to evaluate influenza vaccines. The plaque overlay medium was supplemented with L-glutamine and P/S; all of the components of the plaque medium and Avicel were obtained from Thermo Fisher Scientific. The MDCK cells were incubated at 37\u00b0C with 5% CO2 for 48\u2009h. After 48\u2009h, the Avicel overlay was removed and the MDCK cells were fixed with 10% buffered formalin for a minimum of 15\u2009min. The formalin was then discarded, and the MDCK cells were stained using 1% crystal violet. The MDCK cells were then washed with distilled water to remove the crystal violet. Plaques were then counted, and the PFU/ml titer was calculated using the number of plaques and the appropriate dilution factor. The nasal washes were performed on anesthetized ferrets by washing out each of their nostrils with a total of 3\u2009ml of PBS on days 1, 3, 5, and 7 postinfection. From each nasal wash, \u223c2.0\u2009ml was recovered. The nasal washes were aliquoted into microcentrifuge tubes and stored at \u221280\u00b0C. Nasal wash aliquots were thawed at RT. Once thawed, 10-fold serial dilutions of nasal washes were overlaid on MDCK cells. For the neutralization assay, serially diluted serum samples were incubated with a standardized dose of virus (TCID50) for 1 h. The antibody-virus mixture was then added to the incomplete (FBS-free) DMEM-washed MDCK cells in the 96-well plate. After 2 h, the MDCK cells were washed with incomplete DMEM. Approximately 200\u2009\u03bcl of DMEM with P/S and 2.0 \u03bcg/ml of TPCK-treated trypsin were added to each of the 96 wells. The cell monolayers in the back-titration control wells were checked daily until cytopathic effect (CPE) had reached the majority of the 1\u00d7 TCID50 rows. 
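The plaque-assay back-calculation described above (PFU/ml from the number of plaques and the dilution factor) can be sketched as follows; the helper name and the 0.1 ml inoculum volume are illustrative assumptions, not values given in the text:

```python
def pfu_per_ml(plaque_count, dilution_exponent, inoculum_ml=0.1):
    """Back-calculate a viral titer from a plaque assay.

    Titer (PFU/ml) = plaques counted / (dilution factor * volume plated).
    dilution_exponent is the 10-fold serial dilution step of the counted
    well, e.g. 4 for the 10^-4 well. The 0.1 ml inoculum volume is an
    illustrative assumption.
    """
    dilution_factor = 10.0 ** (-dilution_exponent)
    return plaque_count / (dilution_factor * inoculum_ml)

# 32 plaques in the 10^-4 well of a 0.1 ml inoculum -> about 3.2e+6 PFU/ml
titer = pfu_per_ml(32, 4)
```

In practice the well chosen for counting is the one with a countable number of well-separated plaques (roughly 20 to 100), which keeps the back-calculated titer reliable.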
After 3 or 4 days, 50 μl of medium per well was removed and used in an HA assay to identify the presence of influenza virus. The remaining medium in each well was removed, and the MDCK cells were fixed with 10% buffered formalin for a minimum of 15 min. The formalin was then discarded, and the fixed cells were washed with 1× PBS. Afterward, the MDCK cells were stained using 1% crystal violet and then washed with distilled water to remove the crystal violet. Any well having HA activity of ≥1:2 was defined as positive for the analysis. HA activity was confirmed by >10% CPE in wells that were positive for HA activity. The neutralization assay was used to identify the presence of virus-specific neutralizing antibodies. The protocol was adapted from the WHO manual for laboratory-based influenza surveillance. Statistical significance was defined as a P value of less than 0.05. The limit of detection for viral plaque titers was 50 PFU/ml for statistical analysis. The viral plaque titers were log10 transformed for analysis. The limit of detection for HAI was <1:10, and 1:5 was used for statistical analysis. The HAI titers were log2 transformed for analysis and graphing. Geometric mean titers were calculated for neutralization assays, but the log2 titers were used for ANOVA analysis. All error bars on the graphs represent the standard error of the mean. ANOVAs with Dunnett's test were used for weight loss, with statistical significance defined as a P value of less than 0.05. Nasal wash titers stratified by day and preimmunity were analyzed with a one-way ANOVA with Tukey's honestly significant difference method to determine differences between vaccine groups.
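The detection-limit substitutions and log transformations described here can be sketched as follows (constants follow the conventions stated above; function names are ours):

```python
import math

# Illustrative handling of detection limits before log transformation.
PFU_LOD = 50   # plaque-assay limit of detection, PFU/ml
HAI_FLOOR = 5  # value substituted for HAI titers below the 1:10 detection limit

def log10_pfu(titer_pfu_per_ml):
    """Censor at the LOD, then log10-transform, as done for plaque titers."""
    return math.log10(max(titer_pfu_per_ml, PFU_LOD))

def log2_hai(titer):
    """Substitute 1:5 for undetectable titers, then log2-transform."""
    return math.log2(max(titer, HAI_FLOOR))

print(round(log10_pfu(0), 3), round(log2_hai(0), 3))  # 1.699 2.322
```

Censoring at the limit of detection before the log transform avoids undefined values for zero counts while keeping all samples in the ANOVA.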
The overall performance of the vaccines was assessed through multivariate ANOVAs for main effects conducted individually for the neutralization titer, viral nasal wash titer, and HAI titer outcomes, followed by Tukey's honestly significant difference (HSD) method to adjust for multiple comparisons. Significantly different groups per outcome were determined from the multiple comparisons. Day 7 of the nasal wash titers was not included in the ANOVA analysis, since all of the observations were below the limit of detection. All of the statistical analyses for the various assays can be found in Fig. S2, Fig. S5, and Table S1 in the supplemental material. The amino acid sequences for the two COBRA HA sequences have been reported in the United States provisional patent filing 14332088_1. Copyright © 2021 Reneer et al. This content is distributed under the terms of the Creative Commons Attribution 4.0 International license (https://creativecommons.org/licenses/by/4.0/). FIG S1 (10.1128/mSphere.00052-21.1; TIF file, 0.3 MB): Amino acid diversity in antigenic sites of WT and COBRA H2 HA sequences. The amino acid differences for the WT and COBRA H2 HA sequences in the six H2 HA antigenic sites are shown in the six tables. Amino acids are numbered based on the H3 numbering system. Only amino acid positions with differences are shown; all of the other amino acids in the antigenic sites are the same for all of the H2 HA sequences used in this study. FIG S2 (10.1128/mSphere.00052-21.2; TIF file, 0.5 MB): Main effects of vaccine received, established preimmunity, and day postinfection on the log10 viral nasal wash titer. ANOVA adjusted by Tukey's HSD method for the effect sizes for vaccines (A), preimmunity (B), and day (C) when controlling for the main effects of the other variables. The horizontal line at 0.0 indicates identical means with no measured difference; if the 95% confidence interval extends over this line, the difference between the two compared groups is not significant at the P = 0.05 level. Comparisons are colored based on the adjusted P value. Significance groups were determined from the effect size plots for the vaccine received (D), preimmunity (E), and day (F); groups that share a letter were not significantly different from one another. FIG S3 (10.1128/mSphere.00052-21.3; TIF file, 0.6 MB): Change in HAI titers from prime to boost vaccination. Columns are divided by vaccine group, and rows are divided by preimmunity. Data points from day 14 to day 42 are paired based on each ferret's change in HAI titer for a virus in the HAI panel. Significance between groups is analyzed in Table S1. FIG S4 (10.1128/mSphere.00052-21.4; TIF file, 0.8 MB): Main effects of vaccine received, established preimmunity, and virus tested on the log2 HAI titer on day 42. ANOVA adjusted by Tukey's HSD method for the effect sizes for vaccines (A), preimmunity (B), and virus (C) when controlling for the main effects of the other variables; layout and significance conventions as in Fig. S2, with significance groups determined from the effect size plots for the vaccine received (D), preimmunity (E), and virus (F). FIG S5 (10.1128/mSphere.00052-21.5; TIF file, 0.8 MB): Main effects of vaccine received, established preimmunity, and virus tested on the log2 neutralization titer with pooled sera collected on day 42. ANOVA adjusted by Tukey's HSD method for the effect sizes for vaccines (A), preimmunity (B), and virus (C) when controlling for the main effects of the other variables; layout and significance conventions as in Fig. S2. FIG S6 (10.1128/mSphere.00052-21.6; TIF file, 0.9 MB): Establishment of preimmunity in ferrets. HAI titers for preimmune ferrets against the strains that were used to establish their influenza virus preimmunity. Serum from each ferret was obtained on day 60 postinfection and tested against the listed viruses for each preimmune group. FIG S7 (10.1128/mSphere.00052-21.7; TIF file, 0.7 MB): Viral nasal wash titers for H3N2-H1N1 and H1N1-H3N2 preimmune groups. Nasal washes were performed on days 1, 3, 5, and 7 postinfection. The titers are recorded as log10 PFU/ml. The H3N2-H1N1 preimmune ferrets are shown in panels A to D; the H1N1-H3N2 preimmune ferrets are shown in panels E to H. The height of the bars shows the mean, while the error bars represent the standard error of the mean. TABLE S1 (10.1128/mSphere.00052-21.8; TIF file, 0.3 MB): Paired t tests of log2 HAI titers measured after prime (day 14) and boost (day 42) vaccinations. The Holm correction was used to adjust for multiple comparisons. Samples were paired based on the ferret. The t tests were conducted to determine whether HAI titers changed after stratification by preimmunity and vaccine received. *, P < 0.05; **, P < 0.01; ***, P < 0.001."}
+{"text": "The plethysmographic peripheral perfusion index (PPI) is a very useful parameter with various emerging utilities in medical practice. The PPI represents the ratio between the pulsatile and non-pulsatile portions of the peripheral circulation and is determined mainly by two factors: cardiac output and the balance between the sympathetic and parasympathetic nervous systems. The PPI decreases in cases of sympathetic predominance and/or low cardiac output states; therefore, it is a useful predictor of patient outcomes in critical care units. The PPI could serve as a surrogate for cardiac output in tests of fluid responsiveness, as an objective measure of pain, especially in un-cooperative patients, and as a predictor of successful weaning from mechanical ventilation. The PPI is simple to measure and easy to interpret, and its values are displayed continuously, making it a convenient parameter for assessing the adequacy of blood flow and the sympathetic-parasympathetic balance. The pulse oximeter is a basic monitor in medical practice with an essential role in evaluating peripheral oxygen saturation and heart rate using plethysmography technology. The pulse oximetry-derived peripheral perfusion index (PPI) is another variable measured by pulse oximeters using a relatively advanced technology, namely, co-oximetry. The PPI represents the ratio between the portions of the blood in the peripheral tissue, namely, the pulsatile and the non-pulsatile blood flow, and can be measured by different types of monitors. Peripheral perfusion index values depend on the blood flow in the peripheral circulation and the vascular tone; thus, the PPI reflects two main determinants, namely the cardiac output and the balance between the sympathetic and the parasympathetic nervous systems. Being representative of those two major hemodynamic parameters, the PPI can provide very useful information during initial evaluation, risk stratification, and follow-up.
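The pulsatile/non-pulsatile ratio described here amounts to an AC/DC ratio expressed as a percentage. A minimal numerical sketch (our own illustration; real oximeters use the infrared channel and proprietary filtering):

```python
import numpy as np

def perfusion_index(ppg):
    """Toy PPI estimate over one window: pulsatile (AC) amplitude
    divided by the non-pulsatile (DC) level, in percent.
    Illustrative only; not a clinical algorithm."""
    dc = np.mean(ppg)          # non-pulsatile (DC) component
    ac = np.ptp(ppg - dc)      # peak-to-peak pulsatile (AC) swing
    return 100.0 * ac / dc

# Synthetic PPG: a 1 Hz pulse wave riding on a large DC baseline.
fs = 100
t = np.arange(0, 10, 1 / fs)
ppg = 100.0 + 2.0 * np.sin(2 * np.pi * 1.0 * t)
print(round(perfusion_index(ppg), 1))  # 4.0  (AC swing 4 over DC level 100)
```

A smaller pulsatile swing relative to the baseline (vasoconstriction, low cardiac output) directly lowers this ratio, which is the behavior the review describes.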
The normal value of the PPI was suggested to range between 0.2 and 20%; however, an observational study reported a median (quartiles) normal value of 4.3 (2.9-6.2). This review aims to clarify the different benefits of monitoring the PPI as well as the pitfalls and limitations of its measurement. An overview of its uses in emergency departments, intensive care units, and operating theaters will also be provided. Being representative of the sympathetic-parasympathetic balance, the PPI decreases in conditions with sympathetic overactivity, which is predominant in critical illness and circulatory failure. Recent evidence also suggests that the PPI could be used to guide and titrate vasopressor therapy. As the PPI is affected by both cardiac output and vasomotor tone, it can serve as an indicator of cardiac output provided there is no change in sympathetic activity. Furthermore, the PPI showed a good ability to detect changes in cardiac output in patients with septic shock. Hence, the PPI has been used in fluid responsiveness tests as a surrogate for cardiac output in various maneuvers. Pain assessment usually relies on subjective scores, which require patient cooperation. Therefore, evaluation of pain in un-cooperative patients, such as critically ill patients, is usually challenging and requires a cumbersome scoring system; moreover, there are no tools providing real-time measurement of pain. Various studies have used the relation between the PPI and sympathetic activity as an indirect method for pain evaluation. Various uses have been reported for the PPI in the operating room. Some of them rely on its relationship with the vasomotor tone, such as the discrimination of failed and successful peripheral nerve blocks. In critically ill patients, the PPI has been evaluated for predicting several outcomes.
Relying on its relation to the sympathetic tone, a low PPI was able to predict hypotension during intermittent and continuous haemodialysis. The use of the PPI in clinical practice has some limitations. (1) The PPI is characterized by skewness and a wide range of measurements among normal persons; therefore, it is better to evaluate its changes in comparison to the baseline readings from the same person. (2) Attention should always be paid to the possibility of poor signals, especially with cold extremities, low temperature, and high doses of vasopressors. (3) Being a ratio between the pulsatile and non-pulsatile portions of peripheral blood flow, the PPI is not feasible for use in patients receiving extra-corporeal membrane oxygenation. (4) Being affected by two variables, namely the cardiac output and the autonomic activity, evaluation of the change in the PPI should be performed over short intervals during which one of these two variables is relatively constant, so that the PPI can be closely correlated to only one variable. However, even though it is affected by the two variables, the PPI can still provide a good idea of patient prognosis, because both variables affect the PPI in the same direction. The utility of the PPI in clinical practice is still a subject of ongoing research; future studies are needed, for example, to evaluate the correlation between the PPI and brain perfusion. From the currently available evidence, we can conclude that the PPI is an irreplaceable vital sign with many important uses: it is a prognostic marker in critically ill and surgical patients, can guide fluid and vasopressor management, helps assess the success of weaning from mechanical ventilation, and can serve as an objective measure for the assessment of regional anesthesia and pain. MME, MM, and RG contributed to the conception of the idea, literature search, collecting material, and drafting the manuscript.
AH contributed to the conception of the idea, literature search, collecting materials, and drafting and revising the manuscript. All authors contributed to the article and approved the submitted version.The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."}
+{"text": "The increasingly widespread diffusion of wearable devices makes possible the continuous monitoring of vital signs, such as heart rate (HR), heart rate variability (HRV), and breath signal. However, these devices usually do not record the \u201cgold-standard\u201d signals, namely the electrocardiography (ECG) and respiratory activity, but a single photoplethysmographic (PPG) signal, which can be exploited to estimate HR and respiratory activity. In addition, these devices employ low sampling rates to limit power consumption. Hence, proper methods should be adopted to compensate for the resulting increased discretization error, while diverse breath-extraction algorithms may be differently sensitive to PPG sampling rate. Here, we assessed the efficacy of parabola interpolation, cubic-spline, and linear regression methods to improve the accuracy of the inter-beat intervals (IBIs) extracted from PPG sampled at decreasing rates from 64 to 8 Hz. PPG-derived IBIs and HRV indices were compared with those extracted from a standard ECG. In addition, breath signals extracted from PPG using three different techniques were compared with the gold-standard signal from a thoracic belt. Signals were recorded from eight healthy volunteers during an experimental protocol comprising sitting and standing postures and a controlled respiration task. Parabola and cubic-spline interpolation significantly increased IBIs accuracy at 32, 16, and 8 Hz sampling rates. Concerning breath signal extraction, the method holding higher accuracy was based on PPG bandpass filtering. Our results support the efficacy of parabola and spline interpolations to improve the accuracy of the IBIs obtained from low-sampling rate PPG signals, and also indicate a robust method for breath signal extraction. 
In recent years, the increasing availability of wearable devices for mobile and smart healthcare monitoring, in both clinical and wellness contexts, has made continuous monitoring of vital signs possible. Traditionally, HRV analysis is conducted using the ECG signal, from which the sequence of the RR intervals over time can be precisely derived. However, daily monitoring of HRV through ECG requires proper placement of the electrodes or the adoption of sensorized devices, such as smart shirts. Given the notable amount of information that this signal can provide, many studies have focused on improving the extraction of HRV parameters from PPG. In addition, the need for reducing power consumption in wearable devices, to increase battery life, leads to the adoption of low sampling rates. This operation has the following two main downsides: a further reduction in the PPG bandwidth and an increased sampling (or discretization) error. Both can substantially reduce the accuracy of fiduciary point detection, leading to biased computation of the inter-beat intervals (IBIs) and inaccurate estimates of the derived HRV parameters. Moreover, low sampling rates can affect the quality of PPG-derived respiratory signals; for example, Charlton and colleagues compared the quality of respiratory signals extracted from PPG recorded at different sampling rates. In this study, we analyzed beat-to-beat IBI estimates, and time- and frequency-domain HRV indices, extracted from low-resolution PPG signals during an experimental protocol comprising three different tasks. In fact, subjects' position and administered tasks are known to influence the characteristics of the PPG waveform, possibly affecting the accuracy of beat detection. The protocol comprised the following phases: 0-5 min in a sitting position (Sit phase); 5-10 min in a standing position (Stand phase); and 10-15 min in which the subject is again in a sitting position and performs controlled respiration (CR phase), with cycles lasting 5 s (respiratory rate = 0.2 Hz). Data were collected at the PHEEL laboratory of Politecnico di Milano. Eight healthy volunteers were informed about the study and asked to sign a written consent before data acquisition.
The study was approved by the university Ethics Committee. During the experiment, ECG, PPG, and a breathing signal were recorded. ECG was collected with two electrodes positioned under the collarbones and one slightly above the navel. PPG was obtained through a sensor placed on the second phalanx of the middle finger (left hand), and the reference respiratory signal was recorded with a thoracic belt. The PPG sensor we employed uses an infra-red LED and measures the amount of light reflected by the skin, which varies with the blood volume present in the underlying vessels. All signals were collected at 256 Hz using the commercial polygraph ProComp Infiniti. During data acquisition, the preserved signal bandwidth was 0.05-120 Hz for the ECG; for the PPG and respiratory signal, it was 0-64 Hz. Participants were tested while performing the three-phased protocol described above (Sit, Stand, and CR), for a total of 15 min. All signals were then processed with custom scripts. Each subject's RRIs were derived from the ECG by means of the Pan-Tompkins algorithm. Since power consumption is a concern only when signals are collected with wearable devices, we focused on sampling rates up to 64 Hz, which is the frequency used for PPG recording by many current research-grade wearables, such as the Empatica E4. Two beat detection algorithms were implemented for the PPG signal; a pseudo-code explanation of both accompanies the original article. The ENVELOPE method works as follows: given the PPG signal x(t), the superior (maxe(t)) and inferior (mine(t)) envelopes are calculated from its local maxima and local minima, respectively. These envelopes are used in the following min-max normalization to limit the signal amplitude between 0 and 1: x'(t) = (x(t) - mine(t)) / (maxe(t) - mine(t)). This normalization eliminates signal amplitude variations caused by the different protocol stages and fluctuations due to breath. Among the candidate heartbeats (see below), the accepted beat j is selected as the one that minimizes a distance vector d computed with respect to a weighted average of the preceding IBIs.
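The envelope-based normalization step can be sketched as follows. This is a minimal reimplementation under our own assumptions (linear interpolation between extrema, a hypothetical refractory distance for peak picking); the paper does not specify these details:

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.interpolate import interp1d

def envelope_normalize(x, fs):
    """Sketch of the ENVELOPE pre-processing: interpolate upper and lower
    envelopes from local extrema, then min-max normalize toward [0, 1].
    Peak-picking parameters are our assumptions, not the authors'."""
    n = np.arange(len(x))
    refractory = int(0.3 * fs)                 # assumed minimum peak spacing
    hi, _ = find_peaks(x, distance=refractory)   # upper-envelope nodes
    lo, _ = find_peaks(-x, distance=refractory)  # lower-envelope nodes
    maxe = interp1d(hi, x[hi], kind="linear", fill_value="extrapolate")(n)
    mine = interp1d(lo, x[lo], kind="linear", fill_value="extrapolate")(n)
    # x'(t) = (x(t) - mine(t)) / (maxe(t) - mine(t))
    return (x - mine) / (maxe - mine)
```

On an amplitude-modulated pulse wave, the output oscillates between roughly 0 and 1 regardless of the slow amplitude drift, which is exactly what makes a fixed 0.8 detection threshold workable.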
Then, candidate heartbeats are identified as the local maxima exceeding an amplitude of 0.8. The 0.8 threshold was selected empirically, as it allowed us to detect almost all the true heartbeats while preventing slopes following dicrotic notches from being disguised as heartbeats. However, the local maxima detected with this threshold sometimes do not correspond to true heartbeats, as they can also arise from motion artifacts or dicrotic notches followed by particularly sharp peaks. In order to reduce such false positives, a procedure was implemented that retains only the peak producing the closest peak-to-peak distance to a weighted average calculated on the preceding IBIs; the discarded peaks, if any, are labeled as false positives, and during the next iteration d is computed from the last accepted beat, whose IBI also updates the weighted average. The second approach we employed, SLOPE, works on the PPG first derivative: its local maxima correspond to the points of maximum slope on the ascending segments of the PPG signal. These beat detection algorithms were applied to the original and subsampled PPG signals. First, the detected peaks were used to calculate the IBIs from the non-interpolated signals (hereafter, ORIGINAL IBIs). Then, the position of the identified peaks was refined using the three strategies described below. Three different PPG interpolation techniques were tested to reduce the discretization error introduced by subsampling: spline interpolation (SPLINE), parabola interpolation (PARABOLA), and linear regression (REG). The intersection of the two linear curves in the REG method, the occurrence time of the maximum of the spline interpolation, and the vertex of the parabola were all calculated with a resolution of 1 ms.
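The PARABOLA refinement fits a parabola through the three samples around a discrete peak and takes the vertex as the refined beat time. A sketch using the standard three-point vertex formula (our implementation, not the authors' code):

```python
def parabola_refine(y_prev, y_peak, y_next, t_peak, dt):
    """Refine a discrete peak location via a three-point parabolic fit.

    y_prev, y_peak, y_next: samples around the detected peak;
    t_peak: time of the discrete peak; dt: sampling period.
    Returns the vertex time (sub-sample resolution)."""
    denom = y_prev - 2.0 * y_peak + y_next
    if denom == 0:          # flat triplet: keep the discrete location
        return t_peak
    delta = 0.5 * (y_prev - y_next) / denom   # vertex offset in samples
    return t_peak + delta * dt

# A peak sampled at 8 Hz (dt = 125 ms): samples 0.6, 1.0, 0.9 around t = 1.0 s.
print(parabola_refine(0.6, 1.0, 0.9, 1.0, 0.125))  # -> 1.0375
```

At 8 Hz the discretization grid is 125 ms wide, so recovering the vertex to ~1 ms resolution is precisely the compensation the study evaluates.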
Time-domain indices included the average, standard deviation (SD), and root mean square of successive differences (RMSSD) of the IBIs and RRIs. As for the frequency-domain features, spectral powers in the low frequency (LF) and high frequency (HF) bands were computed in absolute and normalized units (n.u.), dividing each band power by the sum of the LF and HF powers. A more robust estimate of the RSA from HRV requires respiratory rate information; in fact, knowing the frequency content of respiration allows us to refine the frequency range used for HF power calculation. We examined three methods to extract breath signals from PPG, from which the respirograms were then derived. In the first one (FILT), the PPG signal is bandpass filtered to isolate the respiratory component. Respirograms were obtained from these surrogate breath signals by retaining only those samples corresponding to the R peaks of the ECG acquired simultaneously. The reference respirogram was derived in the same way from the respiratory signal collected through the thoracic belt. To assess frequency content similarity between the reference respirogram and the three PPG-extracted ones, we calculated the average magnitude-squared coherence around the modal respiratory rate of each phase: Cxy(f) = |Pxy(f)|^2 / (Pxx(f) Pyy(f)), where Pxy(f) is the cross-spectrum between the reference respirogram (x) and the one derived from PPG with each of the three estimation methods (y), while Pxx(f) and Pyy(f) represent the power spectral densities of the two respirograms. Coherence values range from 0 to 1; the closer they are to 1, the higher the coherence between the x and y signals. Specifically, the frequency band of interest was selected for each participant and protocol condition by descending from the peak of the reference spectrum until 20% of the maximum spectral power component was reached, as previously done in earlier work.
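The magnitude-squared coherence comparison can be reproduced with an off-the-shelf Welch-based estimator. A sketch on synthetic respirograms (sample rate, window length, and the coherence band are our illustrative choices):

```python
import numpy as np
from scipy.signal import coherence

# Cxy(f) = |Pxy(f)|^2 / (Pxx(f) * Pyy(f)), estimated via Welch's method.
fs = 4.0                            # assumed respirogram sample rate
t = np.arange(0, 120, 1 / fs)
ref = np.sin(2 * np.pi * 0.2 * t)   # reference breath signal at 0.2 Hz (CR rate)
rng = np.random.default_rng(0)
est = np.sin(2 * np.pi * 0.2 * t + 0.4) + 0.1 * rng.standard_normal(t.size)

f, cxy = coherence(ref, est, fs=fs, nperseg=128)
band = (f > 0.15) & (f < 0.25)      # band around the modal respiratory rate
print(round(float(cxy[band].mean()), 2))  # close to 1 for well-matched signals
```

Note that coherence is insensitive to a constant phase shift between the two respirograms, so a well-extracted but delayed breath signal still scores near 1 in the respiratory band.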
The magnitude-squared coherence was then averaged within this band. In one of the female participants, changing from the sitting to the standing posture caused the thoracic belt to move downwards, critically reducing the amplitude of the respiratory signal for the remainder of the session. Given the poor quality of the reference respiratory signal recorded in such a case, the analyses described in this section were carried out on 7 participants. This sample size was comparable to those adopted in several studies analyzing the accuracy of PPG- or ECG-derived breathing signals in healthy participants, as can be inferred from their supplementary materials. The principal statistical assessments presented in this paper are summarized below. First, the performances of the examined beat detection algorithms (ENVELOPE and SLOPE) were evaluated considering the number of missing and extra beats observed with each detection method with respect to the beats detected through the gold-standard ECG. In particular, we compared three percentage measures, namely the false negative rate (FNR), the false discovery rate (FDR), and the overall accuracy, defined as FNR = FN / (TP + FN) × 100, FDR = FP / (TP + FP) × 100, and accuracy = TP / (TP + FP + FN) × 100, where TP + FN represents the total number of beats detected on the ECG signal, TP + FP is the number of beats detected by each method (either correctly or not), and TP + FP + FN represents the total number of detected and non-detected beats. Moreover, a Bland-Altman analysis was conducted to assess the stability of the fiduciary points detected through the ENVELOPE and SLOPE algorithms by comparing the corresponding IBIs with the RRIs computed on the ECG; in particular, we focused on the 95% limits of agreement. We adopted the Bland-Altman analysis also to assess the beat-to-beat accuracy of the original and refined IBIs compared to the ECG-extracted RRIs.
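The three detection measures defined here can be computed directly from the matched (TP), spurious (FP), and missed (FN) beat counts; a minimal sketch with illustrative counts:

```python
def detection_metrics(tp, fp, fn):
    """Beat-detection performance measures as defined above, in percent."""
    fnr = 100.0 * fn / (tp + fn)        # missed beats over all ECG beats
    fdr = 100.0 * fp / (tp + fp)        # spurious beats over all detections
    acc = 100.0 * tp / (tp + fp + fn)   # overall accuracy
    return fnr, fdr, acc

# Example counts: 950 matched beats, 10 spurious detections, 50 missed beats.
print(detection_metrics(950, 10, 50))   # FNR 5.0%, FDR ~1.0%, accuracy ~94.1%
```

Keeping FNR and FDR separate matters here: subsampling mostly raises FNR (peaks flattened below threshold), while motion artifacts mostly raise FDR.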
In addition, to assess statistical differences between the original IBIs and those refined using the three tested interpolation methods, regardless of the sign of such differences, we evaluated the absolute errors (AEs) of the IBI series with respect to the RRI series. AEs were also computed between the HRV indices extracted from the different IBI series and those derived from the RRIs. A Friedman's test was then conducted for each protocol condition and sampling frequency to investigate the presence of statistically significant differences in the AEs calculated with the various PPG interpolation methods, followed by the appropriate Bonferroni-corrected post hoc tests. To compare the accuracy of the three estimated respiratory signals, we performed multiple Friedman's tests on the magnitude-squared coherence computed with each method with respect to the reference respirogram. Specifically, an independent test was conducted for each phase of the experimental protocol and sampling frequency of the PPG signal. The significance level was set to α = 0.05 for all the statistical tests. In addition, non-parametric effect size measures, namely Cohen's r for Wilcoxon's signed-rank test and Kendall's W for Friedman's test, were employed. Lowering the sampling rate from 64 to 8 Hz, we noticed a worsening of the FNR and accuracy obtained with the ENVELOPE and SLOPE approaches. To assess the stability of the fiduciary points sought by each method, we conducted a first beat-to-beat comparison of the IBIs measured through the ENVELOPE and SLOPE approaches with the RRIs computed on the ECG. During the Stand phase the two methods tended to differ (p = 0.057), though not significantly, whereas they were very close during the Sit and CR conditions. These results support our choice of the ENVELOPE approach over the SLOPE one for further analyses. In summary, the IBIs computed with the ENVELOPE and SLOPE methods reported similar mean and standard deviation.
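The Bland-Altman agreement analysis used throughout this section reduces to the bias and the 95% limits of agreement of the paired differences. A sketch on synthetic IBI/RRI data (all values are fabricated for illustration):

```python
import numpy as np

def bland_altman_loa(a, b):
    """Bland-Altman bias and 95% limits of agreement between two paired
    series (e.g., PPG-derived IBIs vs. ECG-derived RRIs), in input units."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)   # 95% LOA half-width
    return bias, bias - half_width, bias + half_width

# Synthetic paired intervals in ms: PPG IBIs = ECG RRIs + small bias + jitter.
rng = np.random.default_rng(1)
rri = rng.normal(800, 50, 200)
ibi = rri + rng.normal(2, 5, 200)
bias, lo, hi = bland_altman_loa(ibi, rri)
print(round(float(bias), 1), round(float(hi - lo), 1))  # bias near 2 ms, LOA width near 20 ms
```

Narrower limits of agreement at a given sampling rate indicate more stable fiduciary points, which is the criterion used above to prefer ENVELOPE over SLOPE.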
However, the smaller 95% LOA, the lower FNR, and the higher beat detection accuracy shown in the Stand phase led to the choice of considering only the ENVELOPE approach for the subsequent analyses. To further support this choice, we conducted a Wilcoxon's signed-rank test between the AEs of IBI SLOPE and those of IBI ENVELOPE, both computed taking the RRIs as a reference and using PPGs sampled at 64 Hz; the test confirmed the difference between the two methods. According to Merri and colleagues, a limited sampling frequency introduces a discretization error that can substantially bias variability indices. Friedman's tests returned p < 0.001 for all the sampling frequencies and protocol phases, indicating that statistically significant AE differences were always present among the evaluated methods; yet, very different effect sizes were observed across sampling frequencies, with large effects (Kendall's W > 0.5) emerging at 32 Hz and below. First, we explored HRV indices patterns per participant to assess their agreement with the ECG-derived references. While Mean IBI differed among interpolation methods only in the Stand condition at 8 Hz (p < 0.05), the other HRV indices reported significant differences among interpolation methods at several sampling rates and protocol phases, with larger effect sizes (Kendall's W) at the lower sampling rates. We considered only differences with p < 0.05 in the post hoc test and a large effect size, to make our interpretation more robust. The AEs of several HRV indices significantly decreased after PPG interpolation, compared to the ones computed from the original IBIs, as confirmed by the Bonferroni-corrected post hoc tests. First of all, SPLINE and PARABOLA interpolations allowed us to estimate HRV parameters that always improved or, at least, maintained the accuracy of the original IBIs. Considering these two approaches together, statistically significant AE improvements started at the 64 Hz sampling frequency; the HRV indices benefiting from interpolation at such a high sampling rate were SD IBI, RMSSD, and Power HF. Significant improvements became more frequent and consistent at 32 Hz, especially for RMSSD and Power HF, which showed a significant reduction in AEs for SPLINE and PARABOLA in every protocol condition, with large effect sizes (r > 0.8).
At 8 Hz, all the examined HRV indices, except for Power LF (reporting a significant improvement only during Stand) and Mean IBI, exhibited significant decreases in the AEs with the SPLINE and PARABOLA approaches, with notable effect sizes. The absence of consistent significant differences at 16 Hz, despite their presence at the preceding (32 Hz) and following (8 Hz) sampling rates, could be due to the inclusion of the REG method in the comparison, acting as a confounding factor. In fact, the drop in accuracy that this method exhibited at 16 Hz for many indices prevents the highest rank from being consistently assigned to the AEs of the original IBIs, as generally happens at higher sampling frequencies; as a consequence, higher dispersion in ranks can be observed. Indeed, in many of those cases, removing the REG method from the comparison, significant differences also emerged at 16 Hz between the AEs of the original and interpolated IBIs, mirroring the behavior that REG showed in the beat-to-beat analysis. Lastly, at 8 Hz, all the HRV indices showed larger MAEs during Stand compared to the other protocol conditions. The PPG collected in the standing posture might include higher frequency components that were not preserved at such a low sampling rate, as will be illustrated in the subsequent section. To investigate if SPLINE and PARABOLA interpolations effectively compensated for the discretization error introduced by subsampling, we contrasted the AEs calculated on the IBI series extracted from the original PPG (sampled at 256 Hz) with those computed on the IBIs derived from the interpolated PPG, with FS decreasing from 64 Hz to 8 Hz. We employed the original PPG for this evaluation to maximize the bandwidth and minimize the discretization error in the reference signal. Specifically, AEs were computed for each protocol phase, and the AEs of the original IBIs were pairwise compared with those of the SPLINE and PARABOLA approaches using Wilcoxon's signed-rank tests, reporting the p-value of each test and the related effect size (Cohen's r). Significant increases in AEs (p < 0.05) showing at least small effect sizes (r > 0.1) were detected for both the interpolation methods at 16 and 8 Hz.
This result indicates that the IBIs extracted from the 256 Hz PPGs and those derived from the interpolated ones are substantially equivalent down to a sampling rate of 32 Hz. Concerning lower rates, the statistically significant difference observed at 16 Hz was characterized by a medium effect size, ranging from 0.306 to 0.383. Therefore, beat-to-beat IBIs extracted from 16 Hz PPGs processed with SPLINE and PARABOLA interpolations already appear quite different from those derived through 256 Hz PPGs. When PPG is subsampled at 8 Hz, the effect size markedly increases and becomes large for both the interpolation methods (0.665 ≤ r ≤ 0.688), suggesting that, at this sampling rate, the compensation provided by SPLINE and PARABOLA does not suffice to recover the information carried by the original signal. In fact, interpolation strategies only allow researchers to reduce the discretization error caused by subsampling, but they have no effect on the reduction in PPG bandwidth that comes with this operation. Overall, these results show that subsampling produces no substantial changes in the derived IBIs down to 32 Hz if SPLINE or PARABOLA interpolations are applied. Consequently, a PPG bandwidth of approximately 16 Hz is more than enough to achieve the highest accuracy in beat detection enabled by the PPG signal. In contrast, with PPG bandwidths of approximately 8 or 4 Hz, which relate to 16 and 8 Hz sampling rates, respectively, suboptimal accuracies are achieved, whose severity should be determined based on the specific application. For this comparison, the AEs of the original IBIs were pairwise compared with the AEs of the SPLINE and PARABOLA approaches using Wilcoxon's signed-rank tests. Although the ENVL method shows the highest values of magnitude-squared coherence for sampling rates of 64 and 32 Hz, a decline in its performance is observed at 16 Hz and 8 Hz. 
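The magnitude-squared coherence used to score the respirogram-extraction methods could be computed with a Welch-style estimator, as in the sketch below. The signals, frequencies, segment length, and noise level are all assumptions for illustration, not the study's data or parameters.

```python
import numpy as np
from scipy.signal import coherence

fs = 64.0
t = np.arange(0, 120, 1 / fs)
f_breath = 0.25  # Hz, roughly 15 breaths/min (illustrative)

reference = np.sin(2 * np.pi * f_breath * t)             # reference respirogram
estimate = 0.8 * np.sin(2 * np.pi * f_breath * t + 0.3)  # PPG-derived estimate
estimate += 0.3 * np.random.default_rng(1).standard_normal(t.size)

# 32 s segments give a frequency bin exactly on 0.25 Hz
f, Cxy = coherence(reference, estimate, fs=fs, nperseg=int(32 * fs))

# Magnitude-squared coherence at the respiratory frequency
msc_at_breath = Cxy[np.argmin(np.abs(f - f_breath))]
print(f"MSC at {f_breath} Hz: {msc_at_breath:.2f}")
```

Coherence is bounded in [0, 1] and is insensitive to a constant gain or phase lag between the two signals, which makes it a natural score when the estimated respirogram is a scaled, delayed copy of the reference.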
FILT, instead, shows more stable performance, with similar values for all the considered sampling frequencies. Finally, INTR systematically shows the lowest magnitude-squared coherence, demonstrating poor performance compared to the other two techniques. To further investigate the advantages provided by each of the three methods and highlight possible differences between them, a Friedman's test was conducted for each PPG sampling rate and protocol phase to compare their magnitude-squared coherences. Significant differences (p < 0.05) were detected mainly in the Sit and CR phases. The subsequent post hoc tests indicate that, concerning a sampling frequency of 64 Hz, a statistically significant difference (p < 0.05) is evident between the method that performs the best (ENVL) and the one that performs the worst (INTR); besides, a significant difference was detected between the latter and FILT, limited to the Sit condition. This result implicitly indicates that, at 64 Hz, ENVL and FILT perform similarly (since no significant difference arises between them). On the contrary, from 32 Hz downward, statistical differences occurred between the FILT and INTR methods, suggesting that FILT behaves better than INTR and, at the same time, is comparable to ENVL, due to the absence of a statistical difference between them. In particular, concerning the PPGs sampled at 8 Hz, a significant difference occurs between FILT and ENVL during Stand, showing that, for lower sampling rates, ENVL performs poorly compared to FILT. In this study, we compared three methods for the IBI time series construction and assessed the efficacy of three interpolation strategies in the refinement of peak detection while decreasing the PPG sampling rate. In the same framework, we explored the application of three simple algorithms to extract breath information from PPG, again with decreasing time resolution. 
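A Friedman's test across three related methods, with Kendall's W as its effect size, can be sketched as follows; the per-participant coherence values are synthetic and only illustrate the shape of the comparison, not the study's results.

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(2)
n_participants = 25

# Hypothetical per-participant magnitude-squared coherences of three
# respirogram-extraction methods (synthetic values, for illustration).
envl = rng.uniform(0.80, 0.95, n_participants)
filt = envl - rng.uniform(0.00, 0.05, n_participants)
intr = envl - rng.uniform(0.10, 0.25, n_participants)

stat, p = friedmanchisquare(envl, filt, intr)

# Kendall's W, a common effect size for Friedman's test
k = 3  # number of methods compared
w = stat / (n_participants * (k - 1))
print(f"chi2 = {stat:.2f}, p = {p:.4g}, Kendall's W = {w:.2f}")
```

A significant Friedman's test only says that at least one method differs; pairwise post hoc tests (e.g., Wilcoxon's signed-rank with a Bonferroni correction, as in the text) are still needed to locate the differences.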
To the best of our knowledge, the current study is the first to assess all the aspects above in the following three different conditions: sitting, standing, and controlled respiration. Because of this, we were able to identify the ENVELOPE detection method not only as the one performing best but also as the most stable across these protocol phases. Results also confirm the usefulness of interpolation procedures for peak detection when the sampling rate drops to 32 Hz, with similar performance for SPLINE and PARABOLA, while the REG method showed lower performance. A consequent improvement was also observed in several HRV indices, both in the time and frequency domains. The beat-to-beat IBIs computed after SPLINE and PARABOLA interpolations were found to resemble those derived from the original 256 Hz PPG down to a sampling rate of 32 Hz, with a moderate performance detriment observed at 16 Hz. In general, the accuracy improvements generated by the SPLINE and PARABOLA approaches were consistent across the three protocol conditions. However, at the 8 Hz sampling rate, the consequent PPG bandwidth reduction affected the accuracy of the computed HRV indices more in the Stand phase than in the other conditions. This finding suggests that our considerations should not be generalized to any task demand. Our results should be considered valid only for PPG collected during tasks requiring minimal (Sit phase) or mild (Stand and CR phases) physical effort. The effectiveness of PPG interpolation strategies and the minimum sampling rate required under higher physical loads should be further assessed. Concerning the breathing signal estimation methods from PPG, the results indicate that ENVL is preferable at 64 Hz. Below that frequency, FILT should be preferred due to the higher stability of its performance across different sampling rates, especially considering the Sit and CR phases. 
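A FILT-style respirogram extraction amounts to low-pass filtering the PPG so that only the slow respiratory component survives. The sketch below illustrates the idea on a synthetic signal; the cutoff frequency, filter order, and signal composition are assumptions, not the paper's exact implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def extract_respirogram(ppg, fs, cutoff=0.4, order=4):
    """FILT-style sketch: zero-phase low-pass filter keeping only the slow
    respiratory component (cutoff and order are assumed, not from the paper)."""
    b, a = butter(order, cutoff, btype="low", fs=fs)
    return filtfilt(b, a, ppg)

# Synthetic PPG: cardiac oscillation (~1.2 Hz) plus respiratory baseline (~0.25 Hz)
fs = 32.0
t = np.arange(0, 60, 1 / fs)
breath = 0.5 * np.sin(2 * np.pi * 0.25 * t)
ppg = np.sin(2 * np.pi * 1.2 * t) + breath

resp = extract_respirogram(ppg, fs)
# The filtered signal should track the respiratory component, not the pulse
err = np.sqrt(np.mean((resp[200:-200] - breath[200:-200]) ** 2))
print(f"RMS error vs. true respirogram: {err:.3f}")
```

The appeal of this approach, consistent with the text, is its simplicity: a single fixed filter with no beat detection, which also explains why its performance degrades gracefully as the sampling rate drops.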
Thanks to its simplicity and the notable values of magnitude-squared coherence achieved even with low sampling frequencies, FILT seems a convenient and accurate method to estimate respirograms from PPG.In conclusion, our results suggest that PPG should not be collected with sampling rates lower than 16 Hz. PPG interpolation strategies are recommended with sampling rates below or equal to 32 Hz. However, at the 16 Hz sampling frequency, none of the interpolations allows us to achieve the beat-to-beat accuracy of the 256 Hz PPG, indicating that a signal bandwidth of 8 Hz might already be too low for applications requiring highly reliable IBIs. The SPLINE and PARABOLA methods provided comparable performances in all the considered conditions. Among the breath signal extraction techniques, ENVL performs better at the 64 Hz sampling rate; FILT should be favored with lower sampling frequencies.Given the increasingly widespread diffusion of PPG-based wearable devices for HRV monitoring, future extensions of this work are highly encouraged. In particular, since the rationale for choosing low sampling rates is to reduce power consumption, the additional power required by compensating techniques should be evaluated to ensure an actual reduction compared with the use of higher rates. Alternatively, strategies that do not require real-time processing, such as the interpolation methods we examined, may be offloaded to a separate device to extend battery life in wearable PPG devices.Further studies should validate our findings using PPG signals extracted from different body sites, poss"}
+{"text": "We aimed to identify existing hypertension risk prediction models developed using traditional regression-based or machine learning approaches and compare their predictive performance.We systematically searched MEDLINE, EMBASE, Web of Science, Scopus, and the grey literature for studies predicting the risk of hypertension among the general adult population. The summary statistic extracted from the individual studies was the C-statistic, and a random-effects meta-analysis was used to obtain pooled estimates. The predictive performance of the pooled estimates was compared between traditional regression-based models and machine learning-based models. The potential sources of heterogeneity were assessed using meta-regression, and study quality was assessed using the PROBAST (Prediction model Risk Of Bias ASsessment Tool) checklist.Of 14,778 articles, 52 articles were selected for systematic review and 32 for meta-analysis. The overall pooled C-statistic was 0.75 [0.73–0.77] for the traditional regression-based models and 0.76 [0.72–0.79] for the machine learning-based models. High heterogeneity in the C-statistic was observed. The age (p = 0.011) and sex (p = 0.044) of the participants and the number of risk factors considered in the model (p = 0.001) were identified as sources of heterogeneity in traditional regression-based models.We attempted to provide a comprehensive evaluation of hypertension risk prediction models. Many models with acceptable-to-good predictive performance were identified. Only a few models were externally validated, and the risk of bias and applicability were a concern in many studies. Overall discrimination was similar between models derived from traditional regression analysis and machine learning methods. More external validation and impact studies are required to implement the hypertension risk prediction models in clinical practice. Hypertension is a common medical condition affecting about 1 in 4 people. 
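The random-effects pooling of C-statistics summarized above can be sketched with a DerSimonian-Laird estimator and the I2 heterogeneity statistic. The study-level C-statistics and standard errors below are made up for illustration, and pooling is done on the raw scale for simplicity (logit-transforming first is also common).

```python
import numpy as np

def dersimonian_laird(estimates, standard_errors):
    """Pooled estimate, 95% CI, and I2 via DerSimonian-Laird random effects
    (a sketch; not the study's exact analysis code)."""
    y = np.asarray(estimates, dtype=float)
    v = np.asarray(standard_errors, dtype=float) ** 2
    w = 1 / v                               # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)      # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)           # between-study variance
    w_re = 1 / (v + tau2)                   # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Hypothetical study-level C-statistics and standard errors (illustrative)
c_stats = [0.71, 0.74, 0.78, 0.80, 0.73]
ses = [0.02, 0.015, 0.03, 0.02, 0.025]
pooled, ci, i2 = dersimonian_laird(c_stats, ses)
print(f"pooled C = {pooled:.3f} [{ci[0]:.3f}-{ci[1]:.3f}], I2 = {i2:.0f}%")
```

I2 expresses the share of total variability attributable to between-study heterogeneity rather than sampling error, which is why it is the quantity categorized as low, moderate, or high later in the methods.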
Predicting the risk of developing hypertension through modeling can help identify important risk factors contributing to hypertension, provide reasonable estimates about future hypertension risk, and help target preventive interventions.With this in mind, we aimed to 1) systematically review the literature to identify hypertension risk prediction models that have been applied to the general adult population and the risk factors that were considered in those models; 2) characterize the study populations in which these models were derived and validated; 3) compare the predictive performance of traditionally developed regression-based models and machine learning models; and 4) assess the quality of these prediction models to better inform the selection of models for clinical implementation.We conducted a systematic review and meta-analysis to identify existing hypertension risk prediction models and associated risk factors and evaluated the models' predictive performance. We searched MEDLINE, EMBASE, Web of Science, and Scopus (each from inception to December 2020) to identify studies predicting the risk of incident hypertension in the general adult population. Google Scholar and ProQuest (theses and dissertations) were searched for grey literature. Additionally, we explored the reference lists of all relevant articles. The search strategy focused on two key concepts: hypertension and risk prediction. We used appropriate free-text words and Medical Subject Headings (MeSH) terms to identify relevant studies for each key concept. Certain text words were truncated, or wildcards were used when required. The Boolean operators "AND", "OR", and "NOT" were used to combine the words and MeSH terms. 
A detailed search strategy for MEDLINE is provided in the supplementary material. Although risk prediction models are generally developed using a cohort-based study design with follow-up information, we considered all types of study designs, anticipating that machine learning-based models may use other types of study design. Only original studies were included in this review: this excluded reviews, editorials, commentaries, and letters to the editor. Studies written in languages other than English and French were also excluded. The Population, Prognostic Factors (or models of interest), and Outcome framework was used to define the eligibility criteria.The study population consisted of people free of hypertension at baseline, in whom hypertension risk prediction models were developed. No restrictions were imposed on the geographic region, time, or gender of the study participants. Nevertheless, only models developed on the adult population were considered, as the outcome, essential hypertension, is expected in adults.We considered studies where risk prediction models for hypertension in the general adult population were developed. Studies that focused solely on the added predictive value of new risk factors to an existing prediction model, studies presenting a prediction model developed in patients with previous hypertension, or studies that derived risk prediction tools other than score-type tools were not considered. Further, we did not consider studies that only assessed the bivariate association between predictors and hypertension. Instead, we focused on those studies where risk prediction models for hypertension were built incorporating risk factors that demonstrated significant prognostic contribution in predicting incident hypertension. When a model was assessed on more than one external population, information from all reported models was considered. 
However, when the model was presented both in a derivation and a validation cohort, only data from the validation cohort were considered for meta-analysis.Our outcome of interest was hypertension, and we considered all definitions of hypertension to capture the maximum number of studies.Two reviewers (MC and IN) independently identified eligible articles using a two-step process. First, the titles and abstracts of non-duplicated records were screened by the two reviewers. Studies retained (based on eligibility criteria) during this stage of screening went on to full-text screening. Full-text articles were further screened for eligibility by the same two reviewers independently. Lastly, articles containing extractable data on hypertension prediction models and hypertension risk factors were selected for data extraction. Inter-rater reliability (Kappa coefficient) was estimated to measure agreement between the independent reviewers. Any disagreement between reviewers was resolved through consensus.Two reviewers (MC and IN) independently extracted data from each study using standardized forms. We classified the identified models into two categories: models developed using a traditional regression-based approach and models developed using machine learning algorithms. Separate data extraction sheets were used for each model type and included the study name, the location where the model was developed (or the location of the data used) and the participants' ethnicity, the study design used, sample size, age and gender of the study participants, risk factors included in the model, number of events and total participants, the outcome considered, the definition used for hypertension, duration of follow-up, modeling method used, measures of discrimination and calibration of the prediction model, and the validation of the prediction model. 
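The inter-rater reliability step described above (Cohen's kappa between the two screening reviewers) can be sketched as follows; the include/exclude counts are made up purely for illustration.

```python
# Cohen's kappa for two reviewers' include/exclude screening decisions
# (a minimal sketch; the decision counts below are hypothetical).
def cohens_kappa(a, b):
    assert len(a) == len(b)
    n = len(a)
    labels = sorted(set(a) | set(b))
    p_observed = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement from each rater's marginal label frequencies
    p_chance = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_observed - p_chance) / (1 - p_chance)

reviewer_1 = ["include"] * 40 + ["exclude"] * 55 + ["include"] * 5
reviewer_2 = ["include"] * 40 + ["exclude"] * 55 + ["exclude"] * 5

kappa = cohens_kappa(reviewer_1, reviewer_2)
print(f"kappa = {kappa:.3f}")
```

Kappa discounts the agreement expected by chance from the raw percent agreement, which is why it is preferred over simple agreement when one decision (here, "exclude") dominates.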
In a separate form, information about the externally validated hypertension risk prediction models was extracted, including the study name/model validated, the total number of validation studies, the location of the validation study, follow-up period, number of events and total participants, the definition of the outcome, and the discrimination and calibration of the model. We also extracted information about risk factors, particularly how many times a specific risk factor was considered in the models. Each reviewer assessed study quality according to the Prediction model Risk Of Bias ASsessment Tool (PROBAST) checklist.We summarized the number of studies identified and those included and excluded (with the reason for exclusion) from the systematic review and subsequent meta-analysis using the PRISMA flow diagram. Heterogeneity was quantified using the I2 statistic. A p-value of less than 0.05 was considered statistically significant heterogeneity, which was categorized as low, moderate, and high when the I2 values were below 25%, between 25% and 75%, and above 75%, respectively. Reviewers' comments:Reviewer's Responses to Questions Comments to the Author1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: YesReviewer #2: Partly********** 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: YesReviewer #2: Yes********** 3. 
Have the authors made all data underlying the findings in their manuscript fully available?PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data\u2014e.g. participant privacy or use of data from a third party\u2014those must be specified. The Reviewer #1:\u00a0YesReviewer #2:\u00a0Yes********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.Reviewer #1:\u00a0YesReviewer #2:\u00a0No********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. Reviewer #1:\u00a0The authors compared the predictive performance of two types of hypertension risk prediction models: those developed using traditional regression-based and those using machine learning approaches. They searched the MEDLINE, EMBASE, Web of Science, Scopus, and the grey literature for studies predicting the risk of hypertension among the general adult population. 
They used the C-statistic, and a random-effects meta-analysis was used to obtain pooled estimates from the individual studies. The potential sources of heterogeneity were assessed using meta-regression, and study quality was assessed using the PROBAST (Prediction model Risk Of Bias ASsessment Tool) checklist. They selected 52 articles for systematic review and 32 for meta-analysis out of the 14,778 citations that they retrieved. They observed modest and similar overall pooled C-statistics of 0.75 [0.73–0.77] for the traditional regression-based models and 0.76 [0.72–0.79] for the machine learning-based models. There was high heterogeneity in the C-statistic in both methods. The age (p = 0.011) and sex (p = 0.044) of the participants and the number of risk factors considered in the model (p = 0.001) were identified as sources of heterogeneity in traditional regression-based models. The authors concluded that only a few models were externally validated, that the risk of bias and applicability was a concern in many studies, that many models with acceptable-to-good predictive performance were identified, that overall discrimination was similar between models derived from traditional regression analysis and machine learning methods, and that external validation and implementation of the hypertension risk prediction model in clinical practice are required.The authors may wish to consider the following.1. Selecting a small number of studies may have led to biased conclusions.2. The variability in the duration of follow-up time (1.6 years to 30 years), the age of the participants (15 to 90 years), and the hypertension definitions used (SBP ≥ 140 mm Hg, DBP ≥ 90 mm Hg, or SBP ≥ 130 mm Hg, DBP ≥ 80 mm Hg, and/or use of antihypertensive medication) may have led to biased conclusions.3. In addition, the variability in the geographic region, time, or gender of the study participants may have led to biased conclusions.4. 
The authors may wish to expand the limitations section of the Discussion on page 18 to include items 1, 2 and 3 above.5. Would the authors agree to include the last sentence of the manuscript "we attempted to provide a comprehensive evaluation of hypertension risk prediction models" in the Abstract?Reviewer #2: My review is attached as a document for ease of reading, but I also include it here:Review: Chowdhury et al "Prediction of hypertension using traditional regression and machine learning models: A systematic review and meta-analysis"OverviewIn this paper Chowdhury et al provide a systematic review and meta-analysis comparing prediction models for the development of hypertension in the general population derived using traditional regression-based and machine learning approaches.Meta-analysis was only possible for measures of discrimination. Overall the pooled c-statistics on meta-analysis are similar and of moderate-good performance between traditional regression-based and machine learning derived models. High heterogeneity was found, with sources identified for traditional regression-based models through meta-regression. Only one model has been extensively externally validated (the Framingham Hypertension risk model) but it showed significant heterogeneity in meta-analysis. Performance of risk models for hypertension has only been appropriately checked in Asian and Caucasian populations, and clinical implementation has not been assessed.Overall impressionI would like to congratulate the authors on an extremely thorough and methodologically sound systematic review and meta-analysis. My main concerns relate to the structure and writing of the discussion section, and the presentation of the table.Major issues• The aims of the study are clearly delineated in the introduction (points 1-4). However I do not feel the structure of the discussion follows these aims or highlights the most salient findings of the analysis. 
Furthermore, in my opinion, the discussion section is too long. It would be better presented as: major findings of the study (3-4 points); discussion of previous literature and how this differs; future areas for research / gaps in knowledge; limitations; final conclusion (see https://academic.oup.com/europace/article/22/5/684/5721485).• The presentation of table 1 is extremely difficult to follow. The presentation of so many columns means that some of the entries for each study take up an entire page. It would be better to break this up into at least 2/3 tables, e.g. between study population characteristics, model development characteristics/performance, and variables used in the model; and all these tables do not need to be in the main file.• There are wide prediction intervals suggesting significant heterogeneity. Have you considered a Bayesian approach for meta-analysis? Frequentist methods can produce prediction intervals with poor coverage when there is a mixture of study sizes. If published, this will include your full peer review and any attached files.If you choose "no", your identity will remain anonymous but your review may still be made public.Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: Yes: John B. Kostis. Reviewer #2: No.https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. 
While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool. Please note that Supporting Information files do not need this step.Attachment: Review.docx (submitted filename). 8 Mar 2022Response to journal requirements and reviewers' commentsJournal Requirements:When submitting your revision, we need you to address these additional requirements.1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdfRESPONSE: Thank you. We have revised our manuscript accordingly.2. Thank you for stating the following financial disclosure:"The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."At this time, please address the following queries:a) Please clarify the sources of funding for your study. List the grants or organizations that supported your study, including funding received from your institution.b) State what role the funders took in the study. If the funders had no role in your study, please state: "The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."c) If any authors received a salary from any of your funders, please state which authors and which funders.d) If you did not receive any funding for this study, please state: "The authors received no specific funding for this work."Please include your amended statements within your cover letter; we will change the online submission form on your behalf.RESPONSE: Thank you. None of the authors received any funding for this study. 
We have now stated, \u201cThe authors received no specific funding for this work\u201d in our revised manuscript and in the cover letter.http://journals.plos.org/plosone/s/data-availability.3. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized.Upon re-submitting your revised manuscript, please upload your study\u2019s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access.Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: We will update your Data Availability statement to reflect the information you provide in your cover letter.RESPONSE: Thank you. 
Since our study is a systematic review and we did not use any primary data in our analysis, we have now revised our data availability statement as follows: "All relevant data are within the manuscript and its Supporting information files". We have included this statement in our revised manuscript and in the cover letter.4. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.RESPONSE: Thank you. Yes, we would like to make changes to our Data Availability statement. Since our study is a systematic review and we did not use any primary data in our analysis, we have now revised our data availability statement as follows: "All relevant data are within the manuscript and its Supporting information files". We have included this statement in our revised manuscript and in the cover letter.http://journals.plos.org/plosone/s/supporting-information.5. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: RESPONSE: Thank you. We have now included captions for Supporting Information files at the end of our manuscript.Reviewer #1: COMMENT. 
The authors compared the predictive performance of two types of hypertension risk prediction models: those developed using traditional regression-based and those using machine learning approaches. They searched the MEDLINE, EMBASE, Web of Science, Scopus, and the grey literature for studies predicting the risk of hypertension among the general adult population. They used the C-statistic, and a random-effects meta-analysis was used to obtain pooled estimates from the individual studies The potential sources of heterogeneity was assessed using meta-regression, and study quality was assessed using the PROBAST (Prediction model Risk Of Bias ASsessment Tool) checklist. They selected 52 articles for systematic review and 32 for meta-analysis out of the 14,778 citations that they retrieved. They observed modest and similar overall pooled C-statistics of 0.75 [0.73 \u2013 0.77] for the traditional regression-based models and 0.76 [0.72 \u2013 0.79] for the machine learning-based models. There was high heterogeneity in the C-statistic in both methods. The age (p = 0.011), and sex (p = 0.044) of the participants and the number of risk factors considered in the model (p = 0.001) were identified as sources of heterogeneity in traditional regression-based models. The authors concluded that only a few models were externally validated, that the risk of bias and applicability was a concern in many studies that many models with acceptable-to-good predictive performance were identified that overall discrimination was similar between models derived from traditional regression analysis and machine learning methods and that external validation and of the hypertension risk prediction model in clinical practice are required.RESPONSE: Thank you so much for your excellent comment.COMMENT. The authors may wish to consider the following.1. Selecting a small number of studies may have led to biased conclusions.2. 
The variability in the duration of follow-up time (1.6 years to 30 years), the age of the participants (15 to 90 years), and the hypertension definitions used (SBP \u2265 140 mm Hg, DBP \u2265 90 mm Hg, or SBP \u2265 130 mm Hg, DBP \u2265 80 mm Hg, and/or use of antihypertensive medication) may have led to biased conclusions.3. In addition, the variability in the geographic region, time, or gender of the study participants may have led to biased conclusions.4. The authors may wish to expand the limitations section of the Discussion in page 18 to include items 1, 2 and 3 above.RESPONSE: Thank you so much for your excellent comments. We agree with the reviewer that items 1, 2, and 3 could be potential sources of bias. However, we would like to point out here that we considered most of those listed items as potential sources of heterogeneity in C-statistics in our analysis. For example, age, gender (sex), the definition of hypertension used (the cut-off level used to define hypertension as the reviewer indicated), and ethnicity (which reflected the influence of geographic region) were considered as the potential sources of heterogeneity in the C-statistics in our analysis. However, we acknowledge that variations in these items may lead to biased conclusions in study findings, and we have included these as limitations in our revised manuscript. The following lines were added to the revised manuscript: \u201cFinally, despite our attempt to capture potential sources of heterogeneity in our study, we ask readers to be cautious while interpreting our findings, as there may be a potential bias in our findings due to a limited number of studies included in the analysis and the study's failure to incorporate additional potential sources of bias in the analysis.\u201dPlease see Page 18, lines 461-464 in the revised manuscript.COMMENT. 5. 
Would the authors agree to include the last sentence of the manuscript \u201cwe attempted to provide a comprehensive evaluation of hypertension risk prediction models\u201d in the Abstract?RESPONSE: Thank you. We have included this sentence in the abstract.Please see Page 3, lines 73-74 in the revised manuscript.Reviewer #2: My review is attached as a document for ease of reading, but I also include it here:Review: Chowdhury et al \u201cPrediction of hypertension using traditional regression and machine learning models: A systematic review and meta-analysis\u201dCOMMENT. OverviewIn this paper Chowdhury et al provide a systematic review and meta-analysis comparing prediction models for the development of hypertension in the general population derived using traditional regression-based and machine learning approaches.Meta-analysis was only possible for measures of discrimination. Overall, the pooled c-statistics on meta-analysis are similar and of moderate-to-good performance between traditional regression-based and machine learning derived models. High heterogeneity was found, with sources identified for traditional regression-based models through meta-regression. Only one model has been extensively externally validated (the Framingham Hypertension risk model), but it showed significant heterogeneity in meta-analysis. Performance of risk models for hypertension has only been appropriately checked in Asian and Caucasian populations, and clinical implementation has not been assessed.Overall impressionI would like to congratulate the authors on an extremely thorough and methodologically sound systematic review and meta-analysis. My main concerns relate to the structure and writing of the discussion section, and the presentation of the table.RESPONSE: Thank you so much for your comments and suggestions.COMMENT. Major issues\u2022 The aims of the study are clearly delineated in the introduction (points 1-4). 
However, I do not feel the structure of the discussion follows these aims or highlights the most salient findings of the analysis. Furthermore, in my opinion the discussion section is too long. It would be better presented:o Major findings of the study (3-4 points)o Discussion of previous literature and how this differso Future areas for research / gaps in knowledgeo Limitationso Final conclusionRESPONSE: Thank you so much for taking the time to make such an insightful observation. It is true that the discussion portion is overly lengthy, as stated by the reviewer. However, we would like to point out that our objective was to provide a full explanation of the existing hypertension risk prediction models, which we have done. Our search identified 117 models, which is a very large number, and addressing the primary conclusions of these models took up a significant amount of space in the discussion section. We hope that offering a full discussion will assist readers in understanding the salient characteristics of the models that have been found.We appreciate your suggestions for the layout of the discussion section, and we have made every effort to present the discussion in the suggested manner. In addition, we have reduced the length of the discussion by deleting redundant content whenever possible, as indicated by the reviewer. Please see the revised discussion section.Please see Pages 15-19, lines 374-474 in the revised manuscript.https://academic.oup.com/europace/article/22/5/684/5721485)COMMENT. \u2022 The presentation of table 1 is extremely difficult to follow. The presentation of so many columns means that some of the entries for each study take up an entire page. It would be better to break this up into at least 2/3 tables, e.g. between study population characteristics, model development characteristics/performance, and variables used in the model; and all these tables do not need to be in the main file COMMENT. 
\u2022 There are wide prediction intervals suggesting significant heterogeneity. Have you considered a Bayesian approach for meta-analysis? Frequentist methods can produce prediction intervals with poor coverage when there is a mixture of study sizes.RESPONSE: The people that the models were applied to did not have known hypertension at baseline. As would be expected, people with higher baseline blood pressure levels on the initial measurement were more likely to have sustained high blood pressure (or hypertension) long-term. While the predictor is highly correlated with the outcome, it is not synonymous with it.COMMENT. \u2022 Page 13 line 330 \u2013 please be more specific than \u2018basically\u2019RESPONSE: Thank you. We have changed the word now as suggested.Please see page 13, line 316 in the revised manuscript.COMMENT. \u2022 Page 15 line 388 \u2013 I believe it should be \u2018models\u2019RESPONSE: Thank you. We have changed the word now as suggested.Please see page 15, line 376 in the revised manuscript.COMMENT. \u2022 Page 17 line 446-447 does not make senseRESPONSE: Thank you. We have now removed the lines from the manuscript.Please see page 17, lines 420-423 in the revised manuscript.COMMENT. \u2022 Figure 1 \u2013 I believe the reasons for exclusion would be better ordered alphabetically or in descending number of records excluded.RESPONSE: Thank you. We have now changed Figure 1. The reasons for exclusion are now presented in descending order of the number of records excluded.Please see the revised figure 1.________________________________________6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.If you choose \u201cno\u201d, your identity will remain anonymous but your review may still be made public.Do you want your identity to be public for this peer review? 
For information about this choice, including consent withdrawal, please see our Privacy Policy.Reviewer #1: Yes: John B. KostisReviewer #2: NoWhile revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.AttachmentSubmitted filename: Response to Reviewers.DOCClick here for additional data file. 21 Mar 2022Prediction of hypertension using traditional regression and machine learning models: A systematic review and meta-analysisPONE-D-21-31564R1Dear Dr. Turin,We\u2019re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.Within one week, you\u2019ll receive an e-mail detailing the required amendments. When these have been addressed, you\u2019ll receive a formal acceptance letter and your manuscript will be scheduled for publication.To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.An invoice for payment will follow shortly after the formal acceptance. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. 
If they\u2019ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.Kind regards,Antonio Palaz\u00f3n-Bru, PhDAcademic EditorPLOS ONEAdditional Editor Comments:Reviewers' comments:Reviewer's Responses to Questions Comments to the Author 1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the \u201cComments to the Author\u201d section, enter your conflict of interest statement in the \u201cConfidential to Editor\u201d section, and submit your \"Accept\" recommendation.Reviewer #1:\u00a0All comments have been addressed********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1:\u00a0Yes********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1:\u00a0Yes********** 4. Have the authors made all data underlying the findings in their manuscript fully available?PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. 
For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data\u2014e.g. participant privacy or use of data from a third party\u2014those must be specified. Reviewer #1:\u00a0Yes********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.Reviewer #1:\u00a0Yes********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. Reviewer #1:\u00a0In my opinion this manuscript is suitable for publication in PLOS ONE. The choice of the topic is timely and appropriate and the methodology used is correct in my opinion.********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.If you choose \u201cno\u201d, your identity will remain anonymous but your review may still be made public.Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1:\u00a0Yes:\u00a0John B. Kostis 28 Mar 2022PONE-D-21-31564R1 Prediction of hypertension using traditional regression and machine learning models: A systematic review and meta-analysis Dear Dr. Turin:I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. 
If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staff on behalf of Dr. Antonio Palaz\u00f3n-Bru Academic EditorPLOS ONE"}
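The random-effects pooling of C-statistics described in the correspondence above can be sketched in a few lines. This is an illustrative implementation of the common DerSimonian-Laird estimator (the manuscript does not confirm which estimator was used, so treat the choice as an assumption), with hypothetical study inputs:

```python
import math

def dersimonian_laird(effects, variances):
    """Pool per-study effect estimates (e.g. C-statistics) under a
    random-effects model using the DerSimonian-Laird estimator.
    Returns (pooled estimate, (95% CI low, 95% CI high), tau^2)."""
    w = [1.0 / v for v in variances]          # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    # Cochran's Q and the method-of-moments between-study variance tau^2
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights fold tau^2 into each study's variance
    wr = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(wr, effects)) / sum(wr)
    se = math.sqrt(1.0 / sum(wr))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2

# Hypothetical C-statistics and within-study variances from five studies
cs = [0.74, 0.77, 0.72, 0.78, 0.75]
vs = [0.0004, 0.0009, 0.0006, 0.0012, 0.0005]
pooled, (ci_lo, ci_hi), tau2 = dersimonian_laird(cs, vs)
```

When between-study heterogeneity is high, as reported above, tau^2 dominates the weights and the pooled confidence interval widens accordingly.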
+{"text": "The sex differences identified in the COVID-19 pandemic are necessary to study. It is essential to investigate the efficacy of the drugs in clinical trials for the treatment of COVID-19, and to analyse the sex-related beneficial and adverse effects. The histone deacetylase inhibitor valproic acid (VPA) is a potential drug that could be adapted to prevent the progression and complications of SARS-CoV-2 infection. VPA has a history of research in the treatment of various viral infections. This article reviews the preclinical data, showing that the pharmacological impact of VPA may apply to COVID-19 pathogenetic mechanisms. VPA inhibits SARS-CoV-2 virus entry, suppresses the pro-inflammatory immune cell and cytokine response to infection, and reduces inflammatory tissue and organ damage by mechanisms that may appear to be sex-related. The antithrombotic, antiplatelet, anti-inflammatory, immunomodulatory, and serum glucose- and testosterone-lowering effects of VPA suggest that the drug could be promising for therapy of COVID-19. Sex-related differences in the efficacy of VPA treatment may be significant in developing a personalised treatment strategy for COVID-19. The \u03b2 coronavirus pandemic, named severe acute respiratory syndrome coronavirus 2 infection (COVID-19), has led to calls to identify effective drugs to treat the disease.Most COVID-19 patients have a mild to moderate condition, while some have progressed to a critical condition. SARS-CoV-2 virus mainly affects the lungs, causing respiratory failure and secondary hypoxemia in one-fifth of hospitalised patients ,5. The meta-analysis of published global cases shows that men and women are at equal risk of SARS-CoV-2 infection. Men have a higher risk of severe COVID-19 and are three times more likely to be treated in an intensive care unit. The median viral RNA content of nasopharyngeal swabs and saliva was higher in men than women. 
The novel bioinformatic approach includes a wider range of clinically approved drugs so that more possibilities are allowed for them to be repurposed against COVID-19. Research of sex-specific features may lead to a new approach to COVID-19 treatment. This review aims to evaluate VPA as a potential medicine for the treatment of COVID-19, to elucidate the possible biological sex-related mechanisms of its pharmacology, to review VPA as a potential drug to prevent the progression of COVID-19, and to provide personalised treatment of the disease.VPA is completely absorbed; bioavailability is \u226580%. The G protein system (MRP) is involved in the intracellular transport of VPA; drug transport via G protein substrates is higher in females than in males of experimental animals and humans ,47. The SARS-CoV-2 main protease (Mpro, 3CLpro) is recognised as a target for antiviral drugs, designed as virus 3CLpro inhibitors, for COVID-19 therapy. The docking and binding energy calculations determine that the VPA metabolite 4-ene-VPA-CoA creates a stable interaction with nsP12 of SARS-CoV-2 RNA polymerase, and VPA-CoA could specifically inhibit the target. SARS-CoV-2 RNA polymerase is an enzyme involved in viral RNA replication and the virus\u2019s survival in a host ,52. The SARS-CoV-2 virus connects to the cell membrane-bound ACE2, a functional receptor for SARS-CoV-2, to mediate virus entry into human cells ,66. Co-expression of ACE2 and the transmembrane serine protease 2 (TMPRSS2) receptor is required for SARS-CoV-2 infection of cells. 
Activation of AT1 receptors would activate ADAM17 further. The ACE2 expression is transcriptionally suppressed due to AT1 activation. Due to the high viral infection load, membrane ACE2 and its mRNA expression are significantly diminished in COVID-19 patients ,94,95,96. VPA modulated the production of IL-1\u03b1, IL-1\u03b2, IL-1RA, IL-6, IL-7, IL-10, IL-12p40, IL-12p70, IFN-\u03b3 and TGF-\u03b1. VPA lowered immune cells\u2019 capacity to induce a pro-inflammatory response, which may offer new therapeutic options for managing septic shock. In the F4/80low macrophage subset, VPA repressed the CXCL1, IL-5, IL-6 and IL-10 and tissue NF-\u043aB2 p100 protein generation in males and females. As a multifunctional regulator of innate and adaptive immune cells, VPA reduces macrophage infiltration in various models of inflammation. VPA reduced IL-6 production in female BALB/c mouse bone marrow-derived macrophages. VPA drastically inhibited the multiplication of the enveloped viruses. While it did not affect infection by the non-enveloped viruses, VPA abolished West Nile virus RNA and protein synthesis, indicating that VPA can interfere with the viral cycle at different steps of enveloped virus infection. VPA reduced vesicular stomatitis virus infection. Mitochondrial functions range from supplying energy to the activation of anti-viral and anti-inflammatory mechanisms. 
VPA metabolites significantly decrease pyruvate-driven oxidative phosphorylation in mitochondria by conflicting with pyruvate transport, thus limiting mitochondrial energy production. Gonad hormones affect the immunological response, with the estrogens being both pro-inflammatory and anti-inflammatory ,159. The active site of Mpro is structurally similar to the active site of FXa and thrombin and can therefore activate coagulation; an inhibitor of Mpro is expected to inhibit these pathways. The pathophysiology of COVID-19 complications is characterised by clinical features of thrombosis and disseminated intravascular coagulopathy in the airways, myocardium, kidneys, brain and other organs. The VPA effects on thrombogenesis have been explored in pre-clinical studies and during the treatment of patients with VPA. SARS-CoV-2 activates the complement system, either directly or through an immune response. Activated complement promotes inflammation. Treatment with VPA in a rat thrombosis model reduced thrombus formation and did not increase bleeding tendency. Sex-related differences in the COVID-19 progression and complications rate suggest that sex biological factors are important in the pathogenesis of COVID-19. Identifying the association of sex-specific factors with associated differences in risk of COVID-19 unfavourable outcome is essential for the development of effective personalised treatment. Detailed knowledge of the mechanisms underlying the differences in immune response between women and men, which may also be related to the risk of thrombotic complications, should lead to new therapeutic strategies.In this review, we could not provide more detailed information on sex differences in the effects of VPA, as most of the studies involved animals only of one sex or even without specifying the sex of the animal or the cells. 
In some cases, patients or cells of different sex were combined without addressing sex differences. Regulatory guidelines for pharmaceutical research call for assessing the influence of sex on drug effectiveness, and state that the drug development should provide adequate information on the efficacy of drugs in relationship to sex ,233. Inflammation alters the ratio of histone acetyltransferases to HDACs, and in-vitro or in-vivo data suggest that HDAC inhibitors may be anti-inflammatory agents. Furthermore, pro-inflammatory immune cells derive most of their energy from aerobic glycolysis to generate more energy and maintain increased activity ,241. Data from experimental, epidemiological and clinical studies suggest that VPA has anti-platelet and anti-thrombotic effects. Clinical use of VPA in the treatment of epilepsy is associated with a lower risk of thrombosis, myocardial infarction and stroke ,250,251. VPA treatment decreases serum testosterone levels. The anti-inflammatory, anti-thrombotic, immunomodulatory, serum glucose-lowering and testosterone-lowering effects of VPA suggest that it may be a promising investigational medicinal product for the treatment of COVID-19. The pharmacological mechanisms of VPA suggest that VPA could be a drug for the prevention of COVID-19 progression. The sex-specific differences in the course of COVID-19 and the mechanisms of action of VPA point to the need for prospective, controlled clinical trials to assess the sex-specific efficacy of valproic acid preparations."}
+{"text": "The mitochondrial (mt) genomes of Siphluriscus chinensis (Ephemeroptera: Siphluriscidae) were evaluated in specimens collected from two sites in China: Niutou Mountain, Zhejiang Province (S. chinensis NTS) and Leigong Mountain, Guizhou Province (S. chinensis LGS), and were successfully sequenced. The lengths of the mt genomes of S. chinensis NTS and S. chinensis LGS were 15,904 bp (ON729390) and 15,212 bp (ON729391), respectively. However, an in-depth comparison of the two mt genomes showed significant differences between the specimens collected from the two sites. A detailed analysis of the genetic distance between S. chinensis NTS and S. chinensis LGS was undertaken to further achieve an accurate delimitation of S. chinensis. The genetic distance between S. chinensis NTS and the other three species within Siphluriscidae was a high value, above 12.2%. The two mt genomes were used to reconstruct phylogenetic relationships and estimate divergence time. The results demonstrated robust differences between S. chinensis NTS and S. chinensis LGS, which revealed that a cryptic species existed. Maximum likelihood (ML) and Bayesian inference (BI) analyses produced well-supported phylogenetic trees that showed the evolutionary relationships within Siphluriscidae (((S. chinensis HQ875717 + S. chinensis MF352165) + S. chinensis LGS) + S. chinensis NTS). The most recent common ancestor (MRCA) of the four species within Siphluriscidae began to diversify during the Neogene, and S. chinensis NTS was the first to diverge from the branches of S. chinensis LGS. In short, based on mitochondrial genomes, our results showed that the specimens collected from Leigong Mountain, Guizhou Province (S. chinensis LGS) belonged to S. chinensis, and the specimens collected from Niutou Mountain, Zhejiang Province (S. chinensis NTS) were a cryptic species of S. 
chinensis.Siphluriscus chinensis was first described from specimens collected from Guangdong Province in China, and the genus Siphluriscus was established and classified into Siphlonuridae. Our divergence time estimation indicated that the MRCA of S. chinensis (HQ875717) and S. chinensis (MF352165) began to diversify at 0.30 Mya [95% HPD = 0.10\u20130.64 Mya].This analysis estimated the divergence time among 43 Ephemeroptera species using four fossil calibration points based on the given tree topology. In S. chinensis HQ875717, S. chinensis MF352165 and S. chinensis LGS, nine PCGs used complete stop codons and four PCGs used incomplete stop codons. However, only three PCGs used incomplete stop codons in S. chinensis NTS. In both invertebrate and vertebrate mt genomes, incomplete stop codons in PCGs are a common phenomenon ,86,87,88. Among the thirteen PCGs, S. chinensis HQ875717, S. chinensis MF352165 and S. chinensis LGS differed slightly, whereas S. chinensis NTS showed a significant difference compared to the three other species, and S. chinensis NTS had average RSCU values of less than one (RSCU < 1). Among the 22 tRNA genes in the mt genomes of S. chinensis NTS and S. chinensis LGS, mismatches occurred in the acceptor stem of trnI in S. chinensis NTS, which were not present in S. chinensis LGS.In order to assess the phylogenetic relationships within Ephemeroptera, we performed analyses using the 13 PCGs dataset. The clade ((S. chinensis HQ875717 + S. chinensis MF352165) + S. chinensis LGS) was the sister clade to S. chinensis NTS. We realized that S. chinensis NTS was distantly related to the above three species and had a distant phylogenetic placement within Siphluriscidae. In this study, the divergence time of Siphluriscidae was suggested to occur during the Jurassic period based on fossil and mt genome sequence data, thus supporting the conclusion that S. chinensis HQ875717, S. 
chinensis MF352165 and S. chinensis LGS were the same species.In this study, ML and BI analyses produced well-supported phylogenetic trees. The pairwise genetic distance within groups ranged down to 0.3% (S. chinensis LGS\u2013S. chinensis HQ875717) and (S. chinensis MF352165\u2013S. chinensis HQ875717). Except for the pairwise genetic distances within the group S. chinensis HQ875717, S. chinensis MF352165 and S. chinensis LGS, the distances of the other groups relative to S. chinensis NTS were above the 7% of regular insect reports. Williams et al. found that the genetic distance of Baetis rhodani in different geographic locations was 8\u201319%, and then judged that some populations were cryptic species. All three samples from Leigong Mountain belong to the same species, whereas the samples from Niutou Mountain (ON729391) belong to another species. Therefore, our study suggested that S. chinensis NTS was a cryptic species of S. chinensis. We examined S. chinensis NTS and S. chinensis LGS within Siphluriscidae, and we provided species delimitation of the S. chinensis complex based on a combination of genetic characteristics and genetic distance in the mt genome, phylogenetic relationship and divergence time. In combination with the collection sites, S. chinensis HQ875717, S. chinensis MF352165 and S. chinensis LGS were all collected from Guizhou Province, China, while S. chinensis NTS was collected from Zhejiang Province, China. The genetic distance between S. chinensis NTS and the other three species reached over 12.2%, which was higher than the 0.3% among S. chinensis HQ875717, S. chinensis MF352165 and S. chinensis LGS. BI and ML analyses indicated that S. chinensis NTS first separated from S. chinensis HQ875717, S. chinensis MF352165 and S. chinensis LGS at 11.80 Mya. Accordingly, it is highly probable that S. chinensis NTS was a cryptic species of S. 
chinensis, and the mt genome can be used as one of the effective molecular markers in the identification of cryptic species.Based on molecular analyses, a cryptic species belonging to Siphluriscidae was recognized. In this study, we successfully determined two newly sequenced mt genomes of S. chinensis."}
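The pairwise genetic distances used above for species delimitation (0.3% within the Guizhou group versus more than 12.2% against S. chinensis NTS) are proportions of differing sites between aligned sequences. A minimal sketch of an uncorrected p-distance (the study's exact distance model is not stated in this excerpt, so this is an assumed, simplest variant), using toy sequences rather than the actual mt genome data:

```python
def p_distance(seq1, seq2):
    """Uncorrected pairwise genetic distance between two aligned sequences:
    the fraction of compared sites that differ. Sites where either sequence
    has a gap or ambiguity code are skipped."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to equal length")
    diffs = compared = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a in "ACGT" and b in "ACGT":
            compared += 1
            if a != b:
                diffs += 1
    return diffs / compared

# Toy aligned fragments (hypothetical, not the S. chinensis sequences):
# one mismatch over ten comparable sites
d = p_distance("ACGTACGTAC", "ACGTTCGTAC")
```

Distances reported as 0.3% or 12.2% correspond to values of 0.003 and 0.122 from such a computation, which the text compares against the ~7% level cited for insect cryptic species.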
+{"text": "This reaction involves an interesting double catalytic cycle in which copper-catalyzed carboamination cyclization is favored to form the C-3 radical pyrrolidinoindoline intermediate; then, a copper-catalyzed radical alkoxylation reaction proceeds smoothly.We report a copper-catalyzed alkoxycyclization of tryptamine derivatives under O2 oxidation conditions.An oxazoline/copper-catalyzed cascade carboamination alkoxylation of substituted tryptamine under mild eco-friendly O2 oxidation conditions was reported.However, by utilizing a similar strategy, the direct synthesis of 3-alkoxyl pyrroloindolines remains less developed. In 2020, Zhong et al.9 reported the first example of alkoxycyclization of tryptamine derivatives using a molecular iodine catalyst with tert-butyl hydroperoxide as the oxidant. No other studies, such as those using transition-metal catalysts, have been described yet.As direct access to these complex products, the development of C3a-oxygenation/cyclization reactions of tryptamine or tryptophan derivatives has attracted extensive interest from synthetic chemists. Recently, some remarkable efforts have contributed to the one-step assembly of 3-hydroxyl pyrroloindolines. Cu(ii)-promoted radical intramolecular carboamination of alkene has proven to be an effective means toward the synthesis of N-fused heterocycles.10 Recent reports have utilized this strategy toward the cyclization and radical alkylation, aromatization and aminooxygenation of alkene.10 However, due to the difficulty in homolytic breakage of the oxygen\u2013hydrogen bond in alcohols with a high bond dissociation energy,11 the related direct cyclization and radical alkoxylation of carbon\u2013carbon double bonds with copper catalysts is still unknown. 
Inspired by the relevant research of copper-catalyzed radical alkoxylation reactions,12 we assumed that if the catalytic carboamination and radical alkoxylation tandem reaction could be realized by a single copper catalyst, it would represent a new effective protocol for the direct construction of alkoxyl-containing N-fused heterocycles. Herein, we report an oxazoline/copper-catalyzed cascade carboamination alkoxylation of substituted tryptamine under mild eco-friendly O2 oxidation conditions, which facilitates the construction of the 3-alkoxyl pyrroloindoline motif in good yield with good to excellent levels of diastereoselectivity. The reaction in 4 mL MeOH at 50 \u00b0C was selected as the optimal conditions. In our studies, N-methyl substituted tryptamines that contain either a methyl or ethyl group at different positions of the indole ring proceeded smoothly to furnish the desired products 2a\u2013e in good to excellent yields with moderate to excellent diastereoselectivities. Notably, the N-Bn and N-PMB substituted tryptamines were also suitable substrates for this reaction; the corresponding products 2f and 2o were obtained in 70% and 79% yields with >20/1 dr. N-Bn substituted substrates that contain different functional groups at different positions of the indole ring tolerated the reaction conditions well, affording the desired products 2g\u2013m in good to excellent yields with high diastereoselectivities. Furthermore, the use of other alcohols, for instance, ethanol, n-butanol, sec-butyl alcohol or benzyl alcohol, allowed the cyclic alkoxylation reaction to proceed smoothly (2p\u2013s). However, when the sterically bulky tert-butyl alcohol was employed under the optimal conditions, no trace amount of the desired product was observed. 
The applicability of this protocol was further demonstrated by the short, rapid construction of the bioactive natural product CPC-1 in a total yield of 54% with 4/1 dr from starting material 1u. In the proposed mechanism, the copper species reacts with substrate 1a to form the chelation intermediate A. Subsequent intramolecular nitrogen addition\u2013cyclization forms the C3a Cu(ii) pyrrolidinoindoline intermediate B. Then, homolytic cleavage of the carbon\u2013Cu(ii) bond generates the Cu(i) species and the C3a radical intermediate C. The C3a radical can be oxidized by the Cu(ii) species to generate the C3a cation intermediate D, and subsequent nucleophilic attack of the alcohol delivers the product 2a. Meanwhile, the Cu(ii) complex is regenerated in situ through the reaction of the Ln\u2013Cu(i) complex with O2, on the basis of previous reports,14 completing the catalytic cycle. In conclusion, we have successfully developed a copper-catalyzed alkoxycyclization of tryptamines under mild O2 oxidation conditions, affording C3a-alkoxylated pyrrolidinoindolines in good yields with high diastereoselectivities. This protocol proved practicable and useful in the rapid, concise total synthesis of the natural product CPC-1. Mechanistic studies illustrated that the copper-catalyzed carboamination cyclization is favored to form the C-3 radical pyrrolidinoindoline intermediate, after which a copper-catalyzed radical alkoxylation delivers the desired product. The extension of the present catalytic protocol to other useful reactions and the biological evaluation of these products are ongoing in our laboratory.There are no conflicts to declare.RA-011-D1RA02679H-s001"}
+{"text": "The local and global order in dense packings of linear, semi-flexible polymers of tangent hard spheres are studied by employing extensive Monte Carlo simulations at increasing volume fractions. The chain stiffness is controlled by a tunable harmonic potential for the bending angle, whose intensity dictates the rigidity of the polymer backbone as a function of the bending constant and equilibrium angle. The studied angles range between acute and obtuse ones, reaching the limit of rod-like polymers. We analyze how the packing density and chain stiffness affect the chains\u2019 ability to self-organize at the local and global levels. The former corresponds to crystallinity, as quantified by the Characteristic Crystallographic Element (CCE) norm descriptor, while the latter is computed through the scalar orientational order parameter. In all cases, we identify the critical volume fraction for the phase transition and gauge the established crystal morphologies, developing a complete phase diagram as a function of packing density and equilibrium bending angle. A plethora of structures is obtained, ranging between random hexagonal close-packed morphologies of mixed character and almost perfect face-centered cubic (FCC) and hexagonal close-packed (HCP) crystals at the level of monomers, and nematic mesophases, with prolate and oblate mesogens, at the level of chains. For rod-like chains, a delay is observed between the establishment of the long-range nematic order and crystallization as a function of the packing density, while for right-angle chains, both transitions are synchronized. A comparison is also provided against the analogous packings of monomeric and fully flexible chains of hard spheres. Over the last few decades, developments in the synthesis of novel polymers and the fabrication of polymer-based materials have turned them into key components of our daily lives. 
The research is ever-growing in the pursuit of polymer-based materials with enhanced properties, as chain connectivity endows macromolecules with unique properties compared to monoatomic systems [2]. In parallel, many aspects of the phase behavior and self-organization of general atomic and particulate systems remain unknown or poorly understood, and the analysis becomes a great deal harder when macromolecular systems are tackled. Advances in experimental, theoretical, and simulation methods continually enrich our fundamental knowledge of the phenomenon in a wide range of physical systems [9,10,11]. The hard-sphere model is frequently used in the study of crystallization due to its simplicity and athermal nature, despite the obvious disadvantage of lacking chemical detail. Through molecular dynamics (MD) simulations, early and pioneering works reported the entropy-driven crystallization of systems of hard spheres once a critical packing density is reached [28,31,33,34,38,39,40,43,44,45,48,49,50,52]. The mechanism of the crystallization of hard colloidal polymers has likewise been studied extensively [58,59,60,81,86,87,88]. Studies of semi-flexible polymers are fewer [90,91,92]. Although rod-like chains have been extensively researched in the literature, the corresponding body of work on semi-flexible chains with different bending angles is very limited. Recent MD simulations, studying the jamming and solidification of semi-flexible polymers employing the freely-rotating model and varying the fixed bending angle, have revealed different solidification mechanisms depending on the equilibrium bending angle, \u03b80 [126]. 
The present manuscript analyzes the phase behavior of linear, semi-flexible chains of tangent hard spheres as a function of the equilibrium bending angle and packing density. This is achieved through extensive Monte Carlo (MC) simulations using the Simu-D simulator-descriptor [132,133]. The local order, at the level of the monomers, is gauged through the Characteristic Crystallographic Element (CCE) norm descriptor. Polymers are modeled as linear chains of identical hard spherical monomers with a collision diameter \u03c3, which is considered the characteristic length. The systems are composed of N_at monomers distributed in N_ch chains of average chain length N_av (in monomers). All of the systems consist of N_ch = 100 chains of average chain length N_av = 12, resulting in N_at = 1200. The chains present dispersity in their lengths, following a uniform distribution. In the pair-wise interactions, r_ij is the distance between the centers of monomers i and j. Periodic boundary conditions are applied in all dimensions, corresponding to a bulk, unconstrained system. The packing density (volume fraction) of the non-overlapping objects is \u03c6 = V_mon/V_cell, where V_mon is the total volume occupied by the monomers and V_cell is the volume of the cubic simulation cell. The non-overlapping condition of the hard monomers is imposed by employing the Hard Sphere (HS) potential to describe all of the non-bonded interactions between the monomers: according to the HS potential, the pair-wise energy is infinite when r_ij < \u03c3 and zero otherwise. The internal coordinates are the bond lengths, l, bending angles, \u03b8, and torsion angles; \u0394l is the maximum gap allowed between successive monomers of the chain, practically enforcing tangency between bonded monomers. The bending angle \u03b8 is the angle formed by two successive bond vectors, as implemented in the Simu-D simulator. 
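To make the definitions above concrete, the packing fraction \u03c6 = V_mon/V_cell and the HS non-overlap condition under periodic boundaries can be sketched as follows. This is an illustrative Python sketch under the stated model assumptions, not part of the Simu-D code; function and variable names are our own.

```python
import numpy as np

SIGMA = 1.0  # collision diameter of a monomer, the characteristic length

def packing_fraction(n_monomers, cell_edge, sigma=SIGMA):
    """phi = V_mon / V_cell for n identical hard spheres in a cubic cell."""
    v_mon = n_monomers * (np.pi / 6.0) * sigma**3
    return v_mon / cell_edge**3

def has_overlap(coords, cell_edge, sigma=SIGMA):
    """HS potential: the pair energy is infinite (overlap) if any pair
    distance is below sigma. The minimum-image convention implements the
    periodic boundary conditions of the bulk system."""
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            d = coords[i] - coords[j]
            d -= cell_edge * np.round(d / cell_edge)  # minimum image
            if np.linalg.norm(d) < sigma:
                return True
    return False
```

A cell edge of roughly 10.4\u03c3 would, for the N_at = 1200 monomers quoted above, correspond to a packing density near the dense-packing regime studied here.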
The MC simulations start from initial configurations generated at very dilute conditions. For each chain j, the gyration tensor is computed as S = (1/N(j)) \u03a3_i m_i (x_i \u2212 x_cm)(x_i \u2212 x_cm)^T, where N(j) is the number of monomers of the chain j, m_i is the mass of the monomer (considered here as unity), x_i is the coordinate vector of the monomer, and x_cm is the coordinate vector of the centre of mass of the chain. The long axis of each chain corresponds to the eigenvector of the largest eigenvalue of the gyration tensor, so that the unit vector u is calculated by normalizing this eigenvector. The chain orientational order is determined through averages of a second-order invariant. As an average of all the molecular orientations, we define the second-order tensor Q = (1/N_ch) \u03a3_j (3 u_j u_j^T \u2212 \u03b4)/2, where \u03b4 is the second-order isotropic tensor. The Q tensor is: (1) symmetric, and (2) traceless (Tr Q = 0). These two properties reduce the independent components of the second-order tensor from 9 to 5. Q can be calculated for the different ideal phases. In the prolate mesogen, nematic mesophase, the long axis of the molecules tends to be aligned with the nematic director n; in the oblate mesogen, nematic mesophase, denoted here as \u201cOBL\u201d, it is the short axis of the molecules that tends to be aligned with the nematic director. In order to identify the similarities with these ideal cases, we diagonalize Q and use its normalized eigenvectors as unit vectors of a new Cartesian coordinate system in which Q\u2019 is diagonal and its diagonal elements are its eigenvalues, ordered by decreasing absolute value. The eigenvector of the largest absolute eigenvalue \u03bb_1, in the case of a nematic mesophase (denoted here as \u201cNEM\u201d), corresponds to the preferred orientation of the system, the nematic director n. 
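The extraction of the chain long axis from the gyration tensor can be sketched as follows (an illustrative Python sketch with unit monomer masses, as in the text; not the Simu-D implementation):

```python
import numpy as np

def chain_long_axis(coords):
    """Gyration tensor S = (1/N) sum_i (x_i - x_cm)(x_i - x_cm)^T for a
    chain of N monomers of unit mass. The chain's long axis is the unit
    eigenvector belonging to the largest eigenvalue of S."""
    r = coords - coords.mean(axis=0)      # shift to the centre of mass
    S = r.T @ r / len(coords)             # 3x3 gyration tensor
    vals, vecs = np.linalg.eigh(S)        # eigenvalues in ascending order
    u = vecs[:, -1]                       # eigenvector of largest eigenvalue
    return u / np.linalg.norm(u)          # unit vector u
```

For a perfectly straight 12-mer the returned axis coincides (up to sign) with the chain direction.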
A scalar order parameter q is obtained by comparing the diagonal tensor Q\u2019 with the corresponding tensor Q_PRO of a perfect prolate mesogen, nematic mesophase; for the perfectly aligned system q \u2192 1, while in an isotropic system, where Q is null, q \u2192 0. This scalar q represents the degree of alignment between the chains. Accordingly, the long-range orientational order is characterized by the scalar orientational order parameter q and the nematic director n. The metric used in this work to identify the disorder-order transition at the local level and quantify the degree of crystallinity of the simulated systems is the Characteristic Crystallographic Element (CCE) norm, which is explained in detail in Refs. [132,133]. The CCE norm descriptor gauges the crystallinity of an atomic or particulate system in two (2D) or three (3D) dimensions by comparing the local environment around each site with a set of ideal reference crystals, under the main concept that each ideal crystal is uniquely identified by a set of symmetry operations [139,140]. For each site i of a system, the CCE norm descriptor identifies the nearest neighbors and quantifies the orientational and radial deviations of the \u201creal\u201d local environment with respect to the \u201cideal\u201d environment of each reference crystal. This comparison provides, for the given site i, a CCE norm value with respect to each reference crystal X; the closer the X-CCE norm is to zero, the higher the similarity of the local environment to the respective reference crystal X. Site i is identified as an X-type crystal when the calculated CCE norm is lower than a critical threshold. The process explained before is repeated over all sites of the system for each reference crystal. Once the CCE norm has been evaluated for every site and reference crystal, an order parameter and a degree of crystallinity are computed for each reference crystal; in the present systems, the HEX and BCC order parameters practically vanish (S_HEX, S_BCC \u2192 0). 
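The construction of the Q tensor and the scalar order parameter q described above can be sketched as follows (an illustrative Python sketch; q \u2192 1 for a perfect prolate nematic, q \u2192 0 for an isotropic system, and the eigenvector of the largest-magnitude eigenvalue is the director n):

```python
import numpy as np

def nematic_order(axes):
    """Q = (1/N_ch) sum_j (3 u_j u_j^T - I)/2 over the chain long axes u_j.
    Q is symmetric and traceless; after diagonalization, the eigenvalue of
    largest absolute value is the scalar order parameter q and its
    eigenvector is the nematic director n. For a perfect oblate (OBL)
    mesophase q is negative (-1/2)."""
    u = np.asarray(axes, dtype=float)
    u /= np.linalg.norm(u, axis=1, keepdims=True)          # unit vectors
    Q = 1.5 * np.einsum('ja,jb->ab', u, u) / len(u) - 0.5 * np.eye(3)
    vals, vecs = np.linalg.eigh(Q)
    k = np.argmax(np.abs(vals))
    return vals[k], vecs[:, k]                             # (q, director n)
```

For chains all aligned with z this returns q = 1 with n along z, while three mutually orthogonal axes give Q = 0 and hence q = 0.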
In the present work, no appreciable population of sites with HEX or BCC similarity was detected; thus, in the continuation, we consider only the HCP and FCC crystals, as well as the fivefold local symmetry (FIV). Prior to the simulation data being analyzed in a post-processing step, a preliminary visual inspection of the initial and resulting system configurations is performed. For rod-like athermal chains (N_av = 12), the transition from the isotropic to the nematic phase, ISO \u2192 PRO, occurs at significantly lower packing densities than the establishment of crystal order. In addition to this expected isotropic-to-nematic (ISO \u2192 PRO) transition exhibited by the rod-like chains, a nematic mesophase with oblate mesogens, OBL, appears for semi-flexible chains with specific equilibrium bending angles. Crystallization enforces specific bending angles which are compatible with these crystals; if such bending angles are not available through intra-chain arrangements, then the only other option is inter-chain ones, thus inducing partial alignment among the chains at a local level. For the remaining values of the equilibrium bending angles, each chain system remains in the isotropic phase over the whole concentration range; still, a small peak is produced at very high packing densities. The order parameter q is also monitored as a function of the MC steps for the packing densities where a certain degree of nematic order was observed, with q approaching the ideal value of the corresponding mesophase. The simulated systems are further characterized through the population of X-type sites, where X is the reference crystal or local symmetry; given that no appreciable population of BCC or HEX sites is detected in any of the simulated systems, X corresponds to HCP, FCC, and FIV. 
In the continuation, and throughout the manuscript, the corresponding colors used for the representation of the HCP, FCC, and FIV sites (in snapshots) and curves (in figures) are blue, red, and green, respectively. The results from the semi-flexible systems are also compared with the ones of fully flexible (freely-jointed) chains of tangent hard spheres, simulated and analyzed through the Simu-D software under the same conditions of volume fraction and average chain length [130,131]. The local structure is gauged through the CCE norm descriptor [132,133]. Once crystallites of HCP and FCC character start growing, at almost identical rates, the fraction of the FIV-like sites decreases gradually until it practically disappears. The structural competition, observed here for all of the semi-flexible chain systems, between the FIV local symmetry and crystallization in the form of the HCP and FCC sites is in perfect match with identical observations in simulations of fully flexible chains [58,59,60]. The formation of RHCP crystals of mixed HCP/FCC structures, with unique or multiple stacking directions, is also observed for all of the equilibrium bending angles and packing densities studied where crystallization takes place. Although almost all of the semi-flexible systems crystallize in RHCP crystals of mixed HCP/FCC layers, two important exceptions exist for the rod-like chains. The phase behavior at the local and global levels as a function of the density for rod-like and right-angle chains is characterized by a pair of indices, the first referring to the local and the second to the global structure. Accordingly, for the rod-like chains studied here, the local structure is gauged through the CCE norm, while the global structure is gauged through the nematic order parameter. 
A rich one-dimensional phase diagram as a function of packing density is identified where chains crystallize in close-packed morphologies, including random hexagonal close-packed (RHCP) ones of single or multiple stacking directions, or in almost perfect HCP and FCC crystals in the case of rod-like chains. The analysis of the long-range orientational tensor reveals the formation of a prolate mesogen, nematic mesophase (PRO) for rod-like chains at rather low volume fractions and of an oblate mesogen, nematic mesophase (OBL) at high packing densities. Although all of the systems of semi-flexible chains crystallize, the equilibrium bending angle significantly affects the melting point. While equilibrium angles of 108\u00b0 and 120\u00b0 favor crystallization compared to the freely-jointed model, chains with 90\u00b0 show a behavior that almost coincides with that of the fully flexible chains, and the acute angle of 60\u00b0 hinders crystallization, enforcing nucleation and growth to take place at higher concentrations. For the rod-like chains, successive regimes are traversed with increasing density, including AMO-ISO, where the packing is amorphous locally and isotropic globally; AMO-PRO (\u03c6 \u2264 0.45), where the packing is amorphous locally and nematic globally; and CRY-PRO, where local crystallization coexists with the prolate nematic order. The present simulations are currently expanded to treat semi-flexible chains of tangent hard spheres in composites with nanofillers, under confinement, and in mixtures with different species in the form of linear chains and monomeric counterparts."}
+{"text": "Vascular risk factors may influence cognitive function and thus represent possible targets for preventive approaches against dementia. Yet it remains unknown if they associate with cognition independently of the individual genetic risk for dementia. In a population-based study of 1172 community-dwelling individuals aged \u226565 years in Greece, we constructed a vascular burden score (VBS) and a polygenic risk score (PRS) for clinically-diagnosed Alzheimer's disease (AD) based on 23 genetic variants. We then explored in joint models the associations of the PRS for AD and the VBS with global cognitive performance, cognitive performance across multiple cognitive domains, and odds of dementia. The mean age of study participants was 73.9\u00a0\u00b1\u00a05.2 years. Both the PRS for AD and the VBS were associated with worse global cognitive performance, worse performance across individual cognitive domains, and higher odds of dementia. There was no evidence of an interaction between the two scores. A higher VBS was associated with worse cognitive performance equally across tertiles of the PRS for AD, even among individuals in the highest tertile. Both genetic risk and vascular burden are independently and additively associated with worse cognitive performance and higher odds of dementia. Dementia is a devastating clinical diagnosis posing a substantial burden on patients, their proxies, and public healthcare systems [2]. Yet, moving towards personalized preventive approaches would require considering the individual background risk for dementia. Recent large-scale meta-analyses of genome-wide association studies (GWAS) have provided important insights with regard to genetic factors increasing the risk of Alzheimer's disease, which underlies 70% of all dementia cases [13]. 
Here, to address this issue, we use data from a population-based study of 1431 community-dwelling individuals in Greece to explore (i) whether a PRS for Alzheimer's disease is associated with cognitive performance and odds of dementia, (ii) whether a score representing the burden of vascular risk factors and vascular diseases is associated with cognitive performance and odds of dementia, (iii) whether vascular burden and genetic risk for dementia are jointly associated with cognitive performance and odds of dementia, and (iv) whether vascular burden associates with cognitive performance even among individuals with a high genetic risk for Alzheimer's disease. Participants for the current study were drawn from the Hellenic Longitudinal Investigation of Aging and Diet (HELIAD) cohort. HELIAD is a population-based, multidisciplinary, collaborative study in Greece. Details about the study design and methodology are described elsewhere. In structured, standardized, face-to-face intensive interviews, study participants provided information regarding their medical history, including previous or current diseases, neurological conditions, neuropsychiatric symptoms, current medications, hospitalizations, surgeries, and injuries. Medical records of previous diagnoses, physician visits, or hospitalizations were inspected for all participants. Additionally, an extensive structured and standardized physical examination was conducted, evaluating neurological signs and symptoms. Structured questionnaires were used in order to gather information about participants\u2019 functioning, social, mental and physical activities, as well as sleep and dietary habits. Sociodemographic information and information about tobacco use was also collected. 
Height and weight were measured and body mass index (BMI) was calculated. The evaluation of cognitive function was performed by neuropsychologists through a comprehensive neuropsychological assessment of all major cognitive domains, including orientation (MMSE), memory, executive function, attention, language, and visuospatial ability. Participants\u2019 raw scores on each cognitive test were converted into z-scores using mean and SD values derived from the subset of cognitively normal study participants (no mild cognitive impairment or dementia). Subsequently, these individual neuropsychological test scores were used to produce an average domain composite z-score for memory, executive function, attention, language, and visuospatial ability; the domain z-scores were then averaged in order to calculate a global neuropsychological z-score. Presence of dementia was also a secondary outcome. Diagnoses of dementia were made according to the DSM-IV-TR criteria. A score reflecting the burden of vascular risk factors and vascular disease (VBS) was constructed in accordance with previous studies [38]. Genome-wide genotyping was performed at the facilities of the \u201cCentre national de recherche en g\u00e9n\u00e9tique humaine\u201d using the Illumina Infinium Global Screening Array, as detailed elsewhere. Variants failing quality-control tests at p\u00a0<\u00a010\u221210 were excluded. Hardy-Weinberg equilibrium tests (p\u00a0<\u00a05\u00a0\u00d7 10\u22128) were performed only in controls and for each genotyping center/country separately. Variants with a \u03c72 >3000 against both the HRC and the gnomAD reference panels, or a \u03c72 >3000 in one reference panel without being present in the other, were excluded. Finally, GWAS analyses were performed between controls across genotyping centers to assess frequency differences between genotyping centers, using the software SNPTEST; variants with p\u00a0<\u00a010\u22125 were excluded. Furthermore, we removed ambiguous variants with minor allele frequency (MAF) > 0.4 and we kept only one copy of any duplicated variants, prioritizing the one with the lowest missingness. 
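The z-score standardization and composite construction described above can be sketched numerically as follows (an illustrative Python sketch; the scores and groupings are hypothetical examples, not HELIAD data):

```python
import numpy as np

def to_z(raw, normal_mean, normal_sd):
    """Convert raw test scores into z-scores using the mean and SD derived
    from the cognitively normal subset (no MCI or dementia)."""
    return (np.asarray(raw, dtype=float) - normal_mean) / normal_sd

def composite(z_scores_by_test):
    """Average the z-scores of several tests into one composite z-score
    per participant (used for domain composites, and again for the
    global neuropsychological z-score)."""
    return np.mean(np.column_stack(z_scores_by_test), axis=1)
```

For example, raw scores of 30 and 20 on a test with a normative mean of 25 and SD of 5 standardize to +1 and \u22121, and averaging two tests' z-score vectors yields the per-participant composite.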
All samples and variants passing the above QC metrics were imputed in the Michigan Imputation Server (v1.2.4). To improve the accuracy of imputation, we compared the frequencies of variants (chi-square test) against two reference panels, the population of the Haplotype Reference Consortium r1.1 (HRC) and gnomAD. Imputed dosages for a total of 5,611,082 SNPs with MAF >0.05, call rate >95%, and imputation quality score >0.4 were converted to best-guess genotypes for PRS computation. The PRSice software (http://prsice.info/) was utilized to construct PRSs for each individual, applying the clumping and thresholding (C+T) method (Supplementary Table 1). We also explored the same associations with odds of dementia in logistic regression models including the same variables. We then explored interactions between the VBS and the PRS for Alzheimer's disease. For the continuous outcome of cognitive performance, we included the product of the two variables in a linear model and used its coefficient as a measure of additive interaction. For the binary outcome of dementia, we included the interaction term of the two variables in a logistic regression model and used its coefficient as a measure of multiplicative interaction. To assess the interaction on the additive scale, we calculated the relative excess risk due to interaction (RERI); confidence intervals for RERI were calculated using the delta method. We also split the sample into three tertiles depending on participants\u2019 PRS for Alzheimer's disease and explored associations of the VBS with global and domain-specific cognitive performance across the tertiles. All analyses were performed using R. There were considerable differences with regard to cognitive performance across the three tertiles of the PRS for Alzheimer's disease (Supplementary Fig. 1), with individuals with a higher PRS performing lower across cognitive domains. 
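The additive-scale interaction measure mentioned above can be sketched from the logistic-model coefficients (an illustrative Python sketch with hypothetical coefficient values; ORs are used here as approximations of risk ratios, as is standard for RERI from logistic regression):

```python
import math

def reri(b_vbs, b_prs, b_interaction):
    """Relative excess risk due to interaction:
    RERI = OR11 - OR10 - OR01 + 1,
    with OR10 = exp(b_vbs), OR01 = exp(b_prs), and
    OR11 = exp(b_vbs + b_prs + b_interaction) for joint exposure.
    RERI = 0 indicates no interaction on the additive scale."""
    or10 = math.exp(b_vbs)
    or01 = math.exp(b_prs)
    or11 = math.exp(b_vbs + b_prs + b_interaction)
    return or11 - or10 - or01 + 1.0
```

Note that exact multiplicativity (no interaction term) still produces a positive RERI whenever both exposures raise risk, which is why the additive and multiplicative scales are assessed separately.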
The study participants across the three tertiles of the PRS for Alzheimer's disease did not differ with regard to demographic characteristics or individual vascular risk factors (Table 1). There was also no evidence of significant deviation from the additive scale in the associations with the odds of dementia. Indeed, in multivariable analyses, the VBS showed consistent associations with worse global cognitive performance across all tertiles of the PRS for Alzheimer's disease, and similar associations were observed across all cognitive domains. Participants scoring higher in the VBS showed worse global cognitive performance consistently across the three tertiles of the PRS for Alzheimer's disease. In this population-based study of 1172 community-dwelling individuals in Greece, both a higher VBS and a higher PRS for Alzheimer's disease were additively and independently of each other associated with worse global cognitive performance, worse domain-specific cognitive performance, and higher odds of dementia. Across tertiles of the PRS for Alzheimer's disease, a higher VBS was equally associated with worse global cognitive performance. Even among individuals at the highest genetic risk for dementia, the VBS was still significantly associated with worse cognitive performance. These data provide additional support to the notion of focusing on the modification of vascular risk factors and prevention of vascular disease in order to decrease the rates of dementia [51], as also suggested by post hoc analyses of clinical trials of vascular risk factor modification, such as blood pressure lowering. Our findings confirm and extend previous studies showing that modifiable risk factors can contribute to the risk of dementia independently of the individual genetic risk score for dementia. 
A study of 196,383 individuals in the population-based UK Biobank showed that a healthy lifestyle profile, as indicated by no current smoking, regular physical activity, healthy diet, and moderate alcohol consumption, was associated with a lower risk of dementia even among individuals at the highest genetic risk quantile for dementia. Several strengths of the current study should be noted. First, this is a population-based study representative of the general elderly population of community-dwelling individuals. Second, the extensive cognitive testing allowed for multiple layers of analyses with regard to cognitive performance. Third, a highly structured approach was followed for determining diagnoses of dementia, involving a consensus meeting of neurologists and neuropsychologists on the basis of a detailed cognitive and functional assessment and inspection of all medical records of the participants. This study has several limitations. First, the prevalence of dementia was relatively low in the overall sample (4%), probably reflecting an underrepresentation of individuals at high risk for dementia in the examined population. This may influence the external validity of our findings or add collider bias to the reported associations if the genetic score or the vascular risk factors also influenced participation in the study. Second, our analyses represent cross-sectional associations and cannot be used to establish causal associations between vascular burden and cognition, as reverse causation cannot be excluded. Third, unlike the genetic risk score for dementia, individuals were not randomly assigned to the vascular burden score, and its associations with cognition could be biased by confounding. Fourth, this study is restricted to individuals of European ancestry and the results may thus not be generalizable to other populations. 
Fifth, several of our analyses have been limited by a relatively small sample size, leading to uncertainty in many of the estimates, especially those referring to odds of dementia. Sixth, despite a rigorous evaluation of all individuals by neurologists and examination of pharmacotherapy and other background medical records (where available), information for the vascular burden score was to a large extent based on self-report. In conclusion, in a community-based sample, vascular risk and genetic risk associate with cognitive performance and odds of dementia additively and independently of each other. Even among individuals at high genetic risk for dementia, a low vascular burden is associated with better cognitive performance. Whether targeting vascular risk factors could offset a high genetic risk score for dementia should be further explored in future studies. This work has been supported by the following grants: IIRG-09\u2013133,014 from the Alzheimer's Association, 189 10,276/8/9/2011 from the NSRF-EU program Excellence Grant (ARISTEIA), which is co-funded by the European Social Fund and Greek National resources, and \u0394\u03a52\u03b2/\u03bf\u03b9\u03ba.51657/14.4.2009 from the Ministry for Health and Social Solidarity (Greece). MG acknowledges support in the form of a Walter-Benjamin fellowship from DFG (GZ: GE 3461/1\u20131) and from the F\u00f6FoLe program of LMU Munich (Reg.-Nr. 1120).Nothing to report."}
+{"text": "Background: As a fibrotic disease with a high incidence, the pathogenesis of hypertrophic scarring is still not fully understood, and the treatment of this disease is also challenging. In recent years, human adipose-derived mesenchymal stem cells (AD-MSCs) have been considered an effective treatment for hypertrophic scars. This study mainly explored whether the therapeutic effect of AD-MSCs on hypertrophic scars is associated with oxidative-stress-related proteins. Methods: AD-MSCs were isolated from adipose tissues and characterized through flow cytometry and a differentiation test. Afterwards, coculture, cell proliferation, apoptosis, and migration were detected. Western blotting and a quantitative real-time polymerase chain reaction (qRT\u2013PCR) were used to detect oxidative stress-related genes and protein expression in hypertrophic scar fibroblasts (HSFs). Flow cytometry was used to detect reactive oxygen species (ROS). A nude mouse animal model was established; the effect of AD-MSCs on hypertrophic scars was observed; and hematoxylin and eosin staining, Masson\u2019s staining, and immunofluorescence staining were performed. Furthermore, the content of oxidative-stress-related proteins, including nuclear factor erythroid-2-related factor 2 (Nrf2), heme oxygenase 1 (HO-1), B-cell lymphoma 2(Bcl2), Bcl2-associated X(BAX) and caspase 3, was detected. Results: Our results showed that AD-MSCs inhibited HSFs\u2019 proliferation and migration and promoted apoptosis. Moreover, after coculture, the expression of antioxidant enzymes, including HO-1, in HSFs decreased; the content of reactive oxygen species increased; and the expression of Nrf2 decreased significantly. In animal experiments, we found that, at 14 days after injection of AD-MSCs into human hypertrophic scar tissue blocks that were transplanted onto the dorsum of nude mice, the weight of the tissue blocks decreased significantly. 
Hematoxylin and eosin staining and Masson\u2019s staining demonstrated a rearrangement of collagen fibers. We also found that Nrf2 and antioxidant enzymes decreased significantly, while apoptotic cells increased after AD-MSC treatment. Conclusions: Our results demonstrated that AD-MSCs efficiently cured hypertrophic scars by promoting the apoptosis of HSFs and by inhibiting their proliferation and migration, which may be related to the inhibition of Nrf2 expression in HSFs, suggesting that AD-MSCs may provide an alternative therapeutic approach for the treatment of hypertrophic scars. Hypertrophic scarring is a prevalent fibroproliferative disease in plastic surgery, and its incidence can be as high as 70% in burn patients . However2\u2212), hydrogen peroxide (H2O2), hydroxyl (OH\u2212), etc., which have a particularly destructive effect on lipids, proteins, and nucleic acids [Helmut proposedic acids ,9. Nucleic acids ,11,12,13ic acids . Oxidatiic acids ,15,16. Iic acids found hiic acids demonstric acids identifiAdipose-derived mesenchymal stem cells (AD-MSCs) have shown prominence in the field of regenerative medicine because they are easier to obtain and have a wide variety of sources. AD-MSCs are currently considered potential therapeutic strategies for several diseases, particularly hypertrophic scars. An increasing number of studies have shown that AD-MSCs have a significant therapeutic effect on hypertrophic scars ,21; howeNotably, our study found that AD-MSCs could affect HSFs by promoting their apoptosis and inhibiting their proliferation, thus exerting their antifibrotic effects through in vivo and in vitro studies. In addition, we suggested that AD-MSCs caused apoptosis by downregulating the expression of Nrf2 in HSFs, leading to a reduced antioxidant enzyme expression and the accumulation of ROS.2). The medium was changed every two days. 
Cells in passage four were used in this experiment. Human adipose tissue was obtained from patients who had undergone lipoplasty, and all enrolled patients signed the informed consent form to indicate their agreement and consent to the use of their adipose tissue in this study. The liposuction sites were bilateral thighs and buttocks. The adipose tissue was rinsed three times with phosphate-buffered saline, and then 0.1% type I collagenase was used to digest the adipose tissue for 50 min. Then, the stromal vascular fraction was filtered through a 70 \u03bcm porous filter after centrifugation at 2000 rpm for 10 min. After that, AD-MSCs were resuspended in Dulbecco\u2019s modified Eagle medium: F-12 containing 10% fetal bovine serum and 1% penicillin\u2013streptomycin at 37 \u00b0C in an incubator with 5% carbon dioxide. The cell suspension was incubated with fluorescein isothiocyanate (FITC)-conjugated antibodies against CD90 and CD34; phycoerythrin (PE)-conjugated antibodies against CD105, CD31, and CD45; and Brilliant Violet 421 (BV421)-conjugated antibodies against CD73 at 4 \u00b0C for 30 min in the dark. After washing twice, the cells were resuspended in 2% BSA and detected with a FACSCalibur instrument. Data were analyzed using FlowJo software. 
AD-MSCs were inoculated in six-well plates at a cell density of 2 \u00d7 10^5/well until the confluence of cells reached 80%. Adipogenic differentiation was performed using basic medium A containing 10% FBS, 1% penicillin\u2013streptomycin, 1% glutamine, 0.2% insulin, 0.1% 3-isobutyl-1-methyl xanthine, 0.1% rosiglitazone, and 0.1% dexamethasone for 3 days and basic medium B containing 10% FBS, 1% penicillin\u2013streptomycin, 1% glutamine, and 0.2% insulin for 1 day, alternating 4 times. Osteogenic differentiation was performed using basic medium containing 10% FBS, 1% penicillin\u2013streptomycin, 1% glutamine, 0.2% ascorbate, 1% \u03b2-glycerophosphate, and 0.01% dexamethasone for 3 weeks. At the end of induction, 4% paraformaldehyde was used to immobilize the cells for 30 min, and Oil Red O and Alizarin Red S dye solutions were used to assess adipogenic and osteogenic differentiation according to the manufacturer\u2019s instructions, respectively. The cells were observed under a microscope after staining. For chondrogenic differentiation, AD-MSCs were harvested and resuspended in a centrifuge tube at a cell density of 4 \u00d7 10^5/tube. The medium contained 0.3% ascorbate, 0.01% dexamethasone, 1% insulin ferro-selenium transporter supplement, 0.1% sodium pyruvate, 0.1% proline, and 1% transforming growth factor-\u03b23. The cells were cultured at 37 \u00b0C in 5% CO2 for 21 days. After induction, 4% paraformaldehyde was used to immobilize the cartilage balls for 30 min at room temperature, and Alcian Blue staining was used to assess chondrogenic differentiation according to the manufacturer\u2019s instructions. The sections were examined under the microscope. Hypertrophic scar samples were obtained from patients who had undergone plastic surgery, and all enrolled patients signed the informed consent form to indicate their agreement and consent to the use of their hypertrophic scar tissue in this study. Finally, a total of 8 samples were collected, including five males and three females with an average age of 35.0 \u00b1 11.7 years. 
Dermal tissues were washed three times with PBS and then minced into pieces (~1 mm). Pieces were explanted to Dulbecco\u2019s Modified Eagle Medium containing 10% FBS and 1% penicillin\u2013streptomycin and were incubated at 37 \u00b0C in 5% CO2. The medium was changed every two days. After 7\u201310 days in the primary culture, the cells proliferated at the edge of the explanted tissues, at which time we removed the explanted tissues. When the confluence of the cells reached 80%, the cells were passaged. Cells in passage four were used in this experiment. HSFs were resuspended in DMEM/F-12 and inoculated into the lower chamber of a Transwell coculture plate. AD-MSCs were resuspended in DMEM/F-12 and inoculated into the upper chamber of the plate. Only the same amount of medium was added to the upper chamber in the control group. The number of inoculated AD-MSCs was adjusted to ensure that the final AD-MSC to HSF ratio was 0:1, 0.5:1, 1:1, or 2:1, with the 0:1 group serving as the control group. A 6.5 mm Transwell with 0.4 \u00b5m sterile pore polycarbonate membrane insert was used in the Cell Counting Kit-8 (CCK-8) trial and immunofluorescence staining. A 24 mm Transwell with 0.4 \u00b5m sterile pore polycarbonate membrane insert was used in the scratch assay and other experiments. In one set of Transwell experiments, the number of HSFs in the lower chamber was 2 \u00d7 10^5, and the number of AD-MSCs in the upper chamber was 10^5, 2 \u00d7 10^5, and 4 \u00d7 10^5 for the 0.5:1, 1:1, and 2:1 groups, respectively. In the other set, the number of HSFs in the lower chamber was 5 \u00d7 10^4, and the number of AD-MSCs in the upper chamber was 2.5 \u00d7 10^4, 5 \u00d7 10^4, and 10^5 for the 0.5:1, 1:1, and 2:1 groups, respectively. Cell proliferation was determined using Cell Counting Kit-8. 
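As a quick sanity check on the seeding numbers in the coculture paragraph above, the AD-MSC count for each ratio follows directly from the HSF count. A minimal sketch (the function name is only illustrative, not from the study):

```python
# Sketch: derive AD-MSC seeding numbers in the upper chamber from the
# HSF count in the lower chamber and the AD-MSC:HSF ratios used
# in the coculture experiments (0.5:1, 1:1, 2:1).

def admsc_counts(hsf_count, ratios=(0.5, 1.0, 2.0)):
    """Return the AD-MSC number required per coculture ratio."""
    return {r: int(hsf_count * r) for r in ratios}

# 2 x 10^5 HSFs -> 1 x 10^5, 2 x 10^5, 4 x 10^5 AD-MSCs
print(admsc_counts(200_000))  # {0.5: 100000, 1.0: 200000, 2.0: 400000}
# 5 x 10^4 HSFs -> 2.5 x 10^4, 5 x 10^4, 1 x 10^5 AD-MSCs
print(admsc_counts(50_000))   # {0.5: 25000, 1.0: 50000, 2.0: 100000}
```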
HSFs were collected and resuspended in DMEM/F-12 after culture with AD-MSCs for 24 h, 48 h, and 72 h, and the HSFs in each well were then divided equally into three portions and were inoculated in 96-well plates. Once the cells had attached to the dish entirely, 100 \u03bcL of DMEM/F-12 medium containing 10% CCK-8 solution was added to each well. The optical density at 450 nm was measured with a multiwell plate reader. The migration property was evaluated through a scratch assay. HSFs were cultured in a six-well plate, and, when 70% confluence was reached, the cells were scratched with a pipet tip through the well bottom center. Additionally, AD-MSCs were added to the upper chamber of the plate as previously described. Images were taken using a microscope every 24 h. ImageJ software was used to measure the area and length of the scratches to calculate the average width of the scratches. The migration rate of the scratches was calculated as follows: migration rate (%) = (W0 \u2212 Wt)/W0 \u00d7 100%, where W0 is the original width and Wt is the remaining width at the measured time point. Cell cycle was determined through flow cytometry using a Cell Cycle Assay Kit according to the manufacturer\u2019s instructions. Briefly, HSFs were harvested and fixed with 70% cold ethanol overnight at 4 \u00b0C. On the second day, propidium iodide (PI) solution was added for DNA staining for 30 min at 37 \u00b0C, and the cells were detected with a FACSCalibur instrument. Data were analyzed using FlowJo software. Cell apoptosis was detected with an Annexin V-FITC PI Apoptosis Kit. After coculture for 24 h to 48 h, the HSFs were collected and resuspended in a flow tube with 1 \u00d7 Annexin V Binding Buffer, and Annexin V-FITC and PI were added according to the instructions. After incubation for 15 min, the cells were detected with a FACSCalibur instrument. 
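The scratch-assay migration-rate formula above can be expressed as a short sketch (the widths are hypothetical example values, not data from this study):

```python
def migration_rate(w0, wt):
    """Scratch-assay migration rate (%): (W0 - Wt) / W0 * 100,
    where W0 is the original scratch width and Wt is the remaining
    width at the measured time point."""
    if w0 <= 0:
        raise ValueError("original width must be positive")
    return (w0 - wt) / w0 * 100.0

# Example: a scratch of width 800 (arbitrary units) narrowing to 300
print(migration_rate(800, 300))  # 62.5
```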
Data were analyzed using FlowJo software. Intracellular ROS were measured through flow cytometry using a ROS Assay Kit with 2,7-dichlorofluorescein diacetate (DCFH-DA) as a fluorescent probe. After 48 h of coculture, HSFs were washed three times with PBS and were incubated with DCFH-DA for 30 min at 37 \u00b0C in the dark according to the instructions. The labelled cells were washed with PBS three times and evaluated immediately with a FACSCalibur instrument. Data were analyzed using FlowJo software. A T-SOD Activity Assay Kit was used to detect the T-SOD activity of HSFs after 48 h of coculture based on the autoxidation of hydroxylamine. All procedures were carried out according to the instructions, and the developed color was measured at 550 nm using a multiwell plate reader. Briefly, total RNA was extracted from HSFs after 48 h of coculture with AD-MSCs using a TRIzol Reagent Kit. The concentration and purity of the RNA were detected using a nanophotometer, and samples with an A260/A280 value between 1.8 and 2.0 were considered to be of high purity. The RNA was reverse transcribed into complementary DNA using the Prime Script RT Reagent Kit. Quantitative PCR was performed using the CFX96 Real-Time System and SYBR Premix Ex Taq II in a 12 \u03bcL PCR solution. Primers were obtained from Takara Biotechnology. The primer pairs used for gene amplification were as follows. Nrf2: forward GTATGCAACAGGACATTGAGCAAG and reverse TGGAACCATGGTAGTCTCAACCAG. Keap1: forward CATCGGCATCGCCAACTTC and reverse ACCAGTTGGCAGTGGGACAG. NQO1: forward GGATTGGACCGAGCTGGAA and reverse GAAACACCCAGCCGTCAGCTA. GAPDH: forward GCACCGTCAAGGCTGAGAAC and reverse TGGTGAAGACGCCAGTGGA. The results were normalized against the mean Ct values of GAPDH using the \u0394Ct method as follows: \u0394Ct = Ct (gene of interest) \u2212 mean Ct (GAPDH). The fold increase was calculated as 2^\u2212\u0394\u0394Ct. 
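The \u0394Ct normalization and 2^\u2212\u0394\u0394Ct fold-change calculation described above can be sketched as follows (all Ct values below are made-up examples, not data from this study):

```python
# Sketch of the 2^-ddCt relative-quantification calculation:
# dCt = Ct(gene) - mean Ct(GAPDH); ddCt = dCt(treated) - dCt(control);
# fold change = 2 ** (-ddCt).

def delta_ct(ct_gene, ct_gapdh_mean):
    return ct_gene - ct_gapdh_mean

def fold_change(dct_treated, dct_control):
    ddct = dct_treated - dct_control
    return 2.0 ** (-ddct)

# Hypothetical Nrf2 Ct values: treated 26.0 (GAPDH 18.0),
# control 24.0 (GAPDH 18.0)
dct_t = delta_ct(26.0, 18.0)      # 8.0
dct_c = delta_ct(24.0, 18.0)      # 6.0
print(fold_change(dct_t, dct_c))  # 0.25, i.e. a 4-fold reduction
```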
Radioimmunoprecipitation Assay (RIPA) Buffer was used to lyse cells for 30 min on ice. The cells were then centrifuged at 12,000 \u00d7 g at 4 \u00b0C to remove the cell debris. For tissue samples, after being minced into pieces, the tissue samples were immediately frozen in liquid nitrogen. Frozen tissues were collected in RIPA Assay Buffer with protease inhibitors and phosphatase inhibitors. Nuclear proteins were prepared with a Nuclear and Cytoplasmic Protein Extraction Kit. Samples (60 \u03bcg protein) were separated on 10% SDS\u2013PAGE gels; transferred to a polyvinylidene fluoride membrane; blocked with 5% nonfat dried milk in TBST; and incubated with primary antibodies, including Nrf2 mouse monoclonal antibody, HO-1 rabbit polyclonal antibody, Bcl2 rabbit polyclonal antibody, BAX rabbit polyclonal antibody, Caspase 3 rabbit polyclonal antibody, cleaved Caspase 3 rabbit monoclonal antibody, and \u03b2-actin mouse monoclonal antibody at 4 \u00b0C overnight. After washing with TBST, the membranes were incubated with horseradish peroxidase-conjugated goat anti-rabbit or anti-mouse secondary antibodies. The immunoreactive bands were developed using an ECL Kit and were detected with the Bio-Rad Molecular Imager Gel Doc XR+. Protein expression levels were quantified through densitometry analysis using ImageJ software. In this study, we used twelve female nude mice. Mice were obtained from SPF Biotechnology and maintained in the animal facility of the Fourth Medical Center of Chinese PLA General Hospital. Hypertrophic scar tissue sources are described above. All experimental procedures were performed following the regulations of the Institutional Animal Care and Use Committee. Triamcinolone acetonide group: 0.2 mL triamcinolone acetonide. For the AD-MSC group, cells were injected at four sites within the hypertrophic scar implant. The F12 group and the triamcinolone acetonide group underwent an equivalent procedure with DMEM. 
When the injection was successful, an elevation of the skin could be observed. Two weeks later, the injection was repeated. We collected tissues two weeks after each injection. After being weighed, the transplanted tissue collected was immediately stored in the \u221280 \u00b0C refrigerator for later use.Hypertrophic scar tissues were washed in PBS and divided into multiple small specimens (1.0 cm \u00d7 0.6 cm). After disinfection and anesthesia, four 1 cm incisions were made in the back skin of each mouse with scissors, and the scar tissues were implanted subcutaneously. The wound was sutured with 5\u20130 nylon thread and was left exposed. The transplanted scar tissues were stable after 4 weeks. Next, four hypertrophic scar tissue patches on the back of each mouse were treated four different ways. Blank group: no treatment. F12 group: 0.2 mL of DMEM/F-12. AD-MSC group: 0.2 mL of AD-MSCs for 10 min at room temperature. Nonspecific binding sites were blocked for 1 h with PBS containing 1% bovine serum albumin and 0.1% Tween 20. The fixed cells were incubated overnight at 4 \u00b0C with antibodies specific for Ki67 mouse monoclonal antibody and Nrf2 mouse monoclonal antibody . Specific labelling was visualized using secondary antibodies conjugated with either Alexa 488 or Alexa 647 . Nuclei were visualized through staining with DAPI . Images were acquired with a confocal microscope .Tissue samples of all groups were excised, fixed in 4% paraformaldehyde, embedded in paraffin, sectioned at 5 \u03bcm thickness, mounted on slides, and stained with hematoxylin and eosin following the instructions. Masson\u2019s staining was conducted using the ready-to-use kit (Trichrome Stain (Masson) Kit, HT15, Sigma\u2013Aldrich). Briefly, the tissue was cut into 5 \u03bcm sections. Sections were immersed in Bouin\u2019s solution , stained in Weigert\u2019s hematoxylin, incubated in phosphotungstic\u2013phosphomolybdic acid, dyed with Aniline Blue, and fixed in 1% acetic acid. 
Then, the slides were rinsed in distilled water, dehydrated, observed, and photographed under a microscope. Apoptosis was analyzed on paraffin hypertrophic scar tissue sections of the different groups with a TUNEL Assay Kit. The slides were treated with 20 \u03bcg/mL of DNase-free proteinase K at 37 \u00b0C for 30 min and were washed with PBS 3 times. Then, the slides were dyed using the TUNEL reaction solution prepared in a humid dark box for 1 h at 37 \u00b0C. After washing with PBS, the tissues were dyed using ProLong Diamond Antifade Mountant with DAPI. Images were acquired with the confocal microscope. The results are presented as average value \u00b1 standard deviation (SD). The data were analyzed using GraphPad Prism software 8.0. Student\u2019s t-test was used for analysis between two groups. Differences with a p-value of <0.05 were considered statistically significant. The cultured primary and passaged AD-MSCs exhibited a spindle-shaped, fibroblast-like morphology. We examined the multipotential differentiation capacity of AD-MSCs using adipogenic, osteogenic, and chondrogenic assays. As shown through Oil Red O staining, AD-MSCs developed an adipogenic phenotype after they were induced with the adipogenic medium for 21 days. The Alizarin Red stain showed obvious orange calcium deposits and calcified nodules after induction with the osteogenic medium for 21 days. We also cultured AD-MSCs with the chondrogenic medium for 3 weeks and stained them with Alcian Blue, and the acid mucopolysaccharides were stained blue in the cartilage globules. Because we ensured that each group was initially inoculated with an equal number of cells, the CCK-8 assay reflected the proliferation of cells. The results showed that the number of HSFs in the control group was sharply increased compared with the cocultured groups, indicating that significant vitality inhibition in HSFs was induced by AD-MSCs. 
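The two-group comparison described in the statistics paragraph above (mean \u00b1 SD with Student's t-test) can be sketched with a hand-rolled pooled-variance t statistic; all readings below are hypothetical, and equal variances are assumed as in the standard Student's test:

```python
import math
from statistics import mean, stdev

def students_t(a, b):
    """Two-sample Student's t statistic with pooled variance
    (degrees of freedom = len(a) + len(b) - 2)."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical OD450 readings: control vs cocultured HSFs
control = [1.10, 1.05, 1.15]
cocultured = [0.70, 0.65, 0.75]
print(students_t(control, cocultured))  # positive t: control proliferated more
```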
To examine whether AD-MSCs affect the migration capacity of HSFs, we scratched the confluent cells to create a linear wound, and the cells were then cocultured for 48 h. The wound margin was marked on the picture, and the wound healing rate was quantified every 24 h after scratching. The results showed that lower cell migration was observed in the presence of AD-MSCs compared with the control group. At 24 h, the migration rate was 61.48% \u00b1 7.42% in the control group, 36.61% \u00b1 1.46% in the 0.5:1 group, and 28.39% \u00b1 7.44% in the 1:1 group, whereas, in the 2:1 group, it was 24.72% \u00b1 1.68%. Similarly, at 48 h, the migration rate was 89.34% \u00b1 0.23% in the control group, 59.98% \u00b1 5.58% in the 0.5:1 group, and 45.47% \u00b1 3.49% in the 1:1 group, whereas, in the 2:1 group, it was 49.45% \u00b1 1.41%. At both time points, the coculture groups were statistically different from the control group. To investigate the biological effects of AD-MSCs on the apoptosis of HSF cells, we cocultured AD-MSCs with HSFs at different ratios for 48 h. The 0:1 group was used as the control group. Flow cytometry analysis was performed every 24 h after culturing. Additionally, the associated apoptotic proteins Bcl-2, Bax, Caspase 3, and cleaved Caspase 3 in HSFs were detected through Western blot. When compared with the control group, BAX and cleaved Caspase 3 levels were increased, and Bcl-2 levels were decreased in the cocultured groups. Intracellular ROS accumulation is a marker of oxidative stress, which can lead to apoptosis. Flow cytometry analysis was performed to detect intracellular ROS. Results were expressed as a multiple of the control group. The relative accumulation of ROS in HSFs after coculture with AD-MSCs for 48 h was increased compared with that in the control group (0.5:1: 2.21 \u00b1 0.27; 1:1: 3.59 \u00b1 0.48; 2:1: 3.70 \u00b1 0.58, fold over control). 
Among the three cocultured groups, the ROS content in the 0.5:1 group was lower than that in the other two groups, and there was no statistical significance between the 1:1 group and the 2:1 group. To clarify whether the effect of AD-MSCs on HSFs was ROS-dependent, we added tempol, an ROS scavenger, and repeated the proliferation, migration, and apoptosis assays. KEAP1/Nrf2 has been proven to regulate the gene expression of many antioxidant enzymes. SOD is capable of scavenging intracellular oxygen free radicals via the dismutation of superoxide radicals. We detected T-SOD activity in HSFs after the groups were cocultured with AD-MSCs for 48 h using a T-SOD Activity Assay Kit. The developed color was measured at 550 nm using an enzyme immunoassay analyzer, and the results were expressed as a multiple of the control group. To further verify the therapeutic effect of AD-MSCs on hypertrophic scars, we established a model of hypertrophic scar transplantation into nude mice. First, no significant Ki67-positive cells were observed in any of the four groups. We also detected the expression of Nrf2 and HO-1 in the transplanted hypertrophic scar tissues after treatment to identify changes in the Nrf2 and antioxidant enzymes in vivo. As expected, the changes in these substances were consistent with those seen in vitro. The incidence of hypertrophic scars is particularly high, especially after burns and trauma. For patients, scar contracture, deformity, dysfunction, and accompanying itching are the biggest obstacles that hinder their return to society and their buildup of confidence in life. Although researchers have explored the pathogenesis of hypertrophic scars for decades, it remains unclear. 
It has been confirmed that many factors in the process of wound healing may be involved, such as the PI3K/AKT signaling pathway and the TGF-\u03b2/Smad signaling pathway, which regulates the process of fibrosis. Oxidative stress may affect the process of fibrosis. There are many treatments for hypertrophic scars, including surgical resection, lasers, and cryotherapy, but all of these treatments have side effects that need to be solved. We successfully extracted and cultured human AD-MSCs according to previous studies. HSFs were extracted from hypertrophic scar tissues. According to the published literature, although HSFs have the same spindle-shaped morphology as normal fibroblasts, their gene expression and biological behavior are different; they produce more ECM and proliferate faster. The indirect coculture of two types of cells using a Transwell is a method accepted by researchers. Transwell plates with 0.4 \u00b5m pore polycarbonate membranes were used in our study, in which cells could not pass through the micropores. We assessed the cell viability, surface markers, and differentiation potential of AD-MSCs after coculturing for 48 h. In our study, after the groups were cocultured at different concentrations, AD-MSCs significantly inhibited the proliferation and migration of HSFs. Previous studies have shown that AD-MSCs induce apoptosis and inhibit proliferation in KFs through paracrine effects. It is well known that an increased intracellular accumulation of ROS can lead to apoptosis. To figure out whether the effect of AD-MSCs on HSFs is ROS-dependent, we repeated the proliferation, migration, and apoptosis assays after adding tempol, a stable and effective ROS scavenger. It was clear that tempol corrected the apoptosis-promoting effect of AD-MSCs on HSFs, indicating that the effect was ROS-dependent. 
In contrast, tempol did not affect the inhibitory effect of AD-MSCs on the proliferation and migration of HSFs, indicating that these effects were not ROS-dependent. To the best of our knowledge, AD-MSCs can inhibit the PI3K/AKT and MAPK signaling pathways in keloid-derived fibroblasts, which are closely related to cell proliferation and migration. We detected the levels of various antioxidant enzymes using different methods, including T-SOD, HO-1, and NQO1, and found that the levels of these enzymes decreased. To further verify the therapeutic effect of AD-MSCs on hypertrophic scars, we established a hypertrophic scar transplantation model based on the absence of thymic immunity in nude mice. The nude mouse is homozygous null for the Foxn1 gene, resulting in a lack of thymic epithelium and, as a consequence, a lack of T cells. Interestingly, in wound healing studies, researchers have shown that AD-MSCs promote the proliferation and migration and, what\u2019s more, inhibit the apoptosis of fibroblasts derived from normal skin tissue. In summary, we showed that AD-MSCs are effective in hypertrophic scar treatment due to the reduction of scar weight and the promotion of collagen fiber remodeling and rearrangement. AD-MSCs inhibited the proliferation and migration of HSFs and promoted apoptosis. This was because they inhibited Nrf2 expression, leading to a reduction in antioxidant enzymes and an accumulation of ROS. Our experiments revealed the therapeutic effects of AD-MSCs on hypertrophic scars and found that AD-MSCs inhibit the biological activity of HSFs. More importantly, AD-MSCs could downregulate the expression of Nrf2 in HSFs, resulting in a reduction in the expression of antioxidant enzymes and an accumulation of intracellular ROS, eventually activating the apoptosis program. We suspect that the downregulation of Nrf2 played an essential role in mediating AD-MSC-induced HSF apoptosis and antiproliferation effects. 
This pathway may act as a critical contributor to the mechanism of the antifibrotic effect of AD-MSCs."}
+{"text": "Junco hyemalis) is one of the most common passerines of North America, and has served as a model organism in studies related to ecophysiology, behavior, and evolutionary biology for over a century. It is composed of at least 6 distinct, geographically structured forms of recent evolutionary origin, presenting remarkable variation in phenotypic traits, migratory behavior, and habitat. Here, we report a high-quality genome assembly and annotation of the dark-eyed junco generated using a combination of shotgun libraries and proximity ligation Chicago and Dovetail Hi-C libraries. The final assembly is \u223c1.03\u2009Gb in size, with 98.3% of the sequence located in 30 full or nearly full chromosome scaffolds, and with a N50/L50 of 71.3\u2009Mb/5 scaffolds. We identified 19,026 functional genes combining gene prediction and similarity approaches, of which 15,967 were associated to GO terms. The genome assembly and the set of annotated genes yielded 95.4% and 96.2% completeness scores, respectively when compared with the BUSCO avian dataset. This new assembly for J. hyemalis provides a valuable resource for genome evolution analysis, and for identifying functional genes involved in adaptive processes and speciation.The dark-eyed junco is one of the most common passerines of North America, and has served as a model organism in different research disciplines for over a century. Here we report a high-quality genome assembly and annotation of the dark-eyed junco generated using a combination of shotgun and proximity ligation libraries. This new assembly for J. 
hyemalis provides a valuable resource for genome evolution analysis, and for identifying functional genes involved in adaptive processes and speciation.The dark-eyed junco (Junco hyemalis) is a common and widespread North American passerine that has been the subject of extensive research in multiple scientific disciplines for over 100\u2009years. First, the complex is composed of recently diversified yet phenotypically differentiated lineages, among which the signals of drift and selection at the molecular level are still recent and detectable. Second, the complex includes forms with broad geographic distributions encompassing heterogeneous habitats across ecological clines, but also spatially discontinuous habitats, so that selective and neutral processes of divergence can be assessed in different spatial settings. Third, dark-eyed juncos show large variability in the degree of geographical isolation among phenotypically differentiated forms, from extensive gene flow to total isolation, which, along with the first 2 points, makes them a suitable system for studying evolutionary processes related to dispersal, directional selection, and neutral evolution. Shotgun data were also used for the assembly of the mitochondrial genome. Here, we report a high-quality, chromosome-level assembly obtained using shotgun and proximity ligation libraries as a resource for genome-based studies on this system. Sequencing (https://dovetailgenomics.com, last accessed on April 10, 2022) was conducted at Dovetail Genomics, LLC. The sequenced sample consisted of muscle tissue obtained from a female J. hyemalis carolinensis, collected at Mountain Lake Biological Station in Pembroke, Virginia, USA, currently deposited at the Moore Laboratory of Zoology, Occidental College, Los Angeles, CA, USA (voucher number: MLZ: bird: 69236). Briefly, a de novo draft assembly was first built using shotgun, paired-end libraries (mean insert size \u223c350\u2009bp) and the Meraculous pipeline. 
We only considered as positives those hits covering at least 2/3 of the query sequence length or 80% of the total subject sequence. Gene prediction was conducted using BRAKER v2.1.5. We also used InterProScan v5.31. In addition, we used the zebra finch (Taeniopygia guttata) genome bTaeGut2.pri.v2 available at NCBI under the accession GCA_009859065.2. We assessed gene completeness in the genome assembly and the gene annotation using BUSCO v4.0.5. We sequenced and assembled a reference genome of the dark-eyed junco. The shotgun library produced 465 million read pairs (2\u2009\u00d7\u2009150\u2009bp). Chicago and Dovetail Hi-C libraries produced 218 million and 121 million read pairs (2\u2009\u00d7\u2009151\u2009bp), respectively. Overall, 121\u2009Gb were generated. Genome scaffolding with HiRise yielded an assembly of 4,684 scaffolds and 1.03\u2009Gb, with a sequence coverage of 117x; an L50/N50 equal to 5 scaffolds/71.3\u2009Mb and an L90/N90 of 19 scaffolds/14.1\u2009Mb; and a relatively low number of ambiguous bases (i.e. N) inserted in the genome (3.13%). Values were comparable to those of Taeniopygia guttata, while averaged lengths for these elements remained similar to other species annotations. The genome assembly, including the raw shotgun sequencing data and the Chicago and Hi-C libraries, has been deposited at NCBI under accession QZWM00000000.2; BioProject PRJNA493001; BioSample: SAMN10120167. The genome assembly, mitochondrial genome, genome annotation, and related supporting resources have been deposited at DRYAD. Supplementary material is available at G3 online."}
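The N50/L50 statistics reported for the junco assembly can be computed from a list of scaffold lengths. A minimal sketch with toy lengths (not the actual junco scaffolds):

```python
def n50_l50(lengths):
    """Return (N50, L50): N50 is the length of the scaffold at which the
    cumulative sum (largest first) reaches half of the total assembly
    size; L50 is the number of scaffolds needed to reach that point."""
    total = sum(lengths)
    cumulative = 0
    for i, length in enumerate(sorted(lengths, reverse=True), start=1):
        cumulative += length
        if cumulative * 2 >= total:
            return length, i

# Toy assembly: total = 100, half = 50, reached at the 2nd-largest scaffold
print(n50_l50([40, 25, 15, 10, 5, 5]))  # (25, 2)
```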
+{"text": "Excessively deposited fibrotic scar after spinal cord injury (SCI) inhibits axon regeneration. It has been reported that platelet-derived growth factor receptor beta (PDGFR\u03b2), as a marker of fibrotic scar-forming fibroblasts, can only be activated by platelet-derived growth factor (PDGF) B or PDGFD. However, whether the activation of the PDGFR\u03b2 pathway can mediate fibrotic scar formation after SCI remains unclear.A spinal cord compression injury mouse model was used. In situ injection of exogenous PDGFB or PDGFD in the spinal cord was used to specifically activate the PDGFR\u03b2 pathway in the uninjured spinal cord, while intrathecal injection of SU16f was used to specifically block the PDGFR\u03b2 pathway in the uninjured or injured spinal cord. Immunofluorescence staining was performed to explore the distributions and cell sources of PDGFB and PDGFD, and to evaluate astrocytic scar, fibrotic scar, inflammatory cells and axon regeneration after SCI. Basso Mouse Scale (BMS) and footprint analysis were performed to evaluate locomotor function recovery after SCI.We found that the expression of PDGFD and PDGFB increased successively after SCI, and PDGFB was mainly secreted by astrocytes, while PDGFD was mainly secreted by macrophages/microglia and fibroblasts. In addition, in situ injection of exogenous PDGFB or PDGFD can lead to fibrosis in the uninjured spinal cord, while this profibrotic effect could be specifically blocked by the PDGFR\u03b2 inhibitor SU16f. We then treated the mice after SCI with SU16f and found the reduction of fibrotic scar, the interruption of scar boundary and the inhibition of lesion and inflammation, which promoted axon regeneration and locomotor function recovery after SCI.Our study demonstrates that activation of PDGFR\u03b2 pathway can directly induce fibrotic scar formation, and specific blocking of this pathway would contribute to the treatment of SCI. 
The inhibitory microenvironment composed of an inflammatory response in the acute phase and scar tissue formation in the chronic phase is considered to be the main reason that hinders axon regeneration after spinal cord injury (SCI). Platelet-derived growth factors (PDGFs) are a cysteine-knot-type growth factor family composed of four polypeptide chains A, B, C, and D. The fibrotic scar is formed by PDGFR\u03b2+ pericytes/fibroblasts, and its formation can be aborted by the PDGFR\u03b2 inhibitor SU16f. Nevertheless, whether activation of the PDGFR\u03b2 pathway mediates fibrotic scar formation after SCI has remained unclear. In this study, our results showed that the expression of PDGFD occurred earlier than that of PDGFB after SCI, and PDGFB was mainly secreted by astrocytes, while PDGFD was mainly secreted by macrophages/microglia and fibroblasts. Intrathecal injection of the PDGFR\u03b2 inhibitor SU16f blocked the fibrosis induced by exogenous PDGFB or PDGFD in the uninjured spinal cord. In addition, SU16f blockade of the PDGFR\u03b2 pathway resulted in the reduction and interruption of fibrotic scar and the resolution of lesion and inflammation, thereby facilitating axon regeneration and locomotor function recovery after SCI. These results indicate that the PDGFR\u03b2 pathway is essential for fibrotic scar formation after SCI and is expected to be a therapeutic target for SCI. All experiments involving animals were approved by the Ethics Committee of Anhui Medical University. Eight-week-old C57BL/6 mice were acquired from the Animal Experiment Center of Anhui Medical University and were housed in an environment with controlled temperature and humidity and a 12:12\u00a0h light:dark cycle. The animals were randomly grouped and kept in standardized cages, where water and food were readily available. The establishment of the spinal cord compression injury model has been described in detail in our previous study. The target of in situ injection of PDGFB was the uninjured spinal cord of mice. 
The T10 spinal cord was exposed according to the established method of the spinal cord injury model, and then the mouse was fixed on the stereotaxic device. The insertion site of the microinjection needle was 0.3\u00a0mm lateral to the midline and 0.8\u00a0mm deep to the dorsal surface of the mouse spinal cord . Two micThe needle insertion site was located in the dorsal midpoint of the lumbar 5\u20136 intervertebral space as previously reported . It was To label proliferating fibroblasts, mice received intraperitoneal injection of 50\u00a0mg/kg body weight BrdU daily for 1\u20136 dpi. All mice were sacrificed at 7 dpi.After cardiac perfusion with 0.1\u00a0M PBS followed by 4% paraformaldehyde , the 0.5\u00a0mm segment of spinal cord tissue containing the injured core was placed in 4% PFA and postfixed for 5\u00a0h. The tissue was then placed in a 30% sucrose solution and dehydrated at 4\u00a0\u00b0C for 24\u00a0h until the tissue sank to the bottom. Finally, the tissue was cut into 18\u00a0\u03bcm-thick serial sagittal or coronal sections using a cryostat . The sections encompassing the lesion core or injection site were used.For BrdU staining, the sections were pretreated with 2\u00a0N hydrochloric acid at 37\u00a0\u00b0C for 30\u00a0min followed by 0.1\u00a0M borate buffer at room temperature for 10\u00a0min and were subjected to an immunofluorescence staining protocol. The sections were blocked in 10% donkey serum containing 0.3% Triton X-100 at room temperature for 1\u00a0h, followed by incubation with primary antibodies at 4\u00a0\u00b0C overnight. The primary antibodies included goat anti-PDGFR\u03b2 , goat anti-CD31 , goat anti-5-hydroxytryptamine (5-HT) , rabbit anti-PDGFB , rabbit anti-PDGFD , rabbit anti-fibronectin , rabbit anti-laminin , rabbit anti-neurofilament (NF) , rat anti-GFAP , rat anti-CD68 , rat anti-BrdU and rat anti-Ki67 . 
Subsequently, the sections were incubated with appropriate secondary antibodies at room temperature for 1\u00a0h, including donkey anti-goat Alexa Fluor 488, donkey anti-goat Alexa Fluor 555, donkey anti-goat Alexa Fluor 647, donkey anti-rabbit Alexa Fluor 555, donkey anti-rat Alexa Fluor 488 and donkey anti-rat Alexa Fluor 555. Finally, the sections were stained with DAPI to label the nuclei. The negative control sections were incubated with secondary antibody alone. Representative images of the sections were acquired using a Zeiss LSM 900 confocal microscope system and a Zeiss Axio Scope A1 fluorescence microscope. Staining colocalization was determined using ZEN 3.3 software to examine each of the ten one-micron Z-stack slices. Image processing was performed using ImageJ version 2.0. All quantitative analyses were performed in a blind fashion. To quantify GFAP+, CD68+, CD31+, PDGFR\u03b2+, PDGFB+ and PDGFD+ cells, 100\u00a0\u03bcm square grids were generated over the injured site, and the positive cells were counted. One section encompassing the lesion core in each sample was used for counting, with 5 samples per group. The GFAP\u2212 area and CD68+ area were normalized to the area of the spinal cord segment spanning the injured core in a 4\u2009\u00d7\u2009image. To evaluate axon regeneration, the immunoreactivity of 5-HT was normalized to the area of the spinal cord segment spanning the injured core in a 10\u2009\u00d7\u2009image, and the number of NF+ axons longer than 1\u00a0\u03bcm in the GFAP\u2212 region was counted and normalized to the area of the GFAP\u2212 region.
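The normalization described above reduces to a masked-pixel ratio: stained pixels divided by the pixels of the spinal cord segment of interest. A minimal sketch with a synthetic image and an assumed intensity threshold (none of these values come from the study):

```python
# Minimal sketch (synthetic data, assumed threshold) of normalizing a
# stained area to the area of a region of interest, as described above.
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((512, 512))             # stand-in fluorescence intensities
segment_mask = np.ones_like(image, bool)   # stand-in ROI: spinal cord segment

stained = image > 0.8                      # assumed intensity threshold
fraction = stained[segment_mask].sum() / segment_mask.sum()
print(f"normalized stained area: {fraction:.3f}")
```

This mirrors the area-fraction style of measurement that image-analysis tools such as ImageJ compute on a thresholded channel.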
For each sample, sections spanning the injured core and two adjacent sections spaced 180\u00a0\u03bcm apart were quantified, and the results from each section were averaged, with 5 samples per group. To evaluate the area of fibrotic scar, the immunoreactivities of PDGFR\u03b2, fibronectin and laminin were normalized to the area of the spinal cord segment spanning the injured core in a 4\u2009\u00d7\u2009image. To evaluate the proliferation of fibroblasts, BrdU+ PDGFR\u03b2+ or Ki67+ PDGFR\u03b2+ cells were counted on 40\u2009\u00d7\u2009images spanning the injured core. The average of three random 40\u2009\u00d7\u2009images was used as the final result for each sample, with 5 samples per group. The Basso Mouse Scale (BMS) is widely used to evaluate locomotor function recovery after SCI in mice. Footprint analysis was used to further evaluate locomotor function recovery at 28 dpi and was performed according to previous reports. All behavioural assessments were performed in a blind fashion. The data are presented as the mean\u2009\u00b1\u2009standard error of the mean (SEM), and individual data points are plotted in the figures. The statistical methods used are presented in the figure legends. Multiple comparisons were analysed with one-way or two-way analysis of variance (ANOVA) with a post hoc Tukey\u2013Kramer test, and comparisons between two groups were performed using Student\u2019s t test. Data analysis and chart production were performed using GraphPad Prism 8.0, and a value of p\u2009<\u20090.05 was considered statistically significant. PDGFR\u03b2+ fibroblasts increased significantly and aggregated gradually at the injured site from 3 to 7 dpi, while a contiguous fibrotic scar boundary formed to corral the injured core at 14 to 28 dpi.
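The statistical workflow above (one-way ANOVA across several groups, Student's t test between two groups, alpha = 0.05) can be sketched as follows; the group values are synthetic and the group names are placeholders, not the study's data:

```python
# Sketch of the statistical comparisons described above, on synthetic data
# (5 samples per group, as in the study design): one-way ANOVA across three
# hypothetical groups, then a two-group Student's t test, alpha = 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(10.0, 1.0, 5)   # hypothetical scar-area values (%)
pdgfb   = rng.normal(14.0, 1.0, 5)
su16f   = rng.normal(7.0, 1.0, 5)

f_stat, p_anova = stats.f_oneway(control, pdgfb, su16f)
t_stat, p_ttest = stats.ttest_ind(control, su16f)

print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")
print(f"t test (control vs. SU16f): t={t_stat:.2f}, p={p_ttest:.4f}")
```

A significant ANOVA would then be followed by a post hoc pairwise procedure (e.g. a Tukey-type test, as the study uses) to localize which groups differ.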
After SCI, PDGFR\u03b2 is expressed in all fibrotic scar-forming fibroblasts. To preliminarily explore the cell sources of PDGFB and PDGFD, we detected their costaining with the main cell components of the injury site, including macrophages/microglia, fibroblasts, astrocytes and vascular endothelial cells. GFAP was used to label astrocytes, CD31 was used to label vascular endothelial cells, and CD68 was used to label macrophages/microglia. The staining results showed substantial colocalization between PDGFB and GFAP+ astrocytes or PDGFR\u03b2+ fibroblasts at 14 dpi: GFAP+PDGFB+ cells and PDGFR\u03b2+PDGFB+ cells accounted for 83.26\u2009\u00b1\u20091.56% and 13.69\u2009\u00b1\u20090.85% of PDGFB+ cells, respectively, with little colocalization between PDGFB and CD68+ macrophages/microglia or CD31+ vascular endothelial cells. GFAP+PDGFB+ cells and PDGFR\u03b2+PDGFB+ cells were adjacent to each other at the edge of the injured core. PDGFD mainly colocalized with CD68+ macrophages/microglia or PDGFR\u03b2+ fibroblasts at 14 dpi: CD68+PDGFD+ cells accounted for 45.63\u2009\u00b1\u20091.68%, while PDGFR\u03b2+PDGFD+ cells accounted for 46.23\u2009\u00b1\u20091.59% of PDGFD+ cells, with little colocalization between PDGFD and CD31+ vascular endothelial cells and no colocalization between PDGFD and GFAP+ astrocytes; CD31+PDGFD+ cells accounted for 5.18\u2009\u00b1\u20090.61% of PDGFD+ cells. PDGFR\u03b2+PDGFD+ cells were in close contact with CD68+PDGFD+ cells at the injured core. To directly investigate the effect of the PDGFR\u03b2 pathway, a single factor, on fibrotic scar formation after SCI, we injected exogenous PDGFB or PDGFD into the uninjured spinal cord to activate the PDGFR\u03b2 pathway. Immunofluorescence staining was used to detect PDGFR\u03b2, fibronectin and laminin to observe the changes in fibroblasts and fibrous ECM.
The results of the control group showed that the injection itself did not lead to PDGFB or PDGFD expression and fibroblast aggregation. To further verify the specific role of the PDGFR\u03b2 pathway in fibrotic scar formation, intrathecal injection of SU16f was used to block the activation of PDGFR\u03b2 in the uninjured spinal cord that received the injection of PDGFB or PDGFD. SU16f is a potent and highly selective PDGFR\u03b2 inhibitor that displays\u2009>\u200914-fold,\u2009>\u2009229-fold and\u2009>\u200910,000-fold selectivity over VEGFR2, FGFR1 and EGFR, respectively, and has been used to specifically block PDGFR\u03b2. Fibrosis, as measured by PDGFR\u03b2+, fibronectin+ and laminin+ staining, was significantly reduced in the uninjured spinal cord of the mice that received the combined injection of SU16f and PDGFB or PDGFD. To further confirm the role of the PDGFR\u03b2 pathway in regulating fibrotic scar formation after SCI, intrathecal injection of SU16f was used to treat the mice with SCI. The injured spinal cord is in a period of apoptosis and necrosis at 3 dpi, from which fibroblasts begin to proliferate and aggregate in the injured site. Therefore, intrathecal SU16f treatment was started at 3 dpi. The fibrotic scar, as measured by PDGFR\u03b2+, fibronectin+ and laminin+ areas, was significantly reduced at 28 dpi after the intrathecal injection of SU16f compared with the control group. It has been reported that the number of fibroblasts reaches its peak at 7 dpi, which is mainly caused by the proliferation of fibroblasts inherent in the spinal cord, suggesting that fibroblast proliferation is an important process in fibrotic scar formation after SCI. SU16f treatment significantly reduced the numbers of BrdU+PDGFR\u03b2+ and Ki67+PDGFR\u03b2+ cells.
Following SCI, astrocytic scar and fibrotic scar form a dense and contiguous boundary surrounding the injured core, which is one of the important reasons for the failure of axon regeneration. The GFAP\u2212 area indicated that the lesion size was significantly reduced at 28 dpi after the intrathecal injection of SU16f. It has been reported that fibrotic scar corrals inflammatory cells in the injured core after SCI, contributing to limiting inflammation. However, SU16f treatment reduced the CD68+ inflammatory cell area at 28 dpi. To further confirm whether the PDGFR\u03b2 pathway can be used as a therapeutic target for SCI and the effect of SU16f on axon regeneration after SCI, immunofluorescence staining was used to assess the regeneration of NF+ or 5-HT+ axons. The GFAP\u2212 area was used to distinguish the injured core. The results showed that, compared with the control group, the NF+ axon density in the injured core of the SU16f group increased significantly after SCI, as did the 5-HT+ axons of the injured site. In the SU16f group, 5-HT+ axons passed through the injured core to the caudal side after SCI, which was not observed in the control group. Furthermore, BMS score and footprint analysis were used to analyse the recovery of locomotor function after SCI. Compared with the mice in the control group, the mice injected with SU16f obtained better hind limb locomotor function at 14, 21 and 28 dpi, corresponding to a higher BMS score. In this study, we found that the expression of PDGFD occurred earlier than that of PDGFB after SCI, and PDGFB was mainly secreted by astrocytes, while PDGFD was mainly secreted by macrophages/microglia and fibroblasts.
Moreover, in situ injection of exogenous PDGFB or PDGFD can lead to fibrosis in the uninjured spinal cord, while SU16f blockade of the PDGFR\u03b2 pathway reduced the fibrotic scar area, interrupted the fibrotic/astrocytic scar boundary, shrank the lesion and inhibited inflammation, promoting axon regeneration and locomotor function recovery after SCI. Therefore, the PDGFR\u03b2 pathway is expected to be a therapeutic target after SCI. SCI is a devastating trauma that causes sensory and locomotor dysfunction in patients, and there is currently a lack of effective clinical treatments. Following SCI, perivascular fibroblasts leave blood vessels, proliferate and migrate to the injured site at 3\u20137 dpi. PDGFR\u03b2 is specifically expressed in fibroblasts after SCI. PDGFR\u03b2 is a transmembrane receptor tyrosine kinase composed of an intracellular tyrosine kinase domain and an extracellular ligand binding domain. Fibroblasts, astrocytes, vascular endothelial cells and macrophages/microglia are important cellular components at the injured site of SCI, and recent evidence has demonstrated extensive crosstalk among them. To directly explore the effect of the PDGFR\u03b2 pathway, a single factor, on fibrotic scar formation, we injected exogenous PDGFB or PDGFD into the uninjured spinal cord instead of the injured spinal cord to avoid the influence of the complex microenvironment of SCI. Our results showed that both PDGFB and PDGFD can promote fibrosis in the uninjured spinal cord, and this profibrotic effect can be blocked by the PDGFR\u03b2 inhibitor SU16f. The results of FN- or LN-labelled fibrosis were consistent with those of PDGFR\u03b2-labelled fibrosis. Therefore, our results were reliable and preliminarily confirmed that activation of the PDGFR\u03b2 pathway is sufficient to induce fibrosis.
Notably, SU16f completely blocked PDGFD-induced fibrosis but only partially blocked PDGFB-induced fibrosis in the uninjured spinal cord, suggesting that PDGFB and PDGFD may be involved in different phases of fibrotic scar formation. We emphasize that this process and mechanism are worthy of in-depth study. In addition, SU16f blockade of the PDGFR\u03b2 pathway was performed to further confirm the effect of the PDGFR\u03b2 pathway on fibrotic scar formation after SCI. The results showed that SU16f significantly inhibited the proliferation of fibroblasts and reduced fibrotic scar after SCI. Therefore, our results provide direct evidence that the PDGFR\u03b2 pathway mediates fibrotic scar formation after SCI, which can be blocked by SU16f through inhibiting the proliferation of fibroblasts. SU16f also promoted the regeneration of NF+ or 5-HT+ axons that passed through the injured core. The dense contiguous fibrotic/astrocytic scar boundary is an important component of the inhibitory microenvironment after SCI. Interestingly, our results showed that the SU16f-induced reduction in fibrotic scar led to a smaller area of inflammatory cells at 28 dpi. The Jonas Fris\u00e9n group used Glast\u2013Rasless transgenic mice to completely eliminate fibrotic scar after SCI, leading to the spread of inflammatory cells at 14 dpi. However, a moderate reduction in fibrotic scar did not lead to the spread of inflammatory cells at 14 dpi but led to a reduction in inflammatory cells at 28 dpi. PDGFD is mainly secreted by macrophages/microglia and fibroblasts and distributed at the lesion epicentre, while PDGFB is mainly secreted by astrocytes and distributed around the lesion epicentre. Intrathecal injection of the PDGFR\u03b2 inhibitor SU16f blocked the fibrosis induced by exogenous PDGFB or PDGFD in the uninjured spinal cord.
Furthermore, blocking the PDGFR\u03b2 pathway with SU16f reduces the fibrotic scar, interrupts the scar boundary and inhibits the lesion and inflammation, promoting axon regeneration and locomotor function recovery after SCI. This study confirms that the PDGF/PDGFR\u03b2 pathway plays a critical role in fibrotic scar formation after SCI and is expected to be a specific target for the treatment of SCI. The present study reveals that PDGFD and PDGFB increase successively after SCI and can activate PDGFR\u03b2"}
+{"text": "Chronic pain (CP) is a prevalent problem, and more than half of patients with CP have sleep disorders. CP comorbidity with sleep disorders imposes immense suffering and seriously affects the patient\u2019s quality of life, which is a challenging issue encountered by clinicians. Although the reciprocal interactions between pain and sleep have been studied to some degree, there is still a lack of awareness and comprehensive description of CP comorbidity with sleep disorders. In this narrative review article, we summarize the current knowledge about the present estimates of the prevalence of comorbid sleep disorders in CP patients, sleep detection methods, sleep characterization in CP, the effect of sleep disorders on CP, and current therapies. We also summarize current knowledge of the neurochemical mechanisms of CP comorbidity with sleep disorders. In conclusion, insufficient attention has been paid to the role of sleep disorders in CP patients, and CP patients should be screened for sleep disorders in the clinic. Special attention should be given to a possible risk of drug\u2013drug interaction when using two types of drugs targeting pain and sleep simultaneously. The current insight into the neurobiological mechanisms underlying CP comorbidity with sleep disorders is still rather limited. Sleep is an important physiologic process to maintain homeostasis and function of the body. 2.1. Sleep disorders are also a major public health problem that plagues human physical and mental health. 2.2. Various sleep indicators can be used to assess CP patients\u2019 sleep quality or duration, including sleep onset time, wakefulness after sleep onset (WASO), sleep onset latency (SOL), sleep efficiency (SE), and total sleep time (TST). There are both subjective and objective methods to assess sleep quality.
In terms of objective methods for monitoring sleep, polysomnography (PSG) and actigraphy have high reliability in obtaining information on sleep parameters. PSG is the gold standard method for analyzing sleep quality, using many sensors and electronics; however, it requires a cumbersome, complex setup of electronic sensors and needs to be performed in a laboratory under the control of trained technicians. These requirements may disrupt natural sleep patterns, and one-night sleep data are often insufficient to represent normal sleep behavior. Therefore, PSG cannot be used frequently in clinical practice because of its economic and time costs. 2.3. Sleep disturbances, which occur in patients with CP, include reduced SE and altered sleep architecture. 2.4. Poor sleep is a key factor in the development and maintenance of CP. 2.5. Due to the significant sleep\u2013pain interactions, it has been suggested that multidisciplinary treatment is required to manage CP. Currently, treatments for sleep disorders in patients with CP include pharmacotherapy and non-pharmacological therapy. 2.5.1. In terms of pharmacotherapy, two aspects are particularly important to consider. The first is that medications used to treat pain or sleep can also have direct effects on sleep or pain, respectively. The second is that multiple drugs are frequently used in clinical practice to target sleep disorders and CP simultaneously, which might increase the risk of drug interactions. This topic is discussed in detail in a review article by Herrero et al. 2.5.1.1.1. Patients with the OPRM1 118-GG genotype were more susceptible to an increase in sleep problems and worsening sleep patterns while taking opioid analogs for the treatment of neuropathic pain.
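The indicators listed above are arithmetically related: total sleep time (TST) is time in bed minus sleep onset latency (SOL) and wake after sleep onset (WASO), and sleep efficiency (SE) is TST as a percentage of time in bed. A toy calculation with hypothetical values:

```python
# Illustrative calculation (hypothetical values, not patient data) of the
# sleep indicators defined above:
#   TST = time in bed - SOL - WASO
#   SE  = TST / time in bed * 100
def sleep_efficiency(time_in_bed_min, sol_min, waso_min):
    tst = time_in_bed_min - sol_min - waso_min  # total sleep time (min)
    return tst, 100.0 * tst / time_in_bed_min

tst, se = sleep_efficiency(time_in_bed_min=480, sol_min=35, waso_min=55)
print(tst, round(se, 1))  # -> 390 81.2
```

Actigraphy software reports these same quantities, which is why SE is comparable across PSG and actigraphy studies even when the raw recordings differ.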
2.5.1.1.4. Tricyclic antidepressants (TCAs), as first-line or augmenting drugs, are widely used to treat CP conditions, including headache, migraine, neuropathic pain, chronic low back pain, fibromyalgia, chronic widespread pain, and abdominal and gastrointestinal pain. 2.5.1.2.1. BZRAs are the most well-known and extensively prescribed medications to treat sleep disorders and are used as adjuvant therapy for pain management. Recently, a narrative review has suggested that BZRAs have analgesic benefits in burning mouth syndrome and stiff person syndrome and for treating co-occurring insomnia and anxiety disorders for short periods of time (2\u20134\u2009weeks) in CP management. 2.5.1.2.2. Melatonin has a significant role in regulating the sleep\u2013wake cycle and inhibits arousal signals. The existing evidence shows that melatonin can reduce CP. 2.5.1.2.3. Suvorexant is a selective, dual orexin receptor antagonist approved in the USA and Japan for the treatment of insomnia. 2.5.2.1. Cognitive behavioural therapy (CBT) is also the most commonly used psychological approach to treat CP. 2.5.2.2. Although not recommended by current guidelines, many alternative and complementary therapies are popular worldwide and have been increasingly studied to treat CP and sleep disorders, including music therapy, aromatherapy, massage, and acupuncture. 3. Although CP accompanied by sleep disorders is commonly encountered clinically, our knowledge about the basic neurochemical mechanisms remains rudimentary. Here, we review the majority of recent findings from the perspective of neurochemical mechanisms. 3.1. Serotonin, dopamine, and norepinephrine are neurotransmitters of the monoaminergic system, which are involved in the regulation of the endogenous pain system and the sleep\u2013wake system. 3.1.1. Serotonin, the main effector of the serotonergic system, exhibits its effect by activating different receptor subtypes.
Neuropathic pain can accelerate the activity of 5-HTergic neurons of the dorsal raphe nucleus (DRN), and activated 5-HTergic neurons produce a significant increase in wakefulness and a significant decrease in NREM sleep in a sciatic nerve chronic constriction injury (CCI) mouse model. 3.1.2. Dopamine is a neurotransmitter and neuromodulator that can target dopamine neurons to cause inhibitory or excitatory effects. 3.1.3. Norepinephrine (NE) is an important neurotransmitter in the central nervous system. The locus coeruleus (LC) is the primary source of NE in the brain. The LC\u2013spinal cord noradrenergic pathway is one of the most important pain inhibitory pathways, releasing NE to inhibit the ascending transmission of pain signals. 3.2. Adenosine is a purine nucleoside that exerts a broad range of biological effects by binding to adenosine receptors (ARs). Adenosine exerts its analgesic effect primarily via the activation of A1AR located at peripheral, spinal, and supraspinal sites. 3.3. Melatonin is a neuroendocrine hormone, mainly synthesized and secreted by the pineal gland, which has a wide range of physiological functions, including regulation of circadian rhythms, enhancement of immune function, and improvement of sleep and pain. 3.4. Gamma-aminobutyric acid (GABA) and glutamate (Glu) are the major inhibitory and excitatory neurotransmitters, respectively. A proton magnetic resonance spectroscopy study found that patients with chronic migraine had significantly lower levels of GABA in the dentate nucleus (DN) and higher levels of Glu in the periaqueductal gray (PAG), and higher GABA levels in the PAG were significantly associated with poorer sleep quality in all patients with migraine. 4. Sleep disorders secondary to chronic pain are a very common phenomenon. There is considerable evidence showing a reciprocal association between pain and sleep, especially in CP.
CP results in insufficient sleep time and quality, which in turn increases pain sensitivity and severely compromises the pain management and treatment outcomes of patients. CP can affect sleep in terms of sleep time, sleep structure, and sleep depth, resulting in reduced sleep time, sleep structure disorder, sleep fragmentation, and reduced sleep depth. Sleep fragmentation in synergy with CP may lead to prolonged and exacerbated allodynia. Moreover, sleep quality can be a predictor of next-day pain, and short-term improved sleep can contribute to long-term clinical benefits in CP patients. However, sleep assessment is not performed in patients with CP in routine clinical practice. At present, the first choice for CP control is still drug therapy; however, there is still a lack of ideal drugs that are proven to be effective in both aspects. In CP, many drugs can only relieve pain but cannot improve sleep disorders, or have hypnotic effects but cannot solve the pain problem. Therefore, it is of great significance to elucidate the underlying mechanisms of the interaction between CP and sleep disorders and to seek new treatments. Special attention should be given to the possible risk of drug\u2013drug interaction when using two types of drugs targeting pain and sleep simultaneously. Therefore, more clinical data and basic research are required. At the same time, the available clinical evidence suggests that nonpharmacologic therapy has certain therapeutic effects on pain and sleep and should receive more attention. There is a considerable amount of research on the underlying mechanisms of the development of CP and sleep disorders. Neurotransmitters, such as melatonin, cortisol, norepinephrine, and dopamine, are involved in the control of the circadian clock, as well as the regulation of pain perception. CP-induced sleep disorders are closely related to the monoaminergic, adenosine, histamine, melatonin, GABAergic, and orexinergic systems.
The initiation and maintenance of sleep, as well as sleep homeostasis, are regulated by complex pathways in these systems. This study has several limitations. As a narrative review, some relevant articles may not have been included. We focused only on neurochemical mechanisms. Other pathologic mechanisms, such as changes in neuroplasticity and neural circuitry in sleep disorders secondary to CP, are also very important but were not covered here. Furthermore, sleep problems in persons with CP are a complicated phenomenon involving not only physiological but also psychological and social factors. Our review aimed to raise awareness among both psychiatric and non-psychiatric practitioners of the importance of sleep disorders secondary to chronic pain. The presence of sleep disorders in CP can aggravate the pain, as well as seriously affect the quality of life of patients. In terms of treatment, CBT is the best non-pharmacological intervention, while pharmacological treatments require further in-depth research. The research on sleep disorders secondary to CP is still in its infancy, and further elucidation of the underlying mechanisms of the interaction between CP and sleep disorders is crucial for developing more effective therapeutic strategies to improve pain and sleep. KW and JZ directed the project and revised the manuscript. KW, LD, and XY designed the research. LD, XY, RH, and XD were involved in bibliographic research and data collection. LD and XY wrote the manuscript. LD, XY, RH, XD, JZ, and KW discussed the results and commented on the manuscript.
All authors contributed to the article and approved the submitted version.This work was financially supported by the National Natural Science Foundation of China (81973940), Shanghai Clinical Research Center for Acupuncture and Moxibustion (20MC1920500), and Shanghai Municipal Commission of Health and Family Planning (ZY(2021-2023)-0208).The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher."}
+{"text": "The North Atlantic Ocean hosts the largest volume of global subtropical mode waters (STMWs) in the world, which serve as heat, carbon and oxygen silos in the ocean interior. STMWs are formed in the Gulf Stream region where thermal fronts are pervasive and result in feedback with the atmosphere. However, their roles in STMW formation have been overlooked. Using eddy-resolving global climate simulations, we find that suppressing local frontal-scale ocean-to-atmosphere (FOA) feedback leads to STMW formation being reduced almost by half. This is because FOA feedback enlarges STMW outcropping, attributable to the mixed layer deepening associated with cumulative excessive latent heat loss due to higher wind speeds and greater air-sea humidity contrast driven by the Gulf Stream fronts. Such enhanced heat loss overshadows the stronger restratification induced by vertical eddies and turbulent heat transport, making STMW colder and heavier. With more realistic representation of FOA feedback, the eddy-present/rich coupled global climate models reproduce the observed STMWs much better than the eddy-free ones. Such improvement in STMW production cannot be achieved, even with the oceanic resolution solely refined but without coupling to the overlying atmosphere in oceanic general circulation models. Our findings highlight the need to resolve FOA feedback to ameliorate the common severe underestimation of STMW and associated heat and carbon uptakes in earth system models. Frontal-scale ocean-to-atmosphere feedback determines the formation of subtropical mode water in the North Atlantic. Subtropical mode waters (STMWs) are upper-ocean voluminous water masses characterized by vertically homogeneous temperature and salinity, and originate from the wintertime deep mixed layer on the warm flank of the western boundary current systems . 
Given the importance of STMW with regard to memorizing climate variations and regulating ocean biogeochemical cycles, the physics governing its formation and destruction has received much attention. Traditionally, it has been considered that the bowl shape of STMW south of the Gulf Stream is constructed by wintertime intense convective mixing and eroded by diapycnal mixing. High-resolution ocean models provide some insight into the Ekman-driven convection induced by winds blowing in the downstream direction of oceanic thermal fronts. To quantify the contribution of frontal-scale ocean-to-atmosphere (FOA) feedback to STMW production, a set of twin eddy-resolving global climate simulations were conducted using the Community Earth System Model (CESM). The reliability of the CESM simulations in capturing STMW, characterized by its low potential vorticity (PV) within the pycnocline, was first verified by comparing vertical profiles of PV with three observation-based data sets. The STMW core layer is cooler in CTRL than in FILT, a difference that is partly replenished by vertical eddy heat transport and turbulent vertical mixing. The heat transport convergence by the mean flows (Qmf) and the lateral eddy heat flux (Qeddyh) are found to have negligible contributions. The increased vertical eddy and turbulent heat transport is a likely consequence of the ageostrophic secondary circulation associated with turbulent thermal wind balance due to the strong surface cooling in the presence of FOA feedback. The cooling STMW core layer implies a change in the upper-ocean heat content in the presence of FOA feedback.
To shed light on the role of surface forcing and internal oceanic processes, we performed an upper-ocean heat content budget analysis based on the twin CESM simulations. The STMW formation rate in CTRL is almost double that in FILT (10.5\u00a0Sv), indicating much more intense air-sea buoyancy exchange on rather short time scales during STMW formation in the presence of FOA feedback. A natural question then arises as to the relative importance of different components of the air-sea buoyancy flux to the increased formation rate (FR) in the presence of FOA feedback. Those components include surface net heat flux, net freshwater flux and Ekman flux driven by wind-induced cross-front advection of density. Clearly, the wintertime-mean STMW formation map by the air-sea buoyancy flux is attributable to that by surface net heat flux, with both being confined to the recirculation gyre region south of the Gulf Stream. We decomposed the LHF-induced FR difference between CTRL and FILT into the direct contribution from the LHF difference and the indirect contribution from the STMW outcrop area difference. The analysis reveals that surface outcrops in late winter (February\u2013March) and the injection of low PV of STMW into the ocean interior are closely associated with the seasonal deepening of the mixed layer (r = \u20130.77, p\u00a0<\u00a00.001): an increase in the accumulated latent heat release within two months precedes the occurrence of a \u223c23\u00a0m MLD increase in late winter. Hence, the deeper mixed layer in CTRL than in FILT is attributable to the cumulative excessive ocean latent heat release driven by FOA feedback. Coupled global climate models (CGCMs) at standard horizontal resolutions (\u223c250\u00a0km atmosphere and 100\u00a0km ocean) are biased towards lower production.
These biases correlate with the modeled FOA feedback intensity (r = \u20130.71, p\u00a0<\u00a00.02), indicating that modeling more realistic FOA feedback is crucial to alleviating the common bias of a too-small STMW volume in CGCMs at standard CMIP resolutions, and lending further support to our CESM experimental findings. Critically, the total volumes of STMWs in the eddy-free configurations of six CGCMs are 2.3\u20136.6 Svy, on average only one fifth of the observational mean (16.3 Svy). Collectively, our study demonstrates that the feedback of sharp surface thermal fronts, shaped by the Gulf Stream, to the overlying atmosphere is essential for STMW formation, as it transforms plenty of lighter water masses into STMW through the cumulative extensive latent heat loss and the resultant increased surface outcropping. The enhanced surface heat release into the atmosphere is mainly caused by higher surface wind speed and a sharper air-sea humidity contrast driven by the Gulf Stream fronts, and leads to the STMW-related upper-ocean cooling, which is partly compensated by the increased vertical eddy and turbulent heat transport. According to the annually integrated volume budget, the dispersion of STMW is also slightly increased in the presence of FOA feedback, possibly due to the increased lateral transport by the strengthened Gulf Stream extension current. Recent studies have pointed out changes in large-scale atmospheric circulation under different SST resolutions. We also demonstrate that the eddy-present and eddy-rich CGCMs reproduce more realistic spatial distributions and total volumes of STMWs compared to their low-resolution eddy-free counterparts and OGCMs, due to stronger FOA feedback intensity within the STMW formation region as the model resolution becomes finer.
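An intermodel relationship such as the reported r = -0.71 is an ordinary Pearson correlation computed across models. A sketch with made-up feedback intensities and STMW volume underestimations (illustrative values only, not the study's data):

```python
# Hypothetical sketch of an intermodel Pearson correlation: stronger FOA
# feedback intensity paired with smaller STMW volume underestimation.
# All numbers below are invented for illustration.
import numpy as np

foa_intensity = np.array([0.2, 0.4, 0.5, 0.7, 0.9, 1.0])  # arbitrary units
underestimation = np.array([14., 12., 11., 9., 7., 5.])    # Svy below obs

r = np.corrcoef(foa_intensity, underestimation)[0, 1]
print(f"intermodel correlation: r = {r:.2f}")
```

With only a handful of models, the significance of such a correlation should be assessed carefully (e.g. with a permutation test) rather than read off a large-sample formula.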
Resolving FOA feedback, therefore, is of paramount importance in reducing the severe underestimation of STMW in most models participating in CMIPs, and would improve representation of STMW's climatic and biogeochemical impacts. This could be achieved by a coordinated increase in oceanic and atmospheric resolutions or by parameterization of SST front-driven winds in coarse-resolution models [43]. Detailed descriptions of the methods are available in the Supplementary Material (nwad133_Supplemental_File)."}
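The cross-model relationship quoted above (r = –0.71, p < 0.02 across six CGCMs) is a Pearson correlation with a t-test on n – 2 degrees of freedom. A minimal stdlib Python sketch of that calculation; the six (feedback, bias) pairs below are hypothetical placeholders for illustration, not the study's data:

```python
# Pearson correlation between per-model FOA feedback intensity and STMW
# volume bias, mirroring the cross-CGCM analysis described above.
# The six (feedback, bias) pairs are HYPOTHETICAL placeholders, not the
# study's data; only the method is illustrated.
from math import sqrt

feedback = [10.0, 15.0, 20.0, 25.0, 30.0, 35.0]  # hypothetical feedback intensity
bias = [13.0, 12.0, 10.0, 11.0, 8.0, 7.0]        # hypothetical STMW deficit (Sv yr)

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / sqrt(sxx * syy)

r = pearson_r(feedback, bias)
n = len(feedback)
t = r * sqrt(n - 2) / sqrt(1.0 - r * r)  # t statistic, n - 2 dof
print(f"r = {r:.3f}, t = {t:.2f}")       # r = -0.946, t = -5.84
```

Significance is then read from a t table: with 4 degrees of freedom, |t| > 2.78 corresponds to p < 0.05 (two-sided).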
+{"text": "With an external additional working channel (AWC), endoscopic mucosal resection (EMR) as well as endoscopic submucosal dissection (ESD) can be extended to techniques termed “EMR+” and “ESD+.” These novel techniques are systematically compared to EMR and ESD performed with a double-channel endoscope (DC). Our trial was conducted prospectively in a pre-clinical porcine animal model (EASIE-R simulator) with standardized gastric lesions measuring 3 or 4 cm. EMR+ and EMR DC both showed good results for 3 cm lesions, with no adverse events and en bloc resection rates of 73.33% (EMR+) and 60.00% (EMR DC) (p = 0.70). Both techniques came to their limits in 4 cm lesions, with muscularis damage rates of 20.00% (EMR+) and 13.33% (EMR DC) and decreasing en bloc resection rates of 60.00% (EMR+) and 46.67% (EMR DC). Resection time was slightly shorter in all groups with the AWC compared to DC, although only reaching significance in 3 cm ESD lesions (p < 0.05*). With the AWC, a standard endoscope can easily be transformed to double-channel functionality. We could show that EMR+ and ESD+ are non-inferior to EMR and ESD under the use of a double-channel endoscope. Consequently, the AWC presents an affordable alternative to a double-channel endoscope for both EMR and ESD. Endoscopic mucosal resection (EMR) offers a safe, cost-effective, and well-established interventional endoscopic technique for the successful resection of many precancerous gastrointestinal lesions. Consequently, endoscopic submucosal dissection (ESD) needs to be considered for lesions ≥ 3 cm. ESD has become a standard interventional endoscopic procedure, also in western expert centers, since its initial development in Japan. In endoscopic expert centers, ESD is frequently performed with double-channel endoscopes, which offer a grasp-and-snare technique comparable to ESD+ [19]. 
In a pre-clinical porcine ex vivo animal model, we prospectively compared the novel techniques EMR+ and ESD+ with EMR and, respectively, ESD performed with a double-channel endoscope, in order to investigate whether EMR+ and ESD+ are non-inferior to EMR and ESD via a double-channel endoscope. This trial was a prospectively designed ex vivo study. Since no living animals or humans were included, it was exempt from IRB approval. The experiments were conducted at the Laboratory for Experimental Endoscopy in the Department of Gastroenterology, Gastrointestinal Oncology and Endocrinology of the University Medical Center Göttingen in Germany. The cleaned porcine stomachs used for the experiments were defrosted prior to intervention. Afterward, they were placed into the EASIE-R simulator, a well-established model for interventional endoscopic training and research that has also been evaluated at our research unit for several endoscopic procedures [20, 21]. A well-trained senior endoscopist with previous EMR and ESD expertise in humans as well as in animal models performed all interventions (EMR+ and ESD+ with the AWC as well as EMR and ESD with the double-channel endoscope). The endoscopist was assisted by an experienced endoscopic nurse. EMR becomes particularly challenging beyond a lesion size of 2 cm. In analogy to the setup known from the full-thickness resection device (FTRD), the AWC can be mounted at the tip of a standard endoscope. The AWC features a shaft with a length of either 122 cm (endoscope insertion length: 103–110 cm) or 185 cm (endoscope insertion length: 160–170 cm). It has a flexible attachment for endoscope diameters from 8.5 to 13.5 mm. Via an adaptor, the AWC is fixed at the endoscope handle. A valve can be connected to the adaptor via Luer-lock. 
The AWC comes with a sleeve and an adhesive tape. Instruments with an outer diameter of up to 2.8 mm can be introduced via the AWC. In principle, the AWC can be rotated 360° on the distal tip of the endoscope. In our experiments, all AWC procedures were conducted with the AWC in the position opposite the working channel (Fig. A). EMR+ and ESD+ were conducted with a conventional gastroscope, the AWC device and, in the case of ESD+, with the AqaNife (2.5 mm needle length). Setup and principle of the EMR+ and ESD+ techniques with the help of the AWC are shown in the figures. EMR and ESD were performed with the double-channel endoscope EG-530D (Fujinon, Fujifilm, Tokyo, Japan) and, in the case of ESD, with the AqaNife (2.5 mm needle length). The setup of EMR DC and ESD DC is shown in the figures. In all resections, the FTRD grasper was used. In EMR and EMR+ resections, a 33-mm snare was applied. In both ESD techniques, the injection fluid was hydroxyethyl starch (HAES) mixed with methylene blue dye for better visualization and optimal tissue differentiation. The electrosurgical unit was an ERBE VIO 200 D with the mode EndoCut Q. Primary end point: rate of en bloc resection. Secondary end points: time of procedure for EMR+ and ESD+ as well as for EMR and ESD via double-channel endoscope (minutes), and adverse events. These parameters were recorded by an independent observer. After every resection the specimens were spread out and pinned on cork plates. En bloc resection was evaluated and documented; it was defined as complete resection with all previously marked coagulation dots within the resected specimen. All procedures were intended to be en bloc resections. The procedure time was defined from submucosal injection of the lesion until its complete resection. Every resection site was visually inspected for muscular damage. 
By an insufflation test of the porcine stomach, potential perforations were evaluated. Data analysis was performed with SPSS Version 28.0.1.1 and Prism 9 for macOS Version 9.4.1. The analysis of adverse events and en bloc resection rates was conducted with Fisher's exact test. The time of procedure was analyzed by the Mann–Whitney U test. As usual, we considered p values less than 0.05 as statistically significant; they are marked by an asterisk. Lesions of two different sizes, with a diameter of 3 cm and 4 cm, were set in the EMR as well as in the ESD groups (Fig.). Overall, 96 endoscopic procedures were conducted in the porcine ex vivo model (Fig.). In 3 cm lesions, EMR+ reached an en bloc resection rate of 73.33% (11/15) compared to 60.00% (9/15) with EMR DC (p = 0.70). In 4 cm lesions, EMR+ reached an en bloc resection rate of 60.00% (9/15) compared to 46.67% (7/15) with EMR DC (p = 0.72). Both ESD+ and ESD DC showed an en bloc resection rate of 100% in all lesion sizes (36/36). In all groups, the mean procedure time was shorter in 3 cm lesions compared to 4 cm lesions [EMR+ p = 0.48; EMR DC 7.13 (SD 2.23) vs. 9.20 (SD 2.46) min, p = 0.02*; ESD+ 21.60 (SD 5.17) vs. 29.25 (SD 7.36) min, p = 0.03*; ESD DC 26.60 (SD 5.19) vs. 35.75 (SD 6.27) min, p < 0.01*]. Comparing the AWC with the double-channel endoscope directly, procedure times were slightly shorter with the AWC in all groups: in 3 cm EMR lesions vs. 7.13 min (SD 2.23) for EMR DC (p = 0.28), in 4 cm EMR lesions vs. 9.20 min (SD 2.46) for EMR DC, in 3 cm ESD lesions 21.60 min (SD 5.17) vs. 26.60 min (SD 5.19) (p < 0.05*; Fig. B), and in 4 cm ESD lesions 29.25 min (SD 7.36) vs. 35.75 min (SD 6.27) (p = 0.07; Fig. B). In 3 cm lesions, no perforations or muscularis damage occurred in either the EMR or the ESD groups. In 4 cm lesions, muscularis damage rates did not differ significantly between techniques (p ≥ 0.99). 
Also, in 4 cm lesions, there was 1 muscularis damage under ESD+ as well as 1 under ESD DC (p ≥ 0.99). In 4 cm lesions, we observed 3 muscularis damages with EMR+ and 2 muscularis damages with EMR DC. Compared to EMR, ESD is a reliable and elegant technique for extended endoscopic resections, featuring higher rates of R0 resection and consequently lower rates of recurrence. To address these challenges, ESD+ was developed in order to improve feasibility and safety of ESD. Similar to EMR+, its principle is based on the AWC [10, 17]. Certainly, in EMR as well as in ESD, a double-channel endoscope can be used to achieve better tissue traction with a simultaneous grasp-and-snare technique [38–40]. In our study, EMR+ and EMR DC provide convincing data in terms of en bloc resection rates and safety in 3 cm lesions, but both reach technical limits in 4 cm lesions. This validates the previous data. Our prospective study was conducted in a well-established porcine ex vivo model. This comes with inherent limitations concerning transferability to living humans. The model obviously cannot recapitulate bleeding, tissue movement, and other physiological features, e.g., neoplastic recurrence and stricture outcome. Also, a histopathological examination is not expedient. Furthermore, pigs have a thicker gastric mucosa and consequently a higher mucosal rigidity compared to humans. This may affect the technical possibilities of all techniques applied in our study. Due to our experimental setup, in all groups we sought a homogeneous arrangement of the lesions' positions (antegrade vs. retrograde). Since the study design would have become too confusing, we explicitly decided not to further subdivide our study arms by lesion position. 
Therefore, this study is not randomized, which can also be regarded as a limitation. EMR+ and ESD+ under use of the AWC allow fast and safe endoscopic resections. With the AWC, a standard single-channel endoscope can easily be transformed to double-channel functionality, leading to better intraluminal tissue control. In the ex vivo porcine model, we could show that EMR+ and ESD+ are non-inferior to EMR and, respectively, ESD under the use of a double-channel endoscope. As double-channel endoscopes are expensive investments for endoscopy units, the AWC presents an affordable alternative with good applicability in everyday endoscopic practice, for EMR+ as well as for ESD+."}
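The en bloc comparisons above are two-sided Fisher's exact tests on 2×2 tables, e.g. 11/15 en bloc for EMR+ vs. 9/15 for EMR DC in 3 cm lesions (reported p = 0.70). The study used SPSS/Prism; a stdlib Python sketch of the same test:

```python
# Two-sided Fisher's exact test for a 2x2 table, as used for the en bloc
# resection comparisons (a sketch; the study itself used SPSS / Prism 9).
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """p-value for the table [[a, b], [c, d]] with fixed margins."""
    n = a + b + c + d
    row1 = a + b          # size of group 1 (e.g. EMR+ lesions)
    col1 = a + c          # total number of en bloc resections
    denom = comb(n, row1)

    def pmf(k):           # hypergeometric probability of k en bloc in group 1
        return comb(col1, k) * comb(n - col1, row1 - k) / denom

    lo = max(0, row1 + col1 - n)
    hi = min(row1, col1)
    p_obs = pmf(a)
    # sum over all tables at least as extreme as the observed one
    return sum(pmf(k) for k in range(lo, hi + 1) if pmf(k) <= p_obs * (1 + 1e-9))

# 3 cm lesions: EMR+ 11/15 en bloc vs. EMR DC 9/15 en bloc
p = fisher_exact_two_sided(11, 4, 9, 6)
print(f"p = {p:.2f}")   # p = 0.70, matching the reported value
```

The 4 cm comparison (9/15 vs. 7/15) reproduces the reported p = 0.72 the same way.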
+{"text": "Here we show that a positron-emitting close mimic of the non-mammalian Mtb disaccharide trehalose – 2-[18F]fluoro-2-deoxytrehalose ([18F]FDT) – can act as a mechanism-based enzyme reporter in vivo. Use of [18F]FDT in the imaging of Mtb in diverse models of disease, including non-human primates, successfully co-opts Mtb-specific processing of trehalose to allow the specific imaging of TB-associated lesions and to monitor the effects of treatment. A pyrogen-free, direct enzyme-catalyzed process for its radiochemical synthesis allows the ready production of [18F]FDT from the most globally abundant organic 18F-containing molecule, [18F]FDG. The full pre-clinical validation of both the production method and [18F]FDT now creates a new, bacterium-specific clinical diagnostic candidate. We anticipate that this distributable technology to generate clinical-grade [18F]FDT directly from the widely available clinical reagent [18F]FDG, without need for either bespoke radioisotope generation or specialist chemical methods and/or facilities, could now usher in global, democratized access to a TB-specific PET tracer. Tuberculosis remains a large global disease burden for which treatment regimens are protracted and monitoring of disease activity difficult. Existing detection methods rely almost exclusively on bacterial culture from sputum, which limits sampling to organisms on the pulmonary surface. 
Advances in monitoring tuberculous lesions have utilized the common glucoside [18F]FDG. Tuberculosis (TB), caused by Mycobacterium tuberculosis (Mtb), still remains a serious global health challenge, causing an estimated 1.5 million deaths worldwide in 2020 following the first year-on-year increase since 2005.1 Prompt, short-term diagnosis of TB is crucial for public health infection control measures, as well as for ensuring appropriate treatment for infected patients and contacts.2 However, 2019–2020 saw a 1.3 million drop in diagnoses globally despite estimated increasing disease levels.1 Additionally, long-term accurate monitoring of chronic disease burden and the effectiveness of treatment is critically important in trials of new antitubercular agents and regimens. Sensitive and TB-specific reporters with the potential for ready democratization are therefore urgently required to support the development of new antituberculosis agents and regimens with the potential to shorten the duration of therapy. Positron emission tomography (PET) integrated with computed tomography (CT) now routinely provides a prominent method in some national healthcare systems for noninvasively imaging the whole body whilst diagnosing, staging and assessing response to therapy in diseases such as cancer and inflammation.3 Internationally agreed, comprehensive monitoring of access to PET-CT4 helps, in part, to drive global equity of access to such diagnostic imaging. However, this would be importantly aided by the development of (i) novel reporters specific to other (e.g. communicable) diseases of similar, or even greater, relevance to the developing world, such as TB, and (ii) strategies and methods for their ready, distributed implementation. Other radiotracers have been explored.16 However, the sensitivity and specificity of these radiotracers is either only similar to (or worse than) that of [18F]FDG, and so they can only help to distinguish TB from malignancy when also combined with [18F]FDG. 
Moreover, neither probe is yet readily accessible from common precursors; both require specialist generation of radioisotopes (e.g. [18F]fluoride via cyclotron-mediated proton (1H) irradiation of H218O) combined with specific chemical technologies (e.g. appropriate synthetic laboratories and methods). With the aim of improving detection specificity, the syntheses of [18F]FDT, 4-epi-[18F]FDT and 6-[18F]FDT were tested. Synthesis of 4-epi-[18F]FDT was slower and less efficient (60 min to 4 h). Moreover, whilst 4-epi-[18F]FDT and 6-[18F]FDT could be prepared from [18F]-fluoride, use of [18F]-fluoride at a later stage led to low specific activities, and so this candidate was dismissed at an early stage. Although human trehalose-degrading trehalase activity is low and restricted primarily to the brush-border of the kidney, where it is thought to be GPI-anchored,33 it can be elevated in some diseases;29 therefore we also tested degradation of the analogs in vitro using high concentrations of mammalian (porcine) trehalase. The metabolic stabilities of the remaining analogs were also probed directly in vivo using both Mtb-infected and naïve rabbits: partial degradation of 4-epi-[18F]FDT was observed (30% after 3 h), whereas [18F]FDT showed no apparent degradation (see below). Taken together, the rapid degradation of 6-[18F]FDT and both the partial degradation and less efficient (lower RCY and a need for more stringent purification) synthesis of 4-epi-[18F]FDT led to their dismissal as candidate tracers; [18F]FDT was evaluated further. [18F]FDG is globally accessible via commercial sources, which would in principle allow the development of a synthesis that could be implemented in many locations. Three parallel strategies were envisaged for the synthesis of [18F]FDT. [18F]FDG is typically supplied in aqueous solution, and so all routes adopted a protecting-group-free approach that would not only obviate the need for additional (e.g. 
protection-deprotection) steps, but, through the use of biocatalysts, allow the use of such solutions directly. Given the often highly selective nature of biocatalysts, we envisaged that all three routes would have potential for application in a 'one-pot' operation for ease of use. Four biocatalyst systems were evaluated for mediating the critical formation of the α,α-1,1-bis-glycosidic linkage that is found in [18F]FDT. Due to the nature of its dual cis-1,2-linkage,34 this is a particularly difficult linkage to form and control selectively using chemical methods, further highlighting another potential advantage of the use of biocatalysis. Two homologues of the so-called trehalose glycosyl-transferring synthase enzyme, TreT,35 (from T. tenax37 and P. horikoshii38) were tested via Route A; and two variant constructs containing the trehalose-6-phosphate synthase domain OtsA39 were tested, one in Route B as a fusion to the dephosphorylating enzyme OtsB and one as a single-enzyme construct via Route C. Minor impurities present in some batches could be readily removed during both manual and automated syntheses (see below), further confirming the advantage of Route C. In this way we could readily, consistently and repeatedly access [19F]FDT at 100 mg scale through one-pot biocatalytic syntheses, allowing the ready synthesis of grams of fully characterized [19F]FDT; purification of the resulting filtrate using ion-exchange gave an overall yield over three steps of 34 ± 14% (n = 5) with a radiochemical purity of >99%. Depending on starting material concentration, activity and conditions, overall production times for [18F]FDT of 30–120 min were tested. For example, reaction times could be reliably reduced from 60 min to 30 min by increasing the concentration of enzymes, ensuring >99% conversion and providing comparable quantities and the same high purities of [18F]FDT. 
No detectable residual protein from the reaction was observed in the final product, as tested by Western blot. A 41 ± 4% (n = 2) non-decay-corrected yield of [18F]FDT was obtained in 50 min. The identity of the product [18F]FDT was confirmed via co-injection of an authentic standard using LC-MS; radiochemical purity was >98%, determined by HPLC. Analysis of decayed samples (see Methods) revealed that, as expected, activity was essentially dependent on the [18F]FDG source; useful activities could be routinely achieved (average 0.21 ± 0.08 μmol [18F]FDT at a specific activity of 69 ± 26 mCi/mg (23.6 ± 8.8 Ci/mmol) at the end of synthesis and formulation). Next, after this successful manual standardization and variation, a fully automated synthesis was performed in a GE Tracerlab FX-N module. In vitro, [19F]FDT in human plasma showed no degradation; [19F]FDT was also injected into mice and plasma subsequently analyzed. We have previously shown that FITC-Tre can be incorporated by Mtb; here, Mtb-infected lung homogenate (10^6 Mtb/mL) was incubated ex vivo with added [18F]FDT at 37 °C for 60 min. Upon extraction (chloroform), radio-TLC and comparison to authentic standards of TMM and TDM were consistent with the successful formation of [18F]TMM and [18F]TDM, respectively, and with a common, conserved mechanism. In a marmoset TB model,47 [18F]FDT displayed low background signal in the lung of a naïve marmoset compared to a ~45-day Mtb-infected marmoset. In blocking experiments, non-radioactive [19F]FDT was administered (1 h and 5 minutes) prior to the [18F]FDT radiotracer dose. Consistent with specificity and mode-of-action, this reduced average uptake of [18F]FDT into lesions by 40%. 
Since non-specific uptake resulting from transit or non-bound normal accumulation of the radiotracer should not be displaced by competition with blocker / 'cold' compound, a reduction in PET signal is consistent with specific binding. A useful PET probe for diagnosing tuberculosis should have minimal signal in the absence of disease. [18F]FDT uptake was assessed by measuring lesion uptake (SUV) in animals scanned 48 hours apart (44 and 46 days post-infection (dpi)), sufficient time to allow full tracer clearance but short enough to minimize possible infection progression; consistent [18F]FDT uptake (SUVs) was observed, confirming reproducible uptake. The optimal imaging time was determined to be 90 minutes post-dose. To confirm that [18F]FDT uptake correlated with disease, we determined bacterial burden at sites that were imaged by [18F]FDT. Post-necropsy excision of lesions (n = 21) revealed that [18F]FDT uptake into individual lesions (SUV per lesion) was significantly correlated with the number of culturable Mtb bacteria (CFU) from each lesion. Animals were then treated with first-line HRZE (combined isoniazid (H), rifampicin (R), pyrazinamide (Z) and ethambutol (E)) therapy and serial images were taken; treatment was accompanied by a reduction in SUV signal and a lower total [18F]FDT uptake. To confirm the results (see above) found in the 'New World' (marmoset) NHP models, we also tested [18F]FDT in 'Old World' cynomolgus macaques (n = 3) infected with Mtb (strain Erdman). Estimated human radiation doses were obtained for organs including the kidneys (0.119 mSv/MBq) and adrenals (0.022 mSv/MBq). Toxicity studies were conducted in both rats and beagle dogs. The animals were given either daily iv injections of [19F]FDT at 100 × the expected human dose for seven consecutive days or a single iv injection at either 100 × or 1000 × the expected human dose. 
Thereafter animals were observed daily for mortality and morbidity, clinical observations, body weight, food consumption and clinical pathology. All animals survived until scheduled euthanasia (day 9 or day 21). There were no adverse findings in any of the parameters evaluated in the study. First, dynamic PET imaging was performed in a healthy rhesus macaque for approximately 115 min after bolus intravenous administration of 151 MBq of [18F]FDT. [18F]FDT appears to complement FDG as an effective tracer of TB, with better selectivity and correlation to mycobacterial burden in lesions. We suggest that this selectivity is likely a result of its mechanism-based mode-of-action. Such chemoenzymatic sugar probes have shown powerful utility in other bacterial species such as E. coli;60 the extension here to a ready, one-pot multi-step method now reveals the potential to consider even more complex sugar-based probes. This now enables potential, distributable radiochemical synthesis of [18F]FDT to be conducted anywhere there is access to FDG; scales of up to grams have now been demonstrated. Furthermore, full pre-clinical assessments (see above) reveal no adverse effects. This, as well as the high specific activities and good radiochemical efficiencies that we disclose here, now suggests [18F]FDT as a new, viable radiotracer for TB, suitable for Phase 1 trials. The scaleable, pyrogen-free synthesis methods that we describe here use highly selective biocatalysis under aqueous conditions and are therefore readily implemented by the non-expert. This biocatalytic approach utilizes FDG as a ready organic source of 18F. Non-radioactive [19F]FDG was purchased from CarboSynth. For radioactive synthesis, [18F]FDG was purchased from Cardinal Health Ltd. Normal saline was obtained from Quality Biological. 
All other chemicals and solvents were received from Sigma-Aldrich and used without further purification. Columns and Sep-Pak cartridges used in this synthesis were obtained from Agilent Technologies and Waters, respectively. Sep-Paks were conditioned prior to use with 5 mL absolute ethanol. Analytical HPLC analyses for radiochemical work were performed on an Agilent 1200 Series instrument equipped with multi-wavelength detectors. Mass spectra (MS) of decayed [18F]-FDT solutions were recorded on a 6130 Quadrupole LC/MS (Agilent Technologies) instrument equipped with a diode array detector. LC-MS analysis of [18F]-FDT was performed on an Agilent 1260 HPLC system coupled to an Advion Expression LCMS mass spectrometer with an ESI source. The LC inlet was an Agilent 1200 series chromatographic system equipped with a 1260 quaternary pump, 1260 Infinity autosampler, 1290 thermostatted column compartment and a radiation detector. [18F]-FDG (20–30 mCi in 0.8–1 mL) was added to the reaction mixture containing 100 μL 1 M HEPES buffer, pH 7.6, 20 μL 1 M MgCl2, 20 μL 1 M ATP, 60 μL 1 M UDP-glucose, ~50 μL OtsA (1 mg), ~50 μL OtsB (1 mg) and 20 μL hexokinase (5 mg). The reaction mixture was incubated at 37 °C for 30 min. After 30 min, the mixture was diluted with absolute ethanol (4 mL) and passed through a 5 μm syringe filter. The eluent was passed slowly through an amine Sep-Pak SPE cartridge at a flow rate of 1–2 drops per second. The eluent was then concentrated in vacuo. The resulting solution was filter-sterilized into a sterile vial for delivery. The identity of the compound was confirmed by LC-MS. For the biodistribution and blocking studies, [18F]-FDG was transferred under vacuum to Reactor 1 containing 100 μL 1 M HEPES buffer, pH 7.6, 20 μL 1 M MgCl2, 20 μL 1 M ATP, 60 μL 1 M UDP-glucose, ~50 μL OtsA (1 mg), ~50 μL OtsB (1 mg) and 20 μL hexokinase (5 mg). 
The reaction mixture was incubated for 30 min at 45 °C and absolute ethanol was added (3–6 mL). The reaction mixture was passed through a filter (5 μm) and a stack of three NH2 cartridges and collected in Reactor 2. An additional 1 mL of 75% ethanol in water was added to rinse Reactor 1 and transferred into Reactor 2. The combined solution was concentrated under nitrogen at 60 °C for 10 min, and 2 mL saline was added to Reactor 2. The final [18F]-FDT solution was transferred to the product vial through a sterile filter. The quality of the product was determined by analytical HPLC, and the identity of the compound was confirmed by LC-MS. A calibration curve was generated, following the enzymatic assay, using known concentrations of trehalose. The decayed masses of [18F]FDT from the syntheses were calculated based on this calibration curve. Then, the specific activity of the final product was calculated following the definition: radioactivity at the end of the synthesis per unit mass of compound. HRMS (ESI+) calcd. for C27H35F3O20SNa+ (M+Na+): 791.1287, found 791.1246. In a 10 mL tube, 50 mg of 4-OH trehalose Ac7 and 77 mg of 2,6-di-t-butyl-4-methylpyridine were dissolved in 10 mL of anhydrous dichloromethane. The mixture was cooled to 0 °C and 55 μL of triflic anhydride was added slowly. The mixture was stirred while monitoring by TLC, 1 h at 0 °C followed by 24 h at r.t. The reaction mixture was evaporated to dryness at low temperature and purified by silica column chromatography to give 93.3 mg of 6-OTf Tre Ac7 as a white solid in 48% yield. TLC (1:1 CH2Cl2:EtOAc): Rf = 0.58 for the product and Rf = 0.40 for the starting material 6-OH Tre Ac7. 
1H NMR (400 MHz, CDCl3) δ ppm 2.03, 2.03, 2.04, 2.05, 2.08, 2.09, 2.09 (7 × CH3 acetates), 3.99, 4.05, 4.22, 4.25, 4.41, 4.52, 5.00–5.10, 5.28, 5.33, 5.48, 5.50; 13C NMR (101 MHz, CDCl3) δ ppm 20.6, 20.7, 20.7 (7 × CH3 acetates), 61.7 (C-6' OAc), 67.8 (C-5), 68.3 (C-5'), 68.3 (C-4), 68.4 (C-4'), 69.4, 69.5 (C-3'), 70.0 (C-3), 73.3 (C-6 OTf), 92.4 (C-1), 93.1 (C-1'), 169.3, 169.4, 169.5, 169.6, 169.9, 170.0, 170.6 (7 × C=O acetates); 19F NMR: −74.4, s, CF3 OTf; HRMS (ESI+) calcd. for C27H35F3O20SNa+ (M+Na+): 791.1287, found 791.1259. This study was carried out in accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. All NIH studies were performed in accordance with the regulations of the Division of Radiation Safety at the National Institutes of Health. The University of Pittsburgh cynomolgus macaque studies were approved by its IACUC and Division of Radiation Safety, and animals were pair-housed in an approved ABSL3 facility. Biodistribution studies with naive rhesus macaques were approved by the Institutional Animal Care and Use Committee (IACUC) of the NIH Clinical Center. The IACUC of the NIAID, NIH approved the experiments described herein with rabbits and marmosets under protocols LCID-3 and LCID-9 respectively, and all efforts were made to provide intellectual and physical enrichment and minimize suffering. Once infected, female rabbits or marmosets of both sexes were housed individually or paired in biocontainment cages in a biological level 3 animal facility approved for the containment of M. tuberculosis. FDT standards were prepared at known concentrations of [19F]FDT (10 μM, 20 μM, 40 μM, 120 μM, 160 μM, and 300 μM) in 1.5 mL Eppendorf tubes and diluted with up to 50% acetonitrile solution. 
Samples were vortexed for 1 min and centrifuged at 13,000 rpm for 10 min, and the supernatant was analysed directly by LC-MS using LC method 2. FDT standards were also prepared in human plasma (Sigma Aldrich), whereby 20 μL plasma samples were spiked with various known concentrations of FDT. For the in vivo work, doses of 500 μM and 50 μM FDT were injected into the mice through tail vein injection. Prior to injection, each mouse was pre-bled (100 μL) and control samples were obtained. Five min post injection, blood samples were drawn (775 μL) into a syringe containing 75 μL sodium citrate as anticoagulant; hence the total blood volume was 700 μL. Samples were spun at 13,000 rpm for 10 min and the supernatant (plasma) was used for further analysis. 200 μL of plasma sample was transferred into a fresh 1.5 mL Eppendorf tube and diluted with an equal amount of acetonitrile to precipitate out blood plasma protein and other macromolecules. Samples were further spun at 13,000 rpm for 5 min. The supernatant was transferred into a mass spectrometry vial and analyzed using LC-MS analysis methods 2 and 3.63 Briefly, syringes of FDG or FDT were measured in a dose calibrator immediately before and after injection, targeting injected doses of 2 mCi/kg. During uptake and distribution of the probes, a CT scan from the base of the skull spanning the lungs and the upper abdominal cavity was acquired as described for each species on a helical eight-slice Neurological Ceretom CT scanner operated as part of a hybrid pre-clinical PET/CT system utilizing a common bed. The animal bed was then retracted into the microPET gantry and, sixty minutes ± 5 min post FDG injection, a series of 2 or 3 ten-minute emission scans with 75 mm thick windows with a 25 mm overlap were acquired caudal to cranial. The FDT scans were acquired beginning at 60, 90, and 120 minutes ± 5 min post injection with the same duration, window and overlap. 
For the blocking studies, injections of 150 μg of [19F]-FDT, synthesized as described above, were administered 60 min and 5 min prior to the FDT tracer injection and the scans were acquired as before. Two animals were used in the blocking studies, with one animal receiving the blocking agent while the other was administered saline only prior to the FDT tracer. Two days later, the administrations were reversed so that the second animal received the blocking agent. The emission data for all scans were processed and corrected as described previously.62 Cynomolgus macaques at the University of Pittsburgh were imaged using a Siemens microPET Focus 220 PET scanner and a Neurological Ceretom CT scanner as previously described.52 Scans were viewed using OsiriX to identify and analyze individual granulomas as done previously.64 Rabbits and marmosets at NIAID were anaesthetized and maintained during imaging as previously described.62 For FDG glycolytic activity measurements, PET/CT images were loaded into MIM fusion software to create lung contours using the CT 3D region-growing application with upper and lower voxel threshold settings of 2 and −1024 HU respectively, with hole filling and smoothing applied. Dense lesion centers were subsequently identified manually for inclusion in the lung region and the program calculated the FDG signal parameters. 
In addition, each lesion within the lung was marked with a 3-D region of interest (ROI) and the SUV statistics for the ROIs were captured into Excel sheets for analysis as previously described. For dynamic imaging, 151 MBq of [18F]FDT was administered via saphenous vein catheter after anesthesia was induced with ketamine and then maintained with 2\u20134% isoflurane/97% oxygen. Dynamic whole-body PET images were obtained on a Siemens mCT PET/CT scanner. The images were reconstructed using an iterative time-of-flight algorithm with a reconstructed transaxial resolution of about 4.5 mm. The PET acquisition was divided into 22 time frames of increasing durations: 2 \u00d7 15 sec, 4 \u00d7 30 sec, 8 \u00d7 60 sec, and 8 \u00d7 120 sec. Each frame in the sequence was gathered from 4 bed positions to obtain whole-body dynamic data over the scan duration. For toxicity studies, 2 g of [19F]-FDT was synthesized and tested for quality control as described above. For studies in rats, male and female Sprague Dawley rats (15/sex/group) were given a single iv administration of [19F]-FDT at 1.32 mg/kg or 13.2 mg/kg on day 7, or 1.32 mg/kg/day for 7 consecutive days; a control group was given 7 days of vehicle at an equivalent dose volume of 1 mL/kg on days 1\u20137. Animals were euthanised on day 9 (main groups) or day 21 (recovery groups). In dogs, male and female Beagle dogs (5/sex) were given a daily iv injection of [19F]-FDT at 0.4 mg/kg/day for 7 consecutive days, or a single iv administration of [19F]-FDT at 0.4 or 4 mg/kg on Day 7. A control group was given 7 days of vehicle at an equivalent dose volume of 0.25 mL/kg on Days 1\u20137. Animals were sacrificed on Day 9 or Day 21.
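The dynamic framing scheme above (22 frames of increasing duration) can be expanded into per-frame start times with a short sketch (illustrative only, not the scanner software):

```python
# Sketch: expand the dynamic PET framing scheme described above
# (2 x 15 s, 4 x 30 s, 8 x 60 s, 8 x 120 s) into per-frame
# (start_time, duration) pairs and the total acquisition length.

def frame_schedule(scheme):
    """scheme: list of (frame_count, duration_s) pairs."""
    frames, t = [], 0.0
    for count, duration_s in scheme:
        for _ in range(count):
            frames.append((t, duration_s))
            t += duration_s
    return frames, t

frames, total = frame_schedule([(2, 15), (4, 30), (8, 60), (8, 120)])
# 22 frames covering 1590 s (26.5 min) of dynamic data
```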
The single-dose administrations for Groups 3\u20134 were initiated on Day 7 so that clinical pathology and necropsy occurred on the same calendar day for all Groups (1\u20134), thus allowing vehicle control data to be shared between the two dose regimens in the analysis of clinical pathology and necropsy results (Supplement 1"}
+{"text": "We aimed to characterize the stomach adenocarcinoma (STAD) microbiota and its clinical value using an integrated analysis of the microbiome and transcriptome. Microbiome and transcriptome data were downloaded from the Cancer Microbiome Atlas and the Cancer Genome Atlas (TCGA) databases. We identified nine differentially abundant microbial genera, including Helicobacter, Mycobacterium, and Streptococcus, which clustered patients into three subtypes with different survival rates. In total, 74 prognostic genes were screened from 925 feature genes of the subtypes, among which five genes were identified for prognostic model construction: NTN5, MPV17L, MPLKIP, SIGLEC5, and SPAG16. The prognostic model could stratify patients into different risk groups, with the high-risk group associated with poor overall survival. A nomogram established using the prognostic risk score could accurately predict the 1, 3, and 5\u2009year overall survival probabilities. The high-risk group had a higher proportion of histological grade 3 and recurrence samples. Immune infiltration analysis showed that samples in the high-risk group had a higher abundance of infiltrating neutrophils. The Notch signaling pathway activity showed a significant difference between the high- and low-risk groups. In conclusion, a prognostic model based on five feature genes of microbial subtypes could predict the overall survival for patients with STAD. Gastric cancer (GC) is a disease with high molecular and phenotypic heterogeneity, with adenocarcinoma being the most common type. H. pylori can trigger the development of GC, and chronic infection causes decreased acid secretion, resulting in the development of a different gastric bacterial community. After H. pylori eradication, the Shannon and richness indices of the gastrointestinal microbial community were significantly increased in H.
pylori-positive GC patients, involving obvious changes in 18 gastric microbial genera. The abundance of Propionibacterium acnes and Prevotella melaninogenica was elevated, while Bacteroides uniformis, H. pylori, and Prevotella copri were reduced. Gut microbiota affects the morphological, immunological, and nutritional functions of the digestive tract and may be implicated in the development of many diseases. Recent advances in transcriptome sequencing have provided an unprecedented global view of transcriptomes. Transcriptome sequencing is widely used to identify the key genes and pathways involved in gastric adenocarcinoma, and a previous study suggested that SALL4 might be a key prognostic gene in gastric adenocarcinoma. 2.1 The normalized log expression data of 407 STAD samples (Table S1) were acquired from the TCGA database. Additionally, the microarray dataset GSE62254 was obtained for validation. These datasets were analyzed as per the workflow described below. 2.2 The differences in microbial abundance between 91 STAD and 32 histologically normal samples were compared using unpaired t-tests. Based on the abundance of the differential microbiota, microbial subtypes in the STAD samples were identified using ConsensusClusterPlus (version 1.54.0) in R 3.6.1, and survival differences between subtypes were assessed. 2.3 Using the Limma package (version 3.34.7), the DEGs between subtypes were identified. Functional enrichment analysis of the DEGs in the three groups was conducted using the Database for Annotation, Visualization, and Integrated Discovery; P < 0.05 was used to select the significantly enriched Gene Ontology (GO) biological processes and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways. 2.4 The prognostic value of the DEGs was assessed by univariate Cox regression analysis using the survival package (version 2.41-1). Genes with P < 0.05 were then analyzed by multivariable Cox regression, and genes with P < 0.05 in the multivariable Cox regression analysis were defined as independent prognostic genes.
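The group comparison in section 2.2 above is a standard unpaired t-test; a minimal standard-library sketch of the Welch t statistic (the data values below are made up for illustration, and the authors' actual implementation was in R) is:

```python
# Sketch of an unpaired (Welch) t statistic comparing microbial
# log-abundance between tumor and normal samples. Hypothetical data.
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)  # sample variances
    return (mean(a) - mean(b)) / (va / na + vb / nb) ** 0.5

tumor = [2.1, 2.5, 3.0, 2.8, 2.2]   # hypothetical log-abundances
normal = [1.0, 1.2, 0.9, 1.1, 1.3]
t = welch_t(tumor, normal)  # large |t| suggests differential abundance
```

In practice the statistic would be converted to a P value (e.g. with `scipy.stats.ttest_ind(..., equal_var=False)`) before the P < 0.05 screen described above.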
In addition, optimal genes among the independent prognostic genes were further screened by LASSO regression analysis using the lars package (version 1.2) in R 3.6.1. The prognostic risk score was calculated as the sum over the optimal genes of Coefgenes \u00d7 Expgenes, where Coefgenes refers to the LASSO prognostic coefficient and Expgenes refers to the expression level of each gene. KM survival curves were used to evaluate the differences in prognostic value of genes between different gene expression groups using the survival package (version 2.41-1). In addition, the prognostic risk score was calculated for samples in both the TCGA dataset and the GSE62254 validation dataset. The tumor samples in each dataset were grouped into two risk groups based on the median value of the prognostic risk score. The survival differences between the risk groups were evaluated using KM survival analysis. The 1, 3, and 5\u2009year prediction accuracy of the prognostic model was analyzed using the survivalROC package (version 1.0.3) in R. 2.5 Based on the clinical data in the TCGA dataset, univariate and multivariate Cox regression analyses were utilized to determine the independent prognostic factors by analyzing the prognostic risk score and the various clinical variables, including age, sex, neoplasm histologic grade, pathologic stage, recurrence, and pathologic M, N, and T. Clinical variables with P < 0.05 in the univariate Cox regression analysis were included in the multivariate Cox regression analysis. Based on the independent prognostic factors elucidated by the multivariate Cox analysis, a nomogram was established to predict the 1, 3, and 5\u2009year overall survival probabilities of patients with GC. Calibration of the nomogram was evaluated graphically using calibration curves.
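The risk-score construction described above (sum of each gene's LASSO/Cox coefficient times its expression, then a median split) can be sketched as follows; the coefficients and expression values are hypothetical, not the published ones:

```python
# Sketch of the prognostic risk score: score = sum(coef_g * expr_g),
# with samples split at the median score into high-/low-risk groups.
# All numeric values here are illustrative, not from the paper.
from statistics import median

coefs = {"NTN5": -0.30, "MPV17L": -0.25, "MPLKIP": -0.20,
         "SIGLEC5": 0.35, "SPAG16": 0.40}  # hypothetical LASSO coefficients

def risk_score(expr):
    return sum(coefs[g] * expr[g] for g in coefs)

samples = {  # hypothetical per-sample expression levels
    "S1": {"NTN5": 2.0, "MPV17L": 1.5, "MPLKIP": 1.0, "SIGLEC5": 0.5, "SPAG16": 0.4},
    "S2": {"NTN5": 0.4, "MPV17L": 0.3, "MPLKIP": 0.2, "SIGLEC5": 2.5, "SPAG16": 2.0},
    "S3": {"NTN5": 1.0, "MPV17L": 1.0, "MPLKIP": 1.0, "SIGLEC5": 1.0, "SPAG16": 1.0},
}
scores = {s: risk_score(e) for s, e in samples.items()}
cut = median(scores.values())
groups = {s: ("high" if v > cut else "low") for s, v in scores.items()}
```

With negative coefficients on the protective genes and positive coefficients on the risk genes, samples dominated by SIGLEC5/SPAG16 expression land in the high-risk group.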
3.1 In total, five differential microbial classes, eight differential microbial orders, eight differential microbial families, and nine differential microbial genera were obtained (Table S3). There was more microbiota diversity at the genus level, and hence the genus-level data were used in the following analyses. Among the nine differential microbial genera, Mycobacterium and Helicobacter showed a higher abundance in the histologically normal samples, while the remaining seven genera showed a higher abundance in the STAD samples. 3.2 Most STAD samples were clustered into subtype 1, while 8 and 11 samples were clustered into subtypes 2 and 3, respectively. Subtype 1 mainly contained a higher abundance of Streptococcus, subtype 2 mainly contained a higher abundance of Helicobacter, and subtype 3 mainly contained a higher abundance of Neisseria, Selenomonas, and Capnocytophaga. Survival analysis revealed that subtype 3 had a favorable overall survival, while subtype 2 had a worse overall survival than the other subtypes. In the subtype 1 vs subtypes 2 and 3 comparison, most of the DEGs were upregulated, whereas most of the DEGs were downregulated in the subtype 2 vs subtypes 1 and 3 and subtype 3 vs subtypes 1 and 2 comparisons. Functional enrichment revealed that the DEGs in the subtype 1 vs subtypes 2 and 3 comparison were mainly enriched in 18 GO biological processes, such as GO:0071805 \u223c potassium ion transmembrane transport and GO:0060070 \u223c canonical Wnt signaling pathway, and nine KEGG pathways, such as hsa04310:Wnt signaling pathway and hsa04390:Hippo signaling pathway. 3.3 The analyses identified netrin 5 (NTN5), sialic acid binding Ig like lectin 5 (SIGLEC5), MPV17 mitochondrial inner membrane protein-like (MPV17L), M-phase-specific PLK1 interacting protein (MPLKIP), and sperm associated antigen 16 (SPAG16) as optimal prognostic genes. NTN5, MPV17L, and MPLKIP were protective factors (hazard ratio < 1), whereas SIGLEC5 and SPAG16 were risk factors (hazard ratio > 1). A prognostic model was constructed based on these five aforementioned genes.
The samples were then grouped into two risk groups based on the median risk score. In the TCGA dataset, the distribution of risk scores indicated that high-risk patients tended to have a worse prognosis, and KM curves confirmed the survival difference between the risk groups. 3.4 The expression patterns of the five genes in the prognostic model showed that the expression of NTN5, MPLKIP, and MPV17L gradually decreased as the risk score increased, while the expression of SIGLEC5 and SPAG16 gradually increased. The clinical factors of the two risk groups are shown in Table S6. There were significant differences in neoplasm histologic grade (P = 0.0479) and recurrence (P = 0.0468) between the high- and low-risk groups. Specifically, the low-risk group had a higher proportion of samples without recurrence than the high-risk group, and the high-risk group had a higher proportion of histological grade 3 samples than the low-risk group. 3.5 Univariate and multivariate Cox regression analyses showed that the prognostic risk score was an independent prognostic factor for patients with STAD. 3.6 The abundance of 22 infiltrating immune cell types was evaluated using the CIBERSORT algorithm. Six infiltrating immune cell types, including M0 macrophages, M2 macrophages, resting mast cells, resting NK cells, monocytes, and neutrophils, were significantly different between the two risk groups. The samples in the high-risk group had a higher abundance of infiltrating M2 macrophages, resting mast cells, and neutrophils. KEGG pathways with P < 0.05 were screened, including the Notch signaling pathway, complement and coagulation cascades, and the adipocytokine signaling pathway. 4 The gastrointestinal tract is a repository of bacteria in the human body. We identified nine differential microbial genera in STAD, including Helicobacter, Mycobacterium, Streptococcus, and Veillonella; of these, Helicobacter and Streptococcus were more abundant in both tumor and normal samples than the other genera.
Intestinal flora form a symbiotic relationship with the human body, which is not only involved in the metabolism of nutrients, the development of the body\u2019s immune system, intestinal barrier function, and other normal physiological processes, but is also closely related to the development of a variety of human diseases, especially gastrointestinal tumors. H. pylori is a species of the Helicobacter genus, and its infection is a well-known risk factor for the development of GC. However, not all people infected with H. pylori will develop stomach problems, nor will all people with stomach problems become infected with H. pylori. Mycobacterium abscessus was reported to be highly prevalent in GC patients, and gastric Mycobacterium abscessus primarily colonized the epithelial cells, especially gastric gland-bearing cells and mucosa. Mycobacterium conceptionense infection has been reported in patients with advanced STAD. These findings suggest a potential role of Mycobacterium in GC. The Streptococcus genus can survive at low gastric pH and is acid-tolerant. The Veillonella genus, Streptococcus mitis, and Streptococcus salivarius are all associated with GC risk, and they display a better diagnostic value in differentiating patients with GC from healthy individuals. Veillonella and Streptococcus showed positive correlations with serum levels of l-threonine, l-alanine, and methionol in patients with GC. In our study, subtypes 1 and 2 mainly contained higher abundances of Streptococcus and Helicobacter, respectively, and had a worse prognosis than subtype 3, further emphasizing the importance of Streptococcus and Helicobacter in GC development. We then screened the feature genes of these three microbial subtypes, of which 74 showed prognostic value. Multivariable Cox and LASSO regression analyses identified the five genes with the most prognostic value: NTN5, MPV17L, MPLKIP, SIGLEC5, and SPAG16.
NTN5 encodes netrin-5, which belongs to the netrin family, is homologous to the C345C domain of netrin-1, and promotes tumorigenesis through cell adhesion, apoptosis, angiogenesis, and other processes. MPV17L, a crucial paralog of MPV17, encodes a transmembrane protein involved in the metabolism of peroxisomal reactive oxygen species; Krick et al. indicated that MPV17L could be involved in protecting mitochondria from apoptosis and oxidative stress. MPLKIP, also named TTDN1, encodes a protein that plays an important role in maintaining the integrity of the cell cycle by interacting with polo-like kinase 1, and its inhibition or overexpression results in multi-nuclei or multipolar spindles. SIGLEC5 encodes a siglec belonging to the sialic acid-binding immunoglobulin-like receptor family that regulates immune cell function in various disorders. SIGLEC5 has also been linked to Streptococcus, which means it has significance in regulating host immunity. Immune infiltration analysis showed that samples in the high-risk group had a higher abundance of infiltrating M2 macrophages, resting mast cells, and neutrophils. Tumor-associated macrophages are heterogeneous, with a tumor-promoting M2 phenotype and a tumor-inhibiting M1 phenotype; macrophages in the tumor microenvironment are generally polarized to the M2 phenotype to promote tumor progression. Furthermore, the Notch signaling pathway activity showed a significant difference between the high- and low-risk groups. Notch signaling, a key regulator of multiple cellular functions, is highly expressed and activated in gastric cancer. Our study had several limitations.
First, the five prognostic genes used for prognostic model construction were not validated in clinical samples. Second, our analysis was based on public online data, and the robustness of the constructed prognostic model needs to be validated in prospective clinical cohorts. Further studies are required to confirm our findings. In conclusion, we identified nine differential microbial genera in STAD that could cluster STAD patients into three microbial subtypes with significantly different survival rates. The prognostic model based on the key feature genes of these microbial subtypes could predict the overall survival of STAD patients, and the model showed associations with the clinical characteristics and immune microenvironments of the patients. The Notch signaling pathway may be a key mechanism that affects the role of the microbiota in GC development and prognosis. These results deepen our understanding of the importance of the microbiota and its clinical predictive value."
\ No newline at end of file