diff --git "a/deduped/dedup_0411.jsonl" "b/deduped/dedup_0411.jsonl"
new file mode 100644
--- /dev/null
+++ "b/deduped/dedup_0411.jsonl"
@@ -0,0 +1,64 @@
+{"text": "Nevirapine and efavirenz are the most commonly prescribed of the class of antiretroviral drugs called non-nucleoside reverse transcriptase inhibitors (NNRTIs). Efavirenz has the advantage of once-daily dosing. In a recent study called the 2NN study (Lancet 363: 1253\u20131263), it appeared to be only marginally superior to nevirapine in terms of clinical success and virological suppression. Van Leth and colleagues have now shown that while nevirapine and efavirenz both raise high-density lipoprotein (HDL) cholesterol (the \u201cgood\u201d type of cholesterol), the overall lipid profile is better with nevirapine than with efavirenz. \u201cThese data suggest that nevirapine may be preferable to efavirenz in HIV-infected adults with other cardiovascular risk factors,\u201d says the study's academic editor, Andrew Carr of St. Vincent's Hospital in Darlinghurst, Australia. \u201cHowever, perceived cardiovascular risk is only one factor that would affect the choice between these two drugs.\u201d Van Leth and colleagues prospectively analyzed the lipids of patients enrolled in the 2NN study, a randomized, open-label efficacy study that included adults with HIV who had never been on antiretroviral drugs. All patients were given stavudine and lamivudine and were then randomized into three treatment groups: nevirapine, efavirenz, or both. For the lipid analysis, which was preplanned, the researchers included only the nevirapine and efavirenz groups. This was because the 2NN study showed that simultaneous use of nevirapine and efavirenz should be avoided\u2014the combination is associated with increased toxicity without increased efficacy. The increase in HDL cholesterol was significantly higher with nevirapine than with efavirenz. 
There was a decrease in the ratio of total cholesterol to HDL cholesterol with nevirapine and an increase with efavirenz. The study does not prove, however, that the rise in HDL cholesterol seen with NNRTIs actually leads to a reduction in coronary heart disease. \u201cThere are no vascular functional data,\u201d says Carr, \u201cor clinical vascular endpoint data that confirm that the statistically significant lipid differences observed are clinically significant.\u201d The study was funded by Boehringer Ingelheim, the manufacturer of nevirapine. The authors clearly state that the company had \u201ca nonbinding input on issues of study design and analyses\u201d but it had \u201cno influence on reporting of the data or the decision to publish.\u201d Despite its limitations, van Leth and colleagues' study \u201cmoves clinicians and patients away from \u2018one-size-fits-all\u2019 antiretroviral therapy,\u201d says Carr. \u201cIt takes us further along the path of choice of antiretroviral therapy being individualized according to other patient comorbidities and risk factors, as well as therapy simplicity and side effects.\u201d"} +{"text": "The recently observed low reproducibility of focus score (FS) assessment at different section depths in a series of single minor salivary gland biopsies highlighted the need for a standardized protocol of extensive histopathological examination of such biopsies in Sj\u00f6gren's syndrome. For this purpose, a cumulative focus score (cFS) was evaluated on three slides cut at 200-\u03bcm intervals from each of a series of 120 salivary biopsies. The cFS was substituted for the baseline FS in the American\u2013European Consensus Group (AECG) criteria set for Sj\u00f6gren's syndrome classification, and then test specificity and sensitivity were assessed against clinical patient re-evaluation. 
Test performances of the AECG classification with the original FS and the score obtained after multilevel examination were statistically compared using receiver operating characteristic (ROC) curve analysis. The diagnostic performance of AECG classification significantly improved when the cFS was entered in the AECG classification; the improvement was mostly due to increased specificity in biopsies with a baseline FS \u2265 1 but <2. The assessment of a cFS obtained at three different section levels on minor salivary gland biopsies can be useful especially in biopsies with baseline FSs between 1 and 2. In this study, we tried to standardize a protocol for histopathological MSGB evaluation in which the FS is assessed by examining a larger area of the biopsy tissue, and we investigated how the FS obtained affects the number of patients classified as having SS, as compared with the routine method, using the classification criteria recently proposed by the AECG. We retrospectively studied a consecutive series of patients thoroughly investigated at our hospital between 1998 and 2002 for suspected primary SS, including a follow-up of at least 1 year after the diagnostic evaluation. Patients with secondary SS or who had been diagnosed by biopsy as having nonspecific inflammation, fibrosis, and atrophy of the gland were excluded. A small glandular surface area was not considered a criterion for exclusion, provided that at least one normotrophic glandular lobule had been sampled. Lesser degrees of infiltration, with fewer than one focus per 4 mm2 (0 < FS < 1), were recorded separately; an infiltrate was scored as a focus only when the adjacent glandular parenchyma was histologically normal. We further classified patients with a positive FS into two groups, those with fewer than two foci per 4 mm2 (1 \u2264 FS < 2) and those with two or more (FS \u2265 2). The area of the biopsy sections was assessed with video-assisted morphometric software capable of measuring the area of delineated surfaces. 
The comparison of automated and manual area measurements of a smaller series of MSGB sections did not show a significant difference (data not shown). This prompted us to choose the automated system to simplify the examination of the large number of samples involved in the study. All patients had undergone thorough clinical and instrumental evaluation. Sample blocks were recut at two additional levels, about 200 and 400 \u03bcm deeper than the original section. Sections 4 \u03bcm thick corresponding to these levels were collected on separate slides and stained with hematoxylin and eosin. Considering that an infiltrate of 50 lymphocytes in our section had a mean diameter of 50 \u03bcm, we assumed that the interposition of 200 \u03bcm between the evaluated sections was enough to ensure that the FS recorded at each level was independent of the other two and that if the same focus was present in two section levels, the focus itself was large enough to justify repeated scoring. The two new sections were blindly examined by the same pathologist, who again recorded the area and the focus score for each level. For each patient, the total number of foci at all three levels and the total surface area measured at all levels were used to calculate a cumulative FS (cFS) for the three sections. The cFS obtained after re-evaluation was entered in the AECG criteria set to obtain a new classification for each patient. Quantitative data are shown as means \u00b1 standard deviation (SD). Specificity and sensitivity were assessed with their 95% confidence intervals (CI). Differences in frequencies were evaluated by means of chi-square statistics or the Fisher exact test, as appropriate. A P value of less than 0.05 was considered to indicate statistical significance. All tests were two-sided. Analyses were performed with Statistica for Windows and MedCalc software. 
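The cumulative score described above is simple arithmetic: the total foci across the three section levels over the total examined glandular area, expressed per 4 mm2 of tissue. A minimal sketch (function name and values are illustrative, not taken from the study's own software):

```python
def cumulative_focus_score(foci_per_level, area_mm2_per_level):
    """Cumulative focus score (cFS): total foci across all section
    levels, normalised per 4 mm^2 of total examined glandular area.

    foci_per_level: foci counted at each of the three section levels
    area_mm2_per_level: glandular area (mm^2) measured at each level
    """
    total_foci = sum(foci_per_level)
    total_area = sum(area_mm2_per_level)
    if total_area <= 0:
        raise ValueError("no glandular tissue measured")
    return 4.0 * total_foci / total_area  # foci per 4 mm^2

# Illustrative example: 3 foci in total over 9 mm^2 of tissue
print(cumulative_focus_score([1, 1, 1], [3.0, 3.0, 3.0]))
```

Pooling counts and areas before dividing (rather than averaging the three per-level scores) matches the description above and keeps levels with more sampled tissue proportionally weighted.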
Given the known limitations of diagnostic accuracy as a parameter for measuring the diagnostic performance of a test, specificity and sensitivity were compared using receiver operating characteristic (ROC) curves. The study series comprised 138 patients, 65 of whom had a baseline FS = 0, 14 with 0 < FS < 1, 18 with 1 \u2264 FS < 2, and 41 with FS \u2265 2. Eighteen patients had incomplete clinical data that hampered either the AECG classification or the clinical re-evaluation. These patients were excluded from further analysis. The final series included 120 patients, for whom demographic, biopsy, and clinical data and the result of the clinical re-evaluation are presented in Table . In 96 (80%) of the 120 biopsies, the FS group did not change after serial sectioning and calculation of the cFS. In a further 14 biopsies, the FS group changed but this did not affect the patient's negative or positive status. In the biopsies for the other 10 patients, 1 (1.7%) of the 57 with a baseline FS = 0 and 1 (9%) of the 11 with a baseline score of 0 < FS < 1 switched to a FS consistent with SS according to AECG criteria (FS \u2265 1). At clinical re-evaluation, these two patients were considered not to have SS. Seven (46%) of the 15 patients with a baseline score of 1 \u2264 FS < 2 and one (3%) of 37 with a baseline FS \u2265 2 switched to a grade inconsistent with SS (FS < 1). On clinical re-evaluation, 7 of these 8 patients were assessed as not having SS. When the cFSs were entered in the AECG criteria set, specificity improved (P = 0.056), increasing the accuracy from 88.3% (95% CI 81.2\u201393.5) to 94.2% (95% CI 88.3\u201397.6). Pairwise comparison of the ROC curves showed a statistically significant difference between patient classification before and after multilevel FS evaluation only in biopsies with 1 \u2264 FS < 2. In the present series of 120 patients fully evaluated for SS, the sensitivity and specificity of the baseline AECG criteria set were 93.9% and 84.5%, respectively. 
Reclassification with the cFS did not affect sensitivity, whereas specificity increased to 94.4%. Multilevel examination changed the baseline classification in 6% of patients evaluated for SS and increased the diagnostic performance of the criteria recently proposed by the AECG for SS classification. The present study was prompted by a recent paper documenting that MSGB grading of inflammation was scarcely reproducible at different section depths, and that the difference between grades recorded at baseline and at deeper levels was sufficient to change the biopsy from positive to negative or vice versa in 10% of grade I (FS = 0), 44.4% of grade II (0 < FS < 1), 88.8% of grade III (1 \u2264 FS < 2), and 40% of grade IV (FS \u2265 2) biopsies. On this basis, we aimed at assessing whether the histopathological evaluation of a larger area of MSGB tissue, as obtained by cutting the biopsy sample at additional section levels, could increase the diagnostic performance of the histopathological study and of the AECG criteria set proposed for the classification of SS. We chose a minimum requirement of three different section levels, by analogy with the procedure standardized for the histopathological study of endomyocardial biopsies. With reference to the diagnostic gold standard, when patients were classified according to the AECG criteria set including the cFS, specificity increased by 9.8%, and the pairwise comparison of the ROC curves showed a statistically significant improvement of the diagnostic performance, mostly due to the increased test specificity in biopsies with 1 \u2264 FS < 2, whereas the increase was minimal in FS \u2265 2 and null in biopsies inconsistent with SS (0 < FS < 1). 
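The reported percentages are internally consistent. A quick arithmetic check, assuming (as the percentages imply, although the counts are not stated explicitly in the text) 49 patients with SS and 71 without at clinical re-evaluation:

```python
# Counts inferred from the reported percentages, not stated in the text:
# 49 patients with SS and 71 without at clinical re-evaluation (n = 120).
tp, fn = 46, 3          # sensitivity 46/49 = 93.9%, unchanged by the cFS
tn_base, fp_base = 60, 11  # baseline specificity 60/71 = 84.5%
tn_cfs, fp_cfs = 67, 4     # cFS specificity 67/71 = 94.4%

n = tp + fn + tn_base + fp_base  # 120 patients
print(f"baseline accuracy: {(tp + tn_base) / n:.1%}")  # 88.3%
print(f"cFS accuracy:      {(tp + tn_cfs) / n:.1%}")   # 94.2%
```

Every figure in the abstract (93.9%, 84.5%, 94.4%, 88.3%, 94.2%) falls out of this single 2x2 layout, which supports the inferred counts.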
One advantage of the proposed method of MSGB evaluation is that specificity is increased without affecting sensitivity; on the other hand, it was shown that improving sensitivity by means of increasing the cutoff value of positive FS resulted in a substantial reduction of specificity. To explain the increased specificity observed with the examination of multilevel salivary gland biopsies, it should be considered that, because of the uneven distribution of inflammatory infiltrates in the gland, the examination of a larger tissue area provides a more representative sample. One potential limit of the present study is represented by the need to introduce a gold standard reference to assess the diagnostic accuracy of the test, independent of the widely accepted AECG criteria set for SS classification. In fact, after clinical re-evaluation, which we adopted as a gold standard, some patients appeared to have been misclassified according to AECG criteria. This only partial correspondence between the judgement of experienced clinicians and classification criteria is a well-known problem in the diagnosis of rheumatological disorders and justifies the requirement of a wide criteria set for patient classification. In the absence of single, straightforward diagnostic parameters, a thorough revision of the patient's chart and follow-up by experienced rheumatologists was chosen as the reference gold standard, by analogy with what has been done in many rheumatological studies, including that of the European Community Study Group on Diagnostic Criteria for SS. The assessment of a cumulative focus score (cFS) obtained at three different section levels on minor salivary gland biopsies, cut at least 200 \u03bcm apart, can improve the diagnostic accuracy of the criteria set used for SS classification, especially in biopsies with a baseline FS between 1 and 2. 
Since the value of the MSGB has been confirmed by the recent AECG revision of the SS classification criteria, the increase in diagnostic accuracy obtained with multilevel examination is of practical relevance. AECG = American-European Consensus Group; cFS = cumulative FS; CI = confidence interval; FS = focus score; MSGB = minor salivary gland biopsy; ROC = receiver operating characteristic; SE = standard error; SS = Sj\u00f6gren's syndrome. The author(s) declare that they have no competing interests. PM participated in the design of the study, performed the histopathological analysis, coordinated the study, and drafted the manuscript. AM and RC reviewed and discussed patients' charts for clinical re-evaluation. OE performed all salivary gland biopsies. CV participated in case collection and data analysis. CT participated in the design of the study and performed the statistical analysis. ES and CM conceived the study and participated in its design. CM also participated in the clinical re-evaluation of patients. All authors read and approved the final manuscript."} +{"text": "Dengue disease severity is usually classified using criteria set up by the World Health Organization (WHO). We aimed to assess the diagnostic accuracy of the WHO classification system and modifications to this system, and evaluated their potential practical usefulness. Patients, admitted consecutively to the hospital with severe dengue, were classified using the WHO classification system and modifications to this system. Treating physicians were asked to classify patients immediately after discharge. We calculated the sensitivity of the various classification systems for the detection of shock and the agreement between the various classification systems and the treating physician's classification. Of 152 patients with confirmed dengue, sixty-six (43%) had evidence of circulatory failure. The WHO classification system had a sensitivity of 86% (95% CI 76\u201394) for the detection of patients with shock. 
All modifications to the WHO classification system had a higher sensitivity than the WHO classification system (sensitivity ranging from 88% to 99%). The WHO classification system was in only modest agreement with the intuitive classification by treating physicians, whereas several modified classification systems were in good agreement. The use of the WHO classification system to classify dengue disease severity is to be questioned, because it is not accurate in correctly classifying dengue disease severity and it lacks sufficient agreement with clinical practice. Dengue virus infections are recognized as major public health problems in tropical and subtropical regions. Each year an estimated 100 million infections occur and between 250,000 and 500,000 severe cases are reported to the World Health Organization (WHO). A standardized classification system for the severity of dengue virus infections is crucial for optimal communication of scientific data to improve our understanding of the pathogenesis and treatment of the disease. Incorrect disease severity classification may lead to faulty decision making in choosing the most appropriate treatment for the individual patient. Although the WHO classification system has been widely applied in research settings and publications, its use in everyday clinical practice has not proven easy or practical. In recent years, several studies reported difficulties with classification, inconsistencies in the WHO classification system, and some found it necessary to define new categories to identify severe cases that do not meet the criteria for DHF or DSS. These findings raise the question of whether the current WHO classification system is appropriate for the classification of dengue disease severity. To answer this, we assessed the diagnostic accuracy of the WHO classification system and modifications to this system. The presence of shock was used as a marker of disease severity. 
By comparing the various classification systems with an intuitive classification done by treating physicians, we additionally evaluated the practical usefulness of the WHO classification system and the various modified classification systems. The study was conducted from February 2001 to April 2003 in the paediatric intensive care unit and paediatric ward of the Dr. Kariadi Hospital in Semarang, Central Java, a region in Indonesia where dengue is endemic. Patients, aged 2 to 14 years, consecutively admitted to the hospital with suspected severe dengue virus infection were included, provided that a parent or legal guardian gave informed consent. No strict criteria were used for inclusion in the study. Treating physicians could use the WHO case definition for dengue haemorrhagic fever as a guiding principle. If patients did not meet all criteria but a clinical suspicion of severe dengue virus infection was present, these patients were still eligible for inclusion. Members of the study team recorded demographic data, medical history, physical examination findings, clinical course and routine laboratory test results for each patient on a standard data form. The tourniquet test was performed on admission. Since it may be negative when circulatory failure is present, the test was repeated after recovery from shock. Platelet count was performed daily. Haematocrit was measured at admission, every 2 hours for the first 6 hours and then every 6 hours until stable. Both haematocrit and platelet count were repeated in the event of clinical deterioration. Additional blood samples for diagnostic procedures were obtained on day of admission and on day 7 after enrolment. The ethics committee of the Dr. Kariadi Hospital approved all clinical and laboratory aspects of this study. The study was initially designed to study pathophysiological mechanisms of hemorrhagic tendencies in patients with a severe dengue virus infection. 
For this we collected blood samples at several points in time during admission for the analysis of coagulation activity, fibrinolysis and inflammatory mediators. The study protocol and results of these studies have been described previously. After completion of the study, two investigators (PK and ATAM) determined the presence of the following four clinical and laboratory manifestations on admission and during follow up in every patient using the standard data form: 1) fever or a history of acute fever, 2) haemorrhagic manifestations (at least a positive tourniquet test), 3) thrombocytopenia, and 4) signs of plasma leakage using laboratory findings and chest X-ray for the detection of pleural effusion. The presence of ascites on physical examination was not used as a sign of plasma leakage since routine physical examination has definite limitations in the precise diagnosis of ascites. Serological assays were used for the detection of dengue virus specific IgG and IgM antibodies, according to the procedures described by the manufacturer. The sensitivity and specificity of these tests have been evaluated previously. The evaluation of the diagnostic accuracy of the various classification systems was based on the presence of circulatory failure on admission and during follow up as the \"reference standard\". We calculated the proportion of patients with circulatory failure who were correctly classified as DHF (sensitivity). In addition, we calculated the proportion of patients classified as DF by the WHO classification system and without circulatory failure who were reclassified as having DHF when the modified classification systems were applied. The corresponding exact 95% confidence intervals (95% CI) were calculated from the binomial distribution. 
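The exact binomial interval referred to here (the Clopper\u2013Pearson interval) can be computed directly from quantiles of the beta distribution. A sketch using SciPy (illustrative, not the authors' own code; the 57-of-66 example matches the reported 86% sensitivity of the WHO system):

```python
from scipy.stats import beta

def exact_binomial_ci(successes, n, alpha=0.05):
    """Clopper-Pearson exact confidence interval for a binomial proportion."""
    failures = n - successes
    lo = beta.ppf(alpha / 2, successes, failures + 1) if successes > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, successes + 1, failures) if failures > 0 else 1.0
    return lo, hi

# Example: 57 of 66 shock patients classified as DHF (sensitivity ~86%)
lo, hi = exact_binomial_ci(57, 66)
print(f"sensitivity {57/66:.0%} (95% CI {100*lo:.0f}\u2013{100*hi:.0f})")
```

The exact interval is preferred over the normal approximation here because several of the subgroup counts are small.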
As a measure of agreement between the various classification systems and the intuitive classification by the treating physician, we calculated the weighted kappa (\u03baw) statistic with a 95% confidence interval. The \u03baw values were interpreted as: poor agreement, \u2264 0.20; fair agreement, 0.21 to 0.40; moderate agreement, 0.41 to 0.60; good agreement, 0.61 to 0.80; or very good agreement, \u2265 0.81. Of the remaining 183 patients, 28 patients (15%) had inconclusive serology and were therefore categorised as indeterminate. Three patients (2%) had definitive negative serology and were categorized as not dengue. The presence of dengue was objectively confirmed in 152 patients (83%): 115 by serology, 21 by dot blot immunoassay and 16 by dengue serotype specific reverse transcription PCR. Of the patients with confirmed dengue, 66 had evidence of circulatory failure: 52 (34%) had shock on admission and 14 (9%) went on to develop shock within 48 hours after admission. Six patients (4%) died because of prolonged shock, massive haemorrhage and respiratory failure. Patient characteristics on admission are summarised in Table . The WHO classification system had a sensitivity of 86% (95% CI 76\u201394). The number of patients with circulatory failure classified as DHF changed when the WHO classification system was modified. Interestingly, all modifications had a higher sensitivity than the WHO classification system. The sensitivities of the various classification systems are shown in Table . According to the WHO classification system, 20 (13%) patients were classified as having DF and 132 (87%) as having DHF. Of the DHF group, 57 (43%) could be classified as having DSS. Of the 66 patients with confirmed dengue and circulatory failure, 9 (14%) failed to meet all four criteria necessary for a diagnosis of DHF, and were thus classified as having DF. 
Six patients had a negative tourniquet test result and no bleeding manifestations during hospital admission, 1 patient never had a platelet count less than 100,000 cells/mm3, and the remaining 2 had no evidence of haemoconcentration or other signs of plasma leakage. Treating physicians classified 8 patients (5%) as having DF and the remaining 144 patients (95%) as having DHF. Of the 144 patients diagnosed as having DHF, 91 (63%) were considered to have circulatory failure at admission or at some point in time during admission in the hospital. In addition to the 66 patients with hypotension or narrow pulse pressure, treating physicians considered 25 patients with tachycardia, restlessness and cold and clammy skin as having compensated shock and subsequently diagnosed them as having circulatory failure. Agreements between the various classification systems and disease classification by treating physicians are shown in Table . Four classification systems showed moderate agreement with classification by the treating physician: bleeding + thrombocytopenia, \u03baw value 0.53 (95% CI 0.41\u20130.64); bleeding + thrombocytopenia or haemoconcentration, \u03baw value 0.55 (95% CI 0.43\u20130.66); WHO classification system, \u03baw value 0.57 (95% CI 0.45\u20130.68); and bleeding + haemoconcentration, \u03baw value 0.59 (95% CI 0.47\u20130.71). The remaining three modified classification systems showed good agreement with classification by the treating physician: thrombocytopenia + bleeding or haemoconcentration, \u03baw value 0.64 (95% CI 0.53\u20130.74); thrombocytopenia + haemoconcentration, \u03baw value 0.67 (95% CI 0.56\u20130.78); and haemoconcentration + thrombocytopenia or bleeding, \u03baw value 0.70 (95% CI 0.59\u20130.81). Our results show that a considerable number of dengue virus infected patients with circulatory failure is not identified correctly when the WHO classification system is applied. By modifying the combination of criteria that are included in the WHO classification system, we were able to identify more patients with circulatory failure. Overall, 86% of patients with circulatory failure were identified correctly by the strict WHO classification system. 
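A weighted kappa of the kind reported here can be computed from a confusion matrix of the two classifications. A generic linearly weighted implementation (illustrative, not the authors' code; the 2x2 example data are invented):

```python
import numpy as np

def weighted_kappa(conf_matrix):
    """Linearly weighted kappa from a square confusion matrix
    (rows: one classification, columns: the other)."""
    conf = np.asarray(conf_matrix, dtype=float)
    k = conf.shape[0]
    # Linear disagreement weights: 0 on the diagonal, growing with
    # the distance between ordered categories.
    idx = np.arange(k)
    w = np.abs(idx[:, None] - idx[None, :]) / (k - 1)
    n = conf.sum()
    # Expected cell counts under independence of the two raters.
    expected = np.outer(conf.sum(axis=1), conf.sum(axis=0)) / n
    observed_disagreement = (w * conf).sum() / n
    expected_disagreement = (w * expected).sum() / n
    return 1.0 - observed_disagreement / expected_disagreement

# Invented 2x2 example: 8 of 10 ratings agree -> kappa 0.6
print(weighted_kappa([[4, 1], [1, 4]]))  # -> 0.6
```

With more than two ordered severity categories (DF, DHF, DSS), the weights penalise a DF-versus-DSS disagreement more than a DF-versus-DHF one, which is why the weighted rather than the simple kappa is the natural choice here.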
By contrast, four modified classification systems recognized more than 90% of patients with circulatory failure as DHF. These findings are largely in line with observations made by Phuong and colleagues, who studied a considerably larger group of Vietnamese patients. Likewise, we found that the WHO classification system was in only modest agreement with the intuitive classification by treating physicians. Treating physicians were inclined to classify patients with evidence of plasma leakage as having DHF even in the absence of both thrombocytopenia and a haemorrhagic tendency. As a result, the modified classification systems haemoconcentration and thrombocytopenia, and haemoconcentration with either thrombocytopenia or a haemorrhagic tendency demonstrated good agreement with the intuitive classification by treating physicians. Phuong and colleagues noted that in clinical practice many physicians use the modified system of a haemorrhagic tendency usually together with thrombocytopenia rather than haemoconcentration. We showed that this does not hold true for our study setting. Treating physicians were inclined to classify severe cases as suffering from DHF using a combination of haemoconcentration with either thrombocytopenia or a haemorrhagic tendency instead of a haemorrhagic tendency together with thrombocytopenia. Agreement with the intuitive classification nevertheless did not exceed a \u03baw value of 0.70. Two separate views of treating physicians played a prominent part in this. First, a considerable number of patients were classified as shock, although they did not meet the WHO criteria for shock. Treating physicians classified patients as having shock when symptoms and signs indicating a stage I or compensated shock, such as cool and clammy skin, rapid pulse, decreased urinary output and confusion, were present. This was done even in the absence of hypotension for age or a narrow pulse pressure. Second, patients who had no evidence of plasma leakage but did have a haemorrhagic tendency and thrombocytopenia were classified as DF. 
However, when circulatory failure was present, a classification of DSS was given. To make a distinction between patients with and without evidence of plasma leakage, an additional classification could be useful. Although several modified classification systems were in good agreement with the intuitive classification by treating physicians, they still did not reach the magnitude of agreement we had expected. Our study setting enabled us to collect all the data necessary for classification of disease severity. If regular laboratory testing or additional diagnostic tests are not performed because of, for example, limited resources, it may be difficult to demonstrate the presence of haemoconcentration, with the possibility of misclassification. In addition, one can elaborate on the use of ultrasound for the detection of evidence of plasma leakage. We did not perform an ultrasound scan of the chest and/or abdomen because of practical issues. In several patients with circulatory failure, no evidence of plasma leakage was found despite the use of chest radiography and frequent measurement of haematocrit. Whether ultrasound would have provided additional cases with evidence of plasma leakage is uncertain but theoretically possible. Several studies have reported on the use of ultrasound in determining the presence of pleural effusion, ascites or thickening of the gallbladder wall. An increasing number of dengue infections have been related to other unusual manifestations. These include fulminant liver failure, cardiomyopathy, ocular manifestations and neurological phenomena such as altered consciousness, convulsions, and coma resulting from encephalitis and encephalopathy. In conclusion, our results show that the WHO classification system is less accurate in correctly classifying dengue disease severity than all other modified classification systems. 
These findings call into question the use of the strict WHO classification system to classify dengue disease severity, all the more because it lacks sufficient agreement with how patients are classified in clinical practice. Additional research is needed in order to refine and improve the clinical usefulness of the system. The author(s) declare that they have no competing interests. TS, AM and PK wrote the first draft of the study protocol. DB, AO, JM, EG and AS contributed to the writing of the study protocol. TS, AM and MS were responsible for implementation of the study. TS and MS were responsible for management of patients, and data collection at the study site. PK and AO were responsible for all diagnostic procedures. AM and MM did all statistical analyses. TS, AM, and PK wrote the first draft of the report, and all authors read and approved the final manuscript. The pre-publication history for this paper can be accessed here:"} +{"text": "Diagnostic errors associated with the failure to follow up on abnormal diagnostic studies (\"missed results\") are a potential cause of treatment delay and a threat to patient safety. Few data exist concerning the frequency of missed results and associated treatment delays within the Veterans Health Administration (VA). The primary objective of the current study was to assess the frequency of missed results and resulting treatment delays encountered by primary care providers in VA clinics. An anonymous on-line survey of primary care providers was conducted as part of the health system's ongoing quality improvement programs. We collected information from providers concerning their clinical effort, number of patients with missed abnormal test results, and the number and types of treatment delays providers encountered during the two week period prior to administration of our survey. The survey was completed by 106 out of 198 providers (54 percent response rate). 
Respondents saw an average of 86 patients per 2 week period. Providers encountered 64 patients with missed results during the two week period leading up to the study and 52 patients with treatment delays. The most common missed results included imaging studies (29 percent), clinical laboratory (22 percent), anatomic pathology (9 percent), and other (40 percent). The most common diagnostic delays were cancer (34 percent), endocrine problems (26 percent), cardiac problems (16 percent), and others (24 percent). Missed results leading to clinically important treatment delays are an important and likely underappreciated source of diagnostic error. There is growing evidence that delays in diagnosis constitute a common medical error and represent a significant threat to patient safety. A number of isolated studies have examined the incidence of missed results within discrete healthcare systems by focusing on individual tests. An important study in this regard was conducted by Roy et al, who examined clinician awareness of significantly abnormal test results which had returned after the patient's discharge from a large academic center. The investigators found that clinically important missed results occurred in 0.9 percent of patient discharges. The Veterans Health Administration (VHA), with over 5 million patients, 22 regional health care networks and hundreds of integrated healthcare delivery systems all linked by a common electronic medical record (EMR), has long been recognized as a leader in quality and patient safety. 
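The survey numbers reported above imply rough per-encounter rates. A back-of-envelope sketch (assuming, as the text suggests, that the 64 and 52 counts are totals across all 106 respondents):

```python
# Survey figures from the text; the per-visit rates derived below are
# an illustrative estimate, not reported by the study itself.
providers = 106             # survey respondents
patients_per_provider = 86  # average patients seen per 2-week period
missed_results = 64         # patients with missed results (all respondents)
treatment_delays = 52       # patients with treatment delays (all respondents)

encounters = providers * patients_per_provider  # ~9,100 visits in 2 weeks
print(f"missed results per 1,000 visits:   {1000 * missed_results / encounters:.1f}")
print(f"treatment delays per 1,000 visits: {1000 * treatment_delays / encounters:.1f}")
```

At roughly 7 missed results and 6 treatment delays per 1,000 visits, even a low per-visit rate scales to a substantial absolute burden across a system the size of the VHA.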
Equipped with an advanced EMR which integrates laboratory, radiology and clinical notes and provides the capability of making test result information available within the EMR as soon as it has been finalized by the diagnostic service, the VHA presents an opportunity to examine the epidemiology of missed results in a healthcare system that has already implemented many technologies designed to minimize this problem. In an effort to determine both the incidence and clinical significance of missed test results in the VHA, we built upon our previous work to study the frequency and types of missed results and associated treatment delays that providers encountered in their clinical practices. We administered a survey (described below) to primary care providers practicing within the VA Midwest Health Care Network encompassing Minnesota, Iowa, Nebraska, South Dakota, and North Dakota. This health care system, also known as Veterans Integrated Service Network 23 (VISN 23), includes three large academic medical centers, five smaller community and rural hospitals, and numerous smaller community-based outpatient clinics. This survey was developed as part of an ongoing quality improvement initiative assessing test result reporting practices and associated problems in VISN 23. A multi-disciplinary task force, consisting of primary care clinicians, specialty clinicians, radiology and pathology clinicians, and administrators, developed the first results reporting survey to explore problems in the test result reporting system. That survey, initially fielded in May 2005, first identified that nearly half of the providers had encountered missed results and over a third had encountered treatment delays. We obtained limited demographic information and clinical effort data to provide estimates of patient volume. Although we asked in which healthcare system the clinician practiced, we did not obtain other demographic data in order to protect respondent privacy.
Next we collected information about the number of days each respondent spent in clinic in the prior two weeks and the typical number of patients seen per session. See Appendix for the list of questions. The medical error section of the survey asked about missed test results and resultant treatment delays encountered by respondents during the two week period prior to receipt of the survey. Providers were asked to specify \"how many patients they had encountered during the prior two weeks with an abnormal result that had been missed because it had not received the anticipated clinical response from the ordering service.\" They were then asked to choose from a list which study or studies were missed (e.g., prostate specific antigen (PSA), etc.). A follow up question asked the respondent to specify how many patients they had encountered in the prior two week period who \"may have experienced a delay in either diagnosis or treatment due to a missed diagnostic result that was overlooked by the ordering service.\" Again, in follow up, they were then asked to choose from a list which types of treatments or diagnoses were delayed. In the next section, because primary care clinicians had expressed concern in our initial survey that patients frequently scheduled visits with them expressly to obtain results of tests that had been ordered by specialists, we asked respondents how many patients they had seen in the prior two weeks because a patient asking a specialty clinic about their test results had been redirected to primary care. Two supplemental questions, using a 5 point Likert scale (ranging from 1 [strongly agree] to 5 [strongly disagree]), investigated the time such visits took and whether the provider felt competent to interpret test results in those circumstances. The fourth section asked about procedures and processes providers used to avoid missing test results in their practice.
To provide the reader an understanding of the VA EMR, the notification processes within the EMR as they existed in the network at the time of the survey are summarized here. In an effort to decrease the volume of notifications that providers see each day, providers were given control of the settings determining which clinical laboratory result and which radiology result notifications were presented at provider sign on, i.e. all results, only abnormal results, or only critical results as defined by the hospital clinical executive board. Since paper copies of test results were largely eliminated, the notifications within the EMR were generally the only means by which a provider received copies of test results. In particular, providers were asked whether they used any of the following procedures: 1) Notifications within the electronic medical record (EMR) set to receive all test results; 2) Notifications set to receive only abnormal results; 3) Notifications which flag only the most critical test; 4) Paper based log of tests ordered; 5) Delegation of responsibility to support staff; and 6) Other systems. Because we also wanted to know how providers ensured patients had completed follow up after an abnormal result, the fifth section asked the providers to select from a list of options the best description of their usual practice. Two active processes, i.e. not dependent upon patient action, were presented: use of an electronic or paper log, or staff monitoring. Three passive processes, i.e. dependent on patient action, were presented: instructing the patient to call if follow up did not occur, review of the previous clinic note when the patient returns, and no process in place. Finally, we asked providers to rate the \"helpfulness\" of eight potential interventions in the VA results management system designed to improve test result management. Potential changes included:1. Establishing the expectation for patients that all test results will be reported to them.2.
Providing copies of all diagnostic test results directly to patients.3. Providing, to the ordering service, summary monthly reports of abnormal labs specific to a diagnosis group (e.g. patients with CAD and LDL>110 or CXR with possible mass).4. Periodic summary reports of patients with abnormal test results that have not received the anticipated clinical response.5. Establishment of a consistent process or procedure (SOP) for the \"hand off\" of diagnostic test results when a provider is absent or leaves the service.6. The establishment of a consistent SOP for results management and reporting by each clinical service.7. A convenient process for providers to generate results letters to patients.8. A secure voice messaging system to patients for results reporting and instructions from providers. The providers were sent an email two weeks prior to the survey, briefly noting the problem of missed results and associated treatment delays and reviewing the network's commitment to periodic assessment of missed results related issues, which would be conducted again during the next month. Two weeks later, the providers with more than 450 continuity patients were sent invitations to participate in the survey. Over the next three weeks, three reminder e-mails were sent, thanking providers who had completed the survey and encouraging those who had not yet completed the survey to do so. The data from the above survey were collected via a secure internet web site. Survey responses were used to calculate the mean number of patients seen per provider per week and the proportion of patients who experienced missed test results and delays in diagnosis or treatment using Microsoft Excel and Stata SE Version 8.2. These analyses were approved by the University of Iowa Institutional Review Board. The survey was completed by 106 of 198 providers for an overall response rate of 54 percent.
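The headline volume figures can be cross-checked with simple arithmetic (the numbers below are taken from the text; the snippet itself is only an illustrative check, not part of the study's analysis code):

```python
# Cross-check of the survey volume figures reported in the text.
completed, invited = 106, 198
response_rate_pct = round(100 * completed / invited)
print(response_rate_pct)  # 54

# Approximately 9100 patient encounters were reported by respondents
# over the two week period.
encounters_total = 9100
mean_patients_per_provider = round(encounters_total / completed)
print(mean_patients_per_provider)  # 86
```

Both results agree with the 54 percent response rate and the average of 86 patients per provider reported in the text.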
The response rate from the eight participating health care systems ranged from a low of 40 percent (8 of 20) to a high of 69 percent (11 of 16). Providers reported working in clinic an average of 8.3 of a possible 10 days during the two week period prior to the survey administration. Approximately 9100 patient encounters were reported by the respondents, with each provider on average seeing 86 patients in the prior two weeks. During this period, 63 percent of survey respondents reported that they did not encounter any patients with a probable missed result, while 37 percent reported encountering at least one patient with a missed result (Figure). In the follow up question about treatment delays, a total of 32 providers (30 percent of respondents) reported encountering one or more patients with delays in diagnosis or treatment due to missed test results. The types of diagnoses and treatments that were reported are shown in the Figure. One or more patient diversions in the prior clinic session were reported by 42 percent of the providers, accounting for 7 percent of the primary care visits. The majority (70 percent) either strongly agreed or agreed that \"the time lost as a result of investigating the test is very burdensome to my practice\" and just under half (46 percent) either strongly agreed or agreed that \"they generally did not know the clinical significance of the diagnostic tests they were asked to provide.\" Providers reported using a wide array of processes to avoid missing test results. The majority of providers (55 percent) reported reliance on the electronic notification system (i.e. electronic \"in box\") within the EMR, with settings customized either to receive all results (31 percent), abnormal results only (21 percent) or reliance on specific \"order flags\" for the most important tests placed (3 percent).
Other processes providers reported using included a combination of both paper based logs and notifications within the EMR (34 percent), delegation to support staff (3 percent), and a paper based log alone (8 percent). The three interventions that respondents rated most favorably to enhance the management of test results were: 1) establishment of a standard procedure to manage results during the absence of the ordering provider; 2) electronic verification of provider review of results; and 3) establishment of standard procedures for managing results for the clinical service (see Table). Almost a third of the VA Primary Care clinicians, practicing in diverse clinical settings, encountered one or more patients with clinically important treatment delays as a result of missed results during the two weeks prior to administration of our survey. Imaging studies and studies related to potential malignancies were the most common types of studies reported missed, and cancer was the most common diagnosis which was delayed. Almost 7 percent of visits to Primary Care were to help patients get results of tests ordered by specialty services, with almost half of the providers indicating they often did not know the clinical significance of the results they were asked to research. Despite practicing in a single healthcare system with a single EMR, providers reported significant variation in the procedures they used to ensure review and follow up of ordered diagnostic studies. Finally, respondents reported strong support for a number of potential interventions designed to assist them in managing test results. These findings add to the growing body of evidence documenting medical errors due to missed diagnostic tests.
This study expands upon prior work by providing a more comprehensive picture of both the incidence of missed results in ambulatory practice and the potential clinical ramifications of this problem. The proportion of cancer delays which were prostate, colorectal or lung cancer matches the proportion reported in a review of VHA tort claims from 1998 through 2004. While this study adds to the evidence that missed results are ubiquitous and result in harm to patients, finding a simple solution is likely to be challenging. Ensuring a requested test has been completed and integrated into the plan of care involves multiple steps and multiple individuals. Data from the current study provide evidence that even a well designed computerized in-box system may not prevent busy clinicians from missing results. Such a finding is not surprising given work psychology research suggesting that the vast majority of individuals will ignore alarms as work volume increases or as alarm sensitivity decreases. The study is also helpful because it suggests several potential system issues which may contribute to the loss of abnormal test results. First, variable processes were used by providers to ensure that review and follow up of an abnormal result had occurred. Although the computer can automate functions, such as interfaces between the laboratory equipment and the medical record or delivery of data to clinicians, numerous human steps are still required to ensure the information is integrated into the patient's medical care and that patients are notified and scheduled for follow up when needed. Furthermore, the waste associated with poor results management is often hidden and therefore difficult to quantify. This includes direct costs for tests never reviewed and the morbidity and mortality of treatment delay associated with missed results.
Another less obvious form of waste, occurring when patients do not receive their test results, is the negative impact on Primary Care clinic access and efficiency when patients ask Primary Care to investigate those tests and find the results for them. Also, when patients are not given their test results, they are less activated, generally experience lower levels of therapeutic adherence, and have poorer outcomes. There are a number of limitations to this study that should be mentioned. Because we are unable to provide a comparison of responders and non-responders, the response rate of 54 percent introduces the possibility of response bias. Even if we assume the unlikely event that all non-responders encountered no patients with missed results or treatment delays during the study period, the numbers of providers reporting errors are still concerning for both missed results (20 percent) and treatment delays (17 percent). In addition, these findings are based entirely upon provider surveys and we lack chart audits to confirm the missed results that were reported. However, our data are consistent with prior studies, making it unlikely that chart audits would have significantly altered our findings. While it is also possible that over reporting occurred due to recall of events outside the time window, with providers choosing to report because this is the one chance they have to report these types of errors, prior studies have demonstrated that provider recall underestimates errors confirmed with chart audits. It is important that these findings be replicated in settings outside the VA. The frequency of patient diversion from specialist clinics to primary care for test results may be higher in the VA because VA sub-specialty clinics often meet only once or twice a month. Also, the VA has a sophisticated EMR with many of the key tools recommended to facilitate more effective management of test results.
However, it is possible that paper based systems, by utilizing standardized processes and procedures to ensure all results have been reviewed, may have a lower rate of missed results than what we have found in this study. Furthermore, systems converting from a paper based medical record to an EMR may experience an increase in missed results if there is no electronic signature to record physician review of the test result and no monitors to identify results that were never viewed, and if the process controls which existed in the paper based system to ensure review of abnormal results are not replicated in some fashion in the EMR based system. In conclusion, true measurement of the burden of missed results within the population is needed, along with a public monitor; however, such tools may be years away. In the interim, the use of provider surveys can reveal useful information for healthcare systems that wish to monitor and improve the management of test results within their system. System interventions to lower the risk of missed results are needed, and data on provider responses to potential interventions are helping guide our selection of interventions to pilot as we work to reduce the burden of diagnostic errors due to the mis-handling of abnormal test results, i.e. missed results. The author(s) declare that they have no competing interests. PC participated in the design of the study and the statistical analysis. TW conceived of the study, and participated in its design and coordination. Both authors developed and approved the final manuscript. Survey questions and the response rate are provided (see Table). The pre-publication history for this paper can be accessed here:"}
+{"text": "In the first part of this study we proposed a new classification approach for spinal deformities (3-DEMO classification).
To be valid, a classification needs to describe adequately the phenomenon considered: a way to verify this issue is comparison with already existing classifications. To compare the 3-DEMO classification and the numerical results of its classificatory parameters with the existing clinical classifications and the Cobb degrees on the frontal and sagittal planes respectively. 118 subjects with adolescent idiopathic scoliosis were classified according to the 3-DEMO, SRS-Ponseti, King and Lenke classifications as well as according to sagittal configuration. For all patients we computed the values of the 3-DEMO parameters and the classical Cobb degree measurements in the frontal and sagittal planes. Statistical analysis comprised Chi Square and Regression analysis, including a multivariate stepwise regression. Three of the four 3-DEMO parameters correlated with the SRS-Ponseti, King and sagittal configuration classifications, but not with Lenke's. Weak correlations were found among the numerical parameters, while the stepwise regression allowed us to develop fairly satisfactory models to obtain 3-DEMO parameters from classical Cobb degree measurements. These results support the hypothesis of a possible clinical significance of the 3-DEMO classification, even if follow-up studies are needed to better understand these possible correlations and ultimately the classification's usefulness. The most interesting 3D parameters appear to be Direction and mainly Phase, the latter not being correlated at all with currently existing classifications. Nevertheless, Shift cannot be easily appreciated on classical frontal and sagittal radiographs, even if it could presumably be calculated.
The first proposed classification for scoliosis relates to the location of the various curves according to the apex vertebra, and was initially developed by Schulthess and subsequently refined.\u2022 Construct validity: the extent to which the classification accurately represents a construct and produces an observation distinct from that produced by a measure of another construct: does 3-DEMO produce something different from 2-D classifications, but anyway inherent to 3-D deformities?\u2022 Concurrent validity: a method of determining the validity of a classification as the correlation with scores of other valid classifications: does 3-DEMO correlate with other classifications?\u2022 Criterion validity: the degree to which a classification correlates with others of the same construct: does 3-DEMO correlate with other 3-D classifications? In the future, by comparing with other existing 3-D classifications, together with completing the Concurrent and Criterion validity study performed today, the following will also be investigated:\u2022 Content validity: the ability of the classification to adequately represent the content of the property that the investigator wishes to measure: does 3-DEMO really evaluate the spine in 3-D? Future clinical studies will make it possible to study:\u2022 Predictive validity: how well a classification predicts outcome in a different population from the one from which it was derived: is 3-DEMO useful to predict clinical results?\u2022 External validity: the extent to which the classification applies to persons, objects, settings, or times other than those that were the subject of study: is 3-DEMO applicable in other settings? Demonstration of the possibility of future applications in everyday settings with usual clinical instruments will make it possible to consider:\u2022 Ecological validity: the extent to which a classification developed in the laboratory reflects real life conditions: is 3-DEMO applicable in real everyday clinical life? Finally, partially assessed through peer review
and comments collected from peers during meetings, as well as future application by others, there is:\u2022 Face validity: the clinical sense of a classification: does 3-DEMO make sense given the current understanding of scoliosis? While presenting a new classification such as the 3-DEMO, all of these validity issues must be considered. We included in this study 118 subjects affected by adolescent idiopathic scoliosis. Mean age was 15.9 \u00b1 3.1 years, while weight and height were 50.9 \u00b1 10.8 kg and 160.2 \u00b1 10.8 cm respectively. Scoliosis curvature averaged 37.4 \u00b1 12.5\u00b0 Cobb, kyphosis was 35.4 \u00b1 13.1\u00b0 and lordosis 47.7 \u00b1 12\u00b0 Cobb. Data were acquired with the AUSCAN system and the obtained curves were classified according to the 3-DEMO classification, as described in the first part. We also classified the sagittal configuration as follows: Hyperkyphosis: kyphosis of more than 50\u00b0 Cobb (18 patients); Flat-Back: kyphosis of less than 20\u00b0 Cobb (46 patients); Hyperlordosis: lordosis of more than 60\u00b0 Cobb (55 patients); Hypolordosis: lordosis of less than 30\u00b0 Cobb (no patients). Finally, considering that the 3-DEMO classification aims at merging classical radiographic parameters into one single 3-D representation, for each patient we computed a Cobb and a Sagittal Index. This was done simply by summing the angles in each radiographic plane, considering a right curve and lordosis positive, and a left curve and kyphosis negative. So, a 30\u00b0 thoracic right, 20\u00b0 lumbar left scoliosis had a Cobb Index of +10\u00b0 (+30\u00b0 -20\u00b0 = +10\u00b0), and a kyphosis of 60\u00b0 with lordosis of 45\u00b0 produced a Sagittal Index of -15\u00b0 (-60\u00b0 +45\u00b0 = -15\u00b0). All classifications were compared with the 3-DEMO one using the Chi-square test.
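As a minimal sketch of this sign convention (function names are illustrative and not part of the AUSCAN software), the two indices can be computed as:

```python
def cobb_index(curves):
    """Sum signed frontal-plane Cobb angles: right curves count as
    positive, left curves as negative. `curves` is a list of
    (side, angle) pairs."""
    return sum(angle if side == "right" else -angle for side, angle in curves)

def sagittal_index(kyphosis, lordosis):
    """Sum signed sagittal-plane Cobb angles: lordosis is positive,
    kyphosis negative."""
    return lordosis - kyphosis

# Worked examples from the text:
print(cobb_index([("right", 30), ("left", 20)]))  # +10
print(sagittal_index(60, 45))                     # -15
```

Running it on the worked examples above reproduces the +10° Cobb Index and the -15° Sagittal Index.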
For the comparison between the Ponseti and 3-DEMO classifications, preliminary results were obtained and examined (see Figure). 3-DEMO parameters resulted statistically different among the groups according to the SRS classification, with the only exception of Phase (Figure). We did not find any correlation between 3-DEMO parameters and the Lenke classification, even considering the modifiers. On the contrary, the King classification was correlated with Direction and Lateral Shift: in particular, most King 2 curves have Left Direction and Right Shift. The way Direction and Shift combine to cause Phase gives this parameter a real 3-D importance. The modelling through a stepwise regression analysis allowed us to calculate 4 rather reliable models according to RSquare values. Interestingly, Direction and Phase were better described using all parameters while, as expected, the Shifts required radiographic analysis of the Cobb degrees in the corresponding plane: the only exception was a slight contribution of kyphosis to LL Shift. So, the \"truest\" 3-D parameters again appear to be Direction and Phase, confirming the already stated phenomenon that only an alteration of one of these parameters (even if both could be combined) can identify a scoliosis. We have found some correlations between the 3-DEMO classificatory parameters and the classical radiographic classifications and measurements. These results support the hypothesis of a possible clinical significance for this classification, even if follow-up studies are needed to better understand these possible correlations and ultimately the classification's usefulness. Another study is needed to compare this classification with the already existing 3D classifications."}
+{"text": "Purpose: Histological grading is currently one of the best predictors of tumor behavior and outcome in soft tissue sarcoma. However, occasionally there is significant disagreement even among expert pathologists.
An alternative method that gives more reliable and non-subjective diagnostic information is needed. The potential use of proton magnetic resonance spectroscopy in combination with an appropriate statistical classification strategy was tested here in differentiating normal mesenchymal tissue from soft tissue sarcoma. Methods: Fifty-four normal and soft tissue sarcoma specimens of various histological types were obtained from 15 patients. One-dimensional proton magnetic resonance spectra were acquired at 360 MHz. Spectral data were analyzed by using both the conventional peak area ratios and a specific statistical classification strategy. Results: The statistical classification strategy gave much better results than the conventional analysis. The overall classification accuracy (based on the histopathology of the MRS specimens) in differentiating normal mesenchymal tissue from soft tissue sarcoma was 93%, with a sensitivity of 100% and specificity of 88%. The results in the test set were 83, 92 and 76%, respectively. Our optimal region selection algorithm identified six spectral regions with discriminating potential, including those assigned to choline, creatine, glutamine, glutamic acid and lipid. Conclusion: Proton magnetic resonance spectroscopy combined with a statistical classification strategy gave good results in differentiating normal mesenchymal tissue from soft tissue sarcoma specimens ex vivo. Such an approach may also differentiate benign tumors from malignant ones and this will be explored in future studies."}
+{"text": "Advanced diagnostic tools, classification systems and accordingly selected surgical approaches are essential requirements for the prevention of failure of surgical treatment of thoracolumbar fractures.
The present study is designed to evaluate the contribution of classification to the choice of a surgical approach using the current fracture classification systems. We studied prospectively a group of 64 patients of an average age of 43 years, all operated on for thoracolumbar fractures during the year 2001. The AO-ASIF classification was used preoperatively with all imaging studies (including magnetic resonance imaging (MRI)). When the damage was detected only in the anterior column (A type), an isolated anterior stabilization (n = 22) was preferred. If the MRI study disclosed an injury in the posterior column, a posterior approach (n = 20) using the internal fixator was chosen. Injuries involving the posterior column (B or C type) were classified additionally according to the load-sharing classification (LSC). If LSC gave six or more points, treatment was completed with an anterior fusion. The combined postero-anterior procedure was carried out 22 times. The minimum followup period was 22 months. Neither implant failure nor significant loss of correction was observed in patients treated with anterior or combined procedures. The average loss of correction (increase of kyphosis) in simple posterior stabilization was 3.1 degrees. Complex fracture classification helps in the selection of the surgical approach and helps to decrease the chances of treatment failure. Patients with multiple fractures, osteoporosis and spinal cord injury were excluded from the study. The series consisted of 22 women and 42 men with a mean age of 43 years (19-71 yrs). All patients were investigated preoperatively by plain X-ray, CT and MRI and their injuries examined using the AO-ASIF classification. When the damage was detected only in the anterior column (AO-ASIF A type), an isolated anterior approach was preferred. Stabilization was carried out using an anterior angle-stable device (MACS-TL) and a spacer or tricortical bone graft.
If an injury of the posterior column (bony or ligamentous) was disclosed, the posterior approach using the internal fixator was chosen. In these injuries (AO-ASIF B or C type), damage of the anterior column was classified additionally according to the load-sharing classification (LSC). The patients were divided into three groups: group I (n = 22): \u201cA\u201d type fractures treated with simple anterior stabilization and fusion; group II (n = 22): \u201cB or C\u201d type fractures with LSC scoring equal to or higher than 6 points, treated using a combined posteroanterior procedure (Figure); group III (n = 20): \u201cB or C\u201d type fractures with LSC scoring less than 6 points, treated with only the posterior approach. When applied solely without any fusion, or with monosegmental fusion in combined procedures, the internal pedicular fixator was removed after an average period of 15 months. Anterior implants were not removed. Patients in all groups were followed systematically with regards to subjective, clinical and radiographic results. Followup examination was performed six and 12 weeks and six and 12 months postoperatively and then yearly. Subjective assessment was based on self-evaluation of daily activities; objective followup parameters were assessed according to Prolo's functional and economic scale. This study exclusively compares the early postoperative and the final radiographic results (endplate angle). In light of this objective, loss of correction (increase of kyphosis), possible implant failure and the fusion rate in patients with an anterior fusion were evaluated. Fusion assessment was based on analysis of lateral plain radiographs. Patients were followed for at least 22 months after the operation; the longest followup was 38 months. Type B fractures (29 cases) were the most frequent fracture type.
A survey of diagnoses, fusion extent and LSC points in all three groups is shown in the Table. The average values of LSC scoring in fractures treated with combined procedures and with indication for mono- and bi-segmental fusion (group II) were 6.4 and 8.0 respectively. The average LSC score in fractures treated with only posterior stabilization (group III) was 4.8. The technology of spine fracture imaging has significantly improved in recent years. Nevertheless, high-quality X-rays are still essential in the projection of basic shape parameters of the spine and the fracture and are the most accurate tool for evaluation of treatment results. Computer tomography (CT) perfectly depicts the whole vertebra and is an important guide for surgical planning. It gives information about bone fragments within the spinal canal. So far, there is no other imaging technology that is more useful than CT to examine facet relationships. Sagittal reconstruction makes it possible to evaluate the angle of kyphosis and the shape of the spinal canal narrowing. Magnetic resonance imaging (MRI) is dominant in soft tissue imaging: intervertebral discs, vessels (thrombosis), nerve roots and mainly the spinal cord. This is a very important, noninvasive, diagnostic method for the evaluation of ligamentous injury. The AO-ASIF classification system is not applicable in preoperative decision-making without an MRI examination. The effort to understand the principles of spine stability led to the theory of three columns initially proposed by Denis for thoracolumbar fractures. The \u201ccomprehensive classification of thoracolumbar fractures\u201d, also known as the AO-ASIF classification of spine fractures, is based on the two column theory and divides all fractures into three categories, which are further subdivided into 55 groups, to define the various fractures of the thoracic and lumbar spine. The reliability of the AO-ASIF classification was tested by Blauth et al.
They reported the finding that 30% of type B fractures (AO-ASIF classification) are initially overlooked when the AO-ASIF classification is relied on alone. The second classification system is the \"Load Sharing Classification\" (LSC) devised by McCormack, Karaikovic and Gaines. Preoperative analysis of bony fracture anatomy with LSC is useful in determining candidates for short segment posterior instrumentation, short segment anterior stabilization or short segment posterior stabilization and anterior fusion with strut graft. The classification does not grade ligament damage and is not related to the mechanism of injury. It is a helpful adjunctive tool that can complement but not replace other forms of classification. A surprisingly high proportion of B-type fractures was observed in this series. The concept of surgical approach selection described in this paper is limited in several aspects. We do not use it in cases where the posterior approach is obviously the method of choice. It is also necessary to keep in mind the specific features of anterior approaches in the upper third of the thoracic spine. Additionally, the anterior approach has more contraindications with respect to the patient's general condition. Complex fracture classification comprising a combination of the AO-ASIF and LSC classification methods helps to choose the surgical approach. A classification-related approach facilitates the prevention of treatment failure."}
+{"text": "Questionnaires were circulated to UK patients and health care professionals (HCPs) participating in the Taxotere as Adjuvant ChemoTherapy (TACT) trial in autumn 2004 asking if and how trial results, when available, should be conveyed to patients. A total of 1431 (37% of surviving UK TACT patients) returned questionnaires. In all, 30 (2%) patients did not want results.
In all, 554 (40%) patients preferred to receive them via their hospital; 664 (47%) preferred results posted directly to their home, and 177 (13%) preferred a letter providing a telephone number to request results. Six hundred and twelve patients thought results should come directly from the trials office. One hundred and seventy-six HCPs from 89 UK centres (86%) returned questionnaires. In all, 169 out of 176 HCPs (96%) thought results should be written in lay terms for patients. Seventy (41%) preferred patients to receive results via their hospital; 64 (38%) preferred a letter providing a telephone number to request results, and 32 (19%) preferred results posted directly to patients. Thirty-one HCPs (18%) thought results should go to patients directly from the trials office. A total of 868 (61%) patients thought next of kin of deceased patients should receive results, 543 (38%) did not; 47 (27%) HCPs thought they should; 118 (68%) did not. The timing aimed to capture patients' views after treatment was completed and normal day-to-day activities were resumed, at a time when the rate of disease relapse and death remained low and ahead of the attainment of the trial's results. We aimed to find out from trial patients whether they wanted to receive trial results written in lay terms when they are available, and how they wanted to receive them. We compare their preferences with those expressed by health care professionals (oncologists and nurses) who had participated in the TACT trial. Following ethics approval from the South East Multi-Centre Research Ethics Committee, a patient newsletter accompanied by a patient questionnaire was sent to UK hospitals to distribute to surviving TACT patients. The newsletter aimed to remind patients that follow-up continued, and explained why the trial had so far not produced any published results.
Health care professionals (HCPs) either posted these directly to trial patients, or distributed them in the hospital clinic. Health care professionals could withhold the newsletter and questionnaire from individuals or groups of patients if they considered them inappropriate, for example those receiving palliative care. The exact number of questionnaires distributed is not known; however, feedback from hospitals following an earlier patient newsletter suggests approximately 3000 of a possible 3842 were distributed. The questionnaire described three methods of distributing results, and explained the advantages and disadvantages of each, as perceived by the researchers at the Clinical Trials & Statistics Unit at the Institute of Cancer Research (ICR-CTSU). Patients were then asked if they wanted results written in lay terms, and if so, which of the three methods they preferred. There was no difference in age (P=0.35) nor in current country of residence (P=0.5) between those who would like to have the trial results and those who would not (data not shown). No association was found between preferred method of delivery and age group (P=0.34), nor between preferred method of delivery and UK country of residence (P=0.55). The distribution between age groups and UK country is shown in the table. In all, 37% (1431) of the UK TACT trial population who remained alive at the time the questionnaire was distributed completed and returned it. There was no significant difference among HCPs (P=0.18) as to which of the three methods of communicating results they preferred; however, the difference in response between HCPs and patients is significant (P<0.001). In all, 176 HCP responses came from 89 (86%) participating UK centres, of which 93 (53%) were nurses, 80 (45%) clinicians, and three (2%) did not specify.
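The patient-versus-HCP difference reported above (P<0.001) can be checked with a standard chi-square test of independence on the published preference counts. A minimal stdlib sketch; the 2\u00d73 table layout is an assumption of this illustration (counts taken from the figures quoted in the text), and for df = 2 the chi-square tail probability has the closed form exp(-\u03c7\u00b2/2):

```python
from math import exp

def chi_square_2xk(row_a, row_b):
    """Pearson chi-square statistic for a 2 x k contingency table."""
    cols = list(zip(row_a, row_b))
    total = sum(row_a) + sum(row_b)
    chi2 = 0.0
    for row in (row_a, row_b):
        row_total = sum(row)
        for j, obs in enumerate(row):
            expected = row_total * sum(cols[j]) / total  # independence model
            chi2 += (obs - expected) ** 2 / expected
    return chi2

# Preferred delivery method: via hospital, posted directly, letter + phone
patients = [554, 664, 177]
hcps = [70, 32, 64]
chi2 = chi_square_2xk(patients, hcps)
p_value = exp(-chi2 / 2)  # df = (2-1)*(3-1) = 2, so survival fn is exp(-x/2)
```

The dominant contributions come from the HCP cells: HCPs strongly under-chose direct posting and over-chose the letter-plus-telephone option relative to patients, consistent with the significant difference reported.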
Despite knowing that patient addresses are not held by ICR-CTSU, 28 (16%) thought that communication of results to patients should come directly from ICR-CTSU. A total of 61% (868) patients thought that the next of kin of patients who had died should receive trial results, compared with only 27% (47) HCPs. However, of those who held this view, the proportion who also thought the results should be conveyed to the next of kin of patients who had not wanted the results was very similar for patients and HCPs (60 and 57% respectively). Although the response rate of 37% was higher than anticipated, we cannot assume the results are representative of the views of the 2411 patients who were not given the questionnaire or chose not to return it. For those who did not reply, we do not know how many did not want results, or did not feel strongly enough to complete and return it. That 30 patients (2% of respondents) felt strongly enough about not receiving the results to return the completed questionnaire highlights the need to ask patients if they want results prior to their being distributed. Unlike trials testing ongoing treatment for a chronic disease, results of most trials of adjuvant cancer treatment have no impact on the future care of participants, nor do they provide information about the future implications of trial treatment for individuals. It is commonplace within adjuvant cancer trials for the collection of long-term follow-up data to continue, and the dissemination of trial results to patients could introduce bias and jeopardise future knowledge and long-term outcome data, particularly long-term data on quality of life.
This risk, however small, needs to be balanced against the knowledge that the broad scientific trial results will only provide information about an average treatment effect of the experimental versus control treatment in a single trial, which should be viewed in the context of the worldwide evidence. It is difficult for trial participants to foresee how they will react to receiving the results. For example, in a trial showing a small difference between two treatment arms, approximately 50% of patients will have received the treatment which was, on average, \u2018inferior\u2019. However, it may not be inferior for most patients who received it. This highlights the importance of clear patient information at trial entry about the uncertainty of treatment superiority, and raises the question of how to explain to patients that \u2018inferior\u2019 in the trial does not necessarily mean \u2018inferior\u2019 for them, personally. A single trial is unlikely to provide a definitive answer to the original research question; for patients in the TACT trial an unbiased account of the results would require researchers to explain them within the context of a systematic overview of the emerging worldwide data on taxanes. Added to that are the uncertainties of confidence intervals and the caveat that any promising subgroup analyses are hypothesis generating, not results in themselves. Thus trial results written in lay terms will not only fail to provide the personalised interpretation that patients may want, but, if delivered without due consideration of the timing of any relapse a patient may have experienced, or without consideration of the method of distribution, there is a risk they could unnecessarily heighten concerns about long-term prognosis and future clinical care. To avoid unnecessary distress, the information that accompanied this survey did not explain that results depended on enough patients relapsing or dying, yet it is this that allows statistically reliable and precise comparisons between treatment groups.
Without this knowledge, can patients know whether they would want the results if they had relapsed? Patient response to receiving results could be further complicated by knowing they had also received the \u2018inferior\u2019 treatment. The timing of this questionnaire was such that very few patients had relapsed. If those few patients were excluded from receiving the questionnaire, the views of patients who have relapsed may be under-represented.Patients were very divided on whether the next of kin of deceased patients should be given trial results, and HCPs erred towards thinking next of kin should not receive results. Qualifying comments made on questionnaires suggest this was a difficult ethical question.The majority of patients opting to receive results by post expressed a preference for ICR-CTSU to collect patients' addresses for future trials, bypassing the hospital to convey results to patients as soon as they are available; an option that suggests a higher priority for speed than confidentiality of personal data. The responses from HCPs suggest an expectation that trial results need to be interpreted for individual patients. The lower priority given to alacrity could also suggest an awareness that peer-reviewed journals do not allow widespread dissemination of results prior to publication. In addition, results of high profile trials often fall under the media spotlight ahead of any adequate peer review. Dissemination of results by the media and the \u2018spin\u2019 put on them in the popular press may be misinterpreted by trial participants, with HCPs left to interpret results in a way that seems to patients to be less attractive."} +{"text": "Diagnostic options for pulmonary tuberculosis in resource-poor settings are commonly limited to smear microscopy. 
We investigated whether bleach concentration by sedimentation and sputum cytology analysis (SCA) increased the positivity rate of smear microscopy for smear-positive tuberculosis.We did a prospective diagnostic study in a M\u00e9decins Sans Fronti\u00e8res-supported hospital in Mindouli, Republic of Congo. Three sputum samples were obtained from 280 consecutive pulmonary tuberculosis suspects, and were processed according to WHO guidelines for direct smear microscopy. The remainder of each sputum sample was homogenised with 2.6% bleach, sedimented overnight, smeared, and examined blinded to the direct smear result for acid-fast bacilli (AFB). All direct smears were assessed for quality by SCA. If a patient produced fewer than three good-quality sputum samples, further samples were requested. Sediment smear examination was performed independently of SCA result on the corresponding direct smear. Positivity rates were compared using McNemar's test.Excluding SCA, 43.2% of all patients were diagnosed as positive on direct microscopy of up to three samples. 47.9% were diagnosed on sediment microscopy, with 48.2% being diagnosed on direct microscopy, sediment microscopy, or both. The positivity rate increased from 43.2% to 47.9% with a case definition of one positive smear (\u22651 AFB/100 high power fields) of three, and from 42.1% to 43.9% with two positive smears. SCA resulted in 87.9% of patients producing at least two good-quality sputum samples, with 75.7% producing three or more. Using a case definition of one positive smear, the incremental yield of bleach sedimentation was 14/121, or 11.6% and in combination with SCA was 15/121, or 12.4% . Incremental yields with two positive smears were 5/118, or 4.2% and 7/118, or 5.9% , respectively.The combination of bleach sedimentation and SCA resulted in significantly increased microscopy positivity rates with a case definition of either one or two positive smears. 
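The incremental-yield percentages quoted above follow directly from the reported counts; a minimal sketch checking the arithmetic (fractions as reported in the abstract):

```python
def pct(part, whole):
    """Percentage rounded to one decimal place, as reported in the text."""
    return round(100 * part / whole, 1)

# One-positive-smear case definition (out of 121 direct-positive patients)
assert pct(14, 121) == 11.6   # incremental yield of bleach sedimentation alone
assert pct(15, 121) == 12.4   # bleach sedimentation combined with SCA

# Two-positive-smear case definition (out of 118 patients)
assert pct(5, 118) == 4.2
assert pct(7, 118) == 5.9
```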
Implementation of bleach sedimentation led to a significant increase in the diagnosis of smear-positive patients. Implementation of SCA did not result in significantly increased diagnosis of tuberculosis, but did result in improved sample quality. Requesting extra sputum samples based on SCA results, combined with bleach sedimentation, could significantly increase the detection of smear-positive patients if routinely implemented in resource-limited settings where gold standard techniques are not available. We recommend that a pilot phase is undertaken before routine implementation to determine the impact in a particular context. Tuberculosis (TB) is a major public health problem, with an estimated 2 billion people infected with tubercle bacilli worldwide. Estimated global prevalence of the disease is 139 per 100,000 population. The gold standard for diagnosing pulmonary tuberculosis is culture of sputum on L\u00f6wenstein-Jensen medium. However, due to lack of access to culture facilities and the long turn-around times involved with sputum culture, most programmes use direct Ziehl-Neelsen microscopy for detection of acid-fast bacilli (AFB) in sputum smears. This technique, when performed optimally, has reported sensitivities ranging from 61.8% to 70% compared with the gold standard. Sputum samples are also often inadequate in patients who are immune-compromised or who have not been given correct instructions on sputum expectoration. Sputum concentration by homogenisation and microscopic examination of the sediment can increase the detection rate of AFB. Several concentration techniques using sedimentation or centrifugation have been reported.
Due to the variability of results to date, it has been recommended that further research is done before the adoption of bleach concentration as a standard diagnostic method. We did a prospective field assessment in routine conditions in a peripheral laboratory in an MSF-supported hospital in Mindouli, Republic of Congo. We aimed to compare the proportion of smear-positive patients detected with and without bleach concentration of sputum by overnight sedimentation, with validation of all samples by SCA. Our hypothesis was that the combination of SCA and bleach sedimentation would significantly improve the detection rate for smear microscopy of pulmonary TB. The study was conducted between October 2006 and March 2008 in Mindouli, Pool Region, Republic of Congo. MSF was supporting the district Ministry of Health hospital, with services ranging from outpatient care, maternal care, and treatment of infectious diseases such as tuberculosis and HIV/AIDS, to psychosocial counselling and emergency surgery. Fighting between rebels and government forces meant that health care services had been neglected in the region, and food insecurity had resulted in high rates of malnutrition. After a ceasefire, internally displaced people returned to the region, leading MSF to start interventions in Ministry of Health hospitals in Kinkala, Kindamba, and Mindouli. Adult HIV prevalence is less than 4%, with a TB prevalence of 449 per 100,000 population [17]. Ethics approval for the study was obtained from the MSF Ethics Review Board. The study was also approved by the Ministry of Health Provincial Directorate of the National TB Programme of the Republic of Congo. Patients were given instructions on sputum production. Collection of the first sample was supervised by a TB nurse. Samples lacking any purulent material were rejected and the patient asked to try again. Patients were given a sputum container for expectoration at home the following morning.
When the second sample was brought to the laboratory, it was examined macroscopically. If no purulent material was present, a laboratory technician supervised collection of a replacement sample. A third container was provided for expectoration the following morning. After processing, if there were fewer than three good sputum samples the patient was asked to provide further spot samples. Results were passed on to the clinician once all requested samples had been processed for direct microscopy, and independently of results of sediment microscopy. For the case definition of one positive sample, any positive sample in the sample batch was considered eligible. For the case definition of two positive samples, any two positive samples in the sample batch were considered eligible. Direct microscopy smears were processed according to the standard hot Ziehl-Neelsen staining procedure and examined for SCA and presence of AFB. After direct smears had been made, the remainder of each sample was processed for bleach sedimentation according to the procedure outlined by Bonnet and colleagues. Both direct and sediment smears were examined in a blinded manner by two laboratory technicians, and graded according to the Pasteur scale, which was then converted to the WHO-IUATLD scale. In case of discordant results, both readers repeated the slide reading until a consensus was reached. Monthly blinded quality control (QC) was done by the laboratory supervisor using five positive and five negative randomly selected smears from both direct and sediment subsets. Direct slides were also assessed for sputum cytology analysis agreement. External blinded QC was done halfway through and at the end of the study on 10% of positive and 10% of negative samples in both subsets. QC for SCA was also done on 10% of direct smears. \"Lacroix\" bleach was purchased from a supermarket in Brazzaville, Republic of Congo. Enough was purchased for the study duration.
The stated chlorine concentration was 2.6%. To prevent reduction of chlorine activity from repeated exposure to air, the bleach was decanted into small bottles. Smears were examined with a \u00d710 objective (\u00d7100 magnification) and categorised according to the SCA algorithm. Data were double-entered in Excel 2000, cross-checked, and analysed using Excel 2000 and STATA v10. The exact McNemar's test was used to compare matched data for direct versus sediment results, using a case definition of either one or two positive results. A positive result was defined as \u22651 AFB per 100 high-power fields. Analysis was done on the outcome based on obtaining one or two positive samples, to reflect a change in the 2007 WHO recommendations that occurred halfway through the study. Patient enrolment progressed more slowly than expected, with only 280 patients enrolled after an extension of the study from 1 year to 18 months. The total number of samples obtained was 890. Of the 280 patients enrolled, 41% (115) were male. The mean age was 35 years (SD 12.1). Nine patients (3.2%) were lost to follow up, and three patients died. Most patients (223/280) provided three samples. 49/280 provided four or more and 8/280 provided fewer than three. When a case definition of two positive smears was used, bleach sedimentation gave an incremental yield of 5/118, or 4.2%. 83% (737/890) of samples, including extra samples requested after SCA, were good-quality sputum. 15% (137/890) were insufficient or degraded sputum. 2% (16/890) of samples were saliva or mucus. One patient failed to produce any sputum samples, but was retained in the study. All poor-quality samples were negative on direct microscopy. Using bleach sedimentation on 47 poor-quality samples resulted in seven positive results, which improved the overall detection rate (p = 0.0215). The proportion of good-quality samples giving a positive result was significantly higher using sedimentation (360 of 890) than direct microscopy. A total of 1780 slides were read.
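The exact McNemar's test named above operates only on the discordant pairs of the matched direct/sediment results. A minimal stdlib sketch; the discordant counts used in the example (14 sediment-only positives, 1 direct-only positive) are illustrative values back-calculated from the reported yields, not taken verbatim from the paper's tables:

```python
from math import comb

def mcnemar_exact(b, c):
    """Two-sided exact (binomial) McNemar test.

    b: pairs positive on sediment microscopy only
    c: pairs positive on direct microscopy only
    Under H0 the discordant pairs follow Binomial(b + c, 0.5).
    """
    n = b + c
    k = min(b, c)
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)  # two-sided doubling can exceed 1

# Illustrative: 14 patients positive only after sedimentation,
# 1 patient positive only on direct smear.
p = mcnemar_exact(14, 1)  # 2 * (C(15,0) + C(15,1)) / 2**15 = 0.0009765625
```

Because the concordant cells cancel under the null hypothesis, only the 15 discordant patients drive the p-value, which is why the test stays informative even when most patients agree on both techniques.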
Fewer than ten inter-reader discrepancies were noted and all were resolved upon re-reading. External QC results showed excellent agreement for direct and bleach sedimentation techniques (both 99%), with very good agreement for SCA (90%). Our results show that bleach sedimentation and SCA increased the detection of smear-positive TB patients by 12.4%, which is in line with the findings of other studies reporting incremental yields ranging from 7% to 253%. The main implication of our study is that implementation of bleach sedimentation would significantly increase the detection of smear-positive patients if routinely implemented in resource-limited settings where gold standard techniques are not available. SCA is a low-workload intervention which takes roughly 15-30 seconds to perform, and so can easily be integrated into routine examination. Our QC results show that the technique is highly reproducible. However, bleach sedimentation increased workload and delayed results by 1 day compared with direct microscopy. The combination which yielded the highest positivity (45.4%), using a maximum of two samples and incurring minimal extra workload and delay, was a direct smear for the first sample and bleach sedimentation for the second. This approach allows rapid identification of patients positive on the first direct smear without waiting for the results of sedimentation. The use of two samples (the supervised sample and the one obtained the next morning) increased sensitivity by allowing for one sample to be of poor quality, and by concentrating the sample which is likely to be of the best quality (the morning sample). This procedure is based on the 2007 WHO recommendations [21]. One patient detected using direct smear microscopy was negative on sediment smear.
This result could have been caused by chance, with the only AFB-containing portion of the sputum being used to make the direct smear. The analyses of residual chlorine activity showed that there was no reduction of chlorine activity after the bleach was decanted into small bottles. Because homogenisation activity is also visible microscopically (i.e., lack of homogenisation results in intact white blood cells after sedimentation), it should not be necessary to measure residual chlorine activity routinely. Sediment smears were fragile and easily washed off the slides during staining. Attempts to prevent this with the addition of bovine serum albumin made smears difficult to decolourise, and the practice was discontinued. AFB in sediment smears were also found to fade; we therefore recommend that QC is performed within 6 months of staining. Our results showed that poor-quality samples are more likely to be negative on direct and sediment microscopy than good-quality samples, and that both types are significantly more likely to be positive after bleach sedimentation than on direct microscopy. 10.5% of degraded samples were negative in direct microscopy and positive on sedimentation. Although the sample size was small (4 of 38), this result suggests a need for further investigation. With the three saliva samples that became positive after sedimentation, a small amount of sputum was probably present. A recent study has stated that macroscopic evaluation of sputum for quality is as effective as microscopic techniques such as SCA. The patient positivity rate with direct microscopy without SCA was 43.2%. This rate is considerably higher than the recommended positivity rate of 5-20%. The main limitation of our results is that, due to a lack of TB culture facilities, we were unable to incorporate the gold standard of TB diagnosis: culture on L\u00f6wenstein-Jensen medium.
We are therefore unable to describe the techniques with regard to increased sensitivity and potential false smear positive results, but only with regard to increased detection rate of AFB or incremental yield. Lack of culture confirmation could have resulted in false detection of positive results, which might have contributed to the incremental yield following sedimentation. The sample size was not reached despite extending the study to 18 months; unfortunately, logistical limitations prevented extending the study further. Bleach sedimentation with a case definition of two positive smears might have significantly increased yield if the required sample size had been reached. However, most results reached significance despite the lower-than-desired number of participants.Mindouli hospital functioned as a comprehensive HIV diagnostic and treatment centre, and a large proportion of those screened for TB infection were HIV positive. Although it would have been interesting to have stratified the results based on HIV status, this was not done due to concerns about patient confidentiality if TB results were linked with HIV results at the laboratory. We are therefore unable to provide an estimate for the proportion of HIV TB coinfected patients.Our loss-to-follow-up rate was low (3.2%), possibly because of emphasis on patient education during enrolment and provision of accommodation during the 3-day sample collection procedure, necessary since many patients came from outside the Mindouli area. Under routine conditions, without the provision of accommodation, a higher proportion of patients might be lost to follow up. The overnight delay in results following sedimentation could also lead to a higher patient loss to follow up. 
The loss to follow up rate after routine implementation should be monitored. Due to the disadvantages associated with the bleach concentration technique (increased workload and delayed results), routine implementation of the technique should only be considered after the feasibility of introduction in a particular context has been assessed, preferably following a pilot implementation phase. Implementation of bleach sedimentation will lead to a significant increase in the diagnosis of smear-positive TB patients. Implementation of SCA did not result in significantly increased diagnosis of TB. Efforts should be made to obtain good-quality samples, since bleach sedimentation of poor-quality samples is unlikely to result in a large incremental yield. With a case definition of one positive smear, we recommend that bleach sedimentation is implemented in settings where work has been done to improve sputum collection practices. Sample quality can be confirmed with SCA. A pilot study should be undertaken prior to routine implementation of bleach sedimentation to determine whether the technique will be of sufficient clinical benefit in a particular setting. The authors declare that they have no competing interests. PH conceived and carried out the study and wrote the first draft of the paper. PN was responsible for study implementation and protocol revision and commented on the draft. JG and MB performed data analysis and commented on the draft. VS designed the study, wrote the protocol, and commented on the draft. All authors read and approved the final manuscript. The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2334/10/276/prepub"} +{"text": "New findings in MSF epidemiology, clinical features, and severe forms have changed the general perception of MSF. Advanced molecular tools have allowed Rickettsia conorii conorii to be classified as a subspecies of R. conorii. New clinical features, such as multiple eschars, have been recently reported.
Moreover, MSF has become more severe than RMSF; the mortality rate was as high as 32% in Portugal in 1997. Whether Rhipicephalus sanguineus is the only vector and reservoir for R. conorii conorii is a question not yet answered. Mediterranean spotted fever (MSF) was first described in 1910. Twenty years later, it was recognized as a rickettsial disease transmitted by the brown dog tick. In contrast to Rocky Mountain spotted fever (RMSF), MSF was thought to be a benign disease; however, the first severe case that resulted in death was reported in France in the 1980s. We have noted important changes in the epidemiology of MSF in the last 10 years, with emergence and reemergence of MSF in several countries. MSF is a tick-borne disease caused by Rickettsia conorii. It was first described a century ago as a disease that caused high fever and spots. MSF is an emerging or a reemerging disease in some countries. For example, in Oran, Algeria, the first case of MSF was clinically diagnosed in 1993. Since that time, the number of cases has steadily increased. Another point is that MSF was considered for 70 years a benign disease when compared with Rocky Mountain spotted fever (RMSF). In fact, because of the lack of medical interest in MSF, its real severity was long ignored. Although the mortality rate was evaluated to be from 1% to 3% in the early reports in the literature, the first description of a highly severe form of MSF was published in the early 1980s. The inoculation eschar, or tache noire, was described in 1925 in Marseille by Boinet and Pieri. R. conorii strain Malish, Israeli spotted fever rickettsia (ISFR), Indian tick typhus rickettsia (ITTR), and Astrakhan spotted fever rickettsia (AFR) constitute a homogeneous cluster supported by significant bootstrap values and distinct from other Rickettsia spp. By estimating the degrees of genotypic variation among isolates of the R. conorii strains Malish, ISFR, ITTR, and AFR, Zhu et al. proposed that R. conorii species nomenclature should be modified through the creation of the following subspecies: R. conorii conorii, R.
conorii caspia, R. conorii israelensis, and R. conorii indica. In fact, R. conorii conorii has not been isolated in human clinical samples in these countries. MSF is endemic to the Mediterranean area, including northern Africa and southern Europe. Cases are still identified in new locations within this region. Thus, some cases were recently described in Algeria, Malta, Cyprus, Slovenia, Croatia, Kenya, Somalia, South Africa, and in areas surrounding the Black Sea. Spotted fever cases have been confirmed as MSF by the use of molecular tools in Portugal, Italy, Malta, Greece, Croatia, Spain, France, Turkey, Algeria, Tunisia, Morocco, Zimbabwe, Kenya, and South Africa. MSF is suspected to be endemic in Slovenia, Albania, Ukraine, Georgia, and Zimbabwe, but molecular confirmation of R. conorii conorii infection is lacking in these countries where MSF is endemic. MSF appears to be waxing and waning, as indicated by peaks in the number of MSF cases. R. conorii conorii appears to be the main etiologic agent of SFG rickettsioses in this area. An increased number of ticks and increased human contact with the habitat of infected ticks are possible factors that would explain variations of incidence. In addition, the ecologic changes in the outskirts of large cities during the 1980s may have played an important role by moving rural sources to suburban zones. Climatic factors could also intervene, such as the increase of temperature and the lack of rainfall. Rh. sanguineus is generally not aggressive toward humans; accordingly, the probability of being bitten simultaneously by several infected Rh. sanguineus is low. Conversely, H. marginatum ticks readily bite humans, and persons may receive multiple simultaneous tick bites. All of these patients were bitten in the south of France. In Algeria, Mouffok et al. reported, in a prospective study, 20 of 270 patients with multiple eschars. Eschars are rarely multiple.
This observation was, however, reported in the early description of the disease by Olmer. In one such case, several ticks were found attached to a homeless man with alcoholism, who was living with his dog near Marseille. Patients with multiple eschars were not observed in 2005. Multiple eschars could indicate recent modification of tick behavior related to unusual climatic circumstances of the previous year. Likewise, laboratory evidence has shown an association between changing temperature and changing behavior of Rh. sanguineus. In Africa, vectors other than Rh. sanguineus could intervene. PCR, followed by restriction fragment length polymorphism, on samples of hemolymph-positive ticks in Zimbabwe showed R. conorii conorii to be present in Rh. simus and Haemaphysalis leachi. Rh. sanguineus was thought to be the reservoir for R. conorii conorii. Curiously, Rh. sanguineus is found throughout the world, but R. conorii conorii is found only in some regions of the world. Dogs, the usual hosts of Rh. sanguineus, are also found everywhere. Even within endemic zones, microfoci exist. Early rickettsiologists such as Olmer in southern France and Blanc and Caminopetros in Greece have shown that foci of MSF are usually small with a low propensity for diffusion. Wild rabbits could play a role in the transmission of R. conorii conorii on the French Mediterranean coast because a large drop in MSF cases occurred in 1952 during an outbreak of myxomatosis, which killed all the wild rabbits on the French Mediterranean coast. MSF reappeared in 1967 with the reappearance of wild rabbits, which have also been implicated as hosts of R. conorii conorii in Salamanca, Spain. Hedgehogs and other small rodents are also candidates for the reservoir because antibodies against rickettsiae have been detected in serum of these animals. Because R.
conorii conorii has never been isolated in the Americas, its reservoir is most likely a mammal present only in the Old World that has yet to be determined. Currently, we do not know the real reservoir. Our knowledge regarding MSF has undergone notable changes within the past 10 years. Molecular tools have allowed us to better discriminate rickettsial species and subspecies of the SFG. We now know that >1 rickettsiosis can be present in the same country. Patients who have been included in series of MSF cases may have had other rickettsioses. Moreover, MSF has a wider distribution than previously described. The disease has emerged and reemerged in several countries in the Mediterranean basin. New clinical features, such as multiple eschars, previously suggested in the early description, have now been confirmed in MSF. MSF is becoming an increasingly severe disease with death rates ranging from 3.2% to 32%. However, questions persist regarding the vector and reservoir for this disease, which need to be addressed."} +{"text": "Objective: This study was planned to clarify the in vitro effect of lidocaine and bupivacaine on germ tube formation by Candida albicans isolates from cases of clinical vaginal candidiasis.\t\t\t\t\tMethods: Fourteen C. albicans strains were grown on Sabouraud agar for 24 h at 37\u2103 and tested as follows: 100 \u03bcl of a yeast suspension [10^5 colony forming units (CFU)/ml of phosphate buffered saline (PBS)] was added to 500 \u03bcl of fresh human serum with lidocaine or bupivacaine in serial concentrations. The test was run in duplicate. Controls were prepared for each strain. After 4 h of incubation at 37\u2103, samples were taken from each vial and 200 yeasts were counted in a counting chamber.
The pH of each suspension was measured.\t\t\t\t\tResults: The results are given as the mean of the two readings and are expressed as the percentage of blastoconidia with germ tubes relative to total blastoconidia.\t\t\t\t\tConclusions: Our experiments show that both lidocaine and bupivacaine have a dose-dependent, pH-independent inhibitory effect on germ tube formation by C. albicans, and that both drugs seem promising in the treatment of genital candidiasis owing to their combination of anesthetic and antifungal properties."} +{"text": "We therefore hypothesized that EWS-Fli1 may affect the expression of G1 regulatory genes. Downregulation of the EWS-Fli1 fusion protein was observed 48 hours after treatment with EWS-Fli1 antisense oligonucleotides. The expression of the G1 cyclins, cyclin D1 and cyclin E, was markedly decreased in parallel with the reduction of EWS-Fli1 fusion protein. In contrast, the expression of p21 and p27, important cyclin-dependent kinase inhibitors (CKIs) for the G1\u2013S transition, was dramatically increased after treatment with EWS-Fli1 antisense oligonucleotides. RT-PCR analysis showed that the altered expression of the cyclins and CKIs occurred at the mRNA level. Furthermore, transfection of EWS-Fli1 cDNA into NIH3T3 cells caused transformation of the cells and induced the expression of cyclins D1 and E. Clinical samples of ET also showed a high level of cyclin D1 mRNA, whereas mRNAs for p21 and p27 were not detected in the samples. These findings strongly suggest that the G1\u2013S regulatory genes lie downstream of the EWS-Fli1 transcription factor, and that the unbalanced expression of G1\u2013S regulatory factors caused by EWS-Fli1 may lead to the tumorigenesis of ET. \u00a9 2001 Cancer Research Campaign http://www.bjcancer.com The chromosomal translocation t(11;22)(q24;q12) is detected in approximately 90% of tumours of the Ewing family (ET). 
This translocation results in an EWS-Fli1 gene fusion, which produces an EWS-Fli1 fusion protein that acts as an aberrant transcriptional activator. We previously reported that the inhibition of EWS-Fli1 expression caused the G"} +{"text": "To compare self-reported pain and efficacy of warmed, alkalinized, and warmed alkalinized lidocaine with plain 2% lidocaine at room temperature for peribulbar anesthesia in cataract surgery.Through a prospective, single-blinded, randomized, controlled clinical trial, 200 patients were divided into four groups. They received either lidocaine at operating room temperature, 18\u00b0C (control group, Group C), lidocaine warmed to 37\u00b0C (Group W), lidocaine alkalinized to a pH of 7.09 \u00b1 0.10 (Group B), or lidocaine at 37\u00b0C alkalinized to a pH of 6.94 \u00b1 0.05 (Group WB). All solutions contained Inj. Hyaluronidase 50 IU/ml. Pain was assessed using a 10-cm visual analog scale. Time of onset of sensory and motor blockade and time to onset of postoperative pain were recorded by a blinded observer. Mean pain score was significantly lower in Groups B and WB compared with Group C (P < 0.001). Onset of analgesia was delayed in Group C compared with Groups B (P = 0.021) and WB (P < 0.001). Mean time to onset of complete akinesia and the supplementation required for the block were significantly lower in Group B. Time of onset of pain after operation was significantly earlier in Group W compared with Group C (P = 0.036). Alkalinized lidocaine, with or without warming, produced less pain than lidocaine injected at room temperature. Alkalinization enhances the effect of warming for sensory nerve blockade, but warming does not enhance alkalinization; in fact, it reduces the efficacy of the alkalinized solution for blocking the motor nerves in the eye. 
Pain during injection of local anesthetic solution is common and this is partly explained by the direct tissue irritation caused by injecting an acidic solution, Lidocaine hydrochloride (L-HCL). The incrBoth alkalinization and warming have been found to produce synergistic effects in intradermal anesthesia.12 There n = 6, with Hyaluronidase 50 IU/ml , was found to be 6.52 \u00b1 0.08 (range: 6.39-6.59). pH was measured using a digital pH meter . For alkalizing the above solution, 0.5 ml of preservative-free 7.5% sodium bicarbonate was required. The mean pH \u00b1 SD of the alkalinized lidocaine solution (n = 6), was 7.09 \u00b1 0.10 (range: 7.00-7.23). The mean time interval needed for warming 2% lidocaine solution (n = 6) to a temperature of 40\u00b0C was 6 min 54 sec \u00b1 17 sec. For standardization, lidocaine vials were kept in a water bath (Labserve) set at 40\u00b0C for 10 min. The mean time interval recorded for lidocaine solution to attain a temperature of 37\u00b0C from 40\u00b0 was 120 \u00b1 24.49 sec. Hence to ensure that the temperature of the solution is around 37\u00b0C during injection into the peribulbar space, the warmed solution had to be injected within 2 min from the removal of the solution from the water bath. For alkalizing warmed lidocaine solution, 0.25 ml of 7.5% sodium bicarbonate was needed and used. The mean pH \u00b1 SD of warmed and alkalinized lidocaine solution (n = 6) was 6.94 \u00b1 0.05 (6.90-7.00). If the warmed lidocaine solution was alkalinized beyond the above range, precipitation of the solution occurred.The mean pH + SD of 10 ml of 2% lidocaine solution After obtaining approval from the Institutional Review Board of the Vision Research Foundation , 200 patients gave written informed consent for this study. All patients were aged 40 years and above and were scheduled for phacoemulsification cataract surgery under local anesthesia. 
Patients with history of previous intraocular surgery under local anesthesia, known allergy to lidocaine, mental retardation, one-eyed patients and those with inadequate vision to appreciate the visual analog scale were excluded. Two patients refused to participate in the study and one patient was excluded because conventional extracapsular extraction was performed.No preoperative sedatives were administered.All eligible patients were randomized into one of the four groups to receive a peribulbar injection from any one of the following solutions:Group (Gr) C: 10 ml of plain 2% lidocaine solution at room temperature, 18\u00b0C (Control group)Gr W: 10 ml of 2% lidocaine solution at 37\u00b0CGr B: 10 ml of 2% lidocaine solution buffered to an estimated pH of 7.09 \u00b1 0.10Gr WB: 10 ml of 2% lidocaine solution at 37\u00b0C buffered to an estimated pH of 6.94 \u00b1 0.05\u22121) was added prior to alkalizing or warming, to all anesthetic solutions.Randomization was done based on a computer-generated random table. Injection hyaluronidase , noninvasive arterial pressure monitoring and pulse oximetery. Patients were clearly explained about the procedure involved in the peribulbar block and also about the use of visual analog scale (VAS) of 10 cm to evaluate the pain perceived by them, zero cm representing no pain and 10 cm representing the most severe pain.To maintain the uniformity of the technique, peribulbar block was administered by a single non-blinded anesthetist, experienced in ophthalmic anesthesia, and the same blinded surgeon performed the surgery for all the patients. The block was administered using a 23-G, 1\u201d blunt steel needle. The needle was first inserted through the lid at a point between the lateral third and medial two-thirds of the lower orbital margin, with the bevel facing the globe. 
It was then advanced in a superomedial direction , for a distance of approximately 25 mm to the equator of the globe, where the anesthetic solution was injected, outside the muscle cone at a rate of 5 ml in 10 sec. ImmediatThe globe was then compressed gently for 2 min with the middle three fingers placed over a sterile gauze pad on the upper eye lid with the middle finger pressing directly down on the eyeball. Two minutes following the first injection, the second injection was administered in the superomedial compartment. The needle was introduced through the upper lid at about 2 mm medial and inferior to the supraorbital notch. It was then advanced in a sagittal plane under the roof of the orbit for a maximal depth of 25 mm where the remaining 5 ml of local anesthetic was injected at a similar rate as given for the inferior injection. Digital The efficacy of the block was evaluated by a second blinded anesthetist every 30 sec after administration of the superior injection. Analgesic onset was assessed by holding the bulbar conjunctiva both medially and laterally with toothed forceps. Adequacy of akinesia was determined by the absence of ocular movements (< 1 mm) in all directions. Supplemental injections with the same anesthetic mixture were given at 5 min of interval following the superior injection in case of residual movement (>1 mm). If there was superior and/or medial movement, the superior injection with 1 to 2 ml of injectate was repeated. Similarly, inferior injection with 1 to 2 ml of injectate was given if there was any inferior and/or lateral movement.Vital signs were monitored throughout the surgery. Patients were encouraged to communicate with the surgeon regarding pain during surgery and if required sub Tenon's supplementation was given with 2 ml of plain lidocaine by the surgeon. 
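The residual-movement rules above amount to a simple decision procedure: movement > 1 mm in the superior or medial direction triggers a repeat superior injection, and inferior or lateral movement a repeat inferior injection. A minimal sketch of that logic (the direction names are ours; the 1-mm threshold and 1-2 ml volumes are from the text):

```python
def supplemental_injections(residual_mm):
    """Decide which peribulbar injection(s) to repeat, following the
    protocol described above. `residual_mm` maps each gaze direction to
    residual ocular movement in mm; movement > 1 mm triggers a top-up."""
    moving = {d for d, mm in residual_mm.items() if mm > 1.0}
    repeats = []
    if moving & {"superior", "medial"}:
        repeats.append("repeat superior injection, 1-2 ml")
    if moving & {"inferior", "lateral"}:
        repeats.append("repeat inferior injection, 1-2 ml")
    return repeats  # empty list -> akinesia adequate, no supplementation

print(supplemental_injections({"superior": 2.0, "lateral": 0.5}))
```

The rule set mirrors the anatomy of the two-injection technique: each compartment is re-dosed only when the muscles it anesthetizes still move.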
At the end of the surgery the efficacy of anesthesia was graded by the surgeon, blinded to the solution used, based on the adequacy of akinesia and anesthesia throughout the procedure and the need for intraoperative supplementation . The preThe sample size was calculated to detect a significant difference of 2 in VAS score with power of the study 80% and \u03b1 equal to 0.05.P was < 0.05. SPSS 13 software package was used for statistical analysis.All continuous variables are presented by Mean\u00b1SD and it was analyzed by Student's t-test. The categorical datas were presented by frequency with percentages and it was analyzed by Chi-square test. One-way ANOVA with Dunnett test was used for comparison between the groups. Results were considered significant if P = 0.002), Gr B (P < 0.001) and WB (P < 0.001). Mean time of onset of analgesia was delayed in Gr C compared with Gr B (P = 0.021) and WB (P < 0.001). The difference between Gr C and W in sensory blockade was not significant (P = 0.579). Onset of motor nerve blockade was earlier in Gr B compared with Gr C (P = 0.033), W (P < 0.001) and WB (P = 0.038). At 5 min of interval following superior injection, significant number of patients in Gr W (54%) and WB (48%) required supplementation of the block once compared with Gr B (24%) (P = 0.002 for Gr W and P = 0.012 for Gr WB), The groups were similar in age, gender, body weight and duration of operation . The paiP = 0.036) [Adequate anesthesia and akinesia throughout surgery was achieved in all cases of Gr B and WB. Time of onset of pain after operation was earlier in Gr W compared with Gr C (= 0.036) .Local anesthetics are weak bases. To improve their stability they are supplied in acidic solution Lidocaine hydrochloride (L-HCL). In this The pKa value is also temperature dependent. 
Hence as local anesthetic is warmed, the pKa value decreases (pKa for lidocaine is 7.57 at 40\u00b0C) and the Theoretically speaking, both warming and alkalinization of lidocaine should produce lowest pain scores for injection. But in our study we found that alkalinization with or without warming lidocaine produced lowest mean pain score. Thus it is quite evident that, for this iatrogenic pain reduction no synergistic effect exists between warming and alkalinization of lidocaine.et al. found that there is no significant difference in bulbar analgesia and akinesia after retrobulbar anesthesia between injections of warm and cold anesthetic solutions.[Apart from a reduction in the pain perception, warmed solution did not help to achieve early analgesia or akinesia in the eye. Krause olutions. InjectinDuring surgery, one patient (2.0%) each in Gr C and W required subtenon's supplementation due to inadequate anesthesia and two patients (4.0%) in Gr W required subtenon's supplementation due to inadequate akinesia and anesthesia. The time of onset of postoperative pain was found to be significantly earlier in patients injected with warmed than room temperature lidocaine solution.Even though both alkalinization and warming are known to increase the non-ionized active form of the drug,\u201319 the iThe only limitation encountered in the study was that the anesthesiologist who performed the block was non-blinded, since his fingers were in contact with the syringe, and he could feel the temperature change and infer the group to which the patient belonged. To minimize bias a second blinded anesthesiologist evaluated the time of onset of analgesia and akinesia and decided on the need for supplemental injections if required. 
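The ionization argument above can be made concrete with the Henderson-Hasselbalch relation for a weak base. A minimal sketch, assuming a textbook lidocaine pKa of about 7.9 at room temperature (an assumption; only the 7.57 value at 40 degrees C is quoted in the text), applied to the pH values measured in this study:

```python
import math

def unionized_fraction(ph, pka):
    """Henderson-Hasselbalch: fraction of a weak base present as the
    membrane-permeant free base at a given pH."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

# pKa ~7.9 at room temperature is a textbook assumption, not from this study;
# pKa 7.57 at 40 C is the figure quoted in the text.
plain       = unionized_fraction(6.52, 7.9)   # Group C: unbuffered, pH 6.52
alkalinized = unionized_fraction(7.09, 7.9)   # Group B: buffered to pH 7.09
warmed_alk  = unionized_fraction(6.94, 7.57)  # Group WB: warmed and buffered

print(f"free-base fraction  C: {plain:.3f}  B: {alkalinized:.3f}  WB: {warmed_alk:.3f}")
print(f"alkalinization alone raises the active fraction ~{alkalinized / plain:.1f}-fold")
```

On these assumptions the buffered solutions carry a three- to five-fold larger free-base fraction than the plain solution, consistent with the faster onset of analgesia observed in Groups B and WB.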
Variations in block and surgical technique were reduced to the minimum as only a single anesthesiologist administered injections and the same surgeon performed all cataract surgeries.Unlike in intradermal anesthesia alkalinization and warming do not possess a synergistic effect in peribulbar anesthesia for iatrogenic pain reduction occurring during injection of lidocaine. Also, we found that alkalinization enhances the effect of warming for blocking the sensory nerves, but warming does not enhance alkalinization and actually reduces the efficacy of alkalinized solution for blocking the motor nerves in the eye.Alkalinization of lidocaine is the best choice for patients undergoing cataract surgery under periocular anesthesia as it produced the least painful injection, achieved early analgesia and akinesia with fewer supplemental injections."} +{"text": "The surface of Ocr is replete with acidic residues that mimic the phosphate backbone of DNA. In addition, Ocr also mimics the overall dimensions of a bent 24-bp DNA molecule. In this study, we attempted to delineate these two mechanisms of DNA mimicry by chemically modifying the negative charges on the Ocr surface. Our analysis reveals that removal of about 46% of the carboxylate groups per Ocr monomer results in an \u223c\u00a050-fold reduction in binding affinity for a methyltransferase from a model type I restriction/modification system. The reduced affinity between Ocr with this degree of modification and the methyltransferase is comparable with the affinity of DNA for the methyltransferase. Additional modification to remove \u223c\u00a086% of the carboxylate groups further reduces its binding affinity, although the modified Ocr still binds to the methyltransferase via a mechanism attributable to the shape mimicry of a bent DNA molecule. 
Our results show that the electrostatic mimicry of Ocr increases the binding affinity for its target enzyme by up to \u223c\u00a0800-fold.The homodimeric Ocr ( Escherichia coli by bacteriophage T7 is overcome classical restriction (Ocr), the product of gene 0.3.I of 4.02) with a shape similar to that of a bent double-stranded DNA molecule approximately 24\u00a0bp in length a high concentration of nucleophile and (ii) N-hydroxybenzotriazole (HOBt). HOBt reacts with the O-acylisourea to form a more stable activated ester . An additional mechanism has been reported by Nakajima and Ikada, whereby the O-acylisourea intermediate may react with a neighbouring free carboxylate to form an acid anhydride.O-acylisourea, HOBt ester and acid anhydride). Considering the close proximity of side chains within a protein and the variability of their pKa values, depending on their specific microenvironment, it is clear that a limited number of side reactions are unavoidable.We used 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide hydrochloride (EDC), a water-soluble carbodiimide, to specifically modify the carboxyl groups of Asp and Glu residues and the C-terminus of Ocr . The phymediates . The O-aIn order to ascertain the degree of modification, we initially analysed the various protein samples by polyacrylamide gel electrophoresis (PAGE) under non-denaturing conditions . The lona\u2013d), even though the same sample analysed by denaturing SDS-PAGE migrated as a single species runs as two distinct bands, but this may be the result of partial denaturation during the running of the gel, thereby causing the protein to adopt a different conformation.Native gel electrophoresis of the unmodified Ocr revealed four sharp bands of the D-series of modified samples was performed to obtain this information as each modification will increase the mass by 27\u00a0Da (\u0394mass +\u00a027\u00a0Da). 
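Because each dimethylamine modification adds ~27 Da, the number of modifications in a D-series sample can be estimated directly from the observed mass shift. A sketch of that arithmetic (the monomer mass used here is a placeholder for illustration, not the measured Ocr mass):

```python
def n_modifications(observed_mass, unmodified_mass, delta_per_mod=27.0):
    """Estimate modifications per monomer from a MALDI-TOF mass shift,
    given ~27 Da added per dimethylamine modification (as stated above)."""
    return round((observed_mass - unmodified_mass) / delta_per_mod)

# Hypothetical masses for illustration only.
unmodified = 13000.0
for shift in (405.0, 540.0, 675.0):   # shifts for 15, 20 and 25 modifications
    print(n_modifications(unmodified + shift, unmodified))
```

As the text notes, the same arithmetic fails for the N-series, where each modification changes the mass by only a net -1 Da, well inside the width of the observed peaks.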
Unfortunately, MS is unsuitable for analysis of the N-series because there is only a net reduction in mass of 1\u00a0Da after each successive modification.Matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) MS of the D-series of modified samples was performed. Each spectrum showed a broad, slightly asymmetric peak skewed toward greater mass. Nevertheless, we were able to estimate the number of modifications for each sample as follows: D15, 9\u201318 modifications; D60, 16\u201325 modifications; D180, 22\u201333 modifications (data not shown). A smaller peak corresponding to Ocr dimer was also observed.via reversed-phase HPLC, and electrospray MS data were collected for the peak eluting from the HPLC column. A nested set of MS species was observed with a mass difference of 27\u00a0Da, corresponding to the anticipated increase in mass after each modification with dimethylamine instrument to obtain higher resolution. The proteins were desalted hylamine . From thModification of a large fraction of the 35 carboxyl groups (34 acidic residues plus the C-terminus) in each Ocr monomer could induce an alteration in the folding of the protein, potentially leading to a loss of activity through a gross structural defect rather than a more subtle change in the charge distribution of the protein. Chemical modification of Ocr did result in some loss of protein due to precipitation during sample preparation. However, upon removal of the aggregates, the final protein samples displayed no increased propensity to denature and precipitate, indicating that the structural integrity of the chemically modified Ocr was not compromised.n-to-\u03c0\u204e transition). CD spectra of the modified Ocr samples showed no major alteration in secondary structure content for the D-series of modifications -induced unfolding of the protein. 
Addition of denaturant caused the protein to unfold, resulting in a quenching of the fluorescent signal from the single tryptophan W94 (data not shown). For unmodified Ocr, the data indicated a two-state transition with no intermediate step with a \u0394Tm of D15 (Tm\u00a0=\u00a068.2\u00a0\u00b0C) was similar to that of the unmodified Ocr (Tm\u00a0=\u00a069.0\u00a0\u00b0C). However, the Tm of N60 (Tm\u00a0=\u00a073.7\u00a0\u00b0C) was slightly elevated relative to that of the native protein. Noteworthy, however, was the observation that the unfolding of the modified Ocr samples (N60 and D15) was irreversible and showed a broader unfolding transition in contrast to unmodified Ocr, where the unfolding transition was completely reversible. This irreversible unfolding is presumably due to the formation of covalent cross-links within the Ocr dimer. Furthermore, a reduction in the net charge/electrostatic repulsions in the native fold could enhance the tendency for aggregation/irreversibility in the unfolded state.The thermal stability of the N60 and D15 samples was also analysed by differential scanning calorimetry (data not shown). The data showed a clear energy uptake during the transition consistent with a cooperative endothermic unfolding event. The Having determined that the modified Ocr samples were still folded and the extent to which they had been modified, we assessed their ability to function as DNA mimics by measuring their interaction with the methyltransferase core, M.EcoKI, and the entire nuclease, EcoKI. 
Specifically, we performed (i) isothermal titration calorimetry (ITC) of Ocr with M.EcoKI to determine the enthalpy and stoichiometry of binding , (ii) competition between Ocr and a fluorescently labelled 21-bp DNA duplex for binding to M.EcoKI to obtain the binding affinity and (iii) inhibition of the nuclease activity of EcoKI on plasmid DNA by Ocr.2, 7\u00a0mM 2-mercaptoethanol and 100\u00a0\u03bcM S-adenosyl-l-methionine (SAM) with or without 500\u00a0mM NaCl at a range of temperatures from 10 to 30\u00a0\u00b0C. The experiment at 25\u00a0\u00b0C was repeated using 20\u00a0mM Hepes buffer (heat of ionization\u00a0=\u00a020.5 kJ/mol) in place of Tris\u2013HCl (heat of ionization\u00a0=\u00a047.4\u00a0kJ/mol) in the absence of NaCl, which gave a very similar enthalpy of interaction (\u0394H of \u2212\u00a090.0 kJ/mol in Hepes compared with \u2212\u00a086.2\u00a0kJ/mol in Tris). These experiments indicated that there was no major contribution to \u0394H due to effects of buffer ionization arising from protonation changes during binding. The enthalpy change upon interaction was strongly exothermic in the absence of NaCl but showed significant temperature dependence characteristic of a heat capacity change, \u0394Cp, upon formation of the M.EcoKI\u2013Ocr complex. This was quantified from the slope of the plot of enthalpy change versus temperature (assumed to be linear) using the standard thermodynamic relationship:The ITC experiment was initially performed using unmodified Ocr in 20\u00a0mM Tris\u2013HCl, pH\u00a08.0, 6\u00a0mM MgCl\u00a0J/mol K . In 500\u00a0H values fell between the behaviour of the unmodified Ocr in zero NaCl buffer and that in 500\u00a0mM NaCl buffer . In addition, the \u0394l buffer . 
FurtherKd, given in We also studied the interaction between modified Ocr and M.EcoKI using a sensitive fluorescence anisotropy assay to determine the binding affinity.The activity of the modified Ocr samples was tested in an endonuclease assay using purified EcoKI. Linearisation of a circular unmethylated plasmid (pBRsk1) containing a unique EcoKI target recognition site was monitored in the absence and in the presence of Ocr (unmodified and N- or D-modified samples). In each case, the reaction mixture minus DNA was prepared and the digestion was initiated by addition of pBRsk1. The reaction was stopped after 10\u00a0min, and the mixtures were then analysed by agarose gel electrophoresis . Incubatet al. produced a 16-\u00c5 resolution structure of M.EcoKI and an approximate atomic model of it in complex with Ocr, which shows it completely enveloping the Ocr molecule.The DNA mimicry displayed by Ocr comprises two main features: its mimicry of the bent DNA substrate, preferred by EcoKI, and its mimicry of the electrostatics of the phosphate backbone. Other features, such as H-bonding and van der Waals interactions, will also play a role in the binding of Ocr to its target enzyme. However, in the absence of a detailed structure for an Ocr\u2013M.EcoKI complex, these intermolecular forces are not easily defined. Recently, Kennaway Chemical modification offers a convenient method of reducing the number of negatively charged groups on the Ocr surface. Our results clearly show the stepwise reduction of negative charge with reaction time. MS of the D-series of chemically modified samples showed that the protein was subject to averages of 16 (12/13/17/18/19 modifications most common), 20 (16/17/22/23/24 most common) and 27 (23/24/29/30 most common) modifications per Ocr monomer for the D15, D60 and D180 samples, respectively. 
The presence of cross-links was also demonstrated despite our effort to minimise this side reaction, so these numbers of modifications may be a slight underestimate by perhaps 1 or 2 modifications per Ocr monomer. Our MS measurements of the number of modified residues and the additional uncertainty due to cross-linking reflect the relative number of \u201crandom modification cycles\u201d required to target the most important residues involved in the interaction. Therefore, our results are most probably an overestimate of the total number of negatively charged residues that are critically involved in the interaction between Ocr and M.EcoKI. This is supported by mutational studies showing that positional context and the number of negatively charged residues are critical for the interaction with M.EcoKI.kBT.The degree of chemical modification did not appear to have any deleterious effect on protein folding, as shown by CD, or stability. Such extensive modification without destruction of the protein fold is noteworthy. Indeed the stability to denaturation even increased. This is attributable to two factors. First, the adventitious formation of intermolecular and intramolecular cross-links during the chemical modification will stabilise the fold . Second, the closeness of the carboxylates in the unmodified Ocr leads to electrostatic repulsion energies of the order Concomitant with the degree of modification, we observed a clear drop in the binding affinity for an anion-exchange column, for binding to M.EcoKI and for inhibiting the EcoKI nuclease. The first two features are more easily quantifiable and should be correlated given the importance of electrostatics in protein\u2013DNA (and protein\u2013DNA mimic) interactions. Additionally, the binding also gave rise to well-defined enthalpy and heat capacity changes, which can be discussed in terms of protein\u2013protein interfaces.Kd) against [NaCl] or elution time (Kd) and the number of residues modified. 
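The measured affinities and the ~800-fold electrostatic contribution can be put on a common free-energy scale via dG = RT ln Kd. A sketch, assuming T = 298 K (an assumption; the paper's -43.9 kJ/mol for the 27 nM interaction implies a slightly higher temperature):

```python
import math

R = 8.314    # J/(mol K)
T = 298.15   # K; an assumption -- the quoted -43.9 kJ/mol for 27 nM
             # corresponds to a slightly different temperature

def dg_from_kd(kd_molar):
    """Standard binding free energy (kJ/mol) from a dissociation constant
    (1 M reference state)."""
    return R * T * math.log(kd_molar) / 1000.0

dg_shape = dg_from_kd(27e-9)                    # shape mimicry alone
ddg_electro = -R * T * math.log(800) / 1000.0   # ~800-fold affinity gain
per_charge = -ddg_electro / 36                  # spread over ~32-40 charges/dimer

print(f"dG(27 nM)      = {dg_shape:.1f} kJ/mol")
print(f"electrostatics = {ddg_electro:.1f} kJ/mol")
print(f"per charge     = {per_charge:.2f} kJ/mol (cf. 0.4-0.5 kJ/mol in the text)")
```

Dividing the ~16.6 kJ/mol electrostatic term over the ~32-40 modified charges per dimer recovers the 0.4-0.5 kJ/mol per-charge contribution quoted later, each well below the thermal energy kBT (~2.5 kJ/mol on a molar basis).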
As all degrees of modification from the wild type up to and including D30 and N180 fall on a straight line, this would appear to be the case starting from unmodified Ocr until one reaches \u223c\u00a016 to \u223c\u00a020 modified residues per monomer (\u223c\u00a032 to \u223c\u00a040 for the dimer). Thus, each of these individual negative charges contributes only \u223c\u00a00.4 to \u223c\u00a00.5\u00a0kJ/mol to the free energy of binding. Since this energy is less than the thermal energy, kBT, then it is clear that a very large number of such weak interactions need to be summed to account for the effectiveness of Ocr as an inhibitor. This value per charge is also small when compared with the electrostatic effects observed with the conceptually similar barnase\u2013barstar protein\u2013protein interaction system.Using our data of binding affinity and elution time (or [NaCl]) from the anion-exchange column, we can plot RTlog, presumably due to the favourable shape complementarity of the Ocr\u2013M.EcoKI association involving a multiplicity of weak intermolecular interactions. Our results show that the addition of electrostatic mimicry to the DNA-shape mimicry of the Ocr molecule further increases the binding affinity for M.EcoKI by up to \u223c\u00a0800-fold.2 for the dimer.We also investigated the Ocr\u2013M.EcoKI interaction by ITC, and this gives us some further insight into the nature of the protein\u2013protein interface. The thermodynamics of protein\u2013protein interactions are typically made up of numerous small changes in free energy with both enthalpic and entropic components.Initially, we studied the interaction of unmodified Ocr with M.EcoKI in the absence and in the presence of monovalent salt. Bearing in mind the average \u0394Cp for protein\u2013protein interactions is reported to be \u2212\u00a01393\u00a0\u00b1\u00a0845\u00a0J/mol K,Conventionally, large \u0394Cp effects are associated with hydrophobic interactions. 
However, theoretical considerations and empirical observations show that long-range electrostatic interactions and other effects can also make a significant contribution to \u0394Cp.In conclusion, our results show that the DNA mimicry displayed by Ocr is extremely robust and can be separated into mimicry of the shape and charge of DNA. After extensive modification, potentially removing \u223c\u00a086% of the negative charge from the carboxyl side chains and C-terminus, the modified Ocr protein still binds to M.EcoKI with an affinity of 27\u00a0nM (\u2212\u00a043.9\u00a0kJ/mol). This is only marginally weaker than the M.EcoKI\u2013DNA interaction (\u2212\u00a049.4\u00a0kJ/mol) and is presumably the contribution to binding of M.EcoKI by Ocr's mimicry of the shape of the bent DNA molecule preferred by M.EcoKI.2M2S1) and the DNA methyltransferase component M.EcoKI (M2S1) were purified as described previously.d-thiogalactopyranoside-inducible promoter.E. coli NM1261 (r\u2212\u00a0m\u2212\u00a0) . Dimethylamine, EDC and HOBt were obtained from Pierce . Guanidine hydrochloride (ULTROL grade) was purchased from Calbiochem . SAM was from New England Biolabs . All other reagents were purchased from Sigma-Aldrich . Broad-range pre-stained molecular mass markers for SDS-PAGE were purchased from BioRad . All solutions were made up in distilled, deionized water.EcoKI or dimethylamine as a nucleophile. Chemical modification of Ocr (3\u00a0\u03bcM) was carried out at 25\u00a0\u00b0C in 750\u00a0mM ammonium hydroxide or dimethylamine HCl, 150\u00a0mM NaCl, 60\u00a0mM EDC and 60\u00a0mM HOBt, pH\u00a06.5. Aliquots were withdrawn at specific time points (1 to 180\u00a0min), and the reaction was quenched by adding a 6-fold excess of sodium acetate . After incubation for 10\u201320\u00a0min, the mixture was first dialysed against 100\u00a0mM nucleophile for 4\u00a0h at 25\u00a0\u00b0C and then against 50\u00a0mM ammonium acetate at 4\u00a0\u00b0C for a minimum of 4\u00a0h. 
Hydroxylamine was added to the solution to a final concentration of 400\u00a0mM, and the mixture was incubated at 25\u00a0\u00b0C for 4\u00a0h. Finally, the solution was dialysed against 20\u00a0mM ammonium acetate and the protein concentration was then adjusted to 20\u201330\u00a0\u03bcM using a Vivaspin concentrator. Samples were stored in 50% v/v glycerol at \u2212\u00a020\u00a0\u00b0C. WT and chemically modified Ocr samples were analysed by anion-exchange chromatography using a 1-ml Mono Q column. Each protein (11.2\u00a0\u03bcg) was individually loaded onto the column pre-equilibrated in 20\u00a0mM Tris\u2013HCl, pH\u00a08.0, at a flow rate of 1\u00a0ml/min. After washing the column, a linear gradient of 0\u20131\u00a0M NaCl in 20\u00a0mM Tris\u2013HCl, pH\u00a08.0, was run over 30\u00a0min at the same flow rate. Protein elution from the column was monitored by measuring the tryptophan fluorescence of the eluate. The elution time of each sample was determined by integrating the peak and calculating the point corresponding to 50% of the peak area. MS of the D-series of chemically modified Ocr was performed by MALDI-TOF using a Voyager DE STR instrument. Protein samples were diluted in 0.1% trifluoroacetic acid to 0.05\u00a0mg/ml and mixed with an equal volume of matrix (saturated solution of sinapinic acid in 50% acetonitrile and 0.1% trifluoroacetic acid) on a stainless steel surface. The samples were air dried at room temperature to crystallize. The machine was operated in positive ion mode and calibrated with conalbumin and bovine serum albumin. Protein samples were extensively desalted by dialysis into 20\u00a0mM ammonium acetate prior to MS. For LC-MS, an Ultimate 3000 HPLC system equipped with a monolithic PS-DVB (500\u00a0\u03bcm\u00a0\u00d7\u00a05\u00a0mm) analytical column (Dionex Corporation) was used. Solutions B and C were prepared comprising 2:97.95:0.05 and 80:19.95:0.05 of acetonitrile/water/formic acid, respectively. Samples in solution B containing \u223c\u00a01\u00a0\u03bcg of chemically modified Ocr were centrifuged (16,100g for 2\u00a0min) immediately prior to injection onto the column. After injection, the column was washed with solution B for 5\u00a0min, followed by a 20-min linear gradient elution (20\u00a0\u03bcl/min) into solution C. The eluate was passed into the mass spectrometer. MS data were acquired on a Bruker 12-Tesla Apex Qe FT-ICR equipped with an electrospray ionization source. Desolvated ions were transmitted to a 6-cm Infinity Cell\u00ae Penning trap. Trapped ions were excited (frequency chirp of 48\u2013500\u00a0kHz at 100 steps of 25\u00a0\u03bcs) and detected between m/z values of 600 and 2000 for 0.5\u00a0s to yield broadband 512-kWord time-domain data. Fast FTs and subsequent analyses were performed using DataAnalysis software. Multiple charge states could be observed in this way for each of the major species. CD measurements were performed as described previously. Equilibrium unfolding of Ocr as a function of GdmCl concentration was monitored by tryptophan fluorescence spectroscopy as described previously, using the ratio of emission intensities versus 380\u00a0nm to remove any variation in intensity due to slight differences in protein concentration between samples. Differential scanning calorimetry and ITC were carried out as described previously, in buffer containing 7\u00a0mM 2-mercaptoethanol and 100\u00a0\u03bcM SAM. Typically, Ocr at a concentration of 40\u00a0\u03bcM was titrated into an M.EcoKI solution at a concentration of 4\u00a0\u03bcM. Competition for binding of WT or chemically modified Ocr to M.EcoKI was determined using the fluorescence anisotropy assay as described previously. The in vitro assay monitored the cleavage of unmethylated circular pBRsk1 using purified EcoKI in the absence or in the presence of WT or chemically modified Ocr, essentially as described elsewhere."} +{"text": "Iron deficiency anaemia is a common paediatric problem worldwide, with significant neurodevelopmental morbidity if left untreated. A decrease in the mean corpuscular volume (MCV) can be used as a surrogate marker for detecting early iron deficiency prior to definitive investigation and treatment. An audit cycle was therefore undertaken to evaluate and improve the identification, follow-up and treatment of abnormally low MCV results amongst the paediatric inpatients in an English district general hospital. The audit cycle was performed retrospectively over two three-month periods, amongst patients aged between one month and 16 years that had full blood counts performed whilst admitted on the paediatric ward. Patients with at least one abnormally low MCV result were identified, and their notes reviewed. We looked for any underlying explanation for the result, adequate documentation of the result as abnormal, and instigation of follow-up or treatment. In-between the two audit periods, the results of the first audit period were presented to the medical staff and suggestions were made for improvements in documentation and follow-up of abnormal results. The z-test was used to test for equality of proportions between the two audit samples. Out of 701 inpatients across both audit periods that had full blood counts, 61 (8.7%) had a low MCV result.
Only 15% of patients in each audit period had an identifiable explanation for their low MCV values. Amongst the remaining 85% with either potentially explicable or inexplicable results, there was a significant increase in documentation of results as abnormal, from 25% to 91% of cases, between the first and second audit periods (p = 0.00 using z-test). However, there was no accompanying increase in the proportion of patients who received follow-up or treatment for their abnormal results. Abnormal red cell indices that may indicate iron deficiency are frequently missed amongst paediatric inpatients. Medical staff education and the use of appropriate protocols or pathways could further improve detection and treatment rates in this setting. Iron-deficiency anaemia in children is an important problem worldwide, estimated to affect some 43% of the world's children. Iron-deficiency anaemia manifests itself as a microcytic, hypochromic anaemia. Microcytosis develops either prior to or along with any reduction in haemoglobin (Hb) levels. Hence, a fall in the MCV can serve as an early surrogate marker of iron deficiency. The use of the MCV as a tool for guiding selection of inpatients for further investigation of possible iron deficiency has been questioned, mainly due to its moderately poor sensitivity in detecting iron deficiency despite its apparent high specificity. We were interested to see whether our department was adequately detecting and following up paediatric inpatients that might have iron deficiency, given the important consequences of this condition if allowed to progress to frank iron-deficiency anaemia. The department receives about 3500 inpatient admissions per year from a mixed urban and rural setting in Eastern England. Given that a substantial proportion of these patients will have blood tests, we decided to use the MCV as an indicator of whether a patient might have iron deficiency.
The MCV was chosen since other tests commonly used to diagnose iron deficiency are not routinely performed in the inpatient setting at our hospital, whereas nearly every inpatient who has blood taken will have a full blood count performed and will have a MCV readily available. Further tests might then be added to existing blood samples or performed following recovery to further characterise whether an iron-deficient state or alternative explanation for the microcytosis is present.The audit cycle was performed retrospectively on data from two three-month periods: February to April 2006, and September to November 2006 inclusive. Our department had a total complement of twenty-one medical staff at the time of the audits who were involved with inpatient care. All clinical decisions involving the patients had already been made prior to the commencement of data collection for each audit period.The first period formed the basis of our initial audit; these findings were presented to our department's medical staff in May 2006. This presentation also included background information regarding iron-deficiency anaemia, the rationale for use of the MCV as a surrogate screening marker, and outlined a set of draft guidelines for medical staff regarding adequate documentation and follow up of abnormal red cell indices. These guidelines covered the use of age-specific ranges for red cell indices to better identify abnormal results, the adequate documentation of abnormal results in the notes and discharge summaries, and provided suggestions for possible follow-up tests (e.g. addition of RDW and a blood film to previous FBC tests) and other interventions (e.g. dietary advice) for those patients with borderline and significantly abnormal results. 
The second period formed the basis of a re-audit to see if there was any improvement in detection and follow-up rates following our presentation. We considered all full blood count (FBC) results from the Paediatric Admission Unit and inpatient ward at our hospital from children aged one month to 16 years inclusive over the time periods specified. Requests from the Special Care Baby Unit, Outpatient Department, Emergency Department and GPs were excluded. The age-specific limits for Hb and MCV given by Nathan and Oski were used to classify results as abnormal. 'Explicable' results were those in patients with a known illness that had clearly resulted in ACD (anaemia of chronic disease) with microcytosis. 'Potentially explicable' results were those in patients who had a recent or ongoing acute severe illness of significant duration, or in patients who might have undiagnosed thalassaemia based on their ethnic origin. 'Explanation unknown' results were those that could not be explained by the patient's medical history and known risk factors for other disorders. We then determined from the notes and discharge summaries of the 'potentially explicable' and 'explanation unknown' groups whether the laboratory abnormalities (Hb and MCV) had been adequately documented as abnormal, and for those that had, whether follow-up and/or treatment was subsequently arranged for those patients. Acceptable follow-up included asking the patient's General Practitioner via the discharge summary to review them with regard to their low MCV and iron status, or arranging follow-up in paediatric outpatients for this specific purpose.
Acceptable treatment included dietician advice regarding iron intake, and/or the commencement of oral iron therapy. This notes review process and categorisation of results was performed exclusively by DNS for the first audit data, and exclusively by SK for the second audit data; both investigators used the same criteria mentioned above for result categorisation and identification of information from the notes. We used the z-test to test for equality of proportions between the two audit samples. Since this was an internal department audit which required no extra tests or interventions to be performed on human subjects, approval from an ethics committee was not required. We confirmed this fact with the Norfolk Research Ethics Committee and hospital Research Governance Committee, prior to publication. In the first audit period, 319 paediatric inpatients had at least one FBC test performed. Thirty-three (10%) had a low MCV result during their admission. The sex and age distribution for these results is summarised in Table . In terms of explicability, five patients (15%) had 'explicable' results; thirteen patients (39%) had 'potentially explicable' results; fifteen patients (46%) had 'explanation unknown' results. Excluding the patients with explicable low MCV results, seven patients (25%) had their low MCV results documented as abnormal in the notes and/or discharge summary; the remaining twenty-one (75%) had no documentation of their abnormal results and did not receive any subsequent treatment or follow-up. Of the seven patients whose results had been documented, four received iron deficiency treatment and/or follow-up; the other three received no treatment or follow-up. Overall, only four out of thirty-three patients with a low MCV (12%) received treatment and/or follow-up. In the second audit period, 382 paediatric inpatients had at least one FBC performed. Twenty-eight had a low MCV result during their admission.
The sex and age distribution for these results is summarised in Table . The notes of one patient with a low MCV result were unavailable for review. Of the remaining twenty-seven patients, four had 'explicable' results, seven had 'potentially explicable' results, and sixteen had 'explanation unknown' results. One patient each in the 'explicable' and 'potentially explicable' groups also had a low MCV result in the first audit period, and had results that were similarly categorised back then as well. Excluding the patients with explicable low MCV results, twenty-one patients had their low MCV results documented as abnormal in the notes and/or discharge summary; the remaining two (8.7%) had no documentation of their abnormal results and did not receive any subsequent treatment or follow-up. Of the twenty-one patients whose results had been documented, two received iron-deficiency treatment and/or follow-up; the other nineteen (90%) received no treatment or follow-up. Overall, only two out of twenty-seven patients with a low MCV received treatment and/or follow-up. The overall results from both audit periods are summarised in a flow chart (Figure ). A scatter plot of the MCV results is also presented. Microcytosis was not an uncommon finding amongst the paediatric inpatients who had blood tests performed; overall, 61 out of 701 inpatients (8.7%) that had FBCs performed across both audit periods had microcytosis. Thirty of these patients (49%) had MCV values that were 2 fL or more below the age-adjusted lower MCV limit. The proportion of patients identified as having microcytosis that was 'explicable' remained consistent across both audits at around 15%. Since these patients' results could be clearly explained, they would not require any additional follow-up or treatment, other than that already arranged for their known illnesses. The remaining 85% in each audit had microcytosis that was either 'potentially explicable' or 'explanation unknown'.
Both groups of patients might have had iron deficiency (particularly in the 'explanation unknown' group), but diagnosis would require further dietary history from the parents and blood tests from the patients once well in order to confirm or rule out this possibility. Further testing might also reveal other disorders associated with microcytosis, such as thalassaemia. However, this must be balanced against having to do a large number of potentially unnecessary and unpleasant blood tests in order to detect the relatively small proportion of children that have true iron deficiency.Our audit cycle demonstrated a significant and substantial improvement in documentation with regards to abnormal FBCs in those patients with 'potentially explicable' or 'explanation unknown' microcytosis, from 25% to 91% of abnormal results. This could be attributed to positive steps that were taken in our department following presentation of results from the first audit, consisting of education of other medical staff regarding the importance of adequate documentation of abnormal FBC results in the notes and discharge summaries.However, the results with regard to follow-up of patients with abnormal results were more disappointing. Despite a far higher number of patients having their results recorded as abnormal, only two patients (as compared with four from the first audit) had subsequent follow-up and/or treatment during the second audit period. Although the educational presentation had covered follow-up and treatment of low MCV results, this was clearly inadequate in isolation for empowering the medical staff to act upon the increased number of abnormal results that would now be documented, without accompanying printed guidelines disseminated to the ward and to all staff. 
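The documentation comparison reported above (7 of 28 non-explicable results documented in the first audit versus 21 of 23 in the second) can be checked with a standard two-proportion z-test. The following is a minimal illustrative sketch of that test, not the authors' code:

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for equality of two proportions (pooled standard error)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))     # pooled standard error
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Second audit: 21/23 documented; first audit: 7/28 documented.
z, p = two_proportion_z(21, 23, 7, 28)
print(round(z, 2), p < 0.001)  # z is about 4.7, p well below 0.001
```

With these counts the difference in documentation rates is highly significant, consistent with the audit's conclusion.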
Further work would therefore be necessary to ensure that the staff improved on following up and treating these patients where appropriate, and that a formalised, printed protocol or patient pathway was in place to aid this. One suggested protocol would be to take a dietary history and offer appropriate dietary advice to the parents of all patients with microcytosis where there is no immediate explanation, and to add an RDW and blood film analysis to any FBC samples that have been taken. Patients could then be stratified into low, moderate or high risk groups for iron deficiency. This would then be followed by GP follow-up with repeat FBC for the low risk patients; GP or paediatric outpatient follow-up, testing and treatment for the moderate risk patients; and in-hospital commencement of iron treatment with paediatric follow-up and further testing for the high risk patients. Further outpatient testing might include repeat FBC with a blood film, iron studies, and other tests as appropriate depending on the clinical context. Current practice in our haematology lab is to report paediatric results as abnormal when compared against adult ranges only. This might lead to potentially abnormal results being missed by clinicians who do not then make the effort to check the abnormal results against the age-corrected ranges, or who are not familiar with them. A change in haematology lab practice, whereby results from patients under the age of 16 were reported against standardised age-specific ranges based on their date of birth, would avoid this. For our audit, we were able to track all FBC results ordered from our ward for the relevant time periods through the computerised haematology results system. We are therefore confident that we included every paediatric inpatient in the age range specified with a low MCV result. Information was incomplete for only one patient from the second audit period whose notes were unobtainable.
Our results are applicable only to hospital paediatric inpatients, and are not representative of the incidence, treatment and follow-up of microcytosis and iron deficiency in the community setting. The age-specific MCV limits that were used were taken from an American textbook of paediatric haematology, since no age-specific limits have been formulated by our local haematology laboratory for our population. It would have been better to use local norms. Our audit relied upon the subjective classification of illnesses in order to categorise MCV results by explicability, and hence was subject to operator bias. However, this was minimised by only having two analysers, who both used the same set of criteria for categorising results. A study by Pusic et al also looked at this issue. Our work demonstrates that a substantial improvement in the documentation rate or awareness of microcytosis by clinicians can be achieved through staff education. Follow-up and treatment rates of these patients remained low in our audit, but there are ways in which this could be improved, primarily involving education of medical staff and the use of a protocol or pathway. It would be interesting to see if our experience with identification and follow-up of these patients is similar at other hospitals in different settings. The authors declare that they have no competing interests. DNS designed the audit study, carried out collection and analysis of the first audit data and analysis of the second audit data, and was responsible for drafting of the final manuscript text. SK carried out collection of the second audit data, drafted the manuscript abstract and designed the results flow chart. AB performed statistical analysis on the data, and incorporated the relevant results and explanations into the final text.
All authors read and approved the final manuscript. The pre-publication history for this paper can be accessed here:"} +{"text": "Objective: We sought to determine the effect of bacteria on fluorescence polarization (FPOL) testing of amniotic fluid. Methods: Fusobacterium necrophorum and Escherichia coli were inoculated at concentrations of 10^3 and 10^6/ml in amniotic-fluid specimens from 4 patients with no clinical or laboratory evidence of infection. The FPOL results were obtained at inoculation and again at 24 h of incubation. The results were compared using analysis of variance (ANOVA). Results: The FPOL results from inoculated specimens were all within 2% of the uninoculated controls. The specimens incubated with bacteria showed a <1\u201319% variation when compared with the time-zero uninoculated controls. However, uninoculated controls incubated for 24 h exhibited a 2\u201312% variation when compared with the time-zero controls, suggesting that the variation present was not secondary to the bacterial co-incubation. Conclusions: In vitro, neither bacterial inoculation nor prolonged co-incubation influences FPOL results beyond the effect of incubation alone. FPOL appears to be an appropriate test to assess fetal lung maturity in patients in whom intraamniotic infection is a concern."} +{"text": "One of the challenges facing the tuberculosis (TB) control programmes in resource-limited settings is lack of rapid techniques for detection of drug resistant TB, particularly multi drug resistant tuberculosis (MDR TB). Results obtained with the conventional indirect susceptibility testing methods come too late to influence a timely decision on patient management. The four direct tests evaluated were two in-house phenotypic assays \u2013 the Nitrate Reductase Assay (NRA) and the Microscopic Observation Drug Susceptibility (MODS) assay \u2013 and two commercially available tests \u2013 Genotype\u00ae MTBDR and Genotype\u00ae MTBDRplus.
More rapid tests directly applied on sputum samples are needed. This study compared the sensitivity, specificity and time to results of four direct drug susceptibility testing tests with the conventional indirect testing for detection of resistance to rifampicin and isoniazid in M. tuberculosis. A literature review and meta-analysis of study reports was performed. The Meta-Disc software was used to analyse the reports and tests for sensitivity, specificity, and area under the summary receiver operating characteristic (sROC) curves. Heterogeneity in accuracy estimates was tested with the Spearman correlation coefficient and Chi-square. Eighteen direct DST reports were analysed: NRA \u2013 4, MODS \u2013 6, Genotype\u00ae MTBDR \u2013 3 and Genotype\u00ae MTBDRplus \u2013 5. The pooled sensitivity and specificity for detection of resistance to rifampicin were 99% and 100% with NRA, 96% and 96% with MODS, 99% and 98% with Genotype\u00ae MTBDR, and 99% and 99% with the new Genotype\u00ae MTBDRplus, respectively. For isoniazid they were 94% and 100% for NRA, 92% and 96% for MODS, 71% and 100% for Genotype\u00ae MTBDR, and 96% and 100% with the Genotype\u00ae MTBDRplus, respectively. The area under the summary receiver operating characteristic (sROC) curves was in the range 0.98 to 1.00 for all four tests. Molecular tests were completed in 1\u20132 days, and the phenotypic assays were also much more rapid than conventional testing. Direct testing of rifampicin and isoniazid resistance in M. tuberculosis was found to be highly sensitive and specific, and allows prompt detection of MDR TB. Tuberculosis (TB) continues to be a leading cause of morbidity and mortality in developing countries. MDR TB requires 18\u201324 months of treatment with expensive second line drugs, some of which are injectable agents. The cure rate is much lower than for drug susceptible TB, only around 60%. Therefore, prompt detection of MDR TB is essential. Conventional methods for detection of MDR TB involve primary culture of specimens and isolation of Mycobacterium tuberculosis (MTB), followed by drug susceptibility testing (DST). This process, referred to as indirect susceptibility testing, has a long turn around time (TAT) of around 2 months. The TAT is longest in the TB high burden low-income countries where primary isolation and indirect DST are almost exclusively performed on solid medium. Use of liquid systems such as the BACTEC MGIT 960 system has improved TAT to about 25\u201345 days, but liquid culture systems are in most cases not available where the need is greatest. Even though liquid-based indirect susceptibility tests have improved the TAT, they are still not rapid enough to allow timely decisions on patient management in case of MDR TB. More rapid TB susceptibility tests are needed, particularly in TB high burden countries. Recently, the focus has shifted to rapid direct tests in which decontaminated respiratory samples are directly inoculated in drug-free and drug-containing medium, or amplified, for detection of MDR TB. Some of the direct tests being studied with prospects for applicability in developing countries include the Nitrate Reductase Assay (NRA); the Microscopic Observation Drug Susceptibility (MODS) assay; and, more recently, molecular assays such as the Genotype\u00ae MTBDR and its newer version \u2013 the Genotype\u00ae MTBDRplus. The NRA test, initially introduced as an indirect assay, is performed on solid medium as for the proportion method, though liquid-based assays have recently been studied. M. tuberculosis organisms possess the nitro-reductase enzyme and will reduce nitrate to nitrite, which is then detected as a pink-purple colour when a detection reagent (Griess reagent) is added to the tube. The MODS assay is a low-technology liquid culture system performed in OADC-supplemented 7H9 broth on an ordinary tissue culture plate. As M. tuberculosis grows in the broth, characteristic cord-like structures can be seen under an inverted microscope, permitting early detection of resistance. The GenoType\u00ae MTBDR assay is a molecular test that detects the common mutations in the rpoB and katG genes responsible for resistance to rifampicin and isoniazid, respectively. The newer Genotype\u00ae MTBDRplus assay detects additional mutations in the rpoB gene and also in the inhA gene promoter region, giving a higher sensitivity in resistance detection. Published studies have evaluated the performance of direct testing with the above mentioned tests. However, the data is spread across many different journals, which makes it difficult to fully understand the performance of direct testing, thereby delaying decisions on adoption of this approach for prompt detection of MDR TB. In this study, available data from individual study reports on direct testing with the NRA, MODS, Genotype\u00ae MTBDR and Genotype\u00ae MTBDRplus was pooled and analysed for sensitivity, specificity and time to results of direct testing against conventional indirect susceptibility testing in detection of MDR TB. The results of this meta-analysis are intended to guide TB control programmes in TB high burden countries to select, for further operational study, highly sensitive and specific rapid tests to identify MDR TB. A literature review and meta-analysis was conducted. Original articles published in English up to end of January 2009 were searched with PubMed and Google. Each of the four tests was searched by its name, and the name combined with the words 'tuberculosis drug resistance testing', 'rifampicin resistant tuberculosis', 'isoniazid resistant tuberculosis', 'multi drug resistant tuberculosis testing'. New links displayed beside the abstracts were followed and retrieved.
Finally, the bibliographies of each article were carefully reviewed and relevant articles also retrieved. A search in other databases did not reveal any additional articles previously missed in the PubMed or Google searches. Only study reports that had evaluated direct DST for detection of resistance to RIF and/or INH in M. tuberculosis were included. At least 3 independent direct DST reports were required to qualify a test for the pooled data analysis. Additionally, the study report must have had extractable data to fill the 4 cells of a 2\u00a0\u00d7\u00a02 table for diagnostic tests. Lastly, studies were included if the reference standard test in the report was an indirect assay, i.e. proportion method (PM) on Lowenstein-Jensen (L-J) or 7H10 agar, BACTEC 460, BACTEC MGIT 960 or a MIC (minimum inhibitory concentration) test. One genotypic study used DNA sequencing as the reference test but was also included. Indirect DST assay reports, or study reports that used the test for reasons other than DST, were excluded from further analysis, as were study reports without extractable data for a 2\u00a0\u00d7\u00a02 table. In a meta-analysis of diagnostic accuracy studies, factors such as study design, patient selection criteria, reference standard and blinding may be related to overly optimistic estimates of diagnostic accuracy. We applied the QUADAS (Quality Assessment of Diagnostic Accuracy Studies) tool to assess these aspects of study quality. Data from study reports was extracted twice. Data items included author(s); year of publication; reference standard test; country where the study was conducted; sample size; specimen type; values of true resistance (TR), false resistance (FR), false susceptible (FS) and true susceptible (TS); and the QUADAS items. The time to results (TTR) in days from setting the test to obtaining results for 100% of the samples in each study report was recorded.
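The pooled accuracy estimates reported later are derived from per-study 2 \u00d7 2 counts like those extracted here (TR, FR, FS, TS). A minimal count-based pooling sketch follows; the study counts are hypothetical and this is not the Meta-Disc implementation, which additionally computes confidence intervals and sROC curves:

```python
# Illustrative pooling of per-study 2x2 counts into overall sensitivity and
# specificity by summing counts across studies (a simple fixed-effect sketch).

def pooled_accuracy(studies):
    """studies: list of (TR, FR, FS, TS) tuples.
    TR/FS: resistant isolates called resistant/susceptible by the index test;
    TS/FR: susceptible isolates called susceptible/resistant."""
    TR = sum(s[0] for s in studies)
    FR = sum(s[1] for s in studies)
    FS = sum(s[2] for s in studies)
    TS = sum(s[3] for s in studies)
    sensitivity = TR / (TR + FS)   # resistant isolates correctly identified
    specificity = TS / (TS + FR)   # susceptible isolates correctly identified
    return sensitivity, specificity

# hypothetical counts from three studies
studies = [(40, 1, 2, 120), (25, 0, 1, 80), (60, 2, 1, 150)]
sens, spec = pooled_accuracy(studies)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```

Summing counts weights each study by its sample size, which is one simple way to obtain a fixed-effect pooled estimate.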
The average time for each test type was then calculated. Sensitivity, specificity, forest plots and summary receiver operating characteristic (sROC) curves were analysed with the Meta-Disc software, based on the fixed effect model. Threshold/cut off effect as a possible cause of variations in sensitivity and specificity among the reviewed reports was tested with the Spearman correlation coefficient between the logit of sensitivity and the logit of 1-specificity. Variation due to factors other than threshold/cut off effect was tested by visual inspection of the forest plots for (i) the degree of deviation of the sensitivity and specificity of each study from the vertical line corresponding with the pooled estimates, (ii) Chi-square p-values and (iii) the inconsistency index. The average time to results was computed in MS Office Excel 2007. Sixty-four reports were initially reviewed. Nineteen of these had studied direct DST for detection of resistance to rifampicin and/or isoniazid in M. tuberculosis. Eighteen of the 19 reports fulfilled the inclusion criteria for the meta-analysis. The study reports reviewed and meta-analysed or excluded, plus reasons for the exclusion, are shown in Table . Thirteen (72%) of the 18 study reports had reported the spectrum of patients or samples to be representative of those to benefit from routine use of the test (QUADAS item 1). Eight (44%) of the 18 study reports clearly described the patient or sample selection criteria (QUADAS item 2). Quality items 3 to 9, which relate to internal validity of the assay results, were reported in 67\u2013100% of the studies. Lastly, seven of the 18 studies reported on blinding to the results of the reference test (item 10), while five reported on blinding to the new tests' results (item 11). Un-interpretable results were reported in 13 (78%) of the 18 studies (item 13). The pooled sensitivity and specificity for detection of resistance to rifampicin was 99% and 100% with NRA, 96% and 96% with MODS, 99% and 98% with Genotype\u00ae MTBDR, and 99% and 99% with the new Genotype\u00ae MTBDRplus, respectively. See forest plots, figures . The pooled sensitivity and specificity for detection of resistance to isoniazid was 94% and 100% for NRA, 92% and 96% for MODS, 71% and 100% for Genotype\u00ae MTBDR, and 96% and 100% with the Genotype\u00ae MTBDRplus, respectively. See forest plots, figures . The sROC curves are shown in figures . The results of the heterogeneity analysis are shown in Table ; Chi-square p-values and the inconsistency index (I-squared) are shown in the forest plots for each test, figures . The average time to 100% of the results was 23 days (range 18\u201328 days) for the NRA and 21 days (range 15\u201329) for MODS. One of the Genotype\u00ae MTBDRplus studies reported TTR (2 days). This study aimed at assessing the sensitivity, specificity, and time to results of the NRA, MODS, Genotype\u00ae MTBDR and Genotype\u00ae MTBDRplus tests for direct detection of resistance to rifampicin and isoniazid compared with conventional indirect DST. The results are intended to guide TB control programmes in RLS to select, for further operational study, highly sensitive and specific tests with shorter time to results for detection of MDR TB. Direct NRA performed with excellent pooled sensitivity and specificity for both rifampicin and isoniazid (94%\u2013100%). These findings indicate improved performance of the test when compared to results in a review by Martin A et al, where sensitivity and specificity of direct NRA studies was 88%\u2013100%. The MODS assay also performed well, with pooled sensitivity and specificity of 92\u201396%. Both the Genotype\u00ae MTBDR and Genotype\u00ae MTBDRplus showed excellent pooled sensitivity and specificity for detection of resistance to rifampicin (96\u2013100%). For isoniazid, sensitivity of the Genotype\u00ae MTBDRplus was high but it was low with the Genotype\u00ae MTBDR test. The pooled specificity was excellent for both assays, i.e. 100%.
The Genotype\u00ae MTBDR test was designed to detect the most common mutations for INH resistance in the katG gene, and these account for 50\u201380% of INH resistance in M. tuberculosis. The newer Genotype\u00ae MTBDRplus detects additional mutations in the katG gene and also in the inhA promoter region for isoniazid resistance. In one earlier study, results of the Genotype\u00ae MTBDR and the Genotype\u00ae MTBDRplus tests were combined, and this could explain the higher sensitivity for detection of resistance to isoniazid in their study. In our study, the pooled sensitivity for isoniazid resistance detection of 96% with the Genotype\u00ae MTBDRplus alone means that this test performs excellently as a direct assay for INH as well. This is an advantage over the old Genotype\u00ae MTBDR test, and the related test \u2013 the Line Probe Assay \u2013 which detects mutations in only the rpoB gene for rifampicin but not isoniazid resistance. Variations caused by threshold/cut off effect are detected by a Spearman correlation coefficient between the logit of sensitivity and the logit of 1-specificity with a significant p-value. The Q* index, i.e. the point at which the sROC crosses the diagonal line from the left upper coordinate to the right bottom coordinate, was excellent for each of the four tests. This implies that heterogeneity in sensitivity and specificity due to chance, study design/population, and the way a study was conducted could have caused the variations in the latter tests. Hence, results of the Genotype\u00ae MTBDR assays should be primarily judged based on their areas under the sROC, while the other tests can be reliably judged based on their pooled values. We presented data for TTR for 100% of the DST results to permit comparison of rapidity between the different tests. For the MODS and NRA tests, the average TTR was within 23 days compared with the 2 months required for conventional indirect testing.
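The threshold-effect check described above can be sketched as follows: compute the Spearman rank correlation between logit(sensitivity) and logit(1 − specificity) across studies, where a strong positive correlation suggests a threshold/cut-off effect. The per-study accuracy pairs below are hypothetical, tied values are not handled, and this is an illustration rather than the Meta-Disc implementation:

```python
from math import log

def logit(p):
    return log(p / (1 - p))

def spearman(xs, ys):
    """Spearman rank correlation (assumes no tied values)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order, 1):
            r[i] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# hypothetical per-study (sensitivity, specificity) pairs
pairs = [(0.99, 0.98), (0.96, 0.99), (0.92, 0.995), (0.94, 0.985)]
x = [logit(se) for se, sp in pairs]
y = [logit(1 - sp) for se, sp in pairs]
print(round(spearman(x, y), 2))  # positive correlation hints at a threshold effect
```

Because the logit is monotone, ranking the logits is equivalent to ranking the raw proportions; the transformation matters for the regression-based sROC fit rather than for the rank correlation itself.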
Moreover, most results were ready in 7\u201314 days for MODS (data not shown). Contamination and indeterminate results in phenotypic methods may prolong the time to the final result, but this was difficult to quantify in this study. For the genotypic assays, the only study that indicated TTR reported 2 days, but the protocol of these genotypic assays allows DST results within 1\u20132 days. Even if the potential for contamination, which can produce un-interpretable or indeterminate results in phenotypic direct tests, is taken into account, the time periods shown in this study were for 100% of the results. Additionally, traditional reservations about direct versus indirect testing pertain in part to the inability to control the inoculum of a direct test. For rifampicin and isoniazid \u2013 the two important drugs defining MDR TB \u2013 it appears that inoculum size in direct assays is not as critical as previously believed. Moreover, if MDR TB is identified, then further DST, including second line drugs, can be undertaken. It should be understood that the markedly shortened time to results with rapid direct testing is meaningless in settings where lengthy turn around time (TAT) is due to delays in sample delivery to the laboratory or delays in reporting the laboratory results. With that aside, it appears that the choice of which direct test to customize in a given setting will likely depend on some other operational issues, such as technical ease, cost and bio safety, which are briefly discussed below. The NRA and MODS are technically simple to perform and do not require sophisticated equipment when compared with the conventional proportion method on Lowenstein-Jensen (L-J) medium. The relative complexity of the PCR-based genotypic tests compared with the NRA and MODS may be a limitation to their use in resource-limited settings (RLS). Genotypic assays require well trained manpower, though this is not as critical as previously believed.
Due to insufficient data, the planned cost analysis was not performed in this study. One report indicated MODS to cost $3 per sample, while another report from S. Africa suggested that direct Genotype\u00ae MTBDRplus testing would be 50% cheaper than conventional testing. Conventional indirect testing requires sophisticated biosafety level 3 laboratories with negative-pressure air flow to safely manipulate grown cultures at the time of the DST. Conversely, direct DST is less demanding and the biosafety risk is similar to that for workers doing microscopy. The quality of the analysed reports was good in some but not all aspects according to QUADAS analysis. Combining high test performance and the operational issues discussed above, direct NRA and MODS assays appear to be competing tests for TB laboratories at safety level 2 in RLS. However, it is possible that in most of such settings, laboratories are familiar with the L-J solid medium-based assays, where the NRA would require only a minor adjustment to be implemented in the routines. Other tests that have recently appeared in the literature and have been proposed for TB high-burden RLS include the Alamar blue, MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide), and resazurin assays. Direct testing with the NRA, MODS and Genotype\u00ae MTBDRplus for MDR TB is highly sensitive and specific, and significantly more rapid than conventional indirect susceptibility testing. The choice of which test to adopt will likely depend on technical ease and cost-effectiveness studies in the local settings, but the NRA and MODS appear to be promising tests for RLS. Few direct DST reports were available for our analysis. This could be a limitation to generalization of the findings in this study. Second, since not all the reviewed studies fulfilled the study quality items in the QUADAS tool, the results in some of the analysed reports could have affected the pooled estimates shown in this study.
However, some authors simply don't report according to standards for reporting diagnostic accuracy studies (STARD) even when the studies themselves were performed well.The authors declare that they have no competing interests.All the authors planned and designed the study. FB:Retrieved and reviewed the study reports, summarized and analysed the data, and prepared the manuscript. MH:Retrieved some of the study reports and critically revised the manuscript versions. SH:Critically revised the manuscript versions. MJ:Critically revised the manuscript versions.The pre-publication history for this paper can be accessed here:http://www.biomedcentral.com/1471-2334/9/67/prepub"} +{"text": "Visual disability in India is categorized based on severity. Sometimes the disabled person does not fit unambiguously into any of the categories.To identify and quantify disability that does not fit in the current classification, and propose a new classification that includes all levels of vision.Retrospective chart review of visual disability awarded in a teaching hospital.The last hundred records of patients who had been classified as visually disabled were screened for vision in both eyes and percentage disability awarded. Data were handled in accordance with the Helsinki Declaration.Twenty-one patients had been classified as having 30% disability, seven each had 40% and 75%, and 65 had 100% disability. Eleven of them did not fall into any of the current categories, forcing the disability board to use its own judgment. There was a tendency to over-grade the disability . The classification proposed by us is based on the national program for control of blindness' definition of normal vision (20/20 to 20/60), low vision (<20/60 to 20/200), economic blindness (<20/200 to 20/400) and social blindness (<20/400). 
It ranges from the mildest disability up to the most severe grade. The current classification of visual disabilities does not include all combinations of vision; some disabled patients cannot be categorized. The classification proposed by us is comprehensive, progresses logically, and follows the definitions of the national program. Visual impairment disability in India is categorized based on its severity. Percentages are accorded as proposed by a subcommittee constituted by the Ministry of Social Justice and Empowerment in 1999. The categories of visual disability are notified in the Gazette of India, extraordinary, 2001 and are followed all over the country. This means that a person with 25% neurological and 20% visual disability would have a combined disability of 40%, thus entitling him to benefits and concessions. The new classification suggested by us includes all levels of vision. The proposed classification is comparable with the classification of visual impairment recommended by the World Health Organization (WHO). The proposed classification is comparable with the one currently in use in that vision reduction up to and including 20/60, in both eyes, is considered normal. The current classification awards 20, 30 or 40% disability to persons with normal vision in one eye but fails to categorize many visual combinations. Most combinations for persons with bilateral low vision are missing from the current classification. The rest are awarded 40% disability. The proposed classification misses none, awarding 40% disability to all. Persons with low vision in the better eye and economic blindness in the worse eye are awarded 40% disability in the current classification; some visual combinations are missing. The proposed classification misses none, and awards them 50% disability.
The difference of 10% in disability status should not make much difference since both will be eligible for concessions or benefits. For persons with low vision in the better eye and social blindness in the worse eye, the current classification awards 40% or 75% disability. In our opinion, 40% disability is unfair since these persons are definitely more disadvantaged than those described in the preceding paragraph (low vision in one eye and economic blindness in the other). On the other hand, 75% disability may be too generous; they are in a better position than persons with economic blindness in both eyes. Thus, the proposed classification awards them 60% disability. Persons with economic blindness in both eyes are awarded 75% disability using the current, and 70% using the proposed, classification. Many visual combinations are missing in the current classification but not in the proposed one. The current classification awards 75 or 100% disability to persons with economic blindness in the better eye and social blindness in the worse eye, but misses some visual combinations. The proposed classification misses none and awards 80% disability to them. The current classification awards 100% disability to persons who have social blindness in both eyes, but misses many visual combinations. We suggest such persons be awarded 90% disability except when they have no perception of light in both eyes (suggesting an incurable condition), when they can be awarded 100% disability. It was considered, during formulation of this classification, that visual disability of <40% could be abolished altogether since no benefits or concessions accrue at that level. However, if multiple disabilities are present, even 20% visual disability may allow the person to benefit from educational and job schemes. Thus, lower degrees of visual disability must continue to have a place in the disability classification. In the proposed classification, the difference between grades is 10%.
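The combining rule implied earlier (25% neurological plus 20% visual giving 40% combined) applies each additional percentage only to the capacity left over by the disabilities already counted. A minimal sketch, with a function name of our own choosing:

```python
def combined_disability(percentages):
    # Starting from the largest value, each further disability percentage
    # applies only to the still-unaffected capacity:
    #   combined = a + b * (100 - a) / 100
    total = 0.0
    for p in sorted(percentages, reverse=True):
        total += p * (100.0 - total) / 100.0
    return total

print(combined_disability([25, 20]))  # 40.0, matching the example in the text
```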
The spectrum varies from the mildest disability up to the most severe grade .The proposed classification has several strengths to recommend it. It follows the NPCB definitions of low vision and blindness, thus being in uniformity with the national program. It is in tandem with the WHO classification of visual disability, thus giving it international comparability. It is more or less comparable with the current classification in India; the field defects commensurate with low vision and blindness are also the same; BCVA is the criterion in both classifications. It includes every possible combination of vision in the two eyes. In addition, it provides a wider range of disability. This may be of use to a person having multiple disabilities. The proposed categories follow a natural progression making them logical and easy to remember."} +{"text": "Data from scientific literature show that about 63% of abstracts presented at biomedical conferences will be published in full. Some studies have indicated that full publication is associated with the direction of results (publication bias). No study has looked into the occurrence of publication bias in the field of addiction.To investigate whether the significance or direction of results of abstracts presented at the major international scientific conference on addiction is associated with full publicationThe conference proceedings of the US Annual Meeting of the College on Problems of Drug Dependence (CPDD), were handsearched for abstracts of randomized controlled trials and controlled clinical trials that evaluated interventions for prevention, rehabilitation and treatment of drug addiction in humans (years searched 1993\u20132002). Data regarding the study designs and outcomes reported were extracted. 
Subsequent publication in peer reviewed journals was searched for in the MEDLINE and EMBASE databases, as of March 2006. Out of 5919 abstracts presented, 581 met the inclusion criteria; 359 (62%) conference abstracts had been published in a broad variety of peer reviewed journals. The proportion of published studies was almost the same for randomized controlled trials (62.4%) and controlled clinical trials (59.5%), while studies that reported positive results were significantly more likely to be published (74.5%) than those that did not report statistical results (60.9%), negative or null results (47.1%) and no results (38.6%). Abstracts reporting positive results had a significantly higher probability of being published in full, while abstracts reporting null or negative results were half as likely to be published compared with positive ones. Clinical trials were the minority of abstracts presented at the CPDD; we found evidence of possible publication bias in the field of addiction, with negative or null results having half the likelihood of being published than positive ones. Dissemination of new scientific research is a critical issue for researchers. Improving dissemination helps to accelerate research, to enrich education, and to enhance return on taxpayer investment in research. Usually international meetings are the first step in this process of dissemination, when preliminary results are presented, eventually followed by full publication of the study in a peer-reviewed journal.
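As a quick illustration of the gap these proportions imply, one can compare publication rates relative to the positive-results group (a simple ratio of proportions, our own calculation; the paper's "half as likely" figure comes from its own time-to-event modelling):

```python
# Proportions published in full, by reported result type (from the text above)
published = {
    "positive": 0.745,
    "not reported": 0.609,
    "negative or null": 0.471,
    "no results": 0.386,
}

def relative_likelihood(group, reference="positive"):
    # ratio of publication proportions versus the reference group
    return published[group] / published[reference]

rr = relative_likelihood("negative or null")
print(round(rr, 2))  # 0.63: negative/null abstracts were published far less often
```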
How and when the research is published can be as significant as the research results themselves, since the influence of a research article may only be as potent as its ability to attract an audience of readers. A systematic review of reports on this topic, and another systematic review on the impact of including grey literature in meta-analyses of healthcare interventions, have examined this issue. We set about to investigate the occurrence of publication bias in the field of addiction, focusing on the direction of the results and their association with full publication of abstracts presented at the Annual Meeting of the College on Problems of Drug Dependence (CPDD), one of the most important international scientific conferences on addiction. We considered the abstracts presented at the Annual Meeting of the College on Problems of Drug Dependence (CPDD), US, between 1993 and 2002 (N = 5919), which were published as supplements in the journal Drug and Alcohol Dependence. Abstracts from other international conferences were not available. We handsearched and screened the titles and the abstracts for RCTs and CCTs evaluating interventions for prevention, rehabilitation and treatment of drug addiction in humans. Trials were classified as RCTs when the randomization was explicitly defined and as CCTs when it was not. We excluded abstracts reporting analyses on healthy people, pharmacokinetic and toxicity studies, and preliminary findings reporting the characteristics of the participants without mention of the allocation groups. Data were selected by one investigator (SV). For each selected abstract we collected information on the year of the meeting, study design, country, substance of abuse and results.
We classified the results of each trial according to the primary outcome as:\u25aa Positive results: statistically significant results (p < 0.05) in the experimental arm.\u25aa Negative or null results: statistically significant results (p < 0.05) in the control arm, or not statistically significant (p > 0.05) results.\u25aa Not reported statistical results: abstracts that did not report statistical significance.\u25aa No results: abstracts that did not provide any results. Full-length articles published in peer reviewed journals were searched for in the MEDLINE and EMBASE electronic databases from 1992 to March 2006, with no language restrictions. The first search criterion was the combination of the first author's name and/or keywords in the title or abstract. When this search strategy did not identify any publications, we added the subsequent authors' names to the search. We considered the abstract as published if a) at least one author of the abstract was an author of the full publication and b) the primary outcome from the abstract was an outcome in the full manuscript. When a publication was confirmed, we recorded the journal, month and year of publication. For journals published in the spring or fall we assigned the months March or October, respectively. If the abstract was published more than once, we used the earliest publication in the analysis. Abstracts published in full before the presentation at the Conference were excluded. To evaluate the association between the study findings and the time interval between submission and publication, we performed a Kaplan-Meier analysis and estimated hazard ratios of publication. We also examined the association between characteristics of the study (i.e. substance of abuse) and full publication.
Person-time at risk for full publication was computed as the time from presentation of the abstract at the Conference until the time of full publication or the end of follow-up (March 2006). Out of a total of 5919 abstracts submitted, 581 met the inclusion criteria; 359 (62%) were subsequently published in peer reviewed journals, of which 284 were reports of RCTs. Of the abstracts published in full, the most common substances of abuse considered were opioids, cocaine, not-specified, poly-abuse and cannabis. Other substances of abuse were considered in less than 10% of the studies. The abstracts were published in 57 journals: 21% in Drug and Alcohol Dependence, 7.6% in the Journal of Consulting and Clinical Psychology and in Psychopharmacology, 6.3% in the Journal of Substance Abuse Treatment and Experimental & Clinical Psychopharmacology, and the remaining 51.2% in a variety of other journals. The median time lapse until publication was 3.8 years. The time from presentation at the conference to publication was shorter for trials with positive results than for trials with negative or null results. A checklist of essential items has been developed that authors should consult when reporting the main results of an RCT in any journal or conference abstract. Some limitations of our study should be taken into account; we included only abstracts presented at one international conference because it is considered the most important international meeting in this field, and because there are no abstracts electronically and systematically available from other conferences. This may lead to an overestimation of the likelihood of publication, particularly for smaller studies or studies with null or negative results that may be presented at less prestigious or local conferences. There is in fact a high probability that even fewer abstracts from these meetings end up published in full.
It is also expected that the time lapse until publication is even longer for these abstracts. We may have touched only the tip of the iceberg of published or unpublished abstracts. Moreover, we only searched two electronic databases (MEDLINE and EMBASE), and did not contact the authors of the studies for which we were unable to find the full publication. However, the two main electronic databases we searched provide 92% of the studies included in Cochrane reviews published in the area of drug and alcohol addiction. In conclusion, our study confirms that abstracts that report null or negative results presented at conferences in the field of addiction are significantly less likely to be published than abstracts reporting positive results. If we consider that an additional bias could occur in the acceptance phase of abstract submission to conferences, possible publication bias should always be considered when conducting systematic reviews of the effectiveness of interventions for drug dependence in order to avoid biased results. Making the registries of ongoing trials accessible may be one way to reduce publication bias. Researchers initiating randomized controlled trials should register trials to ensure the availability of trial results, independently of their full publication.
Recently, the World Health Organization (WHO) launched an International Clinical Trials Registry Platform (ICTRP) with the mission of ensuring that a complete view of research is accessible to all those involved in health care decision making.Eventually, given that conference proceedings are relevant for providing updated knowledge to be incorporated into systematic reviews of the effectiveness of health care interventions, more efforts should be put into ensuring that they fairly represent all research, regardless of the direction of their results.CPDD: College on Problems of Drug Dependence; RCT: Randomised Controlled Trial; CCT: Controlled Clinical Trial; CI: Confidence Interval; SD: Standard Deviation; HR: Hazard Ratio; OR: Odds Ratio.The authors declare that they have no competing interests.MD e SV conceptualized the design of the study; SV extracted the data, wrote and updated the manuscript. VB analysed the data, LA, MD and CAP provided critiques and suggestions. All authors read and approved the final manuscript.The pre-publication history for this paper can be accessed here:http://www.biomedcentral.com/1471-2288/9/23/prepub"} +{"text": "The growing competition and \u201cpublish or perish\u201d culture in academia might conflict with the objectivity and integrity of research, because it forces scientists to produce \u201cpublishable\u201d results at all costs. Papers are less likely to be published and to be cited if they report \u201cnegative\u201d results (results that fail to support the tested hypothesis). Therefore, if publication pressures increase scientific bias, the frequency of \u201cpositive\u201d results in the literature should be higher in the more competitive and \u201cproductive\u201d academic environments. This study verified this hypothesis by measuring the frequency of positive results in a large random sample of papers with a corresponding author based in the US. 
Across all disciplines, papers were more likely to support a tested hypothesis if their corresponding authors were working in states that, according to NSF data, produced more academic papers per capita. The size of this effect increased when controlling for states' per capita R&D expenditure and for study characteristics that previous research showed to correlate with the frequency of positive results, including discipline and methodology. Although the confounding effect of institutions' prestige could not be excluded (researchers in the more productive universities could be the most clever and successful in their experiments), these results support the hypothesis that competitive academic environments increase not only scientists' productivity but also their bias. The same phenomenon might be observed in other countries where academic competition and pressures to publish are high. The objectivity and integrity of contemporary science face many threats. A cause of particular concern is the growing competition for research funding and academic positions, which, combined with an increasing use of bibliometric parameters to evaluate careers, pressures scientists into continuously producing \u201cpublishable\u201d results. Competition is encouraged in scientifically advanced countries because it increases the efficiency and productivity of researchers. Words like \u201cpositive\u201d, \u201csignificant\u201d, \u201cnegative\u201d or \u201cnull\u201d are common scientific jargon, but are obviously misleading, because all results are equally relevant to science, as long as they have been produced by sound logic and methods. Many factors contribute to this publication bias against negative results, which is rooted in the psychology and sociology of science. Like all human beings, scientists are confirmation-biased (i.e.
tend to select information that supports their hypotheses about the world). Confronted with a \u201cnegative\u201d result, therefore, a scientist might be tempted to either not spend time publishing it or to turn it somehow into a positive result. This can be done by re-formulating the hypothesis after the results are known or by selectively reporting analyses; each paper in the sample was classified as providing a \u201cpositive\u201d or a \u201cnegative\u201d support for the tested hypothesis. Using data compiled by the National Science Foundation, the proportion of \u201cpositive\u201d results was then regressed against a sheer measure of academic productivity: the number of articles published per capita (i.e. per doctorate holder in academia) in each US state, controlling for the effects of per-capita research expenditure. NSF data provides an accurate proxy of a state's academic productivity, because it controls for multiple authorship by counting papers fractionally. The probability for a paper to report a positive result depends significantly on its methodology, on whether it tests one or more hypotheses, on the discipline it belongs to and particularly on whether the discipline is pure or applied, so these characteristics were controlled for in the analyses. A total of 1316 papers were included in the analysis. All US states and the federal district were represented in the sample, except Delaware. The number of papers per state varied between 1 and 150 (mean: 26.32\u00b14.16SE), and the percentage of positive results between 25% and 100%. Papers were significantly more likely to report a positive result in more productive states (OR\u200a=\u200a3.988, 95%CI: 1.047\u201315.193, R2\u200a=\u200a0.051). Similar results were obtained when controlling for the effect of discipline instead of methodology (R2\u200a=\u200a0.065).
Adding an interaction term of discipline by academic productivity did not improve the model significantly overall, although contrasting each discipline's interaction term with that of Space Science showed significantly positive interaction effects for Neuroscience & Behaviour and Pharmacology and Toxicology. The effect of per capita academic productivity remained highly significant when controlling for expenditure and for characteristics of the study: broad methodological category, papers testing one vs. multiple hypotheses, and pure vs. applied discipline. Alternative state-level measures were also examined: number of doctorate holders, total number of papers and total R&D expenditure. Controlling for any of these parameters did not alter the results of the regression in any meaningful way. The analyses were run using 2003 data from the Science and Engineering Indicators 2006 report. In a random sample of 1316 papers that declared to have \u201ctested a hypothesis\u201d in all disciplines, outcomes could be significantly predicted by knowing the addresses of the corresponding authors: those based in US states where researchers publish more papers per capita were significantly more likely to report positive results, independently of their discipline, methodology and research expenditure. The probability for a study to yield a support for the tested hypothesis depends on several research-specific factors, primarily on whether the hypothesis tested is actually true and how much statistical power is available to reject the null hypothesis. All main sources of sampling and methodological bias in this study were controlled for.
The number of papers from each state in the sample was almost perfectly correlated with the actual number of papers that each state produced in any given year, which confirms that the sampling of papers was completely randomised with respect to address (as well as any other study characteristic, including the particular hypothesis tested and the methods employed), and therefore that the sample was highly representative of the US research panorama. The total number of papers, total R&D and total number of doctorate holders were completely uncorrelated to the proportion of positive results, ruling out the possibility that different frequencies of positive results between states are due to sampling effects. Although the analyses were all conducted by one author, expectancy biases can be excluded, because the classification of papers into positive and negative was completely blind to the corresponding address in the paper, and the US states' data were obtained from an independent source (NSF). We can also exclude that the association between productivity and positive results was an artifact of the effects of methodologies and disciplines of papers, first because no such trend was observed, and second because the variability in frequency of positive results between states is too high to be reasonably explained by the quality factor alone. At one extreme, states yielded as few as 1 in 4 papers that supported the tested hypothesis; at the other extreme, numerous states reported between 95% and 100% positive results, including academically productive ones like Michigan (N\u200a=\u200a54 papers in this sample), Ohio (N\u200a=\u200a47), District of Columbia (N\u200a=\u200a18) and Nebraska (N\u200a=\u200a13). In absence of bias of any kind, this would mean that corresponding authors in these states almost never failed to find a support for the hypotheses they tested.
But negative results are virtually inevitable, unless all the hypotheses tested were true, experiments were designed and conducted perfectly, and the statistical power available were always 100% \u2013 which it rarely is, and is usually much lower. As a matter of fact, the prestige of institutions could be expected to have the opposite influence on published results, in analogy with what has been observed by comparing countries. In the biomedical literature, the statistical significance of results tends to be lower in papers from high-income countries, which suggests that journal editors tend to reject papers from low-income countries unless they have particularly \u201cgood\u201d results. A possibility that needs to be considered in all regression analyses is whether the cause-effect relationship could be reversed: could some states be more productive precisely because their researchers tend to do many cheap and non-explorative studies? This appears unlikely, because it would contradict the observation that the most productive institutions are also the more prestigious, and therefore the ones where the most important research tends to be done. What happened to the missing negative results? As explained in the Introduction, presumably they either went completely unpublished or were somehow turned into positive results through selective reporting, post-hoc re-interpretation, and alteration of methods, analyses and data. The relative frequency of these behaviours remains to be established, but the simple non-publication of results is unlikely to be the only explanation.
If it were, then we should have to assume that authors in the more productive states are even more productive than they appear, but wastefully do not publish many of the negative results they get. Since positive results in this study are estimated using what is declared in the papers, we cannot exclude the possibility that authors in more productive states simply tend to write the sentence \u201ctest the hypothesis\u201d more often when they get positive results. However, it would be problematic to explain why this should be the case and, if it were, then we would still have to understand if and how negative results are published. Ultimately, such an association of word usage with socio-economic parameters would still suggest that publication pressures have some measurable effect on how research is conducted and/or presented. Selective reporting, reinterpreting and altering results are commonly considered \u201cquestionable research practices\u201d: behaviours that might or might not represent falsification of results, depending on whether they express an intention to deceive. There is no doubt that negative results produced by a methodological flaw should either be corrected or not be published at all, and it is likely that many scientists select or manipulate their negative results because they sincerely think their experiments went wrong somewhere \u2013 maybe the sample was too small or too heterogeneous, some measurements were inaccurate and should be discarded, the hypothesis should be reformulated, etc\u2026 However, in most circumstances this might be nothing more than a \u201cgut feeling\u201d. Adding an interaction term of discipline by productivity did not increase the accuracy of the model significantly. Although we are currently unable to measure the statistical power of interaction terms in complex logistic regression models, the lack of significance suggests that large disciplinary differences in the effect of publication pressures are unlikely.
Interestingly, however, some interdisciplinary variability was observed: Pharmacology and Toxicology, and Neuroscience and Behaviour had a significantly stronger association between productivity and positive results compared to Space Science. Of course, since we had 20 disciplines in the model, the significance of these two terms could be due to chance alone. However, we cannot exclude that a study with higher statistical power could confirm this result and reveal other small, but nonetheless interesting, differences between fields. This study focused on the United States primarily because they are one of the most scientifically productive countries, and are academically diversified but linguistically and culturally rather homogeneous, which eliminated the confounding effect of editorial biases against particular countries, cultures or languages. Moreover, the research output and expenditure of all US states are recorded and reported by NSF periodically and with great accuracy, yielding a reliable dataset. Academic competition might be particularly high in US universities. The sample of papers used in this study was part of a larger sample used to compare bias between disciplines. All data were extracted by the author. An untrained assistant who was given basic written instructions scored papers the same way as the author in 18 out of 20 cases, and picked up exactly the same sentences for hypothesis and conclusions in all but three cases. The discrepancies were easily explained, showing that the procedure is objective and replicable. To identify methodological categories, the outcome of each paper was classified according to a set of binary variables: 1 - outcome measured on biological material; 2 - outcome measured on human material; 3 - outcome exclusively behavioural; 4 - outcome exclusively non-behavioural. Biological studies in vitro for which the human/non-human classification was uncertain were classified as non-human.
Different combinations of these variables identified mutually exclusive methodological categories: Physical/Chemical; Biological, Non-Behavioural; Behavioural/Social; Behavioural/Social + Biological, Non-Behavioural; Other methodology. Disciplines were attributed based on how the ESI database had classified the journal in which the paper appeared, and the pure-applied status of a discipline followed classifications identified in previous studies. The ability of independent variables to predict the outcome of a paper was tested by standard logistic regression analysis, fitting a model of the form logit(Pi)\u200a=\u200ab0 + b1X1 + b2X2 + ... + bnXn, where Pi is the probability of a positive result for the ith paper, X1 is the academic productivity (papers per capita) in the state of the corresponding author of the ith paper, X2 is the ith paper's state R&D expenditure per capita, and Xn represents the various characteristics of the ith paper that were controlled for in the models. Multicollinearity among independent variables was tested by examining tolerance and Variance Inflation Factors for all variables in the model. All variables had tolerance\u22650.42 and VIF\u22642.383 except one of the methodological dummy variables (tolerance\u200a=\u200a0.34 and VIF\u200a=\u200a2.942). To avoid this (modest) sign of possible collinearity, methodological categories were reduced to the minimum number that previous analyses have shown to differ significantly in the frequency of positive results: purely physical and chemical, biological non-behavioural, and behavioural and mixed studies on humans and on non-humans. In the confidence interval calculations, p is the proportion of negative results, and n is the total number of papers.
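The logit-scale confidence intervals used for the graphs can be sketched as follows; the delta-method standard error and the 1.96 multiplier are the standard choices, assumed here rather than taken verbatim from the paper:

```python
import math

def logit_ci_percent(p, n, z=1.96):
    """Approximate 95% CI for a proportion, computed on the logit scale
    and back-transformed to percentages.

    p: observed proportion (0 < p < 1); n: number of papers.
    Delta-method SE on the logit scale: sqrt(1/(n*p) + 1/(n*(1-p))).
    """
    plogit = math.log(p / (1.0 - p))
    se = math.sqrt(1.0 / (n * p) + 1.0 / (n * (1.0 - p)))

    def back(x):
        # inverse logit, expressed as a percentage
        return 100.0 * math.exp(x) / (1.0 + math.exp(x))

    return back(plogit - z * se), back(plogit + z * se)

low, high = logit_ci_percent(0.7, 100)  # e.g. 70% positive results among 100 papers
```

For p = 0.7 and n = 100 this gives an interval of roughly 60-78%, asymmetric around 70% because it is symmetric on the logit scale.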
Confidence intervals in the graphs were obtained independently from the statistical analyses, using the following logit transformation to calculate the proportion of positive results and its standard error: Plogit = ln(p/(1 − p)) and SE(Plogit) = sqrt(1/(n·p·(1 − p))). Values for the high and low confidence limits were calculated and the final result was back-transformed into percentages using the following equations for proportions and percentages, respectively: proportion = e^x/(1 + e^x) and percentage = 100·e^x/(1 + e^x), where x is either Plogit or each of the corresponding 95% CI values."}
{"text": "Two multi-system disorders, Myotonic Dystrophies type 1 and type 2 (DM1 and DM2), are complex neuromuscular diseases caused by an accumulation of expanded, non-coding RNAs containing repetitive CUG and CCUG elements. The similarity of these mutations suggests similar mechanisms for both diseases. The expanded CUGn and CCUGn RNAs mainly target two RNA-binding proteins, MBNL1 and CUGBP1, elevating levels of CUGBP1 and reducing levels of MBNL1. These alterations change the processing of RNAs that are regulated by these proteins. Whereas the overall toxicity of CUGn/CCUGn RNAs on RNA homeostasis in DM cells has been proven, the mechanisms that make these RNAs toxic remain elusive. A current view is that the toxicity of CUGn and CCUGn RNA is associated exclusively with global mis-splicing in DM patients. However, a growing number of new findings show that the expansion of CUGn and CCUGn RNAs mis-regulates several additional pathways in the nuclei and cytoplasm of cells from patients with DM1 and DM2. The purpose of this review is to discuss the similarities and differences in the clinical presentation and molecular genetics of both diseases. We will also discuss the complexity of the molecular abnormalities in DM1 and DM2 caused by CUG and CCUG repeats and will summarize the outcomes of the toxicity of CUG and CCUG repeats.
Myotonic dystrophies (DMs) are autosomal dominant, multisystem diseases with a common pattern of clinical signs and symptoms such as myotonia, muscular dystrophy, cardiac conduction defects, cataracts, and endocrine disorders. In 1909, the German doctor Steinert and the British doctors Batten and Gibb described the “classic” type of myotonic dystrophy.

Congenital Myotonic Dystrophy (CDM) exists only in DM1. In 1960, Dr. Vanier described six children with DM1 (the youngest one was 9 months old) with disease manifestation at the time of birth. Because of the large expansions of CTG repeats, CDM children may be born as premature infants. In many reported pregnancies, fetal movements are reduced and polyhydramnios occurs. Postnatal hypotonia and immobility are important first symptoms of CDM. In up to 50% of CDM cases, bilateral talipes and other contractures are present at birth. Facial diplegia with a tent-shaped upper lip and a high arched palate is a characteristic feature. This weakness causes a weak cry and the inability to suck in approximately 75% of affected newborns. In survivors, hypotonia steadily improves and is only rarely prominent at 3–4 years of age, but facial diplegia leads to the typical facial “carp-mouth” appearance. Respiratory complications are frequent in neonates. Severely affected neonates requiring ventilation for more than 4 weeks will die from respiratory problems. Delayed motor development is an important feature at postnatal stages. Almost all children become able to walk independently. Mental retardation is observed to a variable degree in a great number of affected individuals, but normal mental development is possible, even if motor development is delayed. Rarely, attention deficit hyperactivity and anxiety disorders, autism, behavioural problems, and depression are reported in childhood.
Despite the severe muscular phenotype, clinical myotonia is neither present in the neonatal period nor can it be detected by electromyography (EMG). Furthermore, there is a high frequency of other associated abnormalities such as inguinal or hiatus hernia, undescended testis, congenital dislocation of the hip, heart defects, hydrocephalus, congenital cataract, and cleft lip.

The adult-onset clinical phenotype is the most typical appearance of DMs. The core features are facial weakness with ptosis and distal muscle weakness. Grip and percussion myotonia are regular features; however, myotonia may affect any other muscle, including bulbar, tongue or facial muscles, causing problems with talking, chewing, and swallowing. Furthermore, a 3- to 7-fold elevation of the serum creatine kinase is apparent. Cardiac involvement includes conduction abnormalities with arrhythmia and conduction blocks, up to sudden cardiac death. In some patients and families, a dilated cardiomyopathy may be observed. Central nervous system involvement covers cognitive impairment/mental retardation, specific patterns of psychological dysfunction, personality traits, neuropsychological changes, and excessive daytime sleepiness. Some of these features may be related to alterations found by neuroimaging and neuropathology. The most common eye defects are posterior capsular cataracts and, rarely, pigmentary retinal degeneration. Gastrointestinal tract involvement covers irritable bowel syndrome, symptomatic gall stones, and gamma-glutamyltransferase elevations. Finally, endocrine abnormalities include testicular atrophy, hypotestosteronism, and insulin resistance with usually mild type-2 diabetes.

With aging, more proximal and axial muscle weakness is common and many patients become wheelchair bound. Nevertheless, myotonia decreases with progressive muscle atrophy. Late, but prominent, respiratory insufficiency occurs through diaphragmatic weakness. Still, cognitive impairment increases.
The majority of DM1 patients have overt type-2 diabetes at older ages. Thus, in summary, many patients become severely disabled by their fifth and sixth decades of life. Chest infections, partly caused by aspiration, and diaphragm weakness are common and may precipitate respiratory failure. Sudden cardiac death is not uncommon, even in younger patients, and may be preventable, at least in part, by cardiac pacemaker implantation. On the other hand, especially in late-onset or asymptomatic patients (with a low number of CTG repeats), only limited features are found on clinical and paraclinical assessment. In late-onset patients, the search for cataracts is helpful for identifying the transmitting person.

The most important discrepancy between DM1 and DM2 is the absence of a congenital or early-onset form in DM2. With aging, more axial and distal muscle weakness is common and some patients become wheelchair bound. Myotonia decreases with progressive muscle atrophy. Even late in the disease, there seems to be no overt respiratory insufficiency. Cognitive impairment increases very slowly. Only some DM2 patients have overt diabetes at older age. Thus, in summary, only a few patients become severely disabled by the sixth to eighth decades. However, there seem to be many asymptomatic and undiscovered DM2 patients. Even with careful clinical and paraclinical assessment, it is sometimes challenging to recognize the DM2 phenotype.

The obvious clinical phenotype and the family history help establish the diagnosis. In late-onset patients, different specialists may be involved in the treatment of symptoms. Genetic analysis is used to identify and/or confirm the diagnosis. Therefore, muscle biopsy is only rarely required. However, muscle biopsies are required in cases with neuromuscular complaints and negative genetic analysis.

Different specialists may initially be involved in diagnostics.
When proximal weakness or myotonia becomes obvious, together with a positive family history, the diagnosis can be made. Genetic analysis is advised to confirm the diagnosis. A muscle biopsy may be required in asymptomatic neuromuscular patients or when genetic analysis for DM1 and DM2 is negative.

In 1992, the DM1 mutation was discovered on chromosome 19q as an expansion of CTG repeats in the 3’ untranslated region of the dystrophia myotonica-protein kinase (DMPK) gene.

In 1998, the DM2 locus was mapped to 3q21 and, thereafter, the mutation was identified as a CCTG expansion in intron 1 of the zinc finger protein 9 (ZNF9) gene. Expanded DM2 alleles show extraordinary somatic instability with significant increases in length over time (e.g. 2000 bp/3 years), and expanded alleles often appear as smears on Southern blotting analysis.

The experimental work during the last 15 years has been focused on the molecular mechanisms by which expansions of CTG and CCTG repeats cause the DM phenotype. CTG repeats affect expression of the DMPK protein and the transcription of genes in the DMPK locus (recently reviewed in detail elsewhere). Early studies of DM1 mechanisms investigated the transcription and post-transcriptional processing of the mutant DMPK RNA in DM1 patients. The main hypothesis of these studies was that the expanded CTG RNA repeats in the 3’ UTR of DMPK may interfere with transcription, post-transcriptional modifications and export of the mutant DMPK mRNA from the nucleus to the cytoplasm. It has been shown that the mutant DMPK mRNA changed processing of wild-type DMPK mRNA through a trans effect, presenting the first evidence for the toxic role of CUG RNA repeats in DM1 pathology.

Several studies have examined common mechanisms of aggregation of RNAs. The aggregation of RNAs in nuclei of S. cerevisiae has been shown upon block of RNA nuclear export. Such retention in S. cerevisiae required the components of the nuclear exosome, including the protein RRP6, since deletion of the RRP6 gene releases trapped RNAs from intranuclear foci.

Despite the nuclear aggregation of the mutant DMPK mRNA, a significant portion of this mRNA is still transported to the cytoplasm. One of the possible reasons for the existing controversies on the effect of CUG repeats on mutant DMPK nuclear-cytoplasmic export might be the sensitivity of assays. The initial studies showed that the sensitivity of the FISH assay is very important because cytoplasmic mutant DMPK mRNA is detected in multiple complexes of smaller size relative to the very large nuclear foci.

In DM2, expanded CCUG RNA repeats are located within intron 1 of the ZNF9 gene. Short CCUG repeats (CCUG36) also aggregate in nuclei and in the cytoplasm. The detection of foci containing 36 CCUG repeats in transfected C2C12 myoblasts might also be associated with a possible increase of their stability. It is likely that some CCUG RNA repeats in DM2 myoblasts escape the nucleus and migrate to the cytoplasm.

Introns are degraded in nuclei by the exosome immediately after their excision and linearization. The aggregation of CCUG repeats in DM2 nuclei suggests that there is a block or delay of the degradation of the mutant intron 1. However, investigations of the co-localization of CCUG foci and the exosome have shown that CCUG repeats are not associated with the exosome.

The aggregated forms of mutant DMPK mRNA are mainly detected in the nuclei of DM1 cells and in in vivo models of DM1. In the DM1 Drosophila model (generated in the Dr.
Botas\u2019 lab), interrupted CUG repeats (CUG20CUCGA24) cause muscle wasting and eye degeneration; and this phenotype is rescued by overexpression of MBNL1 [Drosophila caused phenotype which is similar to that caused by overexpression of CUG repeats [480 also show muscle phenotype, suggesting that MBNL1 levels have to be tightly regulated for normal muscle function. It is also interesting that flies with increased CUGBP1 crossed with flies expressing interrupted CUG480 RNA showed worsening eye degeneration suggesting that the increase of CUGBP1 levels is toxic for normal cells. The co-expression of CUGBP1 with interrupted CUG480 increases muscle wasting compared to flies expressing only CUG480 RNA [In the course of studies of molecular pathogeneses of DM1 and DM2, it became clear that the mechanisms of these diseases are much more complex and are not limited to the alterations in splicing. Several recent reports suggest that the nuclear aggregates of the mutant DMPK mRNA are not sufficient to cause DM1 phenotype. Although the induction of the mutant 3\u2019 UTR of DMPK in mice leads to accumulation of nuclear aggregates and to sequestration of MBNL1; these mice do not show overt DM1 phenotype . In contof MBNL1 . The inc repeats . ImportaG480 RNA .Drosophila model without binding to the aggregated form (foci) of mutant CUG480 RNA. This observation suggests that the elevation of CUGBP1 and reduction of MBNL1 might cause DM1 phenotype through independent mechanisms. One of these possible mechanisms has been proposed by Dr. Junghans [4). According to this model, the mutant RNA with long CUG repeats exists in the double-stranded form, stability of which depends on the relative levels of free CUGBP1 and MBNL1. MBNL1 binds to the aggregated form of CUGn RNA, organized in the double-stranded helix; whereas CUGBP1 binds to the melted regions of the CUG helix [It is important to note that the increase of CUGBP1 causes degeneration in DM1 Junghans . It has UG helix . 
Surprisingly, in one Drosophila line showing degeneration, the insertion occurred in the gene encoding a zinc finger protein.

Drosophila models generated by Dr. Artero’s group showed that there is a more complicated relationship between global splicing abnormalities and the sequestration of muscleblind. While transgenic flies with 480 CUG repeats showed a stronger phenotype than flies expressing 60 CUG repeats, splicing abnormalities of some mRNAs were stronger in the line with the lower number of CUG repeats. It is possible that short CUG repeats (CUG60) cause greater abnormalities in splicing of some mRNAs due to higher levels of expression. This suggestion is based on the overt DM1 phenotype in a “tet-inducible” mouse model expressing a high number of copies of the 3’ UTR of normal DMPK with 5 CUG repeats. Expression of short CCUG repeats (CCUG36) in normal myoblasts causes changes in RNA processing identical to those observed in DM2 cells. These data support the suggestion that a high number of copies of short CCUG repeats has the same toxicity as a low number of copies of long repeats.

The major toxicity of the mutant CUG and CCUG nuclear aggregates is associated with alterations of splicing of mRNAs regulated by MBNL1.

One of the molecular hallmarks of DM1 is the elevation of CUGBP1 protein and its RNA-binding activity. The elevation of CUGBP1 has also been reported for DM2 patients. CUGBP1 interacts in vivo with CUG and CCUG RNAs located outside of aggregated CUG and CCUG repeats. Since CUGBP1 mRNA levels are not increased in DM1 cells, it was suggested that CUG repeats may increase CUGBP1 stability.
In fact, examination of CUGBP1 half-life in the presence of CUG repeats showed that CUG repeats stabilize CUGBP1. The mechanisms by which CUG and CCUG repeats elevate CUGBP1 need additional investigation. Since CUGBP1 has not been found in the nuclear CUG and CCUG aggregates, nuclear CUG and CCUG foci do not appear to affect CUGBP1 levels. On the contrary, identification of CUGBP1-RNA complexes from DM1 and DM2 cells by biochemical methods shows that CUGBP1 forms stable complexes with CUG and CCUG RNAs in DM1 and DM2, respectively, and these complexes are not detected in normal cells.

A growing number of new reports suggest that additional pathways are involved in the regulation of the activity and levels of CUGBP1 in DM1 and DM2 cells. Analysis of CUGBP1 stability in DM2 myoblasts and in normal cells expressing CCUG repeats showed that the stability of CUGBP1 is also increased in the presence of CCUG repeats, and the elevation of CUGBP1 in vivo is independent of MBNL1.

Why is the increase of CUGBP1 toxic for cell functions? Detailed analysis of CUGBP1 in normal cells showed that this protein has many functions and plays an important role in several biological processes. CUGBP1 is expressed in both the nucleus and the cytoplasm. Like MBNL1, CUGBP1 has splicing activity. CUGBP1 also regulates cap-dependent and cap-independent translation. These data suggest that the lack of CDM in patients with DM2 may be, at least in part, due to normal phosphorylation of CUGBP1 at Ser302. In contrast to DM1 myotubes, CUGBP1-eIF2 complexes are increased in DM2 differentiating myotubes, similar to normal myotubes. In proliferating myoblasts, CUGBP1 is phosphorylated by Akt, and ph-S28-CUGBP1 has increased binding activity toward the mRNA encoding cyclin D1. Cyclin D1 is an important regulator of cell proliferation, while p21 is a key regulator of the transition of dividing myoblasts to differentiation.
Thus, changes of the Akt-CUGBP1-cyclin D1 and cyclinD3/cdk4-CUGBP1-p21 pathways in DM1 might affect the efficiency of myogenesis, causing a delay of differentiation. In addition, the phosphorylation of CUGBP1 on the putative PKC sites might stabilize CUGBP1 in DM1 cells, leading to the enhancement of CUGBP1 functions. In summary, these data show that the biological functions of CUGBP1 are altered in DM1 patients not only by the elevation of the protein, but also by phosphorylation-specific changes in the RNA-binding activity of CUGBP1.

It has been shown that the site-specific phosphorylation of CUGBP1 by Akt and cyclinD3/cdk4 kinase regulates CUGBP1 function during normal myogenesis, suggesting that in DM2 myotubes, phosphorylation of CUGBP1 at Ser302 is normal. If this is the case, then the RNA-binding activity of CUGBP1 toward its RNA targets might be different in DM1 and in DM2.

Elevation of CUGBP1 in DM2 muscle cells and tissues suggests that CUGBP1-dependent pathways might also be altered in DM2 cells, similar to the alterations observed in DM1. However, the DM2 phenotype is milder than DM1. Comprehensive analysis of CUGBP1 in DM2 cells and in DM2 models revealed several essential differences in CUGBP1 function in DM2 compared to DM1. CUGBP1 binds to CUG repeats within DM1 protein extracts mainly as a single protein; however, in DM2 extracts, CUGBP1 binds to CCUG repeats as a component of the high-molecular-weight CUGBP1-eIF2 complex.

A growing body of evidence indicates that the biological functions of CUGBP1 are much broader than initially suggested. Recent studies demonstrated that CUGBP1 is involved in the regulation of the stability of short-lived mRNAs (cytokines and oncogenes) through binding to the GRE (GU-rich) elements in their 3’ UTRs. CUGBP1 is homologous to the Xenopus EDEN-BP protein, which regulates RNA deadenylation through the EDEN element during development.
In addition to alterations of RNA-binding proteins, the mutant CUG repeats affect transcription factors (TFs) by leaching Specificity protein 1 (Sp1) and Retinoic Acid Receptor (RAR) out of active chromatin. It remains to be investigated whether the mutant CCUG repeats affect certain TFs in patients with DM2. It has been suggested that global transcription in DM1 may also be affected by mutant CUG repeats indirectly, through alterations of splicing.

Development of approaches reducing the toxicity of CUG and CCUG repeats requires a better understanding of the primary targets of CUG/CCUG repeats. The data discussed above suggest that CUG and CCUG repeats affect MBNL1 and CUGBP1 independently, through aggregated and un-aggregated CUG and CCUG repeats. Early elevation of CUGBP1 in transgenic mice expressing CUG repeats shows that the increase of CUGBP1 is not a consequence of other abnormalities in DM1 but rather a direct result of expression of the mutant CUG repeats. Many attempts have been made to determine MBNL1 and CUGBP1 binding sites within mRNAs, the natural targets of these proteins. So far, the use of numerous methods in vitro has produced contradictory observations. It was initially suggested that MBNL1 binds exclusively to the double-stranded structures formed by long CUG repeats. CUGBP1 has been identified as the protein which binds to an RNA oligonucleotide containing eight CUG repeats (CUG8). A comprehensive analysis of CUGBP1-RNPs from CUGBP1 transgenic mice and of MBNL1-RNPs from wild-type and MBNL1 knockout mice would be one approach for the identification of mRNAs which are targets of CUGBP1 and MBNL1 in vivo, particularly in normal and in DM cells.
It seems that the best way to determine biologically relevant binding sites for these proteins is to identify mRNAs which are associated with CUGBP1 and MBNL1. In the case of CUGBP1, it is clearly shown that this protein has multiple targets with a variety of binding sites.

Although initial studies suggested that DM2 pathology is mainly mediated by changes in alternative splicing, further studies showed much more complex mechanisms for DM2. Examination of cytoplasmic RNA-protein complexes binding to CCUG repeats revealed that un-aggregated CCUG repeats sequester the 20S proteasome. In addition, DM2 cells contain an abundant translational CUGBP1-eIF2 complex which changes the translation of certain proteins in DM2 cells. Interestingly, the 20S proteasome complex in DM2 cells is associated with the ER chaperone BiP, which is a master regulator of the Unfolded Protein Response (UPR). Usually, UPR signaling prevents protein aggregation by two pathways: (1) reduction of translation; and (2) activation of splicing of a specific b-ZIP transcription factor, XBP1, which promotes transcription of genes regulating protein degradation. The presence of ER chaperones in the CCUG-binding multi-protein complexes and the accumulation of undegraded proteins in the DM2 cytoplasm suggest that ER chaperones play a specific role in the attenuation of protein translation, RNA splicing and RNA expression in DM2. Thus, ER chaperones may have an additional toxic effect in DM2 cells.

Although the majority of data indicate that the CCUG expansion in the ZNF9 gene has a trans effect on gene expression, data in vivo show that ZNF9 deletion causes the main symptoms of DM2.

It has been shown that long CUG repeats in DM1 cells are cleaved by Dicer, leading to accumulation of RNAs containing short CUG repeats.
The toxicity of CUG/CCUG repeats in DM is mediated by the following mechanisms: a) reduction of MBNL1 in nuclei of DM1 and DM2; b) elevation of CUGBP1 in DM1 and DM2; c) alteration of splicing; d) increase of CUGBP1 translational targets; e) alteration of RNA stability; f) reduction of the rate of protein translation; g) reduction of TFs; h) increase of protein stability; i) increase of Akt and PKC kinases; and j) reduction of cyclin D3.

The length of CUG/CCUG expansions is critical; however, a high number of copies of short CUG and CCUG repeats might also be pathogenic.

Aggregation of CUG and CCUG repeats in nuclei might be toxic; however, additional studies are needed to examine the correlation of the toxicity of the total amounts of CUG/CCUG RNA repeats with the number of CUG/CCUG nuclear aggregates.

Disruption of nuclear CUG foci with antisense RNA to CUG repeats helps to correct MBNL1-dependent splicing in nuclei of DM patients.

Based on the current knowledge, the “ideal” approaches for DM therapy should include the efficient degradation of the mutant RNAs without disruption of the wild-type DMPK and ZNF9 mRNAs. Such approaches would help to eliminate the complex effects of CUG/CCUG repeats on molecular processes in DM tissues."}
{"text": "A precise modular topographic-morphological (MTM) classification for proximal humeral fractures may address current classification problems. The classification was developed to evaluate whether a very detailed classification exceeding the analysis of fractured parts may be a valuable tool. Three observers classified plain radiographs of 22 fractures using both a simple version and an extensive version of the MTM classification. Kappa statistics were used to determine reliability. An acceptable reliability was found for the simple version, classifying fracture displacement and fractured main parts.
Fair interobserver agreement was found for the extensive version with individual topographic fracture type and morphology. Although the MTM classification covers a wide spectrum of fracture types, our results indicate that the precise topographic and morphological description does not deliver reproducible results. Therefore, simplicity in fracture classification may be more useful than extensive approaches, which are not adequately reliable to address current classification problems.

Proximal humerus fractures show great variability and complexity. In general, only two systems are used for classification: the Neer classification and the classification of the Arbeitsgemeinschaft für Osteosynthesefragen/Association for the Study of Internal Fixation (AO/ASIF). Therefore, new classification systems were introduced during the last years. The alphanumeric classification presented here consists of a modular topographic and morphologic (MTM) classification. To facilitate a more precise, reliable and reproducible fracture differentiation at the initial radiological assessment, standardized plain x-rays in a.p. and axillary views were mandatory.

The topographic basis is the division of the proximal humerus into two segments: the articular segment and the extraarticular segment. The extraarticular segment (with A fractures) is further divided into the three subsegments metaphysis (M), greater tuberosity (G) and lesser tuberosity (L). Both the articular and the extraarticular segments are divided by the anatomical neck. Two-part A fractures comprise metaphyseal fractures that extend through the surgical neck (M), two-part greater tuberosity fractures (G) and two-part lesser tuberosity fractures (L).
The tuberosity fractures are defined by a complete separation of the tuberosity from the metaphysis and the anatomical neck. Three-part A fractures (MG and ML fractures) are a metaphyseal fracture (M) with a fracture of one tuberosity (G or L). Four-part A fractures (MT fractures) are a metaphyseal fracture (M) with a fracture of both tuberosities (G+L = T).

Type B fractures are incomplete fractures of the anatomical neck: with the greater tuberosity remaining at the humeral head (GB), and with the lesser tuberosity connected to the humeral head (LB). The further division of type B fractures depends on the additionally occurring tuberosity fractures. These three-part B fractures are GB fractures with a separate fracture of the lesser tuberosity (GBL), LB fractures with a separate fracture of the greater tuberosity (LBG), MB fractures with a separate fracture of the greater tuberosity (MBG) and MB fractures with a separate fracture of the lesser tuberosity (MBL). B fractures with four main parts (= 4-part B fractures) are MB fractures in combination with a fracture of both tuberosities (MBT).

Three-part C fractures show a fracture of the greater tuberosity (CG) or a fracture of the lesser tuberosity (CL). Four-part C fractures show a fracture of both tuberosities (CT). Along with four-part fractures there is often a fracture of the metaphyseal subsegment.
These even more complex fractures are called CTM fractures. Fracture-dislocations are divided into type B fracture-dislocations (DB) and type A fracture-dislocations (DA). The further classification of DA fractures depends on the fractures in the extraarticular segment: fracture-dislocation with a metaphyseal fracture (DM), anterior fracture-dislocation with a fracture of the greater tuberosity (DG) and posterior fracture-dislocation with a fracture of the lesser tuberosity (DL).

For morphological analysis, the MTM classification is based on four defined specifications, which are relevant for therapy and prognosis. These specifications are organized by increasing fracture severity: minimally displaced and stable (S1), minimally displaced and unstable (S2), displaced (S3), and displaced and comminuted (S4). In fractures with several parts, each part has to be classified individually.

Minimally displaced fractures are defined as fractures with angulation up to 25°, a displacement of the tuberosities and the anatomical neck up to 5 mm, and a metaphyseal fracture displacement of up to 10 mm. Up to this extent of displacement, a real impairment of shoulder function is not to be expected.

Fracture stability is given if, through impaction of the main parts and preserved soft tissues, mobility between the main parts resulting in further displacement is unlikely. Thus, the fracture position induced by the trauma does not change with careful functional strain of the shoulder.

Therefore, minimally displaced and stable fractures are amenable to nonoperative treatment including early functional exercises. Regardless of fracture type and the number of main fragments, these fractures can be grouped together as S1 fractures since they are almost analogous in treatment and prognosis.

The displacement of S2 fractures is defined similarly to S1 fractures: a displacement of the tuberosities and the anatomical neck up to 5 mm, and a metaphyseal fracture displacement of up to 10 mm.
These fractures were defined as unstable when the fractured parts were not impacted into each other, resulting in instability between the fractured parts. Thus, through muscle pull and shoulder mobilization, further displacement beyond the initial radiographically diagnosed extent may occur. If the above-defined criteria for stability and instability cannot clearly be applied radiologically, an additional fluoroscopic examination of the fracture is recommended. In this examination, gentle abduction and rotation are applied in true a.p. view. If mobility can be visualized between fractured parts, the fracture is defined as unstable.

S3 fractures always show stronger malalignment and fragment separation, and, thus, the interfragmental soft tissue is ruptured more strongly, mostly induced by a combination of angulatory, rotatory and translatory displacement. Often, a compression mechanism leads to strong displacement with impaction. In displaced C fractures, the blood supply to the humeral head is completely destroyed. In cases with translatory and rotatory displacement of the humeral head, there is no integrity of the medial hinge, and capsular and periosteal vessels ascending intraosseously to the humeral head are ruptured.

In addition to the displacement, S4 fractures show comminution of the main part, complicating the therapeutic procedure and worsening the prognosis.
For example, displaced head-splitting fractures, defined as a fracture of the humeral head into several single fragments, are graded C4.

Due to its modularity, the classification can be applied in several ways depending on the application purpose. The short topographic version allows classification into main types, into the number of main parts, or into a combination of both the main fracture type and parts (2-part A fracture to 4-part C fracture). A more detailed approach evaluates individual fracture types (M to CTM) or fracture types in combination with the individual specification (M to CTM and S1 to S4). For example, a stable fracture of the anatomical neck with a displaced fracture of the greater tuberosity could be classified as C1G3. To allow comparison with other studies, we also classified the fractures according to the distinction between minimally displaced and displaced fractures (S1/S2 versus S3/S4).

In a prospective study between March 2005 and August 2005, a consecutive series of 22 patients with 22 proximal humeral fractures presenting at the BG Trauma Centre of the University of Tuebingen was included in the study. All patients were diagnosed with standardized plain x-rays in a.p. and axillary views with the patient supine, using a shoulder splint with at least 60° abduction of the arm. At the beginning of the study, the MTM classification was provided to the examiners in English and German. In addition, the first author gave a 15-minute presentation of the MTM classification. A goniometer and a pen were given to the examiners.
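The S1–S4 grading described above can be sketched as a simple rule-based function. This is a hypothetical illustration only (the function name and parameters are my own); the threshold values come from the definitions given, while stability and comminution are clinical or fluoroscopic judgments passed in as booleans:

```python
def mtm_specification(angulation_deg, tuberosity_mm, metaphyseal_mm,
                      stable, comminuted):
    """Return the MTM morphological specification (S1-S4) for one fractured part.

    Minimal displacement: angulation <= 25 degrees, tuberosity/anatomical-neck
    displacement <= 5 mm, metaphyseal displacement <= 10 mm. `stable` and
    `comminuted` are clinical/fluoroscopic judgments, not computed here.
    """
    minimally_displaced = (angulation_deg <= 25
                           and tuberosity_mm <= 5
                           and metaphyseal_mm <= 10)
    if minimally_displaced:
        return "S1" if stable else "S2"   # stable vs. unstable minimal displacement
    return "S4" if comminuted else "S3"   # displaced, with or without comminution
```

For example, a displaced, non-comminuted part with 30° angulation would be graded S3 under these rules.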
The data acquisition of the three examiners took place independently. All fractures were classified by the observers according to the following guidelines: a) topographic analysis by individual fracture types (M to CTM) with consideration of the individual specification; b) topographic analysis by individual fracture types (M to CTM) (see Table ); c) analysis by a combination of both main fracture types and number of parts (2\u20134) (see Table ); d) analysis by the 4 main fracture types (see Table ); e) analysis by number of main parts (see Table ); f) morphological distinction between minimally displaced and displaced fractures (S1/S2 or S3/S4). For intraobserver reliability and interobserver reliability, the kappa statistic function of the JMP statistical package was used, measuring kappa values (\u03ba) to describe the agreement between observers while correcting for the proportion of agreement that may have occurred by chance alone. A kappa value of 0 represented agreement by chance alone, while a kappa value of 1 represented perfect agreement. Kappa values were interpreted using the guidelines proposed by Landis and Koch: values between 0.81 and 1 indicated excellent or almost perfect, 0.61 and 0.80 substantial, 0.41 and 0.60 moderate, 0.21 and 0.40 fair, and 0 and 0.20 slight reliability. The lowest percentages of agreement were detected for individual fracture type with morphological specification, followed by number of main parts combined with main fracture type and by individual fracture type. The highest percentage agreement between the observers was found for the parameters fracture displacement, number of main parts and main fracture type. Statistical analysis showed the lowest kappa values for individual fracture type with morphological specification, and the highest intraobserver reliability when fractures were classified according to number of main parts, fracture displacement and main fracture type including number of parts. A topographic classification could thus be possible.
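The chance-corrected agreement statistic used above can be sketched in a few lines; the two-rater setup and the category labels below are illustrative assumptions, not data from the study:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters classifying the same cases."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement: product of the two raters' marginal proportions.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

def landis_koch(kappa):
    """Map a kappa value to the Landis and Koch reliability category."""
    for cutoff, label in [(0.80, "almost perfect"), (0.60, "substantial"),
                          (0.40, "moderate"), (0.20, "fair"), (0.0, "slight")]:
        if kappa > cutoff:
            return label
    return "poor"
```

With six cases rated by two observers, `cohens_kappa(["A","B","A","C","B","A"], ["A","B","C","C","B","B"])` gives 0.52, which `landis_koch` maps to "moderate", matching the interpretation scale quoted above.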
By adding morphological aspects and the possibility of a modular application of the system, a precise evaluation of almost all fractures may be possible as well. However, the classification of proximal humeral fractures with the MTM system results in various differences in reliability depending on whether the short or the extensive version is applied. Analyzing the short version of the classification, the interobserver analysis resulted in moderate kappa values in the category 'main parts' (according to Neer parts) and substantial kappa values for 'fracture displacement'. These results were in accordance with the literature [23]. At present the classifications by Neer and the AO for proximal humeral fractures are widely accepted and commonly used, although both classifications have received some criticism during the last years [3]. New classification systems such as the MTM classification were developed to address these critical points [19]. The MTM classification advanced available classifications by including the fracture model of Codman, the Neer analysis of fractured parts and the AO differentiation of fracture height, allowing for the description of almost all possible fracture types [3,25]. Published reliability studies differ in observer experience and diagnostic imaging methods. In addition, partly simplified Neer and AO classifications were used, allowing only limited comparison between studies. Such simplified versions resulted in better kappa values than the modified Neer classification with four choices of fracture types. Sidor also discussed that the differentiation of single fragments is difficult when multiple fracture lines are present. He also stressed the importance of high-quality radiological diagnostics, which makes an overlap-free presentation of the fractured region possible and avoids any classification restriction. For an excellent classification of proximal humeral fractures, perfect radiological visualization of the fractured region is mandatory.
Brorson and Shrader analyzed the importance of training in a specific classification system. They showed that kappa values for interobserver reliability were significantly improved from fair to substantial after observers had received classification training [30,31]. Since training in a specific classification system improves reliability, one weakness of the current study was that the MTM classification was not used in daily clinical practice. With sufficient training, the reliability of this classification could be higher. In summary, some complex fracture types are inadequately defined by classification systems such as the present Neer or AO classification. To allow a precise topographic and morphological description, the MTM classification was developed for a better understanding of individual fractures and to address the question of whether a very detailed classification of proximal humeral fractures may be limited by its reliability. Unfortunately, the very detailed classification approach led only to fair or unacceptable results and does not help to improve reliability. We conclude that a detailed classification exceeding the part analysis of Neer is not a practical approach to address current problems in classification systems for proximal humerus fractures. Future projects should evaluate ongoing developments in diagnostic imaging technology such as CT, specifically thin-cut multiplanar visualisation and 3D visualisation of the fracture. The pre-publication history for this paper can be accessed here:"} +{"text": "P<0.05), independent of the hormone receptor status. Similarly, in IDCs with nodal metastases, only the PVN classification significantly increased the HRs of tumour recurrence and death (P<0.05), independent of the hormone receptor status.
We conclude that the PVN prognostic histological classification is the best classification available for IDC of the breast. There are many studies that show biological differences between invasive ductal carcinoma (IDC) with and without nodal metastasis, but no prognostic classification taking these biological differences into consideration is currently available. We previously investigated the histological characteristics that play an important role in tumour progression of IDCs according to their nodal status, and a new prognostic histological classification, the primary tumour\u2013vessel tumour\u2013nodal tumour (PVN) classification, was devised based on the histological characteristics of IDCs with and without nodal metastasis. Multivariate analyses using the Cox proportional hazard regression models were used to compare the ability of the PVN classification to predict tumour recurrence and death in 393 IDC patients with that of the following histological classifications: (1) the pTNM classification, (2) the Nottingham Prognostic Index, (3) the modified Nottingham Prognostic Index, and (4) the histologic grade. In IDCs without nodal metastasis, only the PVN classification significantly increased the hazard rates (HRs) of tumour recurrence and death (P<0.05), independent of the hormone receptor status. There are many studies that show differences between the prognostic parameters of the primary-invasive tumours of IDC patients with and without nodal metastasis. The pTNM classification, the Nottingham Prognostic Index, and histologic grade are the major histological prognostic classifications currently used clinically to predict the outcome of patients with IDC. The purpose of this study was to establish separate prognostic histological classifications for IDC patients with and without nodal metastasis based not only on the histological characteristics of the primary-invasive tumours but also on those of the tumour cells in lymph vessels, blood vessels, and nodal metastatic tumours according to hormone receptor status.
The results clearly demonstrated that the newly proposed prognostic histological classifications are the best classifications available for IDC of the breast. A total of 392 consecutive cases of IDC of the breast surgically treated between July 1992 and November 1998 at the National Cancer Center Hospital East served as the basis of this study. Clinical information was obtained from the patients' medical records after complete histological examination of all of the IDCs. All patients were Japanese women, and they ranged in age from 28 to 78 years. All had a solitary lesion. In total, 209 patients were premenopausal, and 183 were postmenopausal. Partial mastectomy was performed in 55, modified radical mastectomy in 313, and standard radical mastectomy in 24 patients. Axillary lymph node dissection consisting of levels I, II, \u00b1III was carried out in all patients. None of the patients had received radiotherapy or chemotherapy before surgery. Adjuvant therapy was performed in 289 patients. Of the 188 IDC patients without nodal metastasis, 88 received no adjuvant therapy, 24 received tamoxifen, 45 received CMF, AC (adriamycin and cyclophosphamide), or EC (epirubicin and cyclophosphamide), and 31 received chemotherapy plus tamoxifen. Of the 204 IDC patients with nodal metastases, 14 received no adjuvant therapy, 34 received tamoxifen, 51 received chemotherapy, and 105 received chemotherapy plus tamoxifen. There were no cases of inflammatory breast cancer in this series. Oestrogen receptors (ERs) and progesterone receptors (PRs) in the cytosol fractions were determined by enzyme immunoassay. The upper cutoff values of the ER assay and PR assay were 13\u2009fmol\u2009mg\u22121 protein and 10\u2009fmol\u2009mg\u22121 protein, respectively. For pathological examination, the surgically resected specimens were fixed in 10% formalin, and multiple histological sections were taken from each tumour for histological examination without knowledge of the patient's outcome.
The sections were processed routinely and embedded in paraffin. We attempted to establish new separate histological prognostic classifications, called the PVN classifications, for IDC patients with and without nodal metastasis. The PVN classifications for IDCs with and without nodal metastases were devised based on the histological characteristics of the tumour that were found to be most important in predicting the outcome of IDC patients in our previous studies (2004b). The following existing histological classifications were compared with our proposed new classification in regard to prediction of disease-free and overall survival: (1) pTNM, (2) NPI, (3) modified NPI, and (4) histologic grade (HG). Patient survival was evaluated by follow-up for a median period of 94 months, ranging from 61 to 136 months as of November 2003. A total of 106 patients experienced tumour recurrence, and 83 had died of their disease. Disease-free and overall survival were measured from the date of surgery. Metastasis or local recurrence was considered evidence of tumour relapse. Only deaths due to breast cancer were considered for the purposes of this study. P-values for disease-free or overall survival were evaluated using a multivariate analysis with the Cox proportional hazard regression model. Since only five patients with IDCs without nodal metastasis and either or both positive for ER and PR died, a multivariate analysis could not be performed for overall survival. In IDCs with nodal metastasis according to hormone receptor status, since tumour recurrence and/or death was not observed in the low-risk groups of the PVN, NPI, modified NPI, pTNM, or HG classifications, the low- and intermediate-risk groups were taken together as a referent category to assess the HRs of tumour recurrence or death in the multivariate analyses.
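The Cox proportional hazard modelling described above was carried out with commercial statistical software; as a purely illustrative sketch (invented toy data, a single binary covariate, Breslow handling of ties), the hazard ratio can be obtained by solving the partial-likelihood score equation:

```python
import math

def cox_hr_binary(times, events, group):
    """Hazard ratio for one binary covariate via the Cox partial likelihood
    (Breslow ties), solved by bisection on the score equation.
    A toy sketch only, not the study's multivariate analysis."""
    def score(beta):
        u = 0.0
        for t, e, x in zip(times, events, group):
            if not e:
                continue  # censored cases contribute only through risk sets
            risk = [g for tj, g in zip(times, group) if tj >= t]
            w = [math.exp(beta * g) for g in risk]
            u += x - sum(g * wj for g, wj in zip(risk, w)) / sum(w)
        return u
    lo, hi = -10.0, 10.0
    for _ in range(100):  # the score is decreasing in beta, so bisect
        mid = (lo + hi) / 2
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return math.exp((lo + hi) / 2)
```

On toy data in which the exposed group tends to relapse earlier, the fitted hazard ratio comes out well above 1, which is the quantity reported as an HR in the analyses above.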
The predictive power for disease-free and overall survival of each classification was evaluated by multivariate analysis using the Cox proportional hazard regression model. Patients were divided into low (score 0), intermediate (score 1), high (score 2), and very high-risk groups (scores 3 and 4). All analyses were performed with Statistica/Windows software. In IDCs without nodal metastasis, the largest numbers of patients were observed in the low-risk group, and the number of patients belonging to each group decreased in the risk order of the classification (P<0.001). In IDCs with nodal metastasis, the rates of tumour recurrence or death increased in the risk order of the classification (P<0.001); tumour recurrence in the low-risk group was observed in only one case, and no case died of the disease. In IDCs without nodal metastasis and positive for ER or PR or both, the PVN classification had the largest number of cases in the low-risk group compared with all the prognostic classification systems studied, and the frequency of tumour recurrence or death in the low-risk group was similar to that of the other prognostic classifications. The PVN classification showed a significant trend for the HRs, 95% CIs, and P-values for disease-free survival when compared with the other prognostic classification systems, and only the NPI classification also showed a significant trend for the HR, 95% CI, and P-values for disease-free survival. In IDCs without nodal metastasis that were negative for both ER and PR, the PVN classification and the pTNM classification selected almost equally large numbers of cases belonging to the low-risk group. However, the frequency of tumour recurrence or death in the very high-risk group of the former classification was 100%, while no cases of tumour recurrence or death were observed among the stage IIIB cases in the latter classification. Only the PVN classification showed significant P-values for disease-free and overall survival when compared with the other prognostic classification systems, while the other classifications did not show significant P-values when compared with the PVN classification.
In IDCs with nodal metastasis and positive for ER or PR or both, the rates of tumour recurrence or death increased according to the risk order of the PVN classification, and the rates of tumour recurrence and death in the very high-risk group were 72 and 62%, respectively. In IDCs with nodal metastasis that were negative for both ER and PR, the rates of tumour recurrence or death likewise increased according to the risk order of the PVN classification, and the rates of tumour recurrence and death in the very high-risk group were 88 and 88%, respectively. The current study clearly demonstrated that the PVN classification is the only prognostic classification that can classify IDC patients into four groups according to the risk order of the classification with significant rates of tumour recurrence or death. In addition, only the PVN classification could select IDC patients with a very high risk of tumour recurrence or death, independent of nodal status and hormone receptor status. The parameters of the PVN classification were selected based on precise studies that evaluated the histological characteristics of the primary-invasive tumours, tumours in vessels, and those in lymph nodes (2004b). The comparative studies also clearly demonstrated the merits and demerits of the other prognostic classifications in the prediction of the outcome of IDC patients. In IDCs without nodal metastasis, the pTNM and HG classifications could not significantly increase the trend values for the HRs of tumour recurrence or death in the multivariate analyses, but a significant increase in the trend values of the HRs of tumour recurrence was observed for the NPI classification in multivariate analyses with the PVN classification in IDCs positive for ER and/or PR. The pTNM classification evaluates the malignant potential of IDCs only according to the invasive tumour size of the primary tumours.
The HG classification evaluates the degree of tubular formation, the degree of nuclear atypia, and the number of mitotic figures in tumour cells, but takes no account of invasive tumour size. The NPI classification evaluates the malignant potential of IDCs according to the HG and the invasive tumour size of the primary invasive tumours. This strongly suggests that the NPI classification system contains more biological information based on the tumour histology of the primary invasive tumour cells than the pTNM classification, and more biological information on the tumour size of the primary invasive tumours than the HG classification. Thus, the NPI classification can more precisely assess the malignant potential of IDCs positive for ER and/or PR than the pTNM and HG classifications, resulting in the superiority of the former to the latter in the prediction of the outcome of IDC patients without nodal metastasis who were positive for ER and/or PR. However, in IDCs negative for both ER and PR, the NPI classification failed to significantly increase the trend values of the HRs of tumour recurrence and death in the multivariate analyses with the PVN classification. Since IDCs that are negative for both ER and PR have a much higher malignant potential than IDCs that are positive for ER and/or PR, the NPI classification does not maintain its prognostic predictive power when compared with the PVN classification. In addition, this study clearly demonstrated that the modification of the NPI classification was of no benefit to accurate prediction of tumour recurrence or death in patients with IDCs without nodal metastasis. In IDCs with nodal metastasis positive for ER and/or PR, the pTNM classification showed significant trend P-values for tumour recurrence and death, and especially in stage IIIC cases a significant increase was seen in the HRs of tumour recurrence and death in the multivariate analyses. The HG classification ignores the nodal status of IDCs.
The NPI and modified NPI classifications assess nodal status according to the number of nodal metastases: score 1, no nodal metastasis; score 2, one to three nodal metastases; and score 3, four or more nodal metastases. Although the pTNM classification also assesses nodal status according to the number of nodal metastases, the stage IIIC IDCs consist of IDCs with 10 or more nodal metastases, independent of their invasive tumour size. Thus, the assessment of 10 or more nodal metastases in the pTNM classification is probably very important for accurately predicting the outcome of patients with IDCs with nodal metastasis that are positive for ER and/or PR. The pTNM evaluations of stage IIB, IIIA, and IIIB cases failed to increase the HRs of tumour recurrence or death in the multivariate analyses, and these stages exhibited similar rates of tumour recurrence or death in IDCs with nodal metastases that are positive for ER and/or PR. In addition, the NPI and modified NPI node classifications did not improve the accurate prediction of tumour recurrence or death in patients with nodal metastasis who were positive for ER and/or PR. Therefore, the N1 and N2 categories of the pTNM classification probably have no effect on the accurate prediction of the outcome of IDC patients with nodal metastases. However, this study clearly demonstrated that in IDCs negative for both ER and PR, stage IIIC of the pTNM classification failed to maintain its predictive prognostic power when compared with the PVN classification.
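The NPI scoring just described combines a node score with histologic grade and invasive tumour size. A minimal sketch of the standard formula follows; the 0.2 size weight is the commonly published NPI convention, assumed here rather than stated in this text:

```python
def npi(tumour_size_cm, histologic_grade, node_score):
    """Nottingham Prognostic Index (published formula, assumed here).

    histologic_grade: 1-3
    node_score: 1 = no nodal metastasis, 2 = 1-3 nodes, 3 = >= 4 nodes
    """
    return 0.2 * tumour_size_cm + histologic_grade + node_score
```

For example, a 2 cm grade-2 tumour without nodal metastasis scores 0.4 + 2 + 1 = 3.4, while a 4.5 cm grade-3 tumour with four or more involved nodes scores 6.9; higher values indicate a worse prognostic group.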
Based on these findings, the histological characteristics of the T and N categories of the pTNM classification should be improved, since the pTNM classification is the global prognostic classification for patients with IDC of the breast. In IDCs with nodal metastasis, the comparative studies with the PVN classification clearly demonstrated that the NPI, modified NPI, and HG classifications were of no use in the prediction of the outcome of IDC patients in the multivariate analyses, but in IDCs that are positive for ER and/or PR the pTNM classification showed a significant trend. In conclusion, the current study clearly demonstrated that the PVN classification is by far the best histological classification for predicting the outcome of patients with IDC of the breast. Indeed, the methodology for determining the PVN classification may be more complex than that of the other existing classification systems, but the methods that evaluate the parameters of the PVN classification have been reported in our previous studies (2004b)."} +{"text": "Interferon-\u03b3 release assay (IGRA) may improve diagnostic accuracy for latent tuberculosis infection (LTBI). This study compared the performance of the tuberculin skin test (TST) with that of IGRA for the diagnosis of LTBI in immunocompromised patients in an intermediate TB burden country where BCG vaccination is mandatory. We conducted a retrospective observational study of patients given the TST and an IGRA, the QuantiFERON-TB Gold In-Tube (QFT-IT), at Severance Hospital, a tertiary hospital in South Korea, from December 2006 to May 2009. Of 211 patients who underwent TST and QFT-IT testing, 117 (55%) were classified as immunocompromised. Significantly fewer immunocompromised than immunocompetent patients had positive TST results, whereas the percentage of positive QFT-IT results was comparable for both groups (21.4% vs. 25.5%). However, indeterminate QFT-IT results were more frequent in immunocompromised than immunocompetent patients.
Agreement between the TST and QFT-IT was fair for the immunocompromised group (\u03ba = 0.38), but moderate agreement was observed for the immunocompetent group (\u03ba = 0.57). Indeterminate QFT-IT results were associated with anaemia, lymphocytopenia, hypoproteinemia, and hypoalbuminemia. In immunocompromised patients, the QFT-IT may be more sensitive than the TST for detection of LTBI, but it resulted in a considerable proportion of indeterminate results. Therefore, both tests may maximise the efficacy of screening for LTBI in immunocompromised patients. Tuberculosis (TB) is the single most important cause of death due to infectious disease worldwide, resulting in ~1.8 million deaths annually. Interferon-\u03b3 release assays detect immune responses to M. tuberculosis (MTB)-specific antigens [14,16,18-20]. In South Korea, treatment of LTBI has been recommended only for children aged <6 years who have been exposed to TB, for HIV-infected individuals, and for patients receiving tumour necrosis factor-\u03b1 inhibitors, after diagnosis of LTBI using the TST. Patients tested for TB infection with the TST and an IGRA, the QuantiFERON-TB Gold In-Tube (QFT-IT), were included in the study. Patients were tested at Severance Hospital, a university-affiliated tertiary care referral hospital, between December 2006 and May 2009. We reviewed patients' medical records, microbiologic results, other laboratory results, and radiographic results. Patients who were diagnosed with active TB during the study period or who had previously been treated for TB were excluded to enable evaluation of LTBI. The protocol for this study was approved by the Ethical Review Committee of Severance Hospital. We included 211 patients who underwent both the TST and QFT-IT.
Most (197) participants were tested for suspicion of active TB during a clinical work-up by attending physicians; nine patients underwent the tests for screening before tumour necrosis factor-\u03b1 inhibitors were administered, and five patients before transplantation. The definition of an immunocompromised condition included the following: 1) diabetes mellitus, 2) undergoing chemotherapy for an underlying malignancy at the time of TST and QFT-IT testing, 3) receipt of either a solid organ transplant or bone marrow transplant, 4) end-stage renal disease on renal replacement therapy, 5) advanced liver cirrhosis with Child-Pugh class C, 6) seropositivity for human immunodeficiency virus, and 7) daily administration of systemic corticosteroids. The TST was performed by injecting a 2-TU dose of purified protein derivative RT23 intradermally into the forearm using the Mantoux technique. The QFT-IT test was performed in the Immunology Laboratory at Severance Hospital according to the recommendations of the manufacturer. Briefly, 1 ml blood was drawn directly into each of three evacuated blood collection tubes: one containing heparin alone (the nil tube); one containing the T cell mitogen phytohemagglutinin (the mitogen tube); and one containing the MTB-specific antigens ESAT-6, CFP-10, and TB7.7 (the TB antigen tube). After mixing, the tubes were incubated upright for 20 h at 37\u00b0C before plasma was harvested, and the plasma samples were then stored frozen at -20\u00b0C until analysis within 5 days. The concentration of interferon-\u03b3 in each plasma sample was determined using the QFT ELISA. Results were calculated using the QFT-IT software provided by the manufacturer. Categorical variables were analysed using Pearson's \u03c72 test or Fisher's exact test; continuous variables were analysed using Mann-Whitney U tests. The concordance between the TST and QFT-IT test results was assessed using \u03ba coefficients and was interpreted according to Landis and Koch's classification. Data are expressed as number (percentage) or median and interquartile range.
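To illustrate how positive, negative, and indeterminate QFT-IT calls arise from the three tubes, the interpretation logic can be sketched as follows; the 0.35, 25%, 8.0 and 0.5 IU/ml cutoffs are the commonly published manufacturer criteria, assumed here rather than quoted from this study:

```python
def interpret_qft_it(nil, tb_antigen, mitogen):
    """Classify a QFT-IT result from the three tube IFN-gamma levels (IU/ml).

    Cutoffs follow the commonly published manufacturer criteria
    (an assumption of this sketch, not data from the study).
    """
    if nil > 8.0:
        return "indeterminate"  # background (nil) response too high
    tb_response = tb_antigen - nil
    # Positive: TB antigen response >= 0.35 IU/ml and >= 25% of the nil value.
    if tb_response >= 0.35 and tb_response >= 0.25 * nil:
        return "positive"
    if mitogen - nil < 0.5:
        return "indeterminate"  # failed mitogen (positive control)
    return "negative"
```

A failed mitogen response, as in `interpret_qft_it(0.1, 0.2, 0.3)`, yields "indeterminate"; this is the kind of result the study found to be associated with anaemia, lymphocytopenia, hypoproteinemia, and hypoalbuminemia.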
P < 0.05 was considered significant. All tests of significance were two-tailed; SPSS v. 11.5 was used for statistical analyses. A total of 211 participants underwent both TST and QFT-IT testing for TB infection. Baseline characteristics of participants are shown in Table . Significantly fewer immunocompromised than immunocompetent patients had positive TST results (P < 0.001). For the QFT-IT, the proportion of positive results was comparable for both groups; however, the proportion of indeterminate results was higher in the immunocompromised group than in the immunocompetent group. The TST showed negative results in 173 patients (82%), whereas the QFT-IT showed positive results in 49 patients (23.2%) and negative results in 128 patients (60.7%). Thirty-four patients (16.1%) had indeterminate QFT-IT assay results. The proportion of positive TST results was higher in the immunocompetent group (27.7%) than in the immunocompromised group (10.3%). Agreement between the TST and QFT-IT was moderate in the immunocompetent group but fair in the immunocompromised group. Agreement between the TST and QFT-IT did not show a remarkable change when we used a 5-mm induration cut-off for the TST in the immunocompromised group (P = 0.057). Among our 211 participants, including 117 (55%) immunocompromised patients, the number of patients with positive QFT-IT results was comparable between the immunocompetent and immunocompromised groups, whereas the TST detected significantly less LTBI in the immunocompromised group than in the immunocompetent group (10.3% vs 27.7%). In addition, for the 177 patients with determinate QFT-IT and TST results, the proportions of positive results for the TST and QFT-IT were significantly different in the immunocompromised group but were comparable in the immunocompetent group. These findings suggest that the sensitivity of the QFT-IT for the diagnosis of LTBI might be higher than that of the TST in immunocompromised patients.
The results of our study are consistent with those of previous studies reporting a higher sensitivity of the IGRA compared to the TST [20,24]. The moderate concordance between the TST and QFT-IT in the immunocompetent group, in contrast with the fair concordance in the immunocompromised group, revealed that more patients had TST-negative/QFT-IT-positive results in the immunocompromised group. Considering the high specificity of the QFT-IT for MTB infection, these discordant results are likely to represent true LTBI missed by the TST. Although our results suggest that the QFT-IT improves the accuracy of the diagnosis of LTBI in immunocompromised groups, the QFT-IT had a considerable proportion of indeterminate results in the immunocompromised group (21.4%). In this study, indeterminate results for the QFT-IT occurred at a relatively high rate compared with previous studies, in which rates ranged from 1-21% [29]. In South Korea, the TB infection rate in 20- to 29-year-olds was 59% in 1995, and the expected prevalence of LTBI in all Koreans is ~30% [25]. To fully understand our results, it is necessary to consider the limitations of this study. First, the accuracy of the TST and QFT-IT for the diagnosis of LTBI could not be directly estimated in our study because there were no methods to confirm the diagnosis. Second, the study did not include long-term follow-up data for progression to active TB in conjunction with the results of the TST and QFT-IT. Therefore, we could not conclusively demonstrate the superiority of the QFT-IT for the diagnosis of LTBI among immunocompromised patients. Third, no data were included regarding the BCG vaccination status of participants, and therefore the effect of BCG vaccination on the TST and QFT-IT results of immunocompetent and immunocompromised patients could not be analysed. However, because BCG vaccination is mandatory in South Korea, and the age distribution of the two groups was similar, we do not expect BCG vaccination to have greatly affected our results.
Fourth, the heterogeneous nature and small number of immunocompromised participants made it difficult to generalize among the various immunocompromised conditions. In conclusion, compared with the TST, the QFT-IT assay seems to be more sensitive for detecting LTBI in immunocompromised patients; however, the QFT-IT gave a considerable proportion of indeterminate results among immunocompromised patients. Therefore, the use of both the TST and QFT-IT could maximize the efficacy of screening for LTBI in immunocompromised patients. IGRA: Interferon-\u03b3 release assay; LTBI: latent tuberculosis infection; TST: tuberculin skin test; QFT-IT: QuantiFERON-TB Gold In-Tube; TB: Tuberculosis; BCG: Bacille Calmette-Guerin; MTB: M. tuberculosis; HIV: human immunodeficiency virus; ESAT-6: early secreted antigen 6; CFP-10: culture filtrate protein 10; QFT ELISA: QuantiFERON-TB Gold enzyme linked immunosorbent assay; OR: odds ratio; CI: confidence interval; BMI: body mass index; ESRD: end-stage renal disease; IQR: interquartile range; WBC: white blood cells. The authors declare that they have no competing interests. EY Kim carried out screening and statistical analysis of the data and participated in the writing of the manuscript. JE Lim carried out screening and acquisition of data. JY Jung participated in the acquisition of data and statistical analysis. JY Son participated in the acquisition of data and analysis and interpretation of data. KJ Lee, YW Yoon and BH Park participated in the acquisition of data, interpretation of data and writing of the manuscript. JW Moon, MS Park and YS Kim participated in the study design and the analysis and interpretation of data. SK Kim and J Chang participated in the study design, analysis and interpretation of data and critical revision of the manuscript for important intellectual content. YA Kang participated in the study design, analysis and interpretation of data and the writing of the manuscript.
All authors read and approved the final manuscript. The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2334/9/207/prepub"} +{"text": "Background. Right ventricular pacing (RVP) has been associated with adverse outcomes, including heart failure and death. Minimizing RVP has been proposed as a therapeutic goal for a variety of pacing devices and indications. Objective. Quantify survival according to frequency of RVP in veterans with pacemakers. Methods. We analyzed electrograms from transtelephonic monitoring of veterans implanted with pacemakers between 1995 and 2005 followed by the Eastern Pacemaker Surveillance Center. We compared all-cause mortality and time to death between patients with less than 20% and more than 80% RVP. Results. Analysis was limited to the 7198 patients with at least six trans-telephonic monitoring records (mean = 21). Average follow-up was 5.3 years. Average age at pacemaker implant was significantly lower among veterans with <20% RVP. An equal proportion of deaths during follow-up was noted for each group: 126/565 patients (22%) with <20% RVP and 1113/4968 patients (22%) with >80% RVP. However, average post-implant survival was 4.3 years with <20% RVP versus 4.7 years with >80% RVP (P < .0001). Conclusions. Greater frequency (>80%) of RVP was not associated with higher mortality in this population of veterans. Those veterans utilizing <20% RVP had a shortened adjusted survival rate (P = .0016). Right ventricular apical pacing (RVP) is commonly employed, but concerns have been raised suggesting that it is associated with worsened mortality in the setting of cardiomyopathy. Several trials have found an association between more frequent RVP and adverse effects, such as atrial fibrillation and congestive heart failure [1\u20134]. The Eastern Pacemaker Surveillance Center is one of two national Veterans Administration centers established for remote telephonic monitoring.
It has served Veterans for 25 years, maintaining a large registry of transtelephonic monitoring records (TTMs) and outcomes. Quality assurance analysis of deidentified data from this population was used to assess for effects of frequent RVP, and potential need for reprogramming pacemakers to minimize RVP. From this registry of over 66,000 patients, we identified those with permanent pacemakers which had active right ventricular leads implanted between January 1, 1995, and December 31, 2005. This group was then limited to those with a minimum number of TTM followups, who had either a very high (>80%) or very low (<20%) frequency of RVP. Frequency was determined by the TTM recordings which lasted 30 seconds before and 15 seconds during magnet application. The percent of paced ventricular complexes on each TTM was noted. The values were averaged for each patient and used as a representation of that patient's frequency of RVP. We reviewed records of 174 patients from the Washington Veterans Affairs Medical Center with 3 or more TTMs to determine the minimum number of prior TTMs needed. The average frequency of TTM-derived RVP was compared to at least 2 independent records of frequency of RVP obtained from implanted pacemaker generated data logs from office-based pacemaker interrogation. A minimum of 6 TTM-derived RVP values correlated well with these device-derived values (R = 0.867), so that the current analysis required veterans with at least 6\u2009TTMs. Our group had previously analyzed outcomes for very high and very low frequency RVP based on those with less than 20% right ventricular pacing (<20% RVP) and those with greater than 80% (>80% RVP). The primary endpoints were all-cause mortality and post pacemaker implant survival, measured as the time from pacemaker insertion to death. We examined univariate relationships between predictors (patient and pacemaker characteristics) and outcomes using Kaplan-Meier analysis (Proc Lifetest in SAS version 9.1).
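The TTM validation step above rests on a simple correlation between TTM-derived and device-logged pacing percentages. A minimal sketch of that kind of check in Python, with entirely hypothetical paired values (not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired observations: average %RVP estimated from TTM strips
# versus %RVP from the device's own data log for the same patients.
ttm_pct = [12, 85, 90, 18, 77, 95, 5, 60]
log_pct = [10, 88, 92, 15, 80, 97, 7, 55]
r = pearson_r(ttm_pct, log_pct)  # near 1 when the two measures agree closely
```

An R of about 0.87, as reported above, would correspond to strong but imperfect agreement between the two measurement routes.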
Multivariate relationships were examined using Cox regression (Proc Phreg in SAS version 9.1). We identified 7198 patients from the Eastern Pacemaker Surveillance Center registry with six or more TTMs (mean = 21\u2009TTMs) during the 11-year time period with either <20% RVP (n = 565) or >80% RVP (n = 4968). This represented 77% of all patients with at least 6\u2009TTMs (P = .062). The overwhelming majority of patients were men (98% in both groups). Single-chamber pacemakers were present in 32% of patients who had <20% RVP and in 18% of patients who had >80% RVP. Sinus node dysfunction was the pacing indication slightly more often in those patients with <20% RVP (66%), compared to those patients with >80% RVP (56%); whereas atrioventricular node block was the pacing indication slightly less often in those patients with <20% RVP (34%) compared to those patients with >80% RVP (44%). Average age at time of pacemaker implant was significantly lower in the group who had <20% RVP versus >80% RVP. Survival time was 6.4 years (95% CI = 6.2\u20136.7 years) in the >80% RVP group. Survival was also assessed at two separate time points post implant: 4 years and 9.5 years. Significantly more patients with >80% RVP were alive at 4 years; however, at 9.5 years survival rates were not significantly different. There was a small percentage of patients in both groups without rate responsiveness. Rate responsiveness was an independent predictor of better survival (P = .01). Since the <20% RVP versus >80% RVP patients differed in age at implant, pacemaker type, and percent with rate responsiveness, it was necessary to do a multivariate analysis using Cox regression in order to determine the independent effect of <20% RVP versus >80% RVP after accounting for age, pacemaker type, and rate responsiveness.
This analysis found that <20% RVP versus >80% RVP, age at implant, and pacemaker type together were significantly related to survival time, and that each of these variables independently had a significant impact on survival time (P = .0016). Pacemaker type, age at implant, and rate responsiveness all had significant univariate relationships with survival time, after accounting for the effects of all other covariates. Having a single-chamber pacemaker was associated with a 28.6% higher likelihood of death during the followup period (P < .001). Absence of rate responsiveness raised the risk of death by 29.6% (P = .01). Each year of age was associated with a 6.0% higher likelihood of death during the 5.3-year average followup period (P < .0001). Having <20% RVP in comparison to >80% RVP was associated with a 36% higher likelihood of death during the followup period (P < .0001). In order to more closely examine the concurrent effects of both percent pacing (<20% RVP versus >80% RVP) and pacemaker type, we coded the four possible combinations of these 2 variables: (1) <20%: single, (2) <20%: dual, (3) >80%: single, and (4) >80%: dual; then we completed both Kaplan-Meier and Cox regression analyses. In the Kaplan-Meier analysis, the group variable of percent pacing and pacemaker type (with 4 levels) had a significant impact on survival. Patients with <20% RVP and single-chamber pacemakers had both the highest percentage of patients who died (30.8%) and the shortest time to death (25% died by 4.5 years post implant). Patients with >80% RVP and single-chamber pacemakers also had a high percentage who died (30.5%), and a moderate survival time (25% died by 5.5 years). The two groups with dual-chamber pacemakers lived longer (25% died by 6.7 years for both <20% RVP and >80% RVP) and fewer died. The age at implant, rate responsiveness, and the combined variable (percent pacing and pacemaker type) made significant contributions to the regression equation.
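The Kaplan-Meier product-limit estimate computed here by SAS Proc Lifetest is simple enough to sketch directly. An illustrative pure-Python version, run on made-up follow-up times rather than the registry data:

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.
    times:  follow-up time for each subject
    events: 1 if death observed, 0 if censored at that time
    Returns a list of (time, S(t)) pairs at each observed event time."""
    at_risk = len(times)
    s = 1.0
    curve = []
    for t in sorted(set(times)):
        deaths = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        if deaths:
            s *= 1 - deaths / at_risk        # multiply in this interval's survival
            curve.append((t, s))
        at_risk -= sum(1 for ti in times if ti == t)  # remove deaths and censored
    return curve

# Toy cohort of 6 subjects (years post implant): 4 deaths, 2 censored.
times  = [1, 2, 2, 3, 4, 5]
events = [1, 1, 0, 1, 0, 1]
curve = kaplan_meier(times, events)  # S(1) = 5/6, S(2) = 2/3 for this toy data
```

Comparing two such curves (here, the four pacing/pacemaker-type groups) is then done with a log-rank or Cox model, as the text describes.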
The <20% RVP and >80% RVP cutoffs, which our group has previously used, were predetermined to allocate large groups with either a relatively low level of pacing or a relatively high level of pacing. Using these two extremes might allow better detection of a pacing effect. We did not use cutoffs of 0% RVP and 100% RVP, since far too few patients would be included in the analysis. Nor were cutoffs such as <50% RVP and >50% RVP used, because these might not discriminate pacing effect: patients in the low-pacing group could have up to 49% pacing, whereas those in the high-pacing group could have as little as 51% pacing. We found that, despite being significantly older at the time of pacemaker insertion, the group that paced more frequently did not have a higher incidence of death. When survival was assessed for the overall group, and at two additional time points, more frequent right ventricular pacing did not shorten survival. The higher frequency of dual-chamber pacing in the >80% RVP group may have contributed to this difference. Despite the apparent survival advantage with more pacing at 4 years, longer followup suggests that this advantage is lost by 9.5 years. The results of even longer followup are not known. Furthermore, of those who died, the <20% RVP group had a significantly shorter survival following pacemaker implant. The suboptimal physiologic effects of right ventricular pacing therefore do not appear to lead to a worse mortality outcome. It appears that, in patients with an appropriate indication for pacemaker therapy, an increased frequency of right ventricle pacing will not hasten death. This should be reassuring to both patients with pacemakers and their physicians. The physiologic effects of right ventricular pacing are well known. These can include atrioventricular dissociation, as well as valvular regurgitation leading to atrial enlargement and remodeling, and ventriculoatrial conduction, which may predispose patients to atrial fibrillation . 
Despite absence of conclusive evidence demonstrating a worse mortality outcome, minimizing right ventricular pacing has become a therapeutic goal in pacemaker patients. In addition to the primary endpoints, this study validated that when substantial transtelephonic monitoring data are available they have a high correlation as a surrogate for actual percentage of ventricular pacing. To our knowledge, this had not been previously demonstrated. This could have implications for future study designs and open new opportunities for research on pacing therapy. Furthermore, it was observed that, among Eastern Pacemaker Surveillance Center patients with six or more TTMs, only 23% of patients received right ventricular pacing between 20% and 80% of the time. This demonstrates that patients in our study tend to fall at the extremes of being frequently paced (>80% RVP) or minimally paced (<20% RVP) while at rest for TTM recording, which could reflect not only pacing indication but also pacemaker programming. A comparison of the extremes was made; the effects of right ventricular pacing in the intermediate group with >20% RVP but <80% RVP were not assessed. Our observations and conclusions should be evaluated recognizing the inherent limitations of a retrospective study design. Differences in pacemaker indication and pacemaker type between the two groups may have contributed to better survival despite older age. Although the regional group analyzed did not show any statistically significant difference in comorbidities or medications used, it did differ slightly from the overall cohort. However, these are not large differences: the regional group is slightly older, has slightly more single-chamber pacing in the <20% RVP group and slightly less in the >80% RVP group, and has slightly more rate-responsive pacing in the >80% RVP group and slightly less in the <20% RVP group.
Regarding the primary survival analysis, it is possible that unrecognized differences in care or patient characteristics account for the lack of a difference in observed survival; however this finding is similar to that of a Swedish study comparing AAI pacing to DDD pacing. Following pacemaker implantation, multiple variables impacted mortality outcomes. Kaplan-Meier analysis comparing survival difference between all patients with <20% right ventricular pacing (22.3%) and those with >80% right ventricular pacing (22.4%) showed no difference (P = .39). When controlled for age at implant, type of pacemaker, and rate responsiveness, more frequent right ventricular pacing was associated with overall 36.2% higher likelihood of survival during five years of followup (P = .0016). Most of the difference in survival occurs during years 3 through 8 post implant. There was little difference prior to year 2. In our study more frequent right ventricular pacing was not followed by an increased or earlier mortality in this unselected veteran population. Despite the potential negative physiologic effects of right ventricular pacing that have been previously demonstrated in select patient groups, our findings suggest that right ventricular pacing per se does not have a deleterious effect on survival. Review of this large clinical database suggests that less frequent right ventricular pacing does not decrease mortality, and that more frequent right ventricular pacing does not increase mortality in an unselected veteran population. Thus from a quality assurance perspective, there does not appear to be a need for reprogramming all patients to minimize the frequency of right ventricular pacing.
Large prospective or case-controlled studies would be needed to validate these findings."} +{"text": "Sputum concentration increases the sensitivity of smear microscopy for the diagnosis of tuberculosis (TB), but few studies have investigated this method in human immunodeficiency virus (HIV)-infected individuals. We performed a prospective, blinded evaluation of direct and concentrated Ziehl-Neelsen smear microscopy on a single early-morning sputum sample in HIV-infected patients with > 2 weeks of cough hospitalized in Kampala, Uganda. Direct and concentrated smear results were compared with results of Lowenstein-Jensen culture. Of 279 participants, 170 (61%) had culture-confirmed TB. The sensitivity of direct and concentrated smear microscopy was not significantly different (p = 0.88). However, when results of both direct and concentrated smears were considered together, sensitivity was significantly increased compared with either method alone and was similar to that of direct smear results from consecutive (spot and early-morning) specimens. Among 109 patients with negative cultures, one had a positive direct smear and 12 had positive concentrated smears. Of these 13 patients, 5 (38%) had improved on TB therapy after two months. Sputum concentration did not increase the sensitivity of light microscopy for TB diagnosis in this HIV-infected population. Given the resource requirements for sputum concentration, additional studies using maximal blinding, high-quality direct microscopy, and a rigorous gold standard should be conducted before universally recommending this technique. Direct sputum smear microscopy is the cornerstone of tuberculosis (TB) diagnosis worldwide.
We therefore performed a prospective, blinded evaluation of direct and concentrated smear microscopy \u2013 performed simultaneously on a single early-morning sputum specimen \u2013 in a population of hospitalized, HIV-infected patients with cough for 2 or more weeks in Kampala, Uganda. Consecutive HIV-infected patients admitted to the medical wards of Mulago Hospital between September 2007 and April 2008 for respiratory illness with cough of at least 2 weeks' duration were eligible for the study. We included patients who provided informed consent and an early-morning sputum specimen for TB diagnosis. We excluded patients who were receiving anti-TB treatment or had clinical evidence of congestive heart failure. The study protocol was approved by the institutional review boards at Makerere University, Mulago Hospital, the Uganda National Council for Sciences and Technology, and the University of California, San Francisco. All patients were tested for HIV infection with a sequential testing algorithm incorporating three rapid enzyme immunoassay kits. For TB diagnosis, patients provided a randomly-timed sputum sample for direct smear microscopy at the time of enrollment. In addition, patients provided an early-morning sputum sample on the morning following admission; this sample was sent for both direct and concentrated smear microscopy (as described below). Patients without any positive smear examinations were offered bronchoscopy with bronchoalveolar lavage (BAL) if the procedure was deemed safe and appropriate by the chest medicine consultant. All sputum and BAL specimens were sent for mycobacterial culture. Patients with suspected TB (determined by the treating ward physician) began treatment with isoniazid, rifampin, ethambutol, and pyrazinamide.
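Once each patient has a smear result and a culture gold standard, the headline accuracy measures reduce to simple counts. A minimal sketch (the ten patients below are hypothetical, not study data):

```python
def sens_spec(test_pos, disease):
    """Sensitivity and specificity of a test against a gold standard.
    test_pos, disease: parallel lists of booleans (e.g. smear result and
    culture-confirmed TB status for each patient)."""
    tp = sum(t and d for t, d in zip(test_pos, disease))
    fn = sum((not t) and d for t, d in zip(test_pos, disease))
    tn = sum((not t) and (not d) for t, d in zip(test_pos, disease))
    fp = sum(t and (not d) for t, d in zip(test_pos, disease))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical cohort: 6 culture-positive and 4 culture-negative patients.
disease  = [True] * 6 + [False] * 4
test_pos = [True, True, True, False, False, True, False, False, False, True]
sens, spec = sens_spec(test_pos, disease)  # 4/6 sensitivity, 3/4 specificity
```

The same two numbers, computed once for direct and once for concentrated smears against the identical culture reference, are what the comparison below is built on.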
Patients were evaluated during an outpatient visit or by telephone interview between two and four months after hospital discharge to assess for clinical/radiographic improvement. Sputum and BAL samples were analyzed at the Uganda National Tuberculosis and Leprosy Programme Reference Laboratory (NTRL). Both direct and concentrated smears were prepared from the same specimen. Direct smears were prepared and stained using the hot Ziehl-Neelsen method (1% carbol-fuchsin dye). NTRL staff, who were also blinded to all clinical information, read all smears within 48 hours of preparation using a standard light microscope (magnification 1000\u00d7). They reported the presence or absence of acid-fast bacilli (AFB) using the WHO/IUATLD scale, with a positive result corresponding to \u2265 1 AFB per 100 high-power fields (HPFs). The primary outcome for our analyses was culture-positive TB, defined as a positive Lowenstein-Jensen (LJ) culture result from the randomly-timed sputum specimen, early morning sputum specimen, or, when available, BAL specimen. We performed two secondary analyses using different \"gold standard\" definitions of TB. First, we restricted the definition of TB to include only patients with a positive culture on the same specimen from which smears were prepared. Second, we broadened the definition of TB to also include patients who improved clinically on empiric TB therapy, as documented by a study medical officer and a chest consultant (W.W. or S.Y.) between two and four months after hospital discharge. We aimed to collect concentrated sputum specimens from 329 patients, in order to provide 90% power to detect a difference between 50% and 60% sensitivity for direct and concentrated sputum smear, respectively, assuming a 2-sided alpha of 0.05, phi (correlation coefficient) of 0.5, and a projected 20% dropout rate due to contamination or failure to perform culture.
Sample size calculations were performed using PS: Power and Sample Size Calculation, version 2.1.31. Analyses were performed using STATA 9.0. Sensitivity and specificity were calculated in reference to the outcomes defined above, and compared between diagnostic strategies using McNemar's test. Bivariate comparisons were made using Fisher's exact test for dichotomous variables and the Wilcoxon rank-sum test for continuous variables. Concordance was measured using the kappa statistic. All p-values were two-sided, with statistical significance defined as p < 0.05. Of 388 eligible patients, 39 (10%) were unable to provide an early-morning sputum specimen (unable or unwilling to spontaneously expectorate), 20 (5%) did not have a concentrated smear performed, 48 (12%) had a contaminated sputum culture, and 2 (1%) did not have culture performed despite the availability of concentrated smear, giving a final sample size of 279 HIV-infected TB suspects. Of these, 143 (51%) were positive for TB on culture of the early-morning sputum specimen used for comparison of direct and concentrated smear results. An additional 27 (10%) were positive on culture of another specimen. Of the remaining 109 patients, 103 (98%) had two or more negative cultures and six had only a single negative culture. Exclusion of these latter six patients did not materially affect results. Patients with at least one positive TB culture had significantly lower CD4+ T-lymphocyte counts than patients with negative TB cultures, but these two groups did not differ significantly by gender, age, education, antiretroviral use, or by mortality at hospital discharge or at two months. First, when we restricted the gold standard to culture results from the same specimen, the specificity of direct smear remained higher than that of concentrated smear. Second, when we expanded the gold standard to include patients with negative cultures but clinical response to TB therapy, the sensitivity of the two methods remained similar.
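McNemar's test, used above to compare the two smear methods on the same patients, depends only on the discordant pairs. A minimal sketch without continuity correction (the counts below are illustrative, not the study's):

```python
import math

def mcnemar(b, c):
    """McNemar chi-square test for paired binary results.
    b = pairs positive only by method 1, c = pairs positive only by method 2.
    Returns (statistic, two-sided p-value)."""
    stat = (b - c) ** 2 / (b + c)
    # chi-square with 1 df: survival function via the complementary error
    # function, P(X > x) = erfc(sqrt(x / 2))
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Illustrative discordant counts: 5 patients positive only on direct smear,
# 15 positive only on concentrated smear.
stat, p = mcnemar(5, 15)  # stat = 5.0, p about 0.025
```

Concordant pairs (positive or negative by both methods) drop out of the statistic entirely, which is why the test is appropriate when both tests are run on the same specimens.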
However, the difference in specificity was no longer statistically significant because 5 (42%) of the 12 patients with positive concentrated smears but negative cultures had documented clinical improvement on TB therapy at 2-month follow-up. Although direct and concentrated smear microscopy had similar sensitivities, they detected different patients, even though both techniques were performed on the same sputum specimen: 39% (42/109) of all smear-positive patients with culture-confirmed TB were only positive by a single method. Estimates of diagnostic performance are known to vary between ambulatory and hospital settings. However, the choice of study population is less likely to impact a comparison between two diagnostic techniques. In addition, given the rigorous training required of microscopists at the Uganda NTRL, it is unlikely that laboratory inexperience explains the results of the present study. Finally, in order to better replicate actual test conditions, internal quality assurance was not performed during the study period. Though we would not expect reliability to differentially affect direct versus concentrated sputum smear results, we were unable to quantify inter-reader and intra-reader agreement. In conclusion, we failed to find a difference in sensitivity between direct and concentrated sputum smear microscopy performed in a national reference laboratory serving an HIV-infected hospitalized adult population. Before widely recommending sputum concentration, additional field evaluations that demonstrate benefit when incorporating strict blinding, high quality direct smear microscopy, and a clear gold standard are needed. Such studies should also investigate whether simpler modifications can similarly increase sensitivity and cost-effectiveness.
Ultimately, modifications in smear microscopy may increase the yield of TB diagnosis only marginally, a possibility which emphasizes the need for development and testing of novel rapid diagnostic technologies. TB: tuberculosis; HIV: human immunodeficiency virus; CI: confidence interval; BAL: bronchoalveolar lavage; NTRL: National Tuberculosis and Leprosy Programme Reference Laboratory; NALC: N-acetyl-L-cysteine; NaOH: sodium hydroxide; AFB: acid-fast bacilli; WHO: World Health Organization; IUATLD: International Union Against Tuberculosis and Lung Disease; HPF: high-power field. The authors declare that they have no competing interests. AC and JLD participated in study design, data collection, statistical analysis, and drafting of the manuscript. DWD performed the primary statistical analysis and drafted the initial manuscript. WW, SY, MJ, and JM participated in study design, data collection, and drafting of the manuscript. PCH and LH participated in study design and drafting of the manuscript. All authors approved the final manuscript. The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2334/9/53/prepub"} +{"text": "To evaluate the usefulness of the modified lateral pillar classification as a prognostic factor in Legg-Calv\u00e9-Perthes disease (LCPD). Thirty-nine patients diagnosed with lateral pillar C in LCPD from May, 1977, to October, 2001 were reviewed, and their skeletal maturity was followed. The mean follow-up duration was 12 years and 7 months. Lateral pillar C classification was divided into C1 (50-75% collapse) and C2 (> 75%). All radiological and clinical prognostic factors were evaluated. The final results were evaluated according to the Stulberg classification. Patients with more head-at-risk signs had significantly poorer outcomes. Twenty-one and 18 of the affected hips were in groups C1 and C2, respectively.
According to the Stulberg classification, the final results of group C1 were better than those of C2 (p = 0.002). The modified lateral pillar classification has significant value for predicting the prognosis of LCPD. There are a variety of options for treating Legg-Calv\u00e9-Perthes disease (LCPD) ranging from conservative to surgical treatment. The treatment modality is chosen according to the degree of involvement of the femoral head and the judgment of the surgeon. Unfortunately, there are no established prognostic factors for LCPD that may be helpful in the early phases of the disease or in the initial diagnosis, and the reliability of those suggested by other authors is controversial. Between May 1977 and October 2001, 630 patients were treated for LCPD at our institution. Of these, 39 patients with type C hips, who could be followed up until skeletal maturity, were enrolled in this study. There were 33 males (84.6%) and 6 females (15.4%). The affected side was the right in 19 (48.7%) cases and the left in 20 cases (51.3%). The mean age at the onset of the disease and at the last follow-up was 7.3 years and 20.0 years, respectively. The mean follow-up period ranging from the end of the pathological process to skeletal maturity was 12 years and 7 months. Based on the plain radiographs of the femoral head taken at the initial diagnosis, 18 (46.2%) hips were in the initial stage, 11 (28.2%) were in the fragmentation stage, and 10 (25.6%) were in the late fragmentation stage. Surgery was performed on 7 of the 18 initial stage patients. Based on the plain radiographs, 1 procedure was performed in the early stage while the other 6 were performed in the later stages. Nine cases were treated conservatively using an aid. When these patients were subdivided into 2 groups according to our modified lateral pillar classification, there were no differences in the use of surgery or aid between the groups.
The surgical options were taken after the initial stage of the disease in 17 (43.6%) cases: proximal femoral varus osteotomy in 15 cases, Salter innominate osteotomy in 1, and fusion of the greater trochanteric epiphyseal plate in 1. Conservative treatments using an aid were performed in the remaining 22 (56.4%) cases. The age at the onset of the disease, gender, and surgical experience were investigated clinically, and the associations between these findings and the final outcomes were assessed according to the Stulberg classification at skeletal maturity. The modified lateral pillar classification system was used to divide the hips into two types, C1 and C2. Type C1 hips were defined as those with 50-75% collapse of the lateral pillar and type C2 hips as those with \u2265 75% collapse of the lateral pillar. The final outcomes were assessed by evaluating the anteroposterior pelvic radiographs taken at skeletal maturity according to the Stulberg classification system. Statistical analyses were performed using SPSS ver. 12.0. A chi-square test was performed to determine if the prognostic factors were associated with the final outcomes assessed according to the Stulberg classification system. A Fisher's exact test was used when an expected value in the crosstabulations was < 5. A p-value < 0.05 was considered significant. The final results of group C1 were better than those of group C2 (p = 0.002) because the number of patients with poor results at the last follow-up radiographic assessment was significantly higher in the C2 group (p = 0.014). With regard to the final outcomes excluding the influence of the head-at-risk signs, the C2 group had significantly poorer outcomes than the C1 group. Patients with type C hips according to the Herring classification based on the plain pelvic radiographs taken at the fragmentation stage were included for analysis. The patients were divided into two groups according to the modified lateral pillar classification system.
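The Fisher's exact test used above for sparse crosstabulations can be computed for a 2x2 table directly from the hypergeometric distribution. A sketch in Python (the table values are illustrative, not the study's counts):

```python
from math import comb

def fisher_exact(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum the probabilities of all tables with the same margins that are no
    more likely than the observed one."""
    r1, r2 = a + b, c + d       # row totals
    c1 = a + c                  # first column total
    n = r1 + r2
    total = comb(n, c1)

    def p_table(x):             # probability of the table with x top-left
        return comb(r1, x) * comb(r2, c1 - x) / total

    p_obs = p_table(a)
    lo = max(0, c1 - r2)
    hi = min(r1, c1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs + 1e-12)

# Illustrative table: good vs. poor outcome in two small groups of hips.
p = fisher_exact(3, 1, 1, 3)  # 34/70, about 0.486
```

With such small cell counts the chi-square approximation is unreliable, which is exactly the situation (expected value < 5) the text reserves Fisher's test for.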
Of the 39 cases, there were 21 (54%) and 18 (46%) cases of C1 and C2 hips, respectively. The age at the onset of the disease, Catterall group, and treatment method were similar in the two groups (p > 0.05). More head-at-risk signs were observed in the C2 group (p < 0.05). With regard to the correlation between each of the radiographic signs and the final outcomes, Gage's sign, lateral subluxation of the femoral head, and horizontal growth plate, but not the remaining two signs, were associated with the final outcomes. The patients were divided into those who were < 6 years old and those who were \u2265 6 years old at disease onset. A chi-square test was used to assess the correlation. No association between the age at the onset and the Stulberg classification was found. However, when the patients were divided into 2 groups using 6, 7, 8, and 9 years of age as the dividing point, respectively, those who were < 6 years old at the time of onset had good results (Stulberg I and II) (p = 0.039). There was no association observed in the remaining age groups. Other prognostic factors, such as gender and surgical experience, were not related to the Stulberg classification. Various clinical and radiological prognostic factors have been suggested by many authors, even though their reliability is controversial. The clinical factors include the age at the onset of the disease, reduced mobility of the hip joint, and obesity. The radiological factors are the extent of femoral head involvement, the location of the affected area, and head-at-risk signs. However, the radiological signs do not often appear in the early stages, making them less useful in the initial diagnosis. In addition, they show poor interobserver agreement. An accurate evaluation of the prognostic factors is essential for choosing a proper treatment modality between various LCPD treatment options.
The most common classification systems for moderate LCPD include the Catterall classification and the lateral pillar classification. Generally, lateral pillar type C hips result in the worst outcomes. Catterall highlighted the extent of the deformity of the femoral head as an important prognostic factor. Herring et al. reported that the site of the deformity was a more important prognostic factor in LCPD patients than the extent of the deformity, and the lateral pillar classification was reported to have high interobserver reliability and was helpful for prognostic judgment. Our modified lateral pillar classification was applied to type C hips with a poor prognosis to analyze the final clinical outcomes. Type C2 hips had a poorer prognosis than type C1 hips. In addition, 3 of the head-at-risk signs were found to be associated with the poor results rated according to the Stulberg classification, and there was a correlation between the number of those signs and the Stulberg classification. In other words, the prognostic factors significantly associated with the outcome of LCPD treatment included our modified lateral pillar classification, the number of the radiographic head-at-risk signs, Gage sign, lateral subluxation of the femoral head, and horizontal growth plate. Poor results could be expected in type C2 hips and in those with \u2265 2 radiographic head-at-risk signs. In conclusion, the number of radiographic head-at-risk signs was associated with the prognosis in type C LCPD patients. Using our modified lateral pillar classification, poorer final outcomes were observed in C2 hips with more severe symptoms. Therefore, subcategorizing type C hips, which are generally known to have a poor prognosis, will be helpful in predicting the prognosis."} +{"text": "A classification of lumbosacral spondylolisthesis has been proposed recently.
This classification describes eight distinct types of spondylolisthesis based on the slip grade, the degree of dysplasia, and the sagittal sacro-pelvic balance. The objectives of this study are to assess the reliability of this classification and to propose a new and refined classification.Standing posteroanterior and lateral radiographs of the spine and pelvis of 40 subjects with lumbosacral spondylolisthesis were reviewed twice by six spine surgeons. Each radiograph was classified based on the slip grade, the degree of dysplasia, and the sagittal sacro-pelvic balance. No measurements from the radiographs were allowed. Intra- and inter-observer reliability was assessed using kappa coefficients. A refined classification is proposed based on the reliability study.All eight types of spondylolisthesis described in the original classification were identified. Overall intra- and inter-observer agreement was respectively 76.7% (kappa: 0.72) and 57.0% (kappa: 0.49). The specific intra-observer agreement was 97.1% (kappa: 0.94), 85.0% (kappa: 0.69) and 88.8% (kappa: 0.85) with respect to the slip grade, the degree of dysplasia, and the sacro-pelvic balance, respectively. The specific inter-observer agreement was 95.2% (kappa: 0.90), 72.2% (kappa: 0.43) and 77.2% (kappa: 0.69) with respect to the slip grade, the degree of dysplasia, and the sacro-pelvic balance, respectively.This study confirmed that surgeons can classify radiographic findings into all eight types of spondylolisthesis. The intra-observer reliability was substantial, while the inter-observer reliability was moderate mainly due to the difficulty in distinguishing between low- and high-dysplasia. A refined classification excluding the assessment of dysplasia, while incorporating the assessment of the slip grade, sacro-pelvic balance and global spino-pelvic balance is proposed, and now includes five types of lumbosacral spondylolisthesis. 
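The kappa coefficients reported above correct raw percentage agreement for the agreement expected by chance. Cohen's kappa for two raters can be sketched as follows (the ratings are illustrative, not the study's observations):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters assigning categorical labels."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n        # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    labels = set(r1) | set(r2)
    pe = sum(c1[k] * c2[k] for k in labels) / n ** 2    # chance agreement
    return (po - pe) / (1 - pe)

# Illustrative: two surgeons rating 10 radiographs as high- or low-dysplastic.
s1 = ["high", "high", "low", "low", "low", "high", "low", "low", "high", "low"]
s2 = ["high", "low", "low", "low", "low", "high", "low", "high", "high", "low"]
k = cohens_kappa(s1, s2)  # about 0.58, moderate on the Landis and Koch scale
```

For more than two raters, as in the six-surgeon study here, a generalization such as Fleiss' kappa is used instead, but the chance-correction idea is the same.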
Spondylolisthesis has been commonly described using the classification system developed by Wiltse et al. which di . Recently, Mac-Thiong and Labelle proposed . Using a simplified version of the original classification Figure , good in . The radiological files of all patients with lumbosacral developmental spondylolisthesis seen for the first time at the spine clinic of a paediatric hospital between July 1st 2001 and March 1st 2006 were reviewed. Patients were defined as potential subjects for inclusion in the study if they had postero-anterior (PA) and lateral (LAT) standing radiographs of the spine and pelvis showing both femoral heads. All subjects with a history or clinical signs of hip, pelvic or lower limb disorder were excluded. For patients who had undergone surgical treatment of spondylolisthesis, only the preoperative radiographs were considered. All 18 patients with high-grade spondylolisthesis were included. Twenty-two patients were randomly selected from the remaining 108 patients with low-grade spondylolisthesis. Thus a total of 40 subjects with spondylolisthesis (18 high-grade and 22 low-grade) were available for this study. All radiographs used in the current study were retrieved from the PACS system by an independent observer and there was no header or any information on the radiographs in order to minimize potential sources of bias. The mean percentage of slip was 16 \u00b1 8% (range: 4\u201344%) for subjects with low-grade spondylolisthesis and 80 \u00b1 17% (range: 53\u2013100%) for subjects with high-grade slips. The mean age was 14.7 \u00b1 2.9 years (range: 7.9\u201320.0 years). Six spine surgeons from four different institutions classified all 40 subjects twice, based on digital standing PA and LAT radiographs of the spine and pelvis viewed on a computer screen. The observers were allowed to view the radiographs with the software of their choice. 
They classified the spondylolisthesis for each subject into one of the eight types described by the classification system provided in Figure . Statistical analysis was performed by a biostatistician from PhDx Systems Inc. Classification reliability was assessed by calculating the intra- and inter-observer percentage of agreement, as well as the kappa coefficients. The resulting kappa values were interpreted based on the recommendations of Landis and Koch as well as substantial inter-observer agreement (kappa: 0.69). In the 36 cases wherein all observers agreed on grade, seventeen had agreement between all six observers, and fourteen had agreement between five observers. The remaining five (four low-grade and one high-grade) cases showed agreement for only four observers. Figure The lowest reliability was associated with the degree of dysplasia (low- vs. high-dysplastic), wherein intra-observer agreement was substantial (kappa: 0.69) but inter-observer agreement was only moderate (kappa: 0.43). Seventeen cases resulted in agreement between all six observers, and eight cases had agreement for five observers. However, 15 subjects resulted in agreement for only three or four of the six observers, indicating that in these cases agreement was only related to chance. Of those 15 subjects, nine had low-grade spondylolisthesis and six had high-grade spondylolisthesis. Figure This study evaluated the reliability of a classification previously proposed for lumbosacral spondylolisthesis ,22. Firs It is expected that the clinical reliability of the classification could be improved if direct measurements on the radiographs were made. Indeed, the fact that no measurements were allowed in this study may explain most of the cases for which there was disagreement concerning slip grades around 50% Figure , althoug Furthermore, this study demonstrated that most of the disagreement in the classification centered on the determination of the degree of dysplasia when using only qualitative criteria. 
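The kappa values in this study are interpreted with the Landis and Koch recommendations. As a minimal sketch (the function names are mine, not the paper's), the chance-corrected agreement and its descriptive category can be computed as:

```python
def cohens_kappa(p_observed, p_expected):
    """Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e)."""
    return (p_observed - p_expected) / (1.0 - p_expected)

def landis_koch(kappa):
    """Map a kappa value to the Landis & Koch (1977) descriptive category."""
    if kappa < 0.00:
        return "poor"
    if kappa <= 0.20:
        return "slight"
    if kappa <= 0.40:
        return "fair"
    if kappa <= 0.60:
        return "moderate"
    if kappa <= 0.80:
        return "substantial"
    return "almost perfect"
```

On the values reported here, 0.72 and 0.69 map to "substantial", 0.49 and 0.43 to "moderate", and 0.94 and 0.90 to "almost perfect", which matches the wording used in the text.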
Although intra-observer agreement for the degree of dysplasia was substantial (kappa: 0.69), inter-observer agreement was only moderate (kappa: 0.43). As seen in Figure 9, inter- The reliability for classifying patients could also be improved by using Ferguson and lateral lumbosacral junction views, in addition to the PA and LAT radiographs of the complete spine and pelvis that were used in this study. The reduced visibility of the lumbosacral junction on PA and LAT radiographs of the complete spine and pelvis might explain part of the discrepancies in this study, especially for high-grade subjects. With a view to developing a surgical treatment algorithm for lumbosacral spondylolisthesis, it is important to propose a classification system that is simple to use clinically and highly reliable. Due to the moderate inter-observer reliability for the assessment of the degree of dysplasia, and also due to the difficulty of defining reliable quantitative criteria for dysplasia, we have decided to exclude the assessment of dysplasia from the classification system. In addition, recent work from the SDSG has led us to modify the determination of spino-pelvic balance in low-grade spondylolisthesis. A study conducted on 257 patients with low-grade spondylolisthesis and using K-means cluster analysis of sacro-pelvic parameters showed that these patients were divided into two distinct groups: a group with normal or near-normal pelvic incidence (< 60\u00b0) and a group with high pelvic incidence (\u2265 60\u00b0). The low PI/low SS and high PI/high SS groups mentioned in the original classification system are in fact subtypes of the two groups recently described (pelvic incidence < 60\u00b0 vs. pelvic incidence \u2265 60\u00b0). Also, it is now recognized that preservation or restoration of an adequate global sagittal balance is of prime importance in the management of spinal deformity, so that assessment of global balance has been introduced into the SDSG classification. 
Two studies ,30 have . The revised classification of lumbosacral spondylolisthesis supported by the SDSG is based on three important characteristics that can be assessed from the preoperative imaging studies: 1) the grade of slip, 2) the sacro-pelvic balance, and 3) the global spino-pelvic balance. Accordingly, five different types of spondylolisthesis have been identified (Table ). In high-grade spondylolisthesis, sacro-pelvic balance is assessed based on the findings of Hresko et al. Figure . Each su This study evaluated the reliability of a classification previously proposed for lumbosacral spondylolisthesis. The reliability study showed suboptimal inter-observer reliability regarding the assessment of dysplasia, but excellent reliability for the slip grade and sacro-pelvic balance. A refined classification excluding the assessment of dysplasia, while incorporating the assessment of the slip grade, sacro-pelvic balance and global spino-pelvic balance, is proposed, and now includes five types of lumbosacral spondylolisthesis. In addition, the reliability of the classification is expected to increase with direct measurements from the radiographs. This research was assisted by support from the Spinal Deformity Study Group. This research was funded by an educational/research grant from Medtronic Sofamor Danek. JMMT and HL were responsible for the design of the study, as well as the data analysis. All authors participated in the classification of all cases, as well as in the preparation and approval of the final manuscript."} +{"text": "Management of repeated implantation failure despite transfer of good-quality embryos still remains a dilemma for ART specialists. 
Scraping of the endometrium in the nontransfer cycle has been shown to improve the pregnancy rate in the subsequent IVF/ET cycle in recent studies. The objective of this randomized controlled trial (RCT) was to determine whether endometrial injury caused by Pipelle sampling in the nontransfer cycle could improve the probability of pregnancy in the subsequent IVF cycle in patients who had a previous failed IVF outcome. Tertiary assisted conception center. Randomized controlled study. 100 eligible patients with previous failed IVF despite transfer of good-quality embryos were randomly allocated to the intervention and control groups. In the intervention group, Pipelle endometrial sampling was done twice: once in the follicular phase and again in the luteal phase in the cycle preceding the embryo transfer cycle. The primary outcome measure was live birth rate. The secondary outcome measures were implantation and clinical pregnancy rates. The live birth rate was significantly higher in the intervention group compared to the control group (22.4% vs 9.8%, P = 0.04). The clinical pregnancy rate in the intervention group was 32.7%, while that in the control group was 13.7%, which was also statistically significant (P = 0.01). The implantation rate was significantly higher in the intervention group as compared to controls (13.07% vs 7.1%, P = 0.04). Endometrial injury in the nontransfer cycle improves the live birth rate, clinical pregnancy and implantation rates in the subsequent IVF-ET cycle in patients with previous unsuccessful IVF cycles. For implantation to occur, a genetically normal blastocyst should hatch, appose, adhere, penetrate, and finally invade a well-synchronized endometrium, under the influence of estrogens and progesterone. 
Recently, a number of locally acting molecules including growth factors, cytokines, matrix metalloproteinases (MMPs), adhesion molecules, extracellular matrix components, and homeobox element containing genes, which mediate the action of the steroid hormones on the endometrium, have been discovered. The treatment of repeated implantation failure in spite of transfer of good-quality embryos continues to be a dilemma. Barash et al. were the The objective of this RCT was to test the hypothesis that endometrial injury in the nontransfer cycle could improve the probability of pregnancy in the subsequent IVF cycle in patients who had a previous failed IVF outcome. This is a prospective, open-label, randomized controlled trial involving patients undergoing IVF treatment at our center. The subjects were recruited in the period between May 2007 and July 2008. Approval for the study was obtained from the Institutional Review Board. We enrolled patients undergoing fresh autologous IVF-ET if they fulfilled all of the following inclusion criteria: patients with at least one previous failed IVF-ET/ICSI cycle undergoing fresh autologous IVF/ICSI cycles; good responders in the previous IVF cycle; and age less than or equal to 37 years. We excluded patients with the following factors found to have a negative impact on implantation, namely: patients detected to have endometrial tuberculosis in the past, including those treated with antituberculous treatment; presence of an intramural fibroid distorting the endometrial cavity, submucous myoma, or Asherman's syndrome; and presence of a sonographically detected hydrosalpinx. We defined \u201cgood responders\u201d as the patients who had developed at least four good-quality embryos (grade 1 and 2 of Veeck's grading) in the previous IVF cycles. Patients found eligible for the study were offered to undergo endometrial sampling in the cycle prior to the embryo transfer cycle. 
After obtaining an informed consent, those willing to participate were randomized to either the intervention group or the control group at the time of hysteroscopy. The random allocation was based on computer-generated random numbers, sealed in consecutively numbered opaque envelopes, which were picked up by a nurse outside the operation theater. The study was not blinded, because the patients as well as the clinicians were aware of the treatment group. According to our internal protocol, all patients were evaluated with baseline day 3 FSH, antral follicle count, and a hysteroscopy on the 7th to 10th day of the cycle prior to the embryo transfer cycle. Records of previous stimulation protocols and embryology details were reviewed. The patients in the intervention group underwent endometrial sampling twice, with a biopsy catheter, first on the day of hysteroscopy, and once again between the 24th to 25th day of the nontransfer cycle on an outpatient basis. After the introduction of the Pipelle into the uterine cavity, it was rotated 360 degrees and moved up and down four times after withdrawing the piston. All patients were prescribed Diclofenac 500 mg 30 minutes prior to the procedure. Doxycyclin 100 mg was prescribed twice daily for 7 days after both the procedures. In order to avoid the possible confounding effect of the antibiotic on IVF success, the control group was also prescribed Doxycyclin twice. Nonhormonal contraception was advised to the patients in both the groups in the nontransfer cycle. Each woman recruited in the study underwent the same COH protocol that she had undergone in the previous IVF cycles, which included one of the three regimens, namely, long midluteal phase GnRH agonist suppression, GnRH antagonist, or the GnRH agonist short protocol. In our unit, the protocols are selected by the primary physician depending on age, antral follicle count, and serum FSH levels. 
The GnRH agonist midluteal downregulation protocols are preferred for age groups \u226435 years, FSH <8 IU/l, and a combined antral follicle count \u226510. The short flare and antagonist protocols are preferred for age groups >35 years, FSH >8 IU/L, and antral follicle count <10. In the long protocol, patients were downregulated with 0.5 mg GnRH agonist for a period of 10-14 days, following which the dose was reduced to 0.2 mg and continued till hCG. After confirming adequate downregulation, FSH, in a dose ranging from 150 to 250 IU, was commenced. In the antagonist group, flexible, multiple-dose regimens were used. The GnRH antagonist was started at a dose of 0.25 mg when at least one follicle reached 14 mm. Both Recagon\u00ae and Orgalutran\u00ae were continued till the ovulation trigger. The patients allocated to the short protocol were administered GnRH agonist 0.5 mg from day 2 onward, continued till the ovulation trigger. Gonadotropins were started from day 3 onward. Subsequent monitoring was the same as in the long protocol. Women were scheduled for oocyte retrieval when at least three follicles reached a size of 18 mm. Oocyte retrieval was performed by the transvaginal route under ultrasound guidance, 35 hr after the hCG trigger with 5000 IU, with the patient under conscious sedation. The morphology of each aspirated oocyte was noted after denudation with hyaluronidase. 
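The unit's stated criteria for choosing a stimulation protocol amount to a simple decision rule. The sketch below encodes only what the text states; the function name and the fall-through for mixed profiles (e.g., age \u226435 with a low antral follicle count) are my assumptions, since the text does not say how borderline cases were resolved:

```python
def select_coh_protocol(age, fsh_iu_per_l, antral_follicle_count):
    """Illustrative sketch of the unit's stated COH protocol-selection rule."""
    # Long protocol criteria as quoted: age <= 35, FSH < 8 IU/L, AFC >= 10.
    if age <= 35 and fsh_iu_per_l < 8 and antral_follicle_count >= 10:
        return "long midluteal GnRH agonist"
    # Short flare / antagonist criteria as quoted: age > 35, FSH > 8, AFC < 10.
    if age > 35 and fsh_iu_per_l > 8 and antral_follicle_count < 10:
        return "short flare or GnRH antagonist"
    # Mixed or boundary profiles are not specified in the text.
    return "clinician judgment"
```

A patient aged 32 with FSH 6 IU/L and an AFC of 12 would fall under the long protocol, while one aged 38 with FSH 9 IU/L and an AFC of 8 would fall under the short flare or antagonist protocols.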
ICSI was performed for severe male factor, while a combination of ICSI and conventional IVF was performed on some patients with unexplained infertility. The embryos were classified according to Veeck's grading as follows: Grade 1 - preembryos with blastomeres of equal size and no cytoplasmic fragmentation; Grade 2 - preembryos with blastomeres of equal size with cytoplasmic fragmentation (\u226415% of the total embryonic volume); Grade 3 - uneven blastomeres with no fragmentation; Grade 4 - uneven blastomeres with gross fragmentation (\u226520% fragments). Grade 1 or 2 embryos were considered to be good-quality embryos. Embryo transfer was performed with a Wallace\u00ae catheter on day 3 atraumatically under ultrasound guidance by a senior consultant. In our center, we transfer up to three good-quality embryos in the age group \u226435 years and up to four embryos in those above 35 years. Assisted hatching was not done in any of the patients. The luteal phase was supported with 600 mg/day of micronized progesterone vaginally till 12 weeks of pregnancy. \u03b2-hCG was determined 2 weeks after the embryo transfer. The primary outcome measure was live birth rate. The secondary outcome measures were implantation and clinical pregnancy rates. Live birth rate was calculated as the ratio of the number of patients with live births divided by the number of patients who had embryo transfer. Clinical pregnancy was defined as ultrasound evidence of fetal heart beat. Clinical pregnancy rate was calculated as the number of patients with clinical pregnancy divided by the number of patients who had embryo transfer. The implantation rate was defined as the number of gestational sacs as seen on transvaginal sonography divided by the number of embryos transferred. An independent sample t-test was used for continuous variables that were normally distributed. 
P values <0.05 were considered significant. Statistical analyses were performed using the Statistical Package for the Social Sciences. The Chi-square test was used for categorical variables. (n = 28) or failure to meet the inclusion criteria (n = 6). Thus one hundred patients were randomized to the two groups, with 49 women in the intervention group and 51 in the control. One woman in the intervention group had the Pipelle biopsy only once due to miscommunication. We performed an intention to treat analysis, and thus 49 women were analyzed in the intervention group and 50 in the control. The participant flow is given in . The baseline characteristics of patients and their outcome of controlled ovarian hyperstimulation are given in . All patients were monitored for evidence of infection following the Pipelle biopsy. None of our patients developed infection. Also, other than spotting per vaginum for one to two days, there was no disturbance in the menstrual cycle. The live birth rate was significantly higher in the intervention group compared to the control group (22.4% vs 9.8%, P = 0.04). The clinical pregnancy rate was also significantly higher in the intervention group (32.7% vs 13.7%, P = 0.01). The implantation rate was significantly higher in the intervention group as compared to controls (13.07% vs 7.1%). There were two twin pregnancies and one triplet pregnancy in the intervention group, while in the control group there was one twin and one triplet pregnancy. There were five spontaneous miscarriages, all after the appearance of a fetal pole, in the intervention group, and two in the control group. Barash et al. published a prospective case-control study of 45 \u201cgood responder\u201d subjects who failed to conceive during one or more IVF-ET cycles. Endometrial samples were taken on days 8, 12, 21, and 26 of the menstrual cycle prior to their next IVF-ET. 
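The outcome-rate definitions and the chi-square comparison used in this trial can be checked numerically. In the sketch below, the event counts (11 and 5 live births, 16 and 7 clinical pregnancies) are back-calculated from the reported percentages and group sizes of 49 and 51 — an assumption, since this passage reports only the rates — and the p-value for the 2\u00d72 table uses the exact 1-degree-of-freedom identity rather than any particular statistics package:

```python
import math

def rate(events, denominator):
    """Outcome rate as a percentage (events / patients analyzed), one decimal."""
    return round(100.0 * events / denominator, 1)

def chi2_2x2(a, b, c, d):
    """Pearson chi-square (1 df, no continuity correction) for [[a, b], [c, d]].
    For 1 df, the p-value satisfies p = erfc(sqrt(chi2 / 2))."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return chi2, math.erfc(math.sqrt(chi2 / 2))

# Back-calculated counts (assumed): intervention n=49, control n=51.
live_birth = (rate(11, 49), rate(5, 51))          # reported as 22.4% vs 9.8%
clinical_pregnancy = (rate(16, 49), rate(7, 51))  # reported as 32.7% vs 13.7%
chi2, p = chi2_2x2(16, 49 - 16, 7, 51 - 7)        # clinical pregnancy table
```

With these assumed counts the uncorrected chi-square for clinical pregnancy is significant at the 0.05 level, consistent with the paper's conclusion; the exact P depends on the true counts and on whether a continuity correction was applied.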
They reported a significantly doubled clinical pregnancy rate (66.7% vs 30.3%), implantation rate (22.7% vs 14.2%), and live birth rate (48.9% vs 22.5%). Subsequently, Raziel et al. reported higher implantation (P = 0.02), clinical pregnancy (30% vs 12%, P = 0.02), and ongoing pregnancy rates (22% vs 8%, P = 0.07) in the intervention groups. Our study was a randomized controlled trial, unlike the other two studies. The other difference in the methodology was the number of times endometrial scraping was performed in each patient. Barash et al. performed endometrial biopsies four times in the spontaneous cycle, while Raziel et al. performed them twice, on days 21 and 26. We felt it would be appropriate to perform this experimental procedure twice, once in the follicular phase and once in the luteal phase, mainly to make it more acceptable to our patients. The scientific explanation of the effect of endometrial injury is not yet fully clear. It was observed by Loeb in 1907 . Zhou et al. reported higher implantation (P < 0.05), clinical pregnancy rate (48.3% vs 27.8%, P < 0.05), and live birth rates (41.6% vs 22.9%, P < 0.05). In addition, 10 endometrial biopsy samples obtained on day 10 of the COH cycle were processed individually for gene chip hybridization. They found a total of 218 genes showing a statistically significantly different expression when comparing the pregnant and nonpregnant patients. Of these, 41 were upregulated and 177 were downregulated. The genes for laminin alpha 4 and MMP1 were upregulated, while that of integrin alpha 6 was downregulated. While the exact function of laminin alpha 4 is not known, MMP1 and integrin alpha 6 play an important role in implantation. More recently, Kalma et al. demonstrated . Spandorfer et al. studied . There is a possibility that the diagnostic hysteroscopy could have caused mild endometrial injury in the control group as well. 
However, we believe that the injury induced by the Pipelle is deeper; moreover, the injury inflicted in the luteal phase in the intervention group could have contributed to the improvement in the pregnancy outcome. The shortcoming of this trial is the small sample size, and hence the power of the study as far as the live birth rate is concerned is 52%. A larger study needs to be done to verify the findings and improve the statistical power. Here we have demonstrated through a randomized controlled trial that the live birth, clinical pregnancy, and implantation rates significantly increase after endometrial scraping in the nontransfer cycle in patients with good-quality embryos. This phenomenon could be due to injury-induced endometrial decidualization secondary to upregulation of genes encoding locally acting mediators. Pipelle endometrial sampling is an easy and safe outpatient procedure. This certainly needs further investigation."} +{"text": "Understanding how the Neurospora crassa met-2+ gene, which encodes cystathionine \u03b2-lyase, is regulated is important in determining the basis of the cellular control of transsulfuration. The aim of this study was to determine the nature of a potential regulatory connection of met-2+ to the Neurospora sulfur regulatory network. Cystathionine \u03b2-lyase performs an essential role in the transsulfuration pathway by its primary reaction of forming homocysteine from cystathionine. The cystathionine \u03b2-lyase (met-2+) gene was cloned by the identification of a cosmid genomic clone capable of transforming a met-2 mutant to methionine prototrophy and subsequently characterized. The gene contains a single intron and encodes a protein of 457 amino acids with conserved residues predicted to be important for catalysis and pyridoxal-5\u2032-phosphate co-factor binding. The expression of met-2+ in wild-type N. 
crassa increased 3.1-fold under sulfur-limiting growth conditions as compared to the transcript levels seen under high-sulfur growth conditions. In a \u0394cys-3 strain, met-2+ transcript levels were substantially reduced under either low- or high-sulfur growth conditions. In addition, the presence of CYS3 activator binding sites on the met-2+ promoter was demonstrated by gel mobility shift assays. In this report, we demonstrate the sulfur-regulated expression of the cystathionine \u03b2-lyase (met-2+) gene and confirm its connection to the N. crassa sulfur regulatory circuit by the reduced expression observed in a \u0394cys-3 mutant and the in vitro detection of CYS3 binding sites in the met-2+ promoter. The data further add to our understanding of the regulatory dynamics of transsulfuration. Cystathionine \u03b2-lyase catalyzes the conversion of cystathionine to homocysteine, ammonia and pyruvate. Cystathionine \u03b2-lyase plays an important role in transsulfuration in that it allows for the utilization of the intracellular pool of cystathionine for the synthesis of homocysteine, which serves as the immediate precursor to methionine. In combination with the action of the other transsulfuration and reverse transsulfuration reactions catalyzed by cystathionine \u03b3-lyase, cystathionine \u03b3-synthase, and cystathionine \u03b2-synthase . ATP, annealed and gel purified as described previously . The following potential CYS3 binding sites in the met-2+ promoter were analyzed: Site 1 [5\u2032 GAAAAGGATGGCGAATTTTAGTGA 3\u2032], Site 2 [5\u2032 GGTCTAGGTGTTATCATCTGGTGG 3\u2032], Site 3 [5\u2032 GGCCCTGATTTCGCCATTTTCTTT 3\u2032], and Site 4 [5\u2032 TTGACTCATCACACCATCGGCCTC 3\u2032]. 
Mutated CYS3 binding sites had a purine to pyrimidine substitution at the sixth position of the 10\u00a0bp core of the CYS3 consensus binding site: Site 1\u00a0M [5\u2032 GAAAAGGATGGCCAATTTTAGTGA 3\u2032], Site 2\u00a0M [5\u2032 GGTCTAGGTGTTCTCATCTGGTGG 3\u2032], Site 3\u00a0M [5\u2032 GGCCCTGATTTCTCCATTTTCTTT 3\u2032], and Site 4\u00a0M [5\u2032 TTGACTCATCACCCCATCGGCCTC 3\u2032]. E. coli produced CYS3 protein . The sequence data supporting the results of this article are available in the GenBank repository [AF401237, The authors declare that they have no competing interests. BR carried out the majority of experiments. JP and BR designed the experiments and prepared the manuscript. Both authors read and approved the final manuscript."} +{"text": "To compare the accuracy of ultrasonography (US) with the current clinical standard of endoscopy for a diagnosis of nasopharyngeal carcinoma (NPC). A total of 150 patients suspected of having NPC underwent US and endoscopy. A diagnosis was obtained from an endoscopic biopsy collected from each suspected tumor and was compared with a biopsy obtained from a normal nasopharynx. The diagnostic accuracy of US and endoscopy for NPC was evaluated using receiver operating characteristic (ROC) analysis performed by MedCalc Software. The sensitivity, specificity, and accuracy of US versus endoscopy for this cohort were 90.1%, 84.8%, and 87.3% for US, and 88.7%, 97.5%, and 93.3% for endoscopy, respectively. Both US and endoscopy exhibited good diagnostic accuracy for NPC, with area under the curve (AUC) values of 0.929 and 0.938, respectively. However, this difference was not significant (Z\u200a=\u200a0.36, P\u200a=\u200a0.72). US is a useful tool for the detection of tumors in endoscopically suspicious nasopharynx tissues, and also for the detection of subclinical tumors in endoscopically normal nasopharynx tissues. Nasopharyngeal endoscopy is typically used to detect nasopharyngeal carcinoma (NPC). 
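The wild-type and mutated site sequences listed above can be checked programmatically: each mutant should differ from its wild-type 24-mer by exactly one purine-to-pyrimidine change. A short sketch (the dictionary layout and function name are mine; the sequences are exactly as given):

```python
WILD = {
    "Site 1": "GAAAAGGATGGCGAATTTTAGTGA",
    "Site 2": "GGTCTAGGTGTTATCATCTGGTGG",
    "Site 3": "GGCCCTGATTTCGCCATTTTCTTT",
    "Site 4": "TTGACTCATCACACCATCGGCCTC",
}
MUT = {
    "Site 1": "GAAAAGGATGGCCAATTTTAGTGA",
    "Site 2": "GGTCTAGGTGTTCTCATCTGGTGG",
    "Site 3": "GGCCCTGATTTCTCCATTTTCTTT",
    "Site 4": "TTGACTCATCACCCCATCGGCCTC",
}

def substitutions(a, b):
    """Return (index, from_base, to_base) for each mismatch (0-based index)."""
    return [(i, x, y) for i, (x, y) in enumerate(zip(a, b)) if x != y]
```

Running the check shows each pair differs at a single position, index 12 of the 24-mer, always a purine (A or G) replaced by a pyrimidine (C or T) — consistent with the stated substitution at the sixth position of the central 10-bp core.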
A definitive diagnosis is subsequently confirmed with an endoscopic biopsy of the primary tumor site. A previous study indicated that ultrasonography (US) may be a useful tool for diagnosing NPC and for defining the relationship between a tumor and the parapharyngeal space. This study protocol was approved by the Guangxi Medical University ethics committee, and written informed consent was obtained from all patients. Patients suspected of having NPC were recruited to this prospective study between January 2010 and January 2013 in a region where NPC is endemic. Suspicion of NPC was based on the presence of metastatic cervical lymph nodes and/or a nasopharyngeal abnormality accompanied by nonspecific symptoms, and/or positive Epstein-Barr virus (EBV) serologic results. Patients were excluded if they did not successfully undergo US, endoscopy, and an endoscopic biopsy, or if a non-NPC tumor of the nasopharynx was diagnosed. The study group included 150 patients ranging in age from 21\u201368 y. US examination was performed prior to the nasopharyngeal endoscopy and endoscopic biopsy to ensure that the biopsy would not affect nasopharynx imaging. In addition, an endoscopy was performed following an endoscopic biopsy, and was performed with knowledge of the clinical reasons for suspecting NPC. US was performed using an Esaote Technos MPX or MP scanner with a 3.5\u20135.0 MHz convex-array transducer or a 7.0\u201313.0 MHz linear-array transducer for obese patients versus thin patients, respectively. Patients were placed in the supine position with the neck biased toward the opposite side and slightly tilted back. The transducer was placed between the mastoid and mandible ramus aspect of the neck, and the nasopharynx and parapharyngeal space were examined in transverse, longitudinal, and oblique planes. In our experience, the parotid gland can be used as an acoustic window. 
Therefore, the operator subsequently requested that patients swallow to confirm the linear air in the nasopharynx and pharyngeal recess. US images obtained for each patient were acquired, reviewed, and interpreted by two sonologists with 8 y and 24 y of US experience, respectively. Each scan was scored from 1 to 4 . An endoscopy was performed after each US examination. This procedure was performed with knowledge of the clinical reasons for suspecting NPC, although previous US findings were not provided. The absence of NPC was defined as normal endoscopic findings or findings that showed a minor abnormality not suspicious of NPC. In contrast, the presence of NPC was defined as suspicious abnormalities or definitive NPC. An endoscopic biopsy was performed at the site of abnormalities. Patients with an endoscopically normal nasopharynx underwent endoscopic sampling biopsies from both the right and left sides of the posterior wall of the nasopharynx. Sampling specimens were selected for microscopic examination and underwent processing for hematoxylin-eosin staining. MedCalc software was used for statistical analyses. Sensitivity, specificity, negative predictive value, positive predictive value, and accuracy of US and endoscopy were also calculated. The diagnostic accuracy of US and endoscopy for NPC was evaluated using receiver operating characteristic (ROC) analysis. Area under the curve (AUC) values less than 0.7, between 0.7 and 0.9, or greater than 0.9 were considered to indicate low, medium, and high diagnostic accuracies, respectively. A Z-value was also calculated using MedCalc software, and P-values less than 0.05 were considered statistically significant. Of the patients analyzed by US and endoscopy, 79/150 (52.7%) were negative for NPC and 71/150 (47.3%) were positive for NPC. All of the NPC cases involved non-keratinizing undifferentiated carcinomas, with 10/71 (14.1%) being submucosal tumors and 16/71 (22.5%) being infiltrating tumors. 
Among the non-NPC patients, nasal melanoma (n\u200a=\u200a1), lymphoma (n\u200a=\u200a1), and benign mucosal lesions (n\u200a=\u200a5) were identified, while the remaining patients were healthy. US detected NPC in 64/71 (90.1%) patients, and NPC was excluded for 67/79 (84.8%) patients . Endoscopy detected NPC in 63/71 (88.7%) patients, and NPC was excluded for 77/79 (97.5%) patients . Further, the difference between the two modalities was not significant (Z\u200a=\u200a0.36, P\u200a=\u200a0.72). Overall, US was able to detect tumors in endoscopically suspicious nasopharynx tissues, and was also able to detect subclinical tumors in endoscopically normal nasopharynx tissues. Representative patient images of concordant and discordant results are shown . US detected of these cases, while endoscopy detected 14/16 (87.5%) of these cases. There were eight patients with NPC that were not detected by endoscopy, while six of these cancers were identified using US. Conversely, there were seven patients with NPC that were not detected by US, while five of these cancers were identified using endoscopy. These five cases included two anterior nasopharynx masses, two slightly plump nasopharynx pharyngeal recesses, and in one case, the top surface of the nasopharyngeal mucosa was rough. The final two false-negative findings were confirmed using random endoscopic biopsies that sampled the nasopharynx. The sensitivity, specificity, negative and positive predictive values, and accuracy values associated with the use of US and endoscopy are listed in . For this cohort, US was able to detect primary NPCs that caused an obvious focal mass, deeply infiltrating tumors, and early tumors that produced mild thickening of the mucosa. Furthermore, US achieved a good diagnostic accuracy for NPC with an AUC value of 0.929. 
A similar diagnostic sensitivity and specificity were identified for both the US and endoscopy methods, and therefore, a significant difference in diagnostic accuracy for the two modalities was not observed (Z\u200a=\u200a0.36, P\u200a=\u200a0.72). It was previously reported that 10% of cancers are missed at endoscopy, with the majority of these missed tumors being small or deeply infiltrating tumors that often involve the submucosa. Given that an endoscopic biopsy is an invasive procedure, patients with an endoscopically normal nasopharynx, or their clinicians, may be reluctant to undergo or repeat this procedure due to discomfort, risk of bleeding, and the potential administration of a general anesthetic. Consistent with the results of previous studies, in the present study US was found to facilitate the detection of subclinical tumors in endoscopically normal nasopharynx tissues, as well as tumors present in endoscopically suspicious nasopharynx tissues. The adenoids that are located in the central roof and upper posterior wall of the nasopharynx are a common site for benign diseases. Moreover, it has previously been shown in other endoscopy-based studies . It is important to note that the results of the present study indicate that US should not replace an endoscopy. For example, of the seven patients with NPC that were detected by endoscopy and not by US, these cases involved very small nasopharyngeal tumors present on the top wall which only exhibited a rough nasopharyngeal mucosal surface by endoscopy. Therefore, very small lesions that do not exhibit a smooth mucosal surface or significant thickening may be associated with a poor NPC detection rate by US. In contrast, endoscopy can detect early, subtle changes in the mucosal surface. Therefore, it may be more appropriate for US to be applied as an adjunct method to endoscopy for the detection of subclinical cancers present in endoscopically normal nasopharynx tissue. 
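The headline figures of this study can be reproduced from the counts reported earlier (64/71 detected and 67/79 excluded for US; 63/71 and 77/79 for endoscopy). A minimal sketch (the function name is mine):

```python
def diagnostics(tp, fn, tn, fp):
    """Sensitivity, specificity, and overall accuracy as percentages."""
    pct = lambda x: round(100.0 * x, 1)
    return {
        "sensitivity": pct(tp / (tp + fn)),          # detected / all NPC
        "specificity": pct(tn / (tn + fp)),          # excluded / all non-NPC
        "accuracy": pct((tp + tn) / (tp + fn + tn + fp)),
    }

us = diagnostics(tp=64, fn=7, tn=67, fp=12)        # 90.1 / 84.8 / 87.3
endoscopy = diagnostics(tp=63, fn=8, tn=77, fp=2)  # 88.7 / 97.5 / 93.3
```

These values match the reported 90.1%, 84.8%, and 87.3% for US and 88.7%, 97.5%, and 93.3% for endoscopy; comparing the two AUCs themselves requires a correlated-ROC test such as the one MedCalc implements, which is not sketched here.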
A great effort was also made to assess patients in whom a cancer had been missed during an initial endoscopy yet was subsequently identified using US. As such, there was a potential for bias toward US in this study. Correspondingly, the true incidence of NPC in this study population may be underestimated, and the sensitivity overestimated. However, it was not the aim of the current study to definitively determine the accuracy of these two techniques. Rather, the aim was to determine the potential benefit of performing US for the examination of endoscopically normal nasopharynx tissue. Lastly, color Doppler was not sufficiently sensitive to detect tumor blood supply, perhaps due to the anatomical location of the nasopharynx deep within the head. Therefore, further study is needed to evaluate the capacity of US to detect a tumor's blood supply and to distinguish benign from malignant tumors. US can also readily measure tumor volume, characterize the boundaries, shape, and internal echo of a mass, and evaluate the relationship between a tumor and the parapharyngeal space. In previous studies, primary tumor volume and invasion of the parapharyngeal space were found to be closely related to NPC survival rates. In conclusion, US achieved a good diagnostic accuracy for NPC and is a less invasive and more patient-friendly technique compared to endoscopy. As such, US could be used for the initial investigation of primary tumors in patients suspected of having NPC, especially when a repeat biopsy is needed for endoscopically normal nasopharynx tissue. Furthermore, for patients with abnormal US results, US could subsequently be used to guide the biopsy of a subclinical tumor site."}
{"text": "Examination of the ocular fundus is part of the work-up of many general diseases outside ophthalmology.
The objective of our work was to study the clinical and epidemiological characteristics of patients referred for a fundus examination, in order to demonstrate the value of this examination. This was a retrospective descriptive study of fundus examinations performed from January 2011 to December 2013 in an ophthalmology practice of a polyclinic in Bobo-Dioulasso. During the study period, 5942 consultations were recorded, of which 438 (7.37%) were for fundus examination. There were 225 men and 213 women, a sex ratio of 1.056. The 40-59 year age group accounted for 54%. The main reasons for referral were arterial hypertension, 43.15% (N=189); diabetes, 39.04% (N=171); combined hypertension and diabetes, 10.27% (N=45); and sickle cell disease, 7.53% (N=33). The fundus was abnormal in 175 patients (36.23%). Hypertensive retinopathy was found in 42.73% of cases, diabetic retinopathy in 25.92%, and sickle cell retinopathy in 7.53%. Fundus examination in community practice is of major interest and reveals abnormalities in more than one third of patients. The city has 5 public facilities, including the Centre Hospitalier Universitaire Sourô Sanou (CHUSS), and 3 private eye-care practices. Ocular pathology in the city is varied. Many general diseases affect the ocular fundus. Arterial hypertension, diabetes, sickle cell disease and systemic inflammatory diseases are the most common.
Clinical ophthalmological examination, and in particular examination of the fundus, is very often useful in assessing the impact of these general diseases. It is an important step that allows macroscopic analysis of the retina in particular. The aim of our work was to study the epidemiological and clinical characteristics of patients referred for a fundus examination, to recall the value of this examination, and to contribute to the multidisciplinary management of general diseases. Bobo-Dioulasso is the chief town of the Hauts-Bassins region and the country's second-largest city. The setting of our study was the city of Bobo-Dioulasso and, specifically, a private polyclinic located in the city centre. We conducted a retrospective descriptive study from January 2011 to December 2013. The study population consisted of patients referred for a fundus examination during this period. Data were collected from patient records and consultation registers. We described sociodemographic variables, clinical history, the reasons for requesting the fundus examination, and the clinical findings of the examination. We used the Kirkendall classification. A first examination can be performed in SC subjects from the age of 8 years, repeated in adolescence and then annually; when retinopathy is present, surveillance depends on its severity. In SS subjects, a fundus examination is performed in adolescence and then annually. Another condition that justified requests for fundus examination was sickle cell disease.
It is the most widespread hereditary disease in the world, and especially in Africa. In Burkina Faso, the major sickle cell syndromes (SS and SC) affect nearly 2% of newborns, with an incidence of 1 in 57 per year. Fundus examination in community practice is an important activity, accounting for 7.37% of all consultations. It reveals an abnormality in more than one third of cases. The severity of cases appears lower than in the hospital population. In our working context, arterial hypertension, diabetes and sickle cell disease were the main reasons for referral. Fundus examination should be requested more often by practitioners in order to improve the quality of the multidisciplinary management of certain general diseases."}
{"text": "Sclerotiniaceae, a family of Ascomycete fungi. Using a phylogenetic framework, we associate diversification rates, the frequency of host jump events and host range variation during the evolution of this family. Variations in diversification rate during the evolution of the Sclerotiniaceae define three major macro‐evolutionary regimes with contrasted proportions of species infecting a broad range of hosts. Host–parasite cophylogenetic analyses pointed towards parasite radiation on distant hosts long after host speciation (host jump or duplication events) as the dominant mode of association with plants in the Sclerotiniaceae. The intermediate macro‐evolutionary regime showed a low diversification rate, high frequency of duplication events and the highest proportion of broad host range species.
Our findings suggest that the emergence of broad host range fungal pathogens results largely from host jumps, as previously reported for oomycete parasites, probably combined with low speciation rates. These results have important implications for our understanding of fungal parasite evolution and are of particular relevance for the durable management of disease epidemics. The range of hosts that a parasite can infect in nature is a trait determined by its own evolutionary history and that of its potential hosts. However, knowledge on host range diversity and evolution at the family level is often lacking. Here, we investigate host range variation and diversification trends within the Sclerotiniaceae. To account for incomplete sampling in this analysis, ancestral state reconstruction was computed for every plant group by the re‐rooting method, and trees were summarized as maximum clade credibility (MCC) trees using TreeAnnotator within the BEAST package and edited in FigTree. The R packages ape (version 2.5) and coda were used, and lineage‐through‐time plots with extant and extinct lineages were computed using the phytools package in R. Comparison with the tree of the Sclerotiniaceae (BIC_Sclerotiniaceae) supported the eight modalities; we obtained significant BIC ratios with two or more modalities, supporting the existence of at least two macro‐evolutionary regimes in the Sclerotiniaceae. The analysis covered all 105 Sclerotiniaceae species (Elliottinia kerneri, Coprotinia minutula and Stromatinia cryptomeriae were not included in the host-association analyses).
We used a full set of 263 host–pathogen associations and a simplified set of 121 associations minimizing the number of host families involved, to control for the impact of sampling bias. Two species in the Rutstroemiaceae (Rutstroemia cunicularia and Rutstroemia cuniculi), reported as nonpathogenic to plants (they are coprophilous), were not considered as plant pathogens. Many hosts belonged to the Fabids, a group of the Rosids (Eudicots) including notably cultivated plants from the Fabales (legumes) and Rosales orders. Plants from the order Vitales, from the Magnoliids and from the Polypodiidae (ferns) were colonized by Sclerotiniaceae but not Rutstroemiaceae fungi. To document the extant diversity, we used the internal transcribed spacer (ITS) region of rDNA sequences to construct a phylogenetic tree of the 105 Sclerotiniaceae and 56 Rutstroemiaceae species. Random tree pruning in phytools indicated that this result is robust to sampling biases. Over 48% of the Sclerotiniaceae species are pathogens of host plants that evolved prior to the divergence of the Fabids, suggesting numerous host jumps in this family of parasites. Using RASP, a jump to Malvids and then to Monocots was identified at the base of the Botrytis genus; a jump to Commelinids occurred in the Myriosclerotinia genus (91% probability in S‐DIVA); a jump to the Ranunculales occurred at the base of the Sclerotinia genus (76% probability in S‐DIVA); and a jump to Asterids was found at the base of a major group of Monilinia (89% probability in S‐DIVA). Forty-one (73.2%) Rutstroemiaceae species infected a single host family. Only a few species exhibited a very broad host range, including Botrytis cinerea, Sclerotinia sclerotiorum, Sclerotinia minor and Grovesinia pyramidalis, each of which colonizes plants from more than 30 families. Each of these species belongs to a clearly distinct phylogenetic group in which a majority of species infect a single host family. This may result from radiation following host jumps, with a mean age of 69.7 Mya.
The divergence of Botrytis pseudocinerea, estimated from the ITS data set, occurred ca. 3.35–17.8 Mya (mean age 9.8 Mya) and is similar to a previous estimate of 7–18 Mya. Major rate shifts occurred each immediately adjacent to a minor rate shift within two different clades, and these values were consistent with the diversification rates estimated in BAMM for each regime. We compared the proportion of broad host range (≥5 plant families) fungal species that emerged under each regime: there was one broad host range species in the Rutstroemiaceae (Moellerodiscus lentus) and eleven in the Sclerotiniaceae. In the Sclerotiniaceae, regime G2, with intermediate diversification rates, showed the highest proportion of broad host range species, followed by regime G3 and regime G1. These analyses suggest that the evolutionary history of regime G2 could have favoured the emergence of broad host range parasites in the Sclerotiniaceae. To test this hypothesis, we performed host–parasite cophylogeny reconstructions for Sclerotiniaceae parasites using CoRe‐PA. There was no obvious topological congruence between the plant tree and groups G2 and G3 of the Sclerotiniaceae. To take into account sampling bias, we performed cophylogeny reconstructions using the full set of 263 host–parasite associations and a simplified set of 121 associations. Reconstructions were also performed independently on each of the three macro‐evolutionary regimes. CoRe‐PA classifies host–pathogen associations into (i) cospeciation, when speciation of host and pathogen occurs simultaneously; (ii) duplication, when pathogen speciation occurs independently of host speciation; (iii) sorting or loss, when a pathogen remains associated with a single descendant host species after host speciation; and (iv) host switch, when a pathogen changes host independently of speciation events.
The PACo analysis also includes taxon jackknifing to test for the relative contribution of each host–pathogen association to the cophylogeny pattern. In the full set of host–Sclerotiniaceae associations, ~21% contributed positively and significantly to cophylogeny, likely representing cospeciation events, while ~31% contributed negatively and significantly to cophylogeny, therefore likely representing host jump events. Within the Sclerotiniaceae family (simplified set of associations), losses were identified as the dominant form of host association (~65%), followed by duplications (~15%) and host jumps (~12%), while failure to diverge (~5.5%) and cospeciation (~3%) were rare events. Within the last 50 Ma, the world has experienced an overall decrease in mean temperatures, with important fluctuations that dramatically modified the global distribution of land plants. Some parasites can also have a lower optimal growth temperature than their sister species (Hoshino, Terami, Tkachenko, Tojo, & Matsumoto). Similar to herbivore diet breadth (Forister et al.), host range in the Sclerotiniaceae may have been shaped by these changes. This effect could have been direct, through the emergence of cold adaptation as an enabling trait, or indirect, through changes in host population structures and host–parasite association patterns. Knowledge on the dynamics of pathogen evolution increases the understanding of the complex interplay between host, pathogen and environment governing the dynamics of disease epidemics.
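The event-classification summary above reduces to simple proportions over classified associations; as a minimal illustration (the labels are CoRe-PA's categories, but the counts below are invented and only roughly mirror the reported split):

```python
from collections import Counter

def event_proportions(events):
    """Proportion of each cophylogenetic event type among classified
    host-parasite associations (CoRe-PA-style summary)."""
    counts = Counter(events)
    total = sum(counts.values())
    return {kind: counts[kind] / total for kind in counts}

# Hypothetical classification of 20 associations:
events = (["loss"] * 13 + ["duplication"] * 3 + ["host_switch"] * 2
          + ["failure_to_diverge"] + ["cospeciation"])
props = event_proportions(events)
dominant = max(props, key=props.get)  # here: "loss"
```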
These findings suggest that global climate instability and host diversification in the Cenozoic might have impacted the diversity of fungal parasites within the Sclerotiniaceae. These evolutionary principles are useful for the design of disease management strategies (Vander Wal et al.). Species identifiers (GenBank accessions), DNA sequences and alignments, and phylogenetic trees are provided as supporting files. Host range data, DNA sequences and alignments, and phylogenetic trees are available from the Dryad Digital Repository: https://doi.org/10.5061/dryad.7cs3g. O.N. collected data, performed analyses, wrote the original draft and revised the manuscript; A.B. performed analyses, wrote and revised the manuscript; A.T. collected data and revised the manuscript; J.P.C. was involved in funding acquisition, data collection and manuscript revision; S.R. was involved in supervision, funding acquisition, project administration, writing the original draft and manuscript revision."}
{"text": "As there is a growing number of long-term cancer survivors, the incidence of carcinogenesis as a late effect of radiotherapy is coming increasingly into focus. The risk for the development of secondary malignant neoplasms might be significantly increased due to exposure of healthy tissue outside of the target field to secondary neutrons, in particular in proton therapy. Thus far, the radiobiological effects of these neutrons, compared with those of photons, on normal breast cells have not been sufficiently characterised. The biological effects of neutrons with a broad energy distribution (<En> = 5.8 MeV), of monoenergetic neutrons, and of the mixed field of gammas and secondary neutrons (<En> = 70.5 MeV) produced by 190 MeV protons impinging on a water phantom, were analysed. The clonogenic survival and the DNA repair capacity were determined and values of relative biological effectiveness were compared.
Furthermore, the influence of radiation on sphere formation was observed, to examine the radiation response of the potential fraction of stem-like cells within the MCF10A cell population. MCF10A cells were irradiated with doses of up to 2 Gy with neutrons of different energy spectra, and with X-rays for comparison. Both experimental endpoints provided comparable values of the relative biological effectiveness. Significant changes in sphere formation were notable following the various radiation qualities. The present study compared the radiation response of MCF10A cells after IR with neutrons and photons. For the first time it was shown that monoenergetic neutrons with energies around 1 MeV have stronger radiobiological effects on normal human breast cells than X-rays, neutrons with a broad energy distribution (<En> = 5.8 MeV), and the mixed gamma - secondary neutron field produced by interactions of 190 MeV protons in water. The results of the present study are highly relevant for further investigations of radiation-induced carcinogenesis and are very important for a better risk assessment after secondary neutron exposure in the field of conventional and proton radiotherapy. Two types of neutron irradiations were performed at PTB: firstly, a "medium-energy" intense neutron field (<En> = 5.8 MeV, energies up to about 10 MeV) with dose rates of 0.1 Gy/min (HDR) and of about 0.003 Gy/min (LDR); secondly, "low-energy" monoenergetic neutrons with an energy of 1.2 MeV (0.003 Gy/min), produced by the T(p,n)3He reaction, and of 0.56 MeV (0.0045 Gy/min), produced by the 7Li(p,n)7Be reaction.
In thisIn order to generate a neutron spectrum similar to that produced during proton therapy, additional irradiations were performed at the KVI-CART. An uncollimated pencil beam of 190\u00a0MeV protons with a width (1\u03c3) of 4\u00a0mm and an RMS energy spread of about 0.2% was directed onto a 300\u00a0mm cubic water phantom (with front and back layers of 8\u00a0mm PMMA) in which the protons were stopped. The beam profile at the entrance of the phantom was measured with Gafchromic EBT film. The proton current impinging on the water phantom was monitored using an ionisation chamber which was calibrated using a scintillation detector to determine the number of protons as function of the accumulated charge from the ionisation chamber. The absolute uncertainty in the number of protons entering the water phantom is estimated to be of the order of 1%. This uncertainty is mainly due to the uncertainty in the determination of the calibration factor converting the accumulated charge from the ionization chamber to the number of protons entering the water phantom. Samples were positioned behind the water phantom (at 0\u00b0 relative to the incident proton beam) at a distance of 50\u00a0mm. Proton interactions in water generated a mixed gamma \u2013 secondary neutron field at the sample positions. The total dose on the sample delivered by the mixed field was determined using a Monte Carlo simulation described below to be 4.0E-15\u00a0Gy/proton. Four sets of samples were irradiated with respectively 3.80E13; 9.50E13; 1.90E14 and 3.80E14 protons entering the water phantom, with total doses of 0.152, 0.38, 0.76 and 1.52\u00a0Gy, respectively. The dose rate was chosen such that each irradiation had equal duration (5.5\u00a0h), and that such duration was comparable to that for LDR irradiations at PTB, in view of the final data comparison. The relative standard uncertainty for the total dose determination was about 5\u20136%.En\u00a0>\u00a0=70.5\u00a0MeV. 
The ratio of neutron dose to total dose was 0.65, meaning 35% extra dose to the samples from gammas. This estimation of the neutron absorbed dose was done by tracking the recoil particles directly and running PHITS in the mode that scores the energy loss of charged particles and nuclei. All radiation fields and sample exposures were simulated using the Monte Carlo radiation-transport code PHITS ver. 2.88. For neutron-induced reactions below 20 MeV, PHITS was run in the Event Generator Mode using the Evaluated Nuclear Data libraries JENDL-4.0. Twenty-four hours after IR, 1 × 10³ cells were seeded in a 25 cm² cell culture flask in triplicate for each dose value. Eight days later the colonies were fixed with 70% ethanol for 10 min and stained for 5–10 min with 1% crystal violet solution. Colonies consisting of 50 cells or more were counted. Plating efficiency and survival fractions (SF) were determined, and RBE values for a survival of 10%, referred to as RBE(SF 0.1) in the text, were calculated with respect to X-rays (LDR) as described by Paganetti. Directly after irradiation, 1 × 10⁴ cells per well (1.8 cm²) were seeded in duplicate in chamber slides and incubated for 24 h. After fixation with 2% formaldehyde and permeabilisation with 0.25% Triton X-100, the cells were incubated consecutively for 60 min with anti-γH2AX antibody and for 30 min with Alexa Fluor 594 goat anti-mouse IgG1. The slides were mounted with Vectashield® containing 4′,6-diamidino-2-phenylindole (DAPI). The foci were visualised with an Eclipse TE300 inverted microscope. At a magnification of 1000×, the foci of 50 cells per chamber were counted, with two chambers per irradiation. The extra yield (∆Y) was calculated as the difference in residual foci between irradiated samples and the individual 0 Gy control, plotted as a function of dose. Linearisation was performed as described by Barendsen, and RBE values, referred to as RBE(foci 24 h) in the text, were calculated with respect to LDR X-rays via the α value with reference to Franken et al.; fits of ∆Y with αD + βD² yielded β values of zero. Twenty-four hours after IR, 1 × 10⁴ cells per well were plated in triplicate in ultra-low attachment 6-well plates for each irradiation dose. Since the cells cannot adhere to the cell culture surface, they are able to form three-dimensional spheres. All samples were incubated under standard cell culture conditions. The number of spheres was counted by microscopy at a magnification of 100× on days 1–7 after seeding, i.e. days 2–8 after IR. The sham-irradiated control (0 Gy) of each radiation quality was set to 100% on every counting day, and the radiation-induced change in the number of spheres was related to the appropriate 0 Gy control (100%). Data from at least three independent experiments are represented as mean values ± standard error of the mean (SEM). For clonogenic survival and for DNA DSBs, the statistical significance of differences was assessed by Student's t-test; the statistical analyses of all values refer to LDR X-rays. The survival and foci values obtained from fits to the data points were used for statistical analyses between 0 and 2 Gy. A value of p < 0.05 was considered to indicate a statistically significant difference. For sphere formation, statistical significance relative to the individual sham-irradiated control (0 Gy) of each radiation quality was calculated via a one-sample t-test, and a value of p < 0.02 indicated a statistically significant difference. The effect of <En> = 5.8 MeV neutrons was slightly less pronounced: after a dose of 1 Gy the SF was 35%. This effect was still significant compared to the survival after 1 Gy of LDR X-rays.
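As a rough illustration of how an RBE at 10% survival can be derived from clonogenic data (this is not the authors' code, and the α/β values below are hypothetical, chosen only to show the mechanics of the linear-quadratic model):

```python
import math

def surviving_fraction(colonies, cells_seeded, plating_efficiency):
    """SF = colonies / (cells seeded x plating efficiency of the control)."""
    return colonies / (cells_seeded * plating_efficiency)

def dose_at_sf(alpha, beta, sf=0.1):
    """Dose giving the target SF under the LQ model SF = exp(-aD - bD^2)."""
    ln_s = -math.log(sf)
    if beta == 0:
        return ln_s / alpha
    # solve beta*D^2 + alpha*D - ln(1/sf) = 0 for the positive root
    return (-alpha + math.sqrt(alpha**2 + 4 * beta * ln_s)) / (2 * beta)

# Hypothetical LQ parameters for reference X-rays and a neutron field;
# neutrons are modelled as purely linear (beta = 0, as reported for foci fits).
d_ref = dose_at_sf(alpha=0.3, beta=0.05, sf=0.1)
d_test = dose_at_sf(alpha=0.9, beta=0.0, sf=0.1)
rbe_sf01 = d_ref / d_test  # RBE(SF 0.1) = D_ref(10%) / D_test(10%)
```

The same ratio-of-doses logic applies to the α-based RBE(foci 24 h), where the fitted β terms were zero and the RBE reduces to α_test/α_ref.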
With respect to 2 Gy of HDR X-rays, the SF was significantly decreased by a factor of 5 after 2 Gy of HDR medium-energy neutrons (<En> = 5.8 MeV). The effectiveness of 1 Gy of LDR medium-energy neutrons was higher compared to LDR X-rays, as the SF was only 42%. The mixed gamma - secondary neutrons had an effect on the cells comparable to that of HDR and LDR medium-energy neutrons (<En> = 5.8 MeV); after an IR of 1.52 Gy the SF was reduced to 20%. RBE values were calculated using 220 kV LDR X-rays as a reference. The RBE of the mixed field (<En> = 70.5 MeV) is similar to the RBE values of 2.06 and 1.99 for HDR and LDR medium-energy neutrons (<En> = 5.8 MeV). Long-term effects after radiation were investigated via the clonogenic survival assay. For all radiation qualities a dose-dependent decrease in the SF was observed. The mixed field (<En> = 70.5 MeV), as delivered at KVI-CART, induced an almost similar effect as HDR neutrons of <En> = 5.8 MeV. The α-based RBE values for foci induction showed a very clear increase following neutron exposure with HDR and LDR medium-energy neutrons, low-energy neutrons of 1.2 MeV, and the mixed gamma - secondary neutron field produced by a 190 MeV proton beam, when compared to X-rays. The values were even higher when the cells were irradiated with monoenergetic neutrons of 0.56 MeV. The HDR neutrons showed a slightly stronger effect, especially on day 4, than the LDR neutrons, which showed a uniform reduction on all days, in the range of 63–73%. Radiation with 0.88 Gy of 1.2 MeV monoenergetic neutrons and 1.52 Gy of a mixed gamma - secondary neutron field showed a time-dependent reducing effect on the sphere formation ability. On the eighth day, the ability to form spheres was more restricted than on the second day after irradiation.
In addition, the exposure to 0.56 MeV monoenergetic neutrons resulted in a strong reduction of the sphere formation ability already 2 days after irradiation, but this decrease seemed to recover within the following 6 days. With respect to the individual 0 Gy control per day, there is a general decrease of the sphere formation ability visible for every radiation quality. The irradiation with LDR X-rays showed a clear reduction of the sphere formation ability at days 2 and 4, unlike HDR X-rays, where no significant impairments could be observed. The exposure to neutrons caused significant changes. Compared to X-rays (LDR), the SF was significantly decreased after irradiation with monoenergetic neutrons. This effectiveness of 0.56 MeV neutrons on the clonogenic survival confirms the results of Okumura et al. Concerning clonogenic survival, our data showed a large variation between the effects of various neutron energies with regard to the use of doses up to 1 Gy. The presented results on the radiobiological effects of low-energy monoenergetic neutrons on clonogenic survival are of particular significance in contrast to X-rays and medium-energy neutrons. For residual foci 24 h after radiation exposure, our findings reveal a 2-fold higher response of cells irradiated with HDR in contrast to LDR X-rays. This can be a result of repair processes, which start after a few minutes. Therefore, DSBs induced by LDR irradiation may already be partially repaired during the irradiation process before the full applicable dose is reached, as the time needed for a dose of 1 Gy with LDR is more than 20 times longer than with HDR. Our results demonstrated that the number of residual γH2AX foci 24 h after IR, as an indicator for DSBs, increased as a function of increasing dose, which has also been reported by Okumura and colleagues.
Like Dionet et al., who investigated fast neutrons on normal skin fibroblasts by cell survival assay, we could show a stronger effect following HDR neutrons compared to LDR neutrons. Using a sphere formation assay, we examined the radiation-induced response of a potential stem-like subpopulation: it is known that MCF10A cells include a progenitor-like cell subpopulation. From the presented dataset, observable differences in RBE values for various neutron energies relative to X-rays can be concluded. The mixed gamma - secondary neutron field (<En> = 70.5 MeV), generated by protons of 190 MeV impinging on a water phantom, yielded RBE values for both SF and residual foci which are comparable to those of HDR and LDR medium-energy neutrons of <En> = 5.8 MeV: 4.47 for RBE(foci 24 h) and 2.09 for RBE(SF 0.1). As is well known, RBE is a variable function of several factors, among them the endpoint itself. Consistent with Tanaka et al. and Schmid et al., RBE(SF 0.1) and RBE(foci 24 h) both increased with decreasing neutron energy over the covered range from 0.56 MeV to <En> = 5.8 MeV.
The response of the potential fraction of stem-like cells in the MCF10A cell population was also addressed, by measuring sphere formation ability for up to 8 days after exposure with the maximal dose, or 1 Gy, for each radiation quality (0.88 Gy–1.52 Gy). This investigation provides a deeper insight into the radiobiological effects of neutron exposure, which is very important in order to assess the risk of secondary neutrons produced during conventional and particle radiotherapy and their possible trigger function for potential carcinogenic effects on normal breast cells and stem cells. The present study extensively investigated chosen radiobiological effects following exposures in the dose range of 0 Gy up to 2 Gy to different neutron energies compared to X-rays in MCF10A normal human breast cells. The range of selected neutron energies was expanded by the use of a mixed gamma - secondary neutron field (<En> = 70.5 MeV)."}
{"text": "There are two common approaches for optimizing the performance of a machine: genetic algorithms and machine learning. A genetic algorithm is applied over many generations whereas machine learning works by applying feedback until the system meets a performance threshold. These methods have been previously combined, particularly in artificial neural networks using an external objective feedback mechanism. We adapt this approach to Markov Brains, which are evolvable networks of probabilistic and deterministic logic gates. Prior to this work Markov Brains could only adapt from one generation to the next, so we introduce feedback gates which augment their ability to learn during their lifetime. We show that Markov Brains can incorporate these feedback gates in such a way that they do not rely on an external objective feedback signal, but instead can generate internal feedback that is then used to learn.
This results in a more biologically accurate model of the evolution of learning, which will enable us to study the interplay between evolution and learning and could be another step towards autonomously learning machines. Being able to solve a T-maze repetitively, remembering where food is located, avoiding places where predators have been spotted, and even learning another language are all cognitive abilities that require an organism to have neural plasticity. We will show how this neural plasticity can evolve in a computational model system. Neural plasticity allows natural organisms to learn through reinforcement of their behavior1. However, learning is tied to specific neural mechanisms: working memory (WM), short-term memory (STM) and long-term memory (LTM). While learning was initially perceived as a new "factor" in evolution2, potentially even an independent one, it has since been well integrated into the Modern Synthesis of Evolution3. Evolution and learning can have a positive effect on each other5; however, this is not necessarily always the case6. This has several implications: evolution began with organisms that could not adapt during their lifetime, which means that they had no neural plasticity. The only feedback that the evolutionary process receives is differential birth and death. As a consequence, learning will only evolve if it can increase the number of viable offspring, and it can only do so if there is a signal that predictably indicates a fitness advantage7. Natural organisms not only have to fit their environment, but also have to adapt to changes in their environment, which means that they have to be plastic. While plasticity occurs in many forms, here we focus on neural plasticity. Organisms receive many signals from their environment which have to be filtered and interpreted. Irrelevant signals should be ignored while others require adaptive responses. This can be done through instincts or reflexes in cases where fixed responses are necessary.
In other cases, information has to be stored and integrated in order to inform later decisions, which requires memory and learning. To distinguish between actions that lead to advantageous results and those that are disadvantageous, organisms need positive or negative feedback. However, none of the signals organisms receive are inherently \u201cgood\u201d or \u201cbad\u201d; even a signal as simple as food requires interpretation. The consumption of food has to trigger positive feedback within the organism in order to function as a reward. The machinery that triggers the feedback is an evolved mechanism and is often adaptive to the environment. If food were a global positive feedback signal, it would reinforce indiscriminate food consumption. Organisms would not be able to avoid food or store it for later, but instead eat constantly. Working memory holds the information an organism is currently processing9. Imagine this as the flurry of action potentials that go through the brain defining its current state. Information that a living organism needs to store for a moment is believed to reside in STM10, but how information transforms from WM to STM is not fully understood11. Natural systems use their LTM if they want to keep information for longer. Presumably, information from STM becomes reinforced and thus forms LTM; this is sometimes referred to as consolidation12. The reinforcement process takes time and therefore is less immediate than STM. In addition, memories can be episodic or semantic13 and can later be retrieved to influence current decisions. While information in the working memory can be used to influence decisions, it does not change the cognitive substrate. However, long-term potentiation uses this information to change the neural substrate by presumably forming or modifying connections. Another important detail we have to consider is the difference between learning and memory.
While memory is information about the past, learning is the process that takes a sensorial percept and, typically by reinforcement, retains that information for later use. Specifically, sensory information is stored in working memory (WM). In summary, if we want to model the evolution of learning in natural organisms properly we need to take the following statements seriously:
- evolution happens over generations while learning happens during the lifetime of an organism
- evolution is based on differential birth and death (selection), and learning evolved to increase the number of viable offspring and/or to avoid death
- organisms do not receive an objective \u201cpositive\u201d or \u201cnegative\u201d signal, but instead evolved mechanisms to sense and interpret the world so that they can tell which actions were positive and which ones were not
- memory is information about the past and can be transient
- information in the WM does not change the cognitive machinery, while learning changes the substrate to retain information for longer, which turns transient into permanent information
Typically, machine learning methods try to find a solution to a specific problem14. If we provide an explicit reference or example class we refer to it as supervised learning, since the answer is known and the fitness function quantifies the difference between the provided solution and the ones the machine generates. For unsupervised learning we provide a fitness function that measures how well a machine or algorithm performs without the need to know the solution in advance. Genetic algorithms (GAs), which are a form of evolutionary search, work in supervised or unsupervised contexts, whereas learning algorithms are typically supervised.
A special class are learning-to-learn algorithms, which improve their learning ability while adapting to a problem15 but do not necessarily apply evolutionary principles. Computer science and engineering are typically not concerned with biological accuracy but more with scalability, speed, and required resources. Therefore, the field of machine learning is much more of a conglomerate of different methods which straddle the distinct concepts we laid out above. Machine learning includes methods such as data mining, clustering, classification, and evolutionary computation. Q-learning16 optimizes a Markov Decision Process by changing probabilities when a reward is applied. Typically, delayed rewards are a problem that deep Q-learning and memory replay try to overcome17. Artificial neural networks can be trained by using backpropagation20, the Baum-Welch algorithm22, or gradient descent24, which happens episodically, but on an individual level and not in a population that experiences generations. The multiplicative weights algorithm strengthens or weakens connections in a neural network based on the consensus of a pool of experts26, again on an individual level. Genetic algorithms clearly optimize from one generation to the other, while learning algorithms on the other hand could be understood as lifetime learning27. Changes to the weights of an ANN, or the probabilities of a Markov process, or the probabilities of a POMDP29 reflect better learning, since those changes are not transient and change all future computations executed by the system. At the same time, memory and learning are often treated interchangeably. Recurrent artificial neural networks can store information in their recurrent nodes (or layer) without changing their weights, which would be analogous to WM.
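The persistent-update character of lifetime learning can be illustrated with tabular Q-learning, mentioned above; a minimal sketch on a toy two-state problem (function names, hyperparameters, and the toy problem are illustrative, not from the study):

```python
def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step: nudge Q[s][a] toward r + gamma * max_a' Q[s'][a']."""
    target = r + gamma * max(Q[s_next])
    Q[s][a] += alpha * (target - Q[s][a])

# Toy problem: taking action 1 in state 0 always yields reward 1 and leads to
# state 1, which yields no further reward.
Q = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(500):
    q_learning_update(Q, s=0, a=1, r=1.0, s_next=1)
# Q[0][1] converges toward 1.0 while the untried entries stay at 0.
```

Unlike information held in recurrent state, the updated table entries persist and change all future decisions, which is the sense in which such changes resemble LTM rather than WM.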
Similarly, the system we use, Markov Brains (MBs), can form representations about their environment and store this information in hidden states (WM), again transiently without changing their computational structure. Other evolvable systems32 (among many others) change from generation to generation and allow for memory to form by using recurrent connections. Alternatively, other forms of evolving systems interact with and use additional forms of memory34. In order to evolve and learn, other systems allow the topology and/or weights of the neural network to change during evolution while also allowing weight changes during their lifetime44. Presenting objective feedback to adapt these systems during their lifetime allowed their performance to improve. As a consequence, the machinery that interprets the environment to create feedback was of no concern, but, as stated above, natural organisms also need to evolve that machinery to learn. We think it is quite possible to change these systems to not rely only on external feedback. Instead, they themselves could create the feedback signal as part of their output. However, none of the systems mentioned above is an evolvable MB55 with an arbitrary topology that uses Boolean logic instead of logistic functions. Through sensors these networks receive information about their environment as zeros or ones, perform computations and typically act upon their environment through their outputs. We commonly refer to MBs that are embodied and through that embodiment56 interact with their environment as agents (others use the term animat, which is synonymous). MBs use hidden states to store information about the past similar to recurrent nodes in an ANN. The state of these hidden nodes has to be actively maintained, which makes the information volatile. The information in the hidden states can be used to perform computations and functions as memory. This form of memory resembles WM or STM more than LTM due to its volatile nature.
In the past, the entire structure of a MB would be encoded by the genome and would not change over the lifetime of the agent. Here we introduce what we call feedback gates, which allow MBs to use internal feedback to store information by changing their probabilistic logic gates (see Methods for a detailed description of feedback gates). Like other systems, these updates do not change the topological structure of the node network but rather the probabilities within the gates, similar to how learning in ANNs is achieved through weight changes. However, feedback is not an objective signal coming from the environment but must be generated as part of the evolved controller. The feedback gates only receive internally generated feedback to change their behavior. This linkage between inputs, evaluation of the environment to generate feedback, how feedback gates receive this information, and how everything controls the actions of the agent evolves over time. Agents navigate a lattice of tiles, where a single tile is randomly selected as the goal an agent must reach. The lattice is surrounded by a wall so agents cannot escape the boundary. The agent, controlled by a MB, is randomly placed on a tile that is 32 tiles away from the goal and facing in a random direction. Agents can see the arrow of the tile they are standing on. The direction indicated by the tile is relative to that of the agent, so that a tile indicating north will only be perceived as a forward-facing arrow if the agent also faces north. The agent has four binary sensors that are used to indicate which relative direction the agent should go to reach the goal. The agent can move over the lattice by either turning 90 degrees to the left or right, or by moving forward. So far, in order to navigate perfectly, the agent would simply need to move forward when seeing a forward-facing arrow, or turn accordingly. Instead of allowing the agent to directly pick a movement, it can choose one of four intermediate options at any given update.
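The hidden assignment of these four intermediate options to concrete actions can be sketched as a permutation drawn at the agent's birth (names here are ours, not the framework's):

```python
from itertools import permutations

ACTIONS = ("move_forward", "turn_left", "turn_right", "do_nothing")

# 4! = 24 ways to assign the brain's four abstract options to concrete actions;
# the active map is fixed at birth and never revealed to the agent directly.
MAPPINGS = list(permutations(ACTIONS))

def decode(bit_a, bit_b, mapping):
    """Translate the MB's two binary outputs (00, 01, 10, 11) through the hidden map."""
    return mapping[bit_a * 2 + bit_b]
```

Because the agent only observes the consequences of its moves, it must infer which of the 24 maps is active, which is what makes the task a learning problem rather than a pure navigation problem.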
At the birth of an agent, these four possible options are mapped to the four possible actions: move forward, turn left, turn right, do nothing. As a result, the complexity of the task increases when the agent has to learn which of the 24 possible option-to-action maps currently applies to navigate the environment properly. The agent is not given any direct feedback about its actions; a mechanism must evolve to discern the current mapping, which is rather difficult. In prior experiments53, MBs were made from deterministic or probabilistic logic gates that use a logic table to determine the output given a particular input. Deterministic gates have one possible output for each input, while probabilistic gates use a linear vector of probabilities to determine the likelihood for any of the possible outputs to occur. To enable agents to form LTM and learn during their lifetime we introduce a new type of gate: the feedback gate. These gates differ from other probabilistic gates in that they can change their probability distribution during their lifetime based on feedback. This allows for permanent changes which are akin to LTM. While MBs could already retain information by using hidden states, now they can also change \u201cphysically\u201d. MBs must evolve to integrate these new gates into their network of other gates and find a way to supply feedback appropriately. To test if the newly introduced feedback gates help evolution and increase performance, we compare three different evolutionary experimental conditions. Agents that could use only deterministic logic gates, only probabilistic logic gates, or all three types of gates (deterministic, probabilistic, and feedback) were evolved over 500,000 generations to solve the task. A statistical test57 comparing the final distributions of performances for each experimental condition showed that we can reject the hypothesis that they were drawn from the same distribution.
This shows that agents that were allowed to use feedback gates outperform all other conditions by far. When analyzing the line of descent, we find a strong difference in performance across the three evolutionary conditions, and providing agents with feedback gates during evolution allows them to reach the goal more often. This is an important control because from a computational point of view there is no qualitative difference between WM and LTM, as both methods allow for recall of the past. Populations allowed to use feedback gates quickly evolve the ability to reach the goal in any of the 24 possible environments. The variance of their performance supports the same idea, that agents do not become better by just performing well in one environment, but instead evolve the general ability to learn the mapping each environment presents. We find a correlation between performance and an increase in mutual information, which is higher at the end of the task (~0.8). This signifies that the agents have more information about the environment at death than they did at birth, as expected. We then compute the difference between both measurements, which suggests a difference in behavior. The usage of actions is also drastically different during evolution. We used a biologically inspired task, and we see the future application of this technology in the domain of computational modeling to study how intelligence evolved in natural systems and to eventually use neuroevolution as the means to bring about general artificial intelligence. ANNs that are specifically optimized by machine learning will solve specific classification tasks much faster than the model introduced here. The idea is not to present a new paradigm for machine learning, but to implement a tool to study the evolution of learning.
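The probability reshaping that feedback gates perform can be sketched roughly as follows; this is an illustrative re-implementation under our own assumptions (class name, learning rate, clamping, and the renormalizing update rule are not taken from the actual framework code):

```python
import random

class FeedbackGate:
    """Probabilistic logic gate whose table is reshaped by internally generated feedback."""
    def __init__(self, n_inputs=1, n_outputs=1, lr=0.2):
        rows, cols = 1 << n_inputs, 1 << n_outputs
        self.table = [[1.0 / cols] * cols for _ in range(rows)]  # start uniform
        self.lr = lr
        self.last = None  # (input pattern, chosen output) of the latest update

    def update(self, inputs):
        row = self.table[inputs]
        out = random.choices(range(len(row)), weights=row)[0]
        self.last = (inputs, out)
        return out

    def feedback(self, positive):
        """Reinforce (or weaken) the most recently used input->output probability."""
        if self.last is None:
            return
        i, o = self.last
        row = self.table[i]
        row[o] = min(1.0, max(0.01, row[o] + (self.lr if positive else -self.lr)))
        total = sum(row)
        self.table[i] = [p / total for p in row]  # renormalize: keep a distribution

gate = FeedbackGate()
for _ in range(100):
    out = gate.update(0)
    # In a real agent this signal would come from evolved internal machinery,
    # not from an external oracle; here we hand-reward output 1 for input 0.
    gate.feedback(positive=(out == 1))
# The gate's table now strongly favors output 1 for input 0.
```

The key property mirrored here is that the change outlives the episode: the table itself is modified, so all future computations are affected, in contrast to volatile hidden-state memory.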
While using MBs augmented with feedback gates will probably not be competitive with other supervised learning techniques, it remains an interesting question how typical machine learning tools would perform if challenged with the task presented here. We also want to explore under which circumstances evolution benefits from learning and when it does not; we propose to use this model to study these questions in the future. Another dimension we will investigate in the future is the ability of MBs to change their connections due to feedback, not just the probabilities within their gates61. If we are incapable of providing examples of correctly classified data we use unsupervised learning methods and only need to provide a fitness function that quantifies performance. But we know that fitness functions can be deceptive and designing them is sometimes more of an art than a science. When interacting with future AI systems we should find a different way to specify what we need them to do. Ideally they should autonomously explore the environment and learn everything there is to know without human intervention; nobody tells us humans what to research and explore, evolution primed us to pursue this autonomously. The work presented here is one step in this direction and will allow us to study the evolution of learning in a biological context as well as explore how we can evolve machines to autonomously learn. As stated before, combining evolution with learning is not a new idea. We think that it is in principle very easy for other systems to internalize the feedback. For example, it should be easy to evolve an artificial neural network to first interpret the environment, and then use this information to apply feedback on itself. However, we need to ask under which circumstances this is necessary. By providing training classes for supervised learning situations we can already create (or deep learn) machines that can learn to classify these classes.
In addition, we often find these classifiers exceed human performance50. A new type of gate, the feedback gate, has been added to the Markov Brain framework (https://github.com/LSheneman/autonomous-learning), and this framework has been used to run all the evolutionary experiments. The Markov Brain framework has since been updated to MABE62. See below for a detailed description of each component. Markov Brains are networks of probabilistic and deterministic logic gates encoded by a genome. The genome contains genes, and each gene specifies one logic gate, the logic it performs, and how it is connected to sensors, motors, and other gates. The environment the agents had to navigate was a 2D spatial grid of 64\u2009\u00d7\u200964 tiles. Tiles were either empty or contained a solid block that could not be traversed. The environment was surrounded by those solid blocks to prevent the navigating agent from leaving that space. At the beginning of each agent evaluation a new environment was generated, a target was randomly placed in the environment, and Dijkstra\u2019s algorithm was used to compute the distance from all empty tiles to the target tile. These distances were used to label each empty block so that it had an arrow facing to the next closest tile to the target. When there was ambiguity (two adjacent tiles had the same distance) a random tile of the set of closest tiles was chosen. At birth agents were randomly placed in a tile that had a Dijkstra\u2019s number of 32 and faced a random direction. Due to the random placement of blocks it was possible that the goal was blocked so that there was no tile that was 32 tiles away, in which case a new environment was created, which happened only very rarely. Agents were scored at every update based on their distance to the goal (d), and received a bonus (b) every time they reached the goal; together these terms define the fitness function. Agents were then allowed to move around the environment for 512 updates.
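On a unit-cost grid, Dijkstra's algorithm reduces to breadth-first search; the distance computation and arrow labeling described above can be sketched as follows (function and variable names are illustrative, not the framework's):

```python
from collections import deque

def label_tiles(grid, target):
    """Compute each empty tile's distance to the target (BFS = Dijkstra with unit
    costs) and point an arrow toward an adjacent tile that is one step closer."""
    h, w = len(grid), len(grid[0])
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    dist = {target: 0}
    queue = deque([target])
    while queue:
        x, y = queue.popleft()
        for dx, dy in steps:
            nx, ny = x + dx, y + dy
            if 0 <= nx < h and 0 <= ny < w and grid[nx][ny] == 0 and (nx, ny) not in dist:
                dist[(nx, ny)] = dist[(x, y)] + 1
                queue.append((nx, ny))
    arrows = {}
    for (x, y), d in dist.items():
        closer = [(dx, dy) for dx, dy in steps if dist.get((x + dx, y + dy)) == d - 1]
        if closer:
            arrows[(x, y)] = closer[0]  # ties could be broken randomly, as in the text
    return dist, arrows

# Small demo: 3x3 grid with one solid block (1) in the center, target at (0, 0).
grid = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
dist, arrows = label_tiles(grid, (0, 0))
```

Solid blocks are simply never entered by the search, so unreachable goals show up as missing distance entries, matching the regeneration rule described above.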
If they were able to reach the target, a new random start orientation and location with a Dijkstra\u2019s number of 32 was selected. Agents used two binary outputs from the MB to indicate their actions: 00, 01, 10, or 11. Each output was translated using a mapping function to one of four possible actions: move forward, do nothing, turn left, or turn right. This resulted in 24 different ways to map the four possible outputs of the MB to the four possible actions that moved the agent. The input sensors gave information about the label of the tile the agent was standing on. Observe that the agent itself had an orientation and the label was interpreted relative to the direction the agent faced. There were four possible arrows the agent could see (forward, right, backward, or left), encoded as four binary inputs, one for each possible direction. Beyond the four input and two output nodes, agents could use 10 hidden nodes to connect their logic gates. Performance (or fitness) was calculated by exposing the agent to all 24 mappings and testing how often it reached the goal within the 512 updates it was allowed to explore the world. At every update agents were rewarded proportionally to their distance to the goal. An agent was selected at random from the final generation to determine the line of descent (LOD) by tracing the ancestors back to the first generation. Observe that all mutations that swept the population can be found on the LOD, and the LOD contains all evolutionary changes that mattered.Supplementary Material"}
+{"text": "Drosophila learning center but not in other brain regions triggered changes normally restricted to aged brains: impaired associative olfactory memory as well as a brain-wide ultrastructural increase of presynaptic active zones (metaplasticity), a state non-compatible with memory formation.
Mechanistically, decreasing autophagy within the MBs reduced expression of an NPY-family neuropeptide, and interfering with autocrine NPY signaling of the MBs provoked similar brain-wide metaplastic changes. Our results in an exemplary fashion show that autophagy-regulated signaling emanating from a higher brain integration center can execute high-level control over other brain regions to steer life-strategy decisions such as whether or not to form memories. Macroautophagy is an evolutionarily conserved cellular maintenance program, meant to protect the brain from premature aging and neurodegeneration. How neuronal autophagy, usually losing efficacy with age, intersects with neuronal processes mediating brain maintenance remains to be explored. The role of macroautophagy in neuronal processes mediating brain maintenance remains enigmatic. The authors show here that impairing autophagy within the major learning-related brain center of Drosophila, and not other regions, triggered a form of presynaptic metaplasticity that was invariably connected to the absence of the specific component of aversive olfactory memory, which normally only declines in the course of the aging process. Compromised efficacy of autophagy is suspected to contribute to brain aging, and, conversely, rejuvenating autophagy in aging neurons is considered a promising strategy to restore cognitive performance6. Autophagosome biogenesis takes place mainly in distal axons, close to presynaptic specializations8, and is suggested to reduce with age9. Retrograde transport of autophagosomes might play a role in neuronal signaling processes, promoting neuronal complexity and preventing neurodegeneration6. Macroautophagy is a process of cellular self-digestion in which portions of the cytoplasm and even whole organelles are sequestered in double-membrane or multi-membrane vesicles (autophagosomes), and then delivered to lysosomes for bulk degradation.
Autophagy lately came into focus for its apparently crucial role in the aging and neurodegeneration process11. Changes in synaptic strength with increased or decreased synaptic activity (synaptic plasticity) are considered to be the core process regulating memory formation across all model systems investigated. Deficits in plasticity might thus be of particular importance for age-induced cognitive decline. In fact, instead of emphasizing the loss of neurons, studies in several models now point towards rather subtle age-related synaptic alterations in the hippocampus and other parts of the cortical brain as being associated with age-associated cognitive decline14. However, causal connections between synaptic changes and age-associated cognitive decline remain to be established. In short, the challenge is to identify the mechanisms of protein homeostasis that are active in neurons, and to understand how these mechanisms intersect with the multiple aspects of neuron function and plasticity over a lifetime. Impairing autophagy within the major learning-related brain center of Drosophila, called the mushroom body (MB), sufficed to trigger brain-wide changes in presynaptic organization in a non-cell autonomous manner. The occurrence of this brain-wide presynaptic metaplasticity was invariably connected to the absence of the specific component of aversive olfactory memory, which normally only declines in the course of the aging process. In contrast, attenuating autophagy in other brain centers was without any measurable effect on synaptic metaplasticity, not even locally in neurons under direct genetic manipulation. We further found signaling of the metabolism-related NPY-type neuropeptide within the MBs (but not MB synaptic vesicle release or excitability) to be important to protect the brain from premature metaplasticity and consequently a decay of memory formation capability.
From a broader perspective, our results provide evidence for a high-level control by a brain integration center (here the MB), whose autophagy status can seemingly tune the overall information processing strategy of an entire brain. To genetically impair autophagy within the Drosophila brain, we expressed RNA interference (RNAi) constructs targeting core components of the autophagy machinery via a pan-neuronal driver line16. One diagnostic feature of autophagic efficacy is the degradation of the ubiquitin-binding scaffold protein p62/SQSTM1. Ubiquitinated proteins meant for autophagic degradation are positive for p62/SQSTM117 (Drosophila homolog: Ref(2)p), which due to its interaction with LC3/ATG8 is degraded via autophagy18. Therefore, lack of autophagy leads to an accumulation of p62 aggregates, while induction of autophagy reduces p62. We hence used p62 as a read-out to screen, in the Drosophila brain, across RNAi lines directed against the expression of autophagy core components. Among the lines tested, RNAi lines targeting atg5 and atg9 provoked p62 aggregates, as evident in immunostainings of adult Drosophila brains. atg9 transcript levels in morphologically isolated brains of the elav/atg9-RNAi were reduced by almost 50% compared to controls. The above-mentioned KD efficiencies are likely an underestimation of transcript KD as not all brain cells, particularly glia, are targeted by this pan-neuronal driver line. Atg5 forms a complex with Atg12 and Atg16, which acts as an E3 ligase in the lipidation of Atg8 (LC3) to promote the elongation of the autophagosomal membranes19. A deficiency of Atg5 should inhibit the lipidation process. The elav/atg5-RNAi brains as expected showed an absence of the second band in Western blot corresponding to lipidated Atg8. We compared these two situations in order to identify the generic roles of autophagy for neuronal and cognitive maintenance of the fly brain.
Ectopic apoptosis21 is a process associated with age in Drosophila22. To investigate whether the manipulation of autophagy using these two RNAi constructs would cause any ectopic apoptosis, we immunostained 10-day-old fly brains for Annexin V and activated Death caspase-1 (Dcp-1), respectively, to detect apoptotic cells. Annexin V binds to phosphatidylserine, a marker of apoptosis when it is on the outer leaflet of the plasma membrane23. Dcp-1 is a commonly used marker for cells undergoing apoptosis in Drosophila24. While we could visualize the previously described age-induced increase of apoptotic cells, no such increase was observed in elav/atg5-RNAi brains or elav/atg9-RNAi brains. Olfactory receptor neurons transmit their information to projection neurons (PNs) in the central brain, with their performance being critical for smell response and learning of olfactory cues30. The majority of PNs can be labeled with the transgenic reporter line gh146-Gal428. Both gh146/atg9-RNAi and gh146/atg5-RNAi had p62 aggregates within PN cell bodies32. Combination of either driver line with either the atg5 or the atg9-RNAi inducible transgene provoked a strong buildup of both p62 and Atg8a in the cell bodies of the MB intrinsic neurons but did not affect levels of the downstream effector Syntaxin-17. Middle-term memory (MTM) comprises anesthesia-sensitive memory (ASM) and anesthesia-resistant memory (ARM). ASM is calculated by subtracting ARM scores, measured after amnestic cooling, from MTM. ASM is considered to be the precursor of long-term consolidated memory (LTM)36. Most important for this context, it is the ASM, unlike the ARM, which has been shown to be strongly impaired with aging37. We compared ARM and ASM scores for pan-neuronal and MB-specific KD of atg9. Both pan-neuronal and, equally, MB-specific KD significantly reduced the ASM but not the ARM component of the MTM. Aged flies show a brain-wide increase of the AZ master scaffold protein Bruchpilot (BRP)39.
Young atg7 generic null mutant flies mimicked aged flies in showing both this increase in BRP staining and an inability to form olfactory memories39. Pan-neuronal KD of either atg5 or atg9 triggered a similar increase in BRP Nc82 label brain-wide, and changes in BRP intensity clearly extended over the expression domain of these drivers. In fact, BRP in all cases appeared to be elevated over the entire central brain. We therefore blocked Gal4 activity in the MBs by simultaneously expressing a Gal4 repressor, Gal80, under control of an independent MB-specific promoter, mb247. We found that MTM was restored when Gal4 activity was simultaneously blocked in MBs with mb247-Gal80, and the brain-wide BRP increase was no longer detected when Gal4 was simultaneously blocked in MBs with Gal80. We also examined the median neurosecretory cells (mNSCs), the designated insulin-producing cells (IPCs). The assumed dendrites of IPCs are located in the Pars Intercerebralis, dorsally in the protocerebrum42. Altered insulin-type signaling has in fact been linked with non-cell autonomous effects of autophagy44. We, therefore, performed atg9 KD in mNSCs using dilp2-Gal445. As expected, atg9 KD in these cells resulted in a strong buildup of p62 aggregates in the targeted neuron populations. Notably, KD of autophagy core components in mice provokes massive neuron loss in cerebral and cerebellar regions in the course of months, while KD of either atg5 or atg9 in the fly brain did not provoke such loss. We subjected the ok107/atg9 KD to transmission electron microscopy (EM) and super-resolution light microscopy (STED). The pre-synaptic AZ scaffold exhibits an electron-dense structure in EM48. At Drosophila synapses, the AZ scaffold appears as a T-shaped structure, hence named T-bar. While principal synapse organization appeared normal in the ok107/atg9-RNAi, T-bars within the MB intrinsic KCs appeared more prominent. Quantification found them to cover a significantly larger area in transmission EM sections. STED microscopy previously allowed us to unmask the nano-architecture of the AZ scaffold.
As far as we could see, the synaptic phenotype observed throughout the Drosophila brain after MB-specific attenuation of autophagy appeared identical to the brain-wide ultrastructural phenotype of aged flies which we reported previously39. Thus, our ultrastructural analysis indicates that autophagic decline within the MB could be of major importance for driving the age-induced decline of learning and memory processes. We attempted to limit the atg5/9 KD to adult life by combining the RNAi line/driver combinations with a temperature-sensitive Gal80 construct, and shifting these flies to non-permissive temperature (29\u2009\u00b0C) after eclosion. Unfortunately, however, this approach did not result in a measurable accumulation of p62 in KCs, likely reflecting a slow turnover of already developmentally expressed Atg5 or Atg9. While we currently cannot differentiate between contributions of autophagy suppression in the pre-hatching and post-hatching phases, our arguments concerning a non-cell autonomous role of the MB remain unaffected. We finally addressed putative mechanisms through which the MB might steer presynaptic metaplasticity in an essentially brain-wide fashion. In additional experiments, we sought to limit synaptic release by expressing temperature-sensitive shibire (shibirets)54 within the MBs of animals raised at restrictive temperature (29\u2009\u00b0C). In other experiments, we induced activity levels in MB neurons throughout development until adult day 10 by expressing a heat-activated transient potential receptor cation channel, dTrpA155, in MBs of animals raised at 29\u2009\u00b0C. In contrast to MB-specific autophagy attenuation, however, these flies displayed morphological defects in their MBs: a continuous MB-specific attenuation of synaptic release led to nearly full physical loss of MBs in ok107/shits and severely malformed MBs in vt30559/shits. We also measured sNPF transcript levels over whole dissected brains upon MB-specific attenuation of autophagy.
Consistently, in immunostainings, the prominent expression of the short NPF (sNPF) precursor peptide in the MB appeared about 50% reduced upon attenuation of autophagy in MBs. As expected from the BRP confocal staining experiments, a metaplastic increase of AZ ultrastructural sizes was observed. The endogenous polyamine spermidine has prominent cardio-protective and neuro-protective effects8, and aging is accompanied by an upshift of presynaptic AZs (metaplasticity)64. Two findings causally linked this upshift to decreased olfactory memory performance. First, when continuously fed with spermidine, flies of 30 days of age were largely protected from these changes. Secondly, genetically provoking this up-shift eliminated the normally age-sensitive memory component in young animals already39. An upshift in the AZ size should increase synaptic strength52, evident in increased SV release in response to natural odors observed in aged but not aged-spermidine-fed flies39. Presynaptic plasticity is crucial for forming memory traces in Drosophila73. Our previous work thus suggests that this presynaptic metaplasticity shifts the operational range of synapses in a way that they become unable to execute the plastic changes faithfully in response to conditioning stimuli. Retrograde transport of autophagosomes might play a role in broader neuronal signaling processes, promoting neuronal complexity and preventing neurodegeneration. Surprisingly, however, our data do not favor a direct substrate relationship between AZ proteins and autophagy. Instead, we find evidence for a seemingly non-cell autonomous relation between brain-wide synapse organization and the autophagic status of the mere MB.
After genetic impairment of autophagy (via atg5 or atg9 KD) using two different MB-specific Gal4 driver lines, we observed presynaptic metaplasticity across the Drosophila olfactory system and beyond. While the autophagic arrest (p62 staining) was largely limited to the expression domain of these drivers, synapses brain-wide were pushed towards a state of metaplasticity. Since the ultrastructural size of AZs and the per-AZ BRP levels [39] increased equally in aged and MB-autophagy-challenged animals, we conclude that the autophagic status of the MB neuron population executes a signaling process which can control the per-AZ amounts of BRP and other AZ proteins. Further studies are warranted to dissect the nature of these signaling processes. We here further addressed the relation between defective autophagy, presynaptic ultrastructure and plasticity, and olfactory memory formation. Autophagosome biogenesis is very dominant close to presynaptic specializations in distal axons, in a compartmentalized fashion, and efficient macro-autophagy is essential for neuronal homeostasis and survival [63]. Notably, accumulating evidence supports an important role of neuropeptide Y (NPY) in aging and lifespan determination. NPY levels decrease with age in mice, and re-substituting NPY is able to counteract age-induced changes of the brain at several levels [63]. A cross-talk between autophagy and NPY in regulating feeding behavior has been demonstrated in mice [62]. We used an snpf hypomorph allele mimicking both the MB reduction of sNPF seen in the MB-specific autophagy-KD situations and the reduced sNPF expression of aged animals. In this hypomorph allele we observed a similar up-regulation of the BRP Nc82 signal. KD of the snpfr using an MB-specific driver drove the brain-wide metaplastic change even more strongly than the sNPF hypomorph did. This scenario, in ultrastructural detail, resembled both the age-induced and the MB-specific autophagy-KD-induced metaplasticity phenotypes. These results therefore support an essential role of the MB in integrating the metabolic state of Drosophila, in an autocrine fashion, to modulate the presynaptic release scaffold state throughout the fly brain. The mechanistic basis of this intriguing regulation warrants further investigation. Interestingly, elevated cAMP signaling generally drives plasticity in Drosophila neurons, while sNPF signaling is thought to reduce cAMP [74] and thus might be able to reset plastic changes such as increased BRP levels. In apparent contradiction to sNPF signaling directly controlling metaplasticity brain-wide is our finding that MB-specific KD of the sNPFR sufficed to increase BRP levels. At this moment, we can only speculate as to why KD of the sNPF receptor also results in extended metaplastic changes. Potentially, sNPF receptor signaling within the MB might be important to control sNPF secretion in a physiological manner via a quasi-autocrine mechanism. We here found that the transcript expression level of an NPY family member (sNPF) is controlled by autophagy within the MBs. Intriguingly, the metaplastic state characterized both aged and MB-specific autophagy-KD animals, and in both cases provoked a specific loss of the ASM component of memory. Notably, the olfactory MTM measured here is considered the direct precursor of olfactory LTM, which in turn has been shown to be energetically costly [31]. Autophagy and NPY signaling are prime candidate mechanisms for the therapy of age-induced cognitive decline [75]. Recent research has uncovered several examples connecting autophagy and hormonal-type regulation interacting between organ systems in non-cell-autonomous regimes. For instance, Atg18 acts non-cell autonomously, both in neurons and in intestines, to firstly maintain the wild-type lifespan of C. elegans and secondly respond to dietary restriction and DAF-2 longevity signals [44]. Atg18 in chemosensory neurons and intestines acts in parallel and converges on unidentified neurons that secrete neuropeptides to mediate the influence of DAF-2 on C. elegans lifespan, through the transcription factor DAF-16/FOXO, in response to reduced IGF signaling [44]. In Drosophila, neuronal up-regulation of AMPK induces autophagy, via up-regulation of Atg1, non-cell autonomously in intestines and slows intestinal aging, and vice versa. Moreover, up-regulation of Atg1 in neurons extends lifespan and maintains intestinal homeostasis during aging, and these inter-tissue effects of AMPK/Atg1 were linked to altered insulin-like signaling [43]. In contrast, we found that the insulin-producing cells (IPCs) themselves do not mediate the observed metaplastic state, as neither KD of atg9 nor KD of snpfr in the pars intercerebralis had any impact on the synaptic status of these flies. The fruit fly can evaluate its metabolic state by integrating hunger and satiety signals at the very KC-to-MBON synapses in the MB, under the control of dopaminergic neurons, to control hunger-driven food-seeking behavior [77]. At the same time, long-term memory encoding necessitates an increase in MB energy flux, with dopamine signaling mediating this energy switch in the MB [31]. In line with these findings, we here provide a modeling basis to study these delicate relations in an exemplary fashion. Taken together, our data suggest that the MB integrates the metabolic state of the fly, via cross-talk between autophagy and sNPF signaling, with the decision of whether or not to form memories, and that a block in this cross-talk with aging gives rise to the synaptic metaplasticity that initiates age-induced memory impairment in Drosophila.
It is tempting to speculate that the MB executes, hierarchically, a high-level control integrating the metabolic and caloric situation with a life-strategy decision of whether or not to form mid-term memories. Autophagy regulation is tightly connected to cellular energetics, nutrient recycling, and the maintenance of cellular energy status [76].
Fly strains were reared under standard laboratory conditions at 25 °C and 65% humidity with a constant 12 h:12 h light:dark cycle, unless otherwise stated. Flies were raised on standard fly food (https://bdsc.indiana.edu/information/recipes/bloomfood.html, with minor modifications). For KD studies, flies were mated at 25 °C and the F1 progeny were allowed to develop and age (until the desired age) at 29 °C. Flies used in all experiments are F1 progeny. For aging, flies were collected once every 2 days (preferably in the evening) and flipped onto fresh food every 2–3 days until the desired age was reached. Isogenized w; sNPFc00448 flies were kindly provided by Dr. Peter Soba (Universitätsklinikum Hamburg-Eppendorf). dilp2-Gal4 (#37516), atg7-RNAi, atg5-RNAi, atg9-RNAi (#34901), atg8-RNAi (#28989) and syx17-RNAi (#25896) were obtained from the Bloomington Drosophila Stock Center. vt30559-Gal4 (#206077) and atg17-RNAi (#KK104864) were obtained from the Vienna Drosophila Resource Center. In addition, elav-Gal4, appl-Gal4, gh146-Gal4, ok107-Gal4, ok107-Gal4; mb247-Gal80 and ok107-Gal4; tub-Gal80ts were used.
For validation of the RNAi KD efficiency of atg5 and atg9, and to quantify differences in transcription of snpf, qRT-PCR was performed. Total RNA was isolated from whole brains of 50-day-old or 10-day-old female flies using TRIzol reagent (Invitrogen). RNA concentration was measured using a spectrophotometer (NanoDrop), and 500 ng of RNA was converted to cDNA using the SuperScript III First Strand Synthesis System (Invitrogen) according to the manufacturer's instructions. The primers used to amplify atg5 were: atg5-Forward (GCACTACATGTCCTGCCTGA) and atg5-Reverse (AGATTCGCAGGGGAATGTTT). The primers used to amplify snpf were: snpf-Forward (CAAAAAGCGTGGCATACATT) and snpf-Reverse (AATGTCCGGATTTCAAGGAG). The primers used to amplify atg9 were: atg9-Forward (TTGTCCAGATCCGAATCCTC) and atg9-Reverse (TCGTCTGGCTACTTGCCTTT). actin5c was used as a reference gene for normalization and for calculation of fold-change differences between control and experimental group(s). The primers used to amplify actin5c were: actin5c-Forward (TTGTCTGGGCAAGAGGATCAG) and actin5c-Reverse (ACCACTCGCACTTGCACTTTC). All primers were tested for their amplification efficiency according to standard methods. qRT-PCR was performed using the DyNAmo Flash SYBR Green master mix (Thermo Fisher #F415L) and an Agilent Technologies Stratagene Mx3005P real-time PCR system according to the manufacturer's instructions. The threshold cycle (Ct) is the point where each kinetic curve reaches a common arbitrary fluorescence level (AFL), placed to intersect each curve in the region of exponential increase. The ΔCt values of the experimental group were then subtracted from those of the control group, yielding −ΔΔCt, and the fold change was calculated as 2^−ΔΔCt. Values are presented as mean ± SE of triplicate assays.
To detect p62 and Atg8a, five female fly brains were homogenized in 50 μl 2% SDS buffer containing protease inhibitors. An amount equivalent to one brain was loaded and resolved on 4–20% gradient gels, followed by electroblotting onto nitrocellulose membranes. Subsequently, blots were probed with antibodies against Tubulin (loading control), p62 and Atg8a (see the Antibodies section for further information). Immunoblots were scanned and analyzed using ImageJ software.
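The fold-change arithmetic described above (2^−ΔΔCt relative to the actin5c reference) can be sketched as follows; the Ct values in the example are hypothetical, not measured values from this study.

```python
def fold_change(ct_target_exp, ct_ref_exp, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt (Livak) method:
    dCt  = Ct(target) - Ct(reference) within each group,
    ddCt = dCt(experimental) - dCt(control),
    fold change = 2 ** -ddCt."""
    d_ct_exp = ct_target_exp - ct_ref_exp
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    return 2 ** -(d_ct_exp - d_ct_ctrl)

# Hypothetical Ct values for snpf vs. the actin5c reference:
# a ddCt of +2 corresponds to a 4-fold reduction in transcript level.
print(fold_change(26.0, 18.0, 24.0, 18.0))  # 0.25
```

In practice the Ct inputs would be means of the triplicate assays mentioned above, and the method assumes near-100% amplification efficiency for both primer pairs.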
The relative amounts of p62 and Atg8a proteins in individual samples were corrected using the antibody to Tubulin as loading control.
The following antibody dilutions were used for confocal microscopy: MαBRPNc82, MαFasII1D4, Rbαp62, RbαsNPF, GαM Cy3, GαRb Alexa 488, MαGFP, RbαAnnexinV and RbαDcp-1. The following antibody dilutions were used for super-resolution STED microscopy: MαBRPNc82, Rbαp62, RbαsNPF, GαM Abberior STAR 635P and GαRb Alexa 594. The following antibody dilutions were used for Western blots: Mαp62, MαTubulin, RbαAtg8a, GαM peroxidase and GαRb peroxidase.
Adult brain dissections were always done between 8 a.m. and 11 a.m. Adult brains were dissected in ice-cold hemolymph-like saline (HL3) solution and immediately fixed in 4% paraformaldehyde at room temperature (RT) for 30 min. After fixation, the brains were incubated in PBS containing 0.5% Triton X-100 (0.5% PBT) for 30 min. Afterwards, they were blocked in 10% normal goat serum (NGS) for 2 h at RT. For primary antibody treatment, samples were incubated in 0.5% PBT containing 5% NGS, 0.1% sodium azide and the primary antibodies for 48 h at 4 °C. After primary antibody incubation, brains were washed in 0.5% PBT for 6 × 30 min at RT. All samples were then incubated in 0.5% PBT with 5% NGS and 0.1% sodium azide containing the secondary antibodies for 24 h at 4 °C. Brains were washed in 0.5% PBT for 6 × 30 min at RT, followed by overnight incubation in Vectashield® (Vector Laboratories) before confocal scanning. The dilutions of the various antibodies used for immunohistochemistry are given in the Antibodies section. Conventional confocal images were acquired with a Leica TCS SP8 confocal microscope (Leica Microsystems) using a ×20, 0.7 NA oil objective for whole-brain imaging.
All images were acquired using Leica LAS X software. The lateral pixel size was set to values around 300 nm; exact values varied with the situation. Typically, 1024 × 1024 images were scanned at 600 Hz using 4× line averaging. STED microscopy was performed using a Leica Microsystems TCS SP8 gSTED 3× set-up equipped with a pulsed white-light laser (WLL) and two STED lasers for depletion. The pulsed 775-nm STED laser was triggered by the WLL. Images were acquired with a ×100, 1.4 NA oil-immersion objective. 1024 × 1024 pixel STED images were scanned at 600 Hz using 8× line averaging. The lateral pixel size was set to ~18 nm, with a z-stack of three images at a step size of 0.13 µm for better PSF estimation. To minimize thermal drift, the microscope was housed in an incubation chamber. STED images were processed using Huygens deconvolution software. STED images were acquired at the cellular imaging facility of the Leibniz Institute for Molecular Pharmacology (FMP), Berlin, Germany.
Segmentation of 3D image stacks of the central brain region was done using Amira® software (Visage Imaging GmbH). The first step was to separate the object of interest from the background. A unique label was defined for each region in the first fluorescence channel (e.g. Nc82). This was done by manually assigning the central brain region to interior regions on the basis of the voxel values (volumetric pixels); by this procedure, each voxel outside the central brain region was excluded from the interior label. A full statistical analysis of the image data associated with the segmented materials was obtained by applying the Material Statistics module of the Amira® software, in which the mean gray value of the interior region is calculated. The median voxel values of the central brain regions, as measured in individual adult brains, were compared in order to evaluate the synaptic marker label.
In the case of p62/Ref(2)P and the sNPF peptide precursor, images were quantified using the open-source software FIJI (http://fiji.sc/FIJI). Confocal stacks were merged into a single z-plane using the maximum-projection function. Subsequently, the central brain region was manually selected (using the free-hand tool) and absolute fluorescence intensity was measured and normalized to the area of the central brain for each brain. For measurement of BRP ring diameters, STED images were processed in ImageJ. The diameters of planar-oriented BRP rings were measured using the line tool of ImageJ: the distance from intensity maximum to intensity maximum was obtained from the plot window of individual hand-drawn lines and transferred to Microsoft Excel. For cell counting, we collected confocal stacks at 0.5-µm intervals with a ×63 objective lens. The posterior region of the MB was zoomed at ×1.5 magnification so that all Gal4-expressing Kenyon cell bodies were in one frame. The posterior MBs in the left and right hemispheres were scanned and analyzed separately. For the quantitative analysis, brains were scanned with comparable intensity and offset. Images of the confocal stacks were analyzed with the open-source software FIJI. Randomly chosen stacks with non-overlapping cell bodies were examined manually to quantify GFP-positive cell bodies.
Brains were dissected in HL3 solution and fixed for 20 min at RT with 4% paraformaldehyde and 0.5% glutaraldehyde in PBS. Subsequently, the brains were incubated overnight at 4 °C with 2% glutaraldehyde in buffer containing 0.1 M sodium cacodylate at pH 7.2. Brains were then washed 3× in cacodylate buffer for 10 min at 20–30 °C. Afterwards, the brains were incubated with 1% osmium tetroxide and 0.8% KFeCN (in 0.1 M cacodylate buffer) for 90 min on ice. Brains were then washed with cacodylate buffer for 10 min on ice, followed by three quick washes with distilled water.
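The BRP ring measurement described above (the distance between the two intensity maxima of a hand-drawn line profile across a planar-oriented ring) can be sketched as follows; the profile values and the helper function are illustrative, not part of the original ImageJ workflow.

```python
def ring_diameter(profile, pixel_size_nm):
    """Diameter of a ring from a 1-D intensity profile drawn across it:
    the peak-to-peak distance between the intensity maximum on each side
    of the profile midpoint, converted to nm."""
    mid = len(profile) // 2
    left = max(range(mid), key=lambda i: profile[i])                  # left-hand peak
    right = max(range(mid, len(profile)), key=lambda i: profile[i])   # right-hand peak
    return (right - left) * pixel_size_nm

# Hypothetical profile with two peaks 6 px apart, at the ~18 nm/px
# STED sampling mentioned above:
print(ring_diameter([1, 2, 8, 3, 1, 1, 1, 3, 8, 2, 1], 18.0))  # 108.0
```

This mirrors reading the two maxima off the ImageJ plot window; it assumes the hand-drawn line crosses the ring roughly through its center.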
The brains were stained with 1% uranyl acetate (w/v) for 90 min on ice and dehydrated through a series of increasing alcohol concentrations. Samples were embedded in EPON resin by sequential incubation in ethanol/EPON 1:1 solution for 45 and 90 min at 20–30 °C, then in pure EPON overnight at 15–20 °C. Thereafter, the resin was changed once and brains were embedded in a single block at 60 °C to allow polymerization of the resin. After embedding, 60-nm sections were cut using a Leica Ultracut E ultramicrotome equipped with a 2-mm diamond knife. Sections were collected on 100-mesh copper grids coated with 0.1% Pioloform resin. Contrast was enhanced by placing the grids in 2% uranyl acetate for 2 min, followed by washing with water three times and incubation in lead citrate for 1.5 min. The grids were washed 3× with water and dried. Images were acquired fully automatically on an FEI Tecnai Spirit transmission electron microscope operated at 120 kV, equipped with an FEI 2K Eagle CCD camera, using Leginon. Regions of interest were first selected at ×560 nominal magnification and then successively imaged at ×4400, ×11,000 and ×21,000 nominal magnification, respectively.
Behavioral experiments were performed in dim red light at 25 °C and ~70% humidity, with 3-octanol and 4-methylcyclohexanol serving as olfactory cues and 120 V alternating current serving as the behavioral reinforcer. Standard single-cycle olfactory associative conditioning was performed as previously described, with minor modifications. Briefly, 80–100 flies received one training session, during which they were exposed sequentially to one odor (CS+; 3-octanol or 4-methylcyclohexanol) paired with electric shock (US), followed by a rest of 30 s, and then to a second odor (CS−) without the US for 60 s. During testing, flies were exposed simultaneously to the CS+ and CS− in a T-maze for 30 s; conditioned odor avoidance was tested immediately after training. Subsequently, flies trapped in either T-maze arm were anaesthetized and counted, and for each distribution a performance index (PI) was calculated. For MTM, flies were trained as above but tested 1 h after training. For separation of consolidated ARM from labile ASM, flies were trained and one group was cooled in an ice bath for 90 s, 30 min after training. These flies were allowed a recovery period of 30 min, so that testing occurred 1 h after training onset. Since labile ASM is erased by this procedure, performance of the cooled group is solely due to ARM; ASM was therefore calculated by subtracting the PI of the ARM group from the median MTM PI.
No statistical methods were used to pre-determine sample sizes, but our sample sizes are similar to those reported in previous publications. Wherever possible, data were collected with the investigator blind to genotype, treatment and age. Data collection and processing were performed in parallel and in randomized order for all experiments. Two groups were compared using the non-parametric Mann–Whitney U-test, while more than two groups were compared using one-way ANOVA with post-hoc correction or the Kruskal–Wallis test, as specified in the figure legends. Numbers of independent experiments (n) are given in the figure legends. Unless otherwise stated, data were analyzed with GraphPad Prism 5 software. Asterisks indicate statistical significance (*p < 0.05; **p < 0.01; ***p < 0.001; ns p > 0.05).
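The performance index itself is not spelled out in the text above. A minimal sketch, assuming the standard form used for T-maze olfactory conditioning (percentage of flies avoiding the shock-paired odor minus percentage approaching it), together with the ASM-by-subtraction logic that is described, would be:

```python
def performance_index(n_avoiding_cs_plus, n_toward_cs_plus):
    """Assumed PI form for one T-maze test (not quoted verbatim in the
    methods above): percentage of flies avoiding the shock-paired odor
    minus the percentage approaching it."""
    total = n_avoiding_cs_plus + n_toward_cs_plus
    return 100.0 * (n_avoiding_cs_plus - n_toward_cs_plus) / total

# Hypothetical counts: uncooled group (full MTM) vs. cooled group (ARM only).
pi_mtm = performance_index(80, 20)  # 60.0
pi_arm = performance_index(65, 35)  # 30.0
# As described above, ASM is the cold-sensitive remainder:
asm = pi_mtm - pi_arm
print(asm)  # 30.0
```

In the actual protocol the PI for each genotype is averaged over reciprocal experiments with the two odors swapped, which cancels innate odor biases.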
This study did not use animals and/or animal-derived materials for which ethical approval is required. Further information on experimental design is available in the Supplementary Information. Peer Review File. Reporting Summary."} +{"text": "Background: Various techniques for tissue engineering have been introduced to aid the regeneration of defective or lost bone tissue. The aim of this study was to compare the in vivo bone-forming potential of bone marrow mesenchymal stem cells (BM-MSCs) and platelet-rich fibrin (PRF) on induced bone defects in rats' tibiae. Methods: In total, one defect of 3-mm diameter was created in each tibia of 36 Wistar male rats. There were two groups: group A, left tibia bone defects that received PRF; and group B, right tibia bone defects of the same animal that received BM-MSCs loaded on a chitosan scaffold. Subsequently, scanning electron microscopy/energy-dispersive X-ray (SEM/EDX) analysis was performed at 3 and 10 days and 3 weeks post-implantation, following euthanasia (n=12). Results: The EDX analysis performed for each group and time point revealed a significant increase in the mean calcium and phosphorous weight percentage in the BM-MSC-treated group relative to the PRF-treated group at all time intervals (P < 0.05). Moreover, the mean calcium and phosphorus weight percentage increased with time since the surgical intervention in both the PRF- and BM-MSC-treated groups (P < 0.05). Conclusions: In the present study, both BM-MSCs and PRF were capable of healing osseous defects induced in a rat tibial model. Yet, BM-MSCs promoted more adequate healing, with higher mean calcium and phosphorous weight percentages than PRF at all time points, and showed greater integration into the surrounding tissues than PRF.
Limitations of autologous bone grafts relate to the harvesting process, including the quality and quantity of the grafted bone and complications at the second surgical site, while allogenic bone grafts carry the risk of disease transmission and immunological rejection. Hence, there is considerable motivation to develop alternative solutions for bone regeneration [2]. The use of tissue engineering approaches has proven effective in inducing bone formation, by applying mesenchymal stem cells (MSCs) [3] or platelet-rich fibrin (PRF) [4]. The capacity of bone marrow mesenchymal stem cells (BM-MSCs) for bone repair has been well documented in vivo with promising results; BM-MSCs remain the most widely used source of osteogenic cells in bone tissue engineering studies [7]. MSCs are undifferentiated cells capable of replication [8] that have the potential to differentiate along multiple cell lineages, giving rise to cells that form mesenchymal tissues, including bone, cartilage and muscle [9]. PRF is a second-generation platelet-rich biomaterial [10]. PRF is derived from a natural and slowly progressive polymerization process occurring during centrifugation, which increases the incorporation of circulating cytokines and growth factors into the fibrin mesh and prevents them from undergoing proteolysis [11]. In addition, the PRF fibrin matrix provides optimal support for MSCs, which constitute the determining elements responsible for its real therapeutic potential [13]. Platelets are active growth factor-secreting cells that initiate wound healing, connective tissue healing and cell proliferation [14]. PRF is therefore considered an inexpensive autologous fibrin scaffold, prepared in approximately one minute and hence incurring no cost for a membrane or bone graft [15]. In the present research, rats were used as they are easy to handle and inexpensive; in addition, their breeding cycles are substantially shorter, providing enough animals in a reasonable amount of time [16].
Research on bone tissue engineering is focused on the development of alternatives to autologous bone grafts for bone reconstruction. Several biomaterials are used to treat bone deficiencies, and although multiple stem cell-based products and biomaterials are currently being examined, comparative studies evaluating the most appropriate approach in this context are rarely performed. The purpose of this study was to compare the regenerative capacity of bone marrow (BM)-MSCs and PRF implanted in surgically induced bone defects in rats' tibiae.
The study protocol was approved by the Research Ethics Committee of the Faculty of Dentistry, Cairo University (151031). A total of 36 male Wistar rats weighing 175–200 g, aged 12–14 weeks, were used in this study. The animals were obtained from and housed in the animal house, Faculty of Medicine, Cairo University. The animals were randomly placed in separate cages under controlled room temperature (25±2°C) with a 12/12 h light/dark cycle and were fed food and water ad libitum.
Bone marrow was harvested [17] by flushing the femurs with Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum (GIBCO/BRL). Cells were isolated on a density gradient [Ficoll/Paque (Pharmacia)] and cultured in culture medium supplemented with 1% penicillin-streptomycin (GIBCO/BRL) at 37°C in a humidified 5% CO2 incubator. When large colonies developed (80–90% confluence), cultures were washed twice with phosphate-buffered saline (PBS) and cells were trypsinized with 0.25% trypsin in 1 mM EDTA (GIBCO/BRL) for 5 minutes at 37°C. After centrifugation (at 2400 rpm for 20 minutes), cells were re-suspended in serum-supplemented medium and incubated in a 50 cm2 culture flask (Falcon). On day 14, the adherent colonies of cells were trypsinized and counted [17].
BM-MSCs were isolated from the femurs of 6 Wistar donor rats (100±20 g); BM-MSC isolation and propagation took 14 days before the experimental procedures and were carried out under aseptic conditions, as previously described [17]. Culture confluence was monitored by an inverted light microscope with a digital camera.
Surface antigens CD90 and CD34 were detected by flow cytometry to allow identification of the BM-MSCs, as follows. Following blocking in 0.5% BSA and 2% FBS in PBS, 100,000 cells were incubated in the dark at 4°C for 20 min with the following monoclonal antibodies: FITC CD90 and PE CD34. A mouse isotype PE antibody was used as control. Cells were washed, suspended in 500 µl fluorescence-activated cell sorting (FACS) buffer and analyzed using a Cytomics FC 500 flow cytometer with CPX software version 2.2. BM-MSC osteogenic differentiation was induced with the StemPro osteogenesis differentiation kit: at the third passage, 1 × 10^3 cells per well were incubated for 7 days in 300 µl of osteogenic medium (StemPro medium) and identified by alizarin red staining (Sigma-Aldrich) for 30 min at room temperature; the stained mineralized nodules were monitored using an inverted light microscope with a digital camera. The results were presented by descriptive analysis.
For the surgical procedure, the proximal–medial area of each tibia was exposed. While blood samples were being prepared, a 3-mm diameter bone defect was created using a round surgical bur [3] under constant irrigation with saline solution, in both tibiae of the same animal (split-body design), to avoid selection bias and neutralize any confounders that might affect the outcomes of both treatments.
Experimental groups were standardized among all the animals: in group A, the left tibia defect received a PRF clot immediately placed into the defect with sterile tweezers; in group B, the right tibia of the same animal received BM-MSCs seeded on a chitosan scaffold, implanted into the tibial bone defect using a sterile spatula. Both groups were randomly sub-divided according to time of euthanasia into three sub-groups, at 3 days, 10 days and 3 weeks, respectively (n = 12). Anaesthesia included 20 mg/kg body weight xylazine HCl, and postoperative care was provided [19].
To obtain a porous chitosan scaffold to deliver BM-MSCs into the defect, 1 g chitosan (Merck, Germany) was dissolved in 200 µl 0.2 M acetic acid, stored for 1 day at room temperature, poured into a 3-mm diameter stainless steel circular mould, stored in a deep freezer at −70°C for 5 days, and then lyophilized for 3 days as follows. In the lyophilizer (Thermo Fisher Scientific), there were three phases of preparation. The first was the freezing phase, in which the sample was exposed to −40°C in a vacuum for 10 min. The second was the warm-up vacuum pump phase, in which the sample was exposed to −15°C in a vacuum for 20 min. The third was the main drying phase, in which the sample was exposed to 30°C in a vacuum for 3 days; after the 3 days, a blank porous chitosan scaffold had been prepared.
Prior to cell seeding, the lyophilized scaffolds were immersed in absolute ethanol for sterilization. Hydration was accomplished by sequential immersion in serially diluted ethanol solutions of 70, 50 and 25%, for intervals of 30 min each. Scaffolds were finally equilibrated in PBS followed by standard culture medium, and then placed in tissue culture plates ready to be seeded. BM-MSCs were seeded at a density of 2.5 × 10^6 cells/scaffold under static conditions, by means of a cell suspension. The seeded scaffold was then placed in the defect to deliver the stem cells, and the site was sutured closed.
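The three-phase lyophilization schedule above can be captured as a small data structure for checking the total run time; the temperatures and durations are taken directly from the protocol.

```python
# (phase, temperature in degrees C, duration in minutes), as in the protocol above
schedule = [
    ("freezing", -40, 10),
    ("warm-up vacuum pump", -15, 20),
    ("main drying", 30, 3 * 24 * 60),  # 3 days
]

total_hours = sum(minutes for _, _, minutes in schedule) / 60
print(total_hours)  # 72.5
```

Laying the phases out this way makes it easy to verify that the main drying phase dominates the cycle (3 days of the ~72.5 h total).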
A total of 2 ml venous blood was drawn from the caudal vein of each rat into a plain tube and immediately centrifuged at room temperature in a laboratory centrifuge for 10 min at 3000 rpm [20]. In the middle of the tube, a fibrin clot formed between the supernatant acellular plasma and the lower red corpuscles. The PRF clot was detached using sterile tweezers and applied to the bone defect.
Tibiae were carefully dissected free from soft tissue; bone specimens of each group were sectioned using a disc in a low-speed handpiece under constant irrigation, so as to include the entire defect site. Specimens were placed in 2.5% buffered glutaraldehyde solution (pH 4.7) for 6 hours, then dehydrated in increasing concentrations of ethanol for 10 minutes at each concentration. Finally, they were mounted on EM stubs and examined by SEM. The EDX analysis system works as an integrated feature of the SEM (Quanta FEG 250 attached with an EDX unit). EDX analysis of the bone surfaces was performed, and the elemental distributions of phosphorus and calcium (expressed as weight percentages) were determined. Composition scans were collected at randomly selected points on the bone surfaces of the defect using the backscattered electron mode. Data were obtained by calculating the mean of ten independent determinations [21].
One-way analysis of variance (ANOVA) was used to compare different observation times within the same group, followed by Tukey's post hoc test when the difference was found to be significant. A t-test was used to compare the two groups, using IBM SPSS version 18.0 for Windows. The significance level was set at p ≤ 0.05.
Freshly isolated bone marrow cells were rounded and non-adhesive; adherent cells subsequently formed colonies [22]. Fibro-cellular tissue and traces of PRF material were seen in sub-group A1. There was a significant increase in the mean calcium and phosphorous weight percentage of group B relative to group A at all time intervals.
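The protocols above give only rotor speeds (2400 rpm for cell pelleting, 3000 rpm for PRF). Converting rpm to relative centrifugal force requires the rotor radius, which is not stated; the sketch below assumes a 10-cm radius purely for illustration.

```python
def rcf_from_rpm(rpm, radius_cm):
    """Relative centrifugal force (in multiples of g) from rotor speed:
    RCF = 1.118e-5 * radius(cm) * rpm**2."""
    return 1.118e-5 * radius_cm * rpm ** 2

# Assumed 10-cm rotor radius (not stated in the protocols above):
print(round(rcf_from_rpm(2400, 10.0)))  # cell pelleting step
print(round(rcf_from_rpm(3000, 10.0)))  # PRF centrifugation step
```

Reporting RCF rather than rpm would make the centrifugation steps reproducible across centrifuges with different rotor radii.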
Moreover, the mean calcium and phosphorus weight percentages increased with time in both groups (Table 5). Dataset 1: raw data for the EDX analysis and flow cytometry gating graphs for identification of BM-MSCs; raw SEM images are also included. Copyright: © 2018 Rady D et al. Data associated with the article are available under the terms of the Creative Commons Zero 'No rights reserved' data waiver (CC0 1.0 Public domain dedication).
Bone regeneration using BM-MSCs has been well reported and standardized in many protocols; Donzelli et al. (2007) showed that adult rat bone marrow is a suitable source of MSCs that can easily be induced to differentiate into an osteogenic lineage, so these cells are considered promising supporting cells for bone reconstruction [23]. Most in vitro and many in vivo studies have proposed that MSCs possess the ability to increase osteoinduction and osteogenesis [27].
In the current study, both BM-MSCs and PRF promoted bone regeneration: the newly formed bone was almost completely remodelled and integrated into the surrounding old bone, with well-vascularized fibro-cellular tissue. In addition, evidence of osteogenesis was reflected by the presence of blood vessels. However, bone regenerative capacity was improved in defects treated with BM-MSCs compared with those treated with PRF. SEM-EDX analysis revealed a significant increase in the mean calcium and phosphorous weight percentage in the BM-MSC group relative to the PRF group at all time intervals.
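The between-group comparison named in the statistics section (a t-test on mean weight percentages) can be sketched with stdlib tools only; the readings below are hypothetical, and in practice the p-value would come from SPSS (or scipy.stats.ttest_ind) rather than from the bare statistic.

```python
from statistics import mean, stdev
from math import sqrt

def pooled_t(a, b):
    """Student's two-sample t statistic with pooled variance; the p-value
    would be looked up in the t distribution with len(a)+len(b)-2 df."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical calcium weight-percent readings, ten determinations per
# group as in the EDX protocol above (values are illustrative only):
prf   = [10.2, 9.8, 10.5, 9.9, 10.1, 10.4, 9.7, 10.0, 10.3, 9.6]
bmmsc = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 11.7, 12.2, 12.5, 11.6]
t = pooled_t(bmmsc, prf)
print(round(t, 2))
```

A large positive t here reflects the consistently higher calcium weight percentage in the BM-MSC group; the equal-variance form matches the default Student's t-test, not Welch's variant.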
Considering PRF, the released growth factors are postulated to be promoters of tissue regeneration, tissue vascularity, mitosis of MSCs and osteoblasts, and the rate of collagen formation, playing a key role in the rate and extent of bone formation [28]. This would help explain the significant increase in calcium and phosphorus weight percentages in the PRF group over the course of the experiment. Accordingly, there was a marked drop in the elemental analysis of calcium and phosphorous in sub-group A1, which increased gradually in sub-groups A2 and A3. This can be explained by the findings of a previous study on the pattern of growth factor release, in which PRF sustained long-term release of TGF-β1 and PDGF-AB that peaked at day 14, leading to increased mineralization, then declined mildly but showed a delayed peak of release in vivo [11].
Notably, a significant increase in the mean calcium and phosphorus weight percentage was observed in the BM-MSC-treated group at all time intervals throughout the experiment when compared with the corresponding PRF group. In accordance with the findings of a previous study, SEM/EDX analysis of osteogenically differentiated MSCs seeded on a collagen scaffold demonstrated that calcium co-localized with phosphorous, with a gradual increase of both chemical elements observed from day 7 up to very high levels at day 28 [23]. The BM-MSC-treated group exhibited more organized bone architecture than the PRF-treated group: sub-group B3 exhibited well-oriented, thick and smooth interconnecting bone trabeculae filling the defect, whereas sub-group A3 revealed a spongy-like pattern with abundant non-remodelled vascular spaces containing fibro-cellular tissue. The proposed mechanism through which BM-MSCs contribute to enhanced bone regeneration is via maturation into osteoblasts in vivo, or via an indirect pathway of paracrine effects on host stem or progenitor cells [29]. In conclusion, we confirmed that PRF yielded inferior bone formation to that of BM-MSCs after implantation in rat tibiae.
The data referenced by this article are under copyright with the following copyright statement: Copyright: © 2018 Rady D et al. Data associated with the article are available under the terms of the Creative Commons Zero 'No rights reserved' data waiver (CC0 1.0 Public domain dedication). https://doi.org/10.5256/f1000research.15985.d218548 Dataset 1. Raw data for EDX analysis and flow cytometry gating graphs for identification of BM-MSCs. Also included are raw SEM images.
A statement needs to be added in the methods outlining the power analysis performed to determine the sample size in each group. Despite the localized effect of BM-MSCs and PRF being applied on a scaffold, the authors should add to their discussion the potential systemic effect of both treatments, especially as both defects were created in the same animal (right and left tibiae with different treatments). I am guessing that this was done to reduce the number of animals used in the experiment. However, the systemic effect of the treatments should be addressed as a limitation of the study, and/or a discussion should be added to clarify how a systemic effect could not have contributed to the significant results. Statistical analysis: a two-way ANOVA would be more appropriate, in order to investigate the single main effects of time and treatment as well as the possible interaction between treatment and time. This would add more strength to the results than using post hoc results and t-tests. Overall, the study was well performed and showed some interesting results. After looking at the referees' reports, which summarized some revisions required for this paper, I would like to add the above points. I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
The idea of the research article \"Healing capacity of bone marrow mesenchymal stem cells versus platelet-rich fibrin in tibial bone defects of albino rats: an in vivo study\" by Rady et al. is interesting and valuable, as it compares two different ways of enhancing bone defect healing in vivo, which could be a base for clinical application. Overall, the paper is clear, substantially easy to read and well constructed, but there are still suggested minor comments the authors could deal with, or at least discuss, for additional impact. Methods: It is worth mentioning the number of rats per cage. Further details clarifying the methodology and aseptic conditions of BM-MSC isolation from the femurs of donor rats would be a valuable addition. It is recommended to mention how the depth of the bone defect was controlled to be standard in all experimentally induced defects. It would also be worth mentioning where the laboratory procedures for BM-MSC isolation were done. Further details are needed about the size of the BM-MSC and PRF pellets and the methodology of loading them into the bone defect. Results: Fig. 3 needs to show monitoring of the positively stained calcified nodules, as mentioned in the methodology. Recommendations based on the paper's results would be worth mentioning. I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard. First of all, I would like to congratulate the authors for attempting to undertake this project, which I found very interesting and of valuable additional knowledge. The manuscript itself is well-written and well-structured.
The authors of this study have investigated the healing capacity of bone marrow mesenchymal stem cells versus platelet-rich fibrin in tibial bone defects of albino rats. Based on the study results, the authors have concluded that BM-MSCs promoted more adequate healing, with higher mean calcium and phosphorus weight percentages than PRF at all time points, and showed greater integration into the surrounding tissues than PRF. However, the authors need to address the following minor remarks: Methods: The authors mentioned \"experimental procedures under aseptic conditions as previously described\"; however, I found no previously described information. Results: 1. In vitro evaluation of BM-MSCs: it is not clear in the text which group, 3 days or 7 days, the described cell shapes belong to. 2. Alizarin red staining was used, but insufficient information was provided on the benefits of using this stain. Discussion: Discussion of the results is quite comprehensive. In analyzing the results, the authors also cite previous studies to support the explanation of these results. Conclusions: I think the authors should have added more points to conclude the hard work they have done. I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard. This report by Rady et al. examines the healing capacity of bone marrow mesenchymal stem cells versus platelet-rich fibrin in tibial bone defects of albino rats. The authors conclude that PRF yielded inferior bone formation to that by BM-MSCs after implantation in rat tibiae. The study, although it may be small, adds knowledge to the existing literature.
Suggested minor comments would help to improve the impact of this paper. Methods: Isolation, culture and identification of BM-MSCs: (aseptic conditions as previously described): no previously described information was found. Establishment of bone defects: what was the depth of the defect? Did it reach the bone marrow spaces? Preparation of PRF: taking 2 ml of blood from the rats used in the experiment may lead to death of the animal or affect its healing capacity; it would have been safer to use a donor, as in the preparation of BM-MSCs. It would also have been better to compare your results with a control group, to evaluate the normal healing capacity alongside the other groups. Results: SEM/EDX analysis: Figures (B) x500 are not so clear; please do not minimize them, so as to preserve the benefit of the magnification. I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard."} +{"text": "We studied interactions between nitrogen (N) amendments and soil conditions in a 2-year field experiment with or without catch crop incorporation before seeding of spring barley, and with or without application of N in the form of digested liquid manure or mineral N fertilizer. Weather conditions, soil inorganic N dynamics, and N2O emissions were monitored during spring, and soil samples were analyzed for abundances of nitrite reduction genes (nirK and nirS) and N2O reduction genes (nosZ clade I and II), and for the structure of nitrite- and N2O-reducing communities. Fertilization significantly enhanced soil mineral N accumulation compared to treatments with catch crop residues as the only N source. Nitrous oxide emissions, in contrast, were stimulated in rotations with catch crop residue incorporation, probably as a result of concurrent net N mineralization, and O2 depletion associated with residue degradation in organic hotspots.
Emissions of N2O from digested manure were low in both years, while emissions from mineral N fertilizer were nearly absent in the first year but comparable to emissions from catch crop residues in the second year, which had higher precipitation and delayed plant N uptake. Higher gene abundances, as well as shifts in community structure, were also observed in the second year. The structure of nitrite- and N2O-reducing communities correlated with the difference in N2O emissions between years, while there were no consistent effects of management as represented by catch crops or fertilization. It is concluded that N2O emissions were constrained by environmental conditions rather than by the genetic potential for nitrite and N2O reduction. Agricultural soils are a significant source of anthropogenic nitrous oxide (N2O). Li et al. observed residue-induced N2O emissions even at 40% water-filled pore space (WFPS), while ryegrass caused net N immobilization and much lower N2O emissions. Residue N availability is thus important for denitrifier activity and N2O emissions, and residue decomposition may interact with soil water content in determining soil O2 status around organic hotspots. Our aim was to better understand the complex interactions between soil conditions, crop residues and N amendments during spring, and the response of nitrite- and N2O-reducing communities. We hypothesized: (1) that N-rich fertilizer and catch crop residues would interact positively on N2O emissions; (2) that N2O emissions derived from mineral N would depend more on soil O2 status, and hence rainfall, than emissions derived from catch crop residues; and (3) that the abundance and composition of denitrifying communities would reflect the long-term effects of cropping system on metabolizable C and N availability. The soil contains 1.8 g kg\u22121 total N, and it has a pH (CaCl2) of 6.5, a cation exchange capacity of 12.3 meq 100 g\u22121, and an average bulk density of 1.35 g cm\u22123.
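The WFPS values discussed above follow from simple pore-space bookkeeping. A minimal sketch in Python, assuming the common default mineral particle density of 2.65 g cm\u22123 (not a value reported by the study); the function names and example water content are illustrative:

```python
# Sketch: pore-space bookkeeping behind WFPS and air-filled porosity.
# Assumes a particle density of 2.65 g cm^-3 (a common default, not a
# value reported by the study); function names are illustrative.

PARTICLE_DENSITY = 2.65  # g cm^-3

def porosity(bulk_density):
    """Total porosity (m3 m-3) from dry bulk density (g cm-3)."""
    return 1.0 - bulk_density / PARTICLE_DENSITY

def air_filled_porosity(theta_v, bulk_density):
    """Volumetric air content (m3 m-3 soil) at water content theta_v."""
    return porosity(bulk_density) - theta_v

def wfps(theta_v, bulk_density):
    """Water-filled pore space as a fraction of total pore volume."""
    return theta_v / porosity(bulk_density)

# With the bulk density reported for this soil (1.35 g cm^-3),
# total porosity works out to roughly 0.49 m3 m-3.
phi = porosity(1.35)
```

With these assumptions, a volumetric water content of 0.20 m3 m\u22123 corresponds to a WFPS of about 0.41 in this soil.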
The study made use of a long-term crop rotation experiment, established in 1996, located at 56\u00b030\u2032N, 9\u00b034\u2032E in Western Denmark. Mean annual rainfall is 704 mm and mean annual air temperature 7.3\u00b0C. The rotations included spring barley, hemp (Cannabis sativa), pea (Pisum sativum)/barley, spring wheat (Triticum aestivum) and potato (Solanum tuberosum). All crops were represented each year in two fully randomized blocks. Where a catch crop was present before spring barley (+CC), this was a mixture of rye, hairy vetch (Vicia villosa) and rapeseed (Brassica napus). Four of the five rotations were under organic management (O4), and the last rotation under conventional management (C4), where the identifiers O4 and C4 are used in accordance with previous studies from this long-term crop rotation experiment. The digested manure contained 6.5 kg Mg\u22121 total N and 3.9 kg Mg\u22121 total ammonia-N (TAN) in 2011, and 2.6% DM, 8.2 kg Mg\u22121 total N and 5.4 kg Mg\u22121 TAN in 2012. The two organic rotations with manure application received 99.4 kg ha\u22121 TAN in 2011, and 132 kg ha\u22121 TAN in 2012. The conventional rotation received 120 kg ha\u22121 N in NPK 23-3-6 fertilizer, with similar amounts of ammonium. The crop monitored in this study was spring barley. A rotation with neither catch crop nor N fertilization was not represented in the basic design; instead, manure application was excluded from a 1.5 m strip to serve as an unamended reference during the N2O monitoring period. In early August of both years, the above-ground biomass (including spring barley and weeds) was cut to determine DM production and N uptake in harvested biomass. The amount of N returned to the soil through incorporation of above-ground catch crop biomass was estimated by cuts to 1 cm height in mid-November of 2010 and 2011, respectively. Total DM and N percentage of the cuts were determined. In 2011, rotovation and plowing (with incorporation of catch crops where present) took place on 6 April, N fertilization on 12 April, and seeding on 19 April.
In 2012, rotovation and plowing took place on 4 April, N fertilization on 10 April, and seeding on 11 April. There were no further field operations during the N2O monitoring period. The dimensions of field plots were 12 \u00d7 15 m, with a 6 \u00d7 15 m harvest plot in the middle and, to each side, sampling plots with dedicated 1 \u00d7 1 m microplots for experimental purposes. For the present study, two available microplots per field plot were randomly selected for monitoring of N2O emissions. Two-part static chambers were used, with permanently installed stainless steel collars covering a 0.75 \u00d7 0.75 m area. The chambers (height 20 cm), made of 4 mm white expanded PVC, were vented and further equipped with a battery-powered fan for mixing of the chamber headspace during deployment. When chambers were deployed for flux measurements, gas samples (10 mL) were collected through a septum using a polypropylene syringe and hypodermic needle, and stored in evacuated 6 mL exetainer vials for later analysis. Five gas samples were taken over the course of c. 2 h starting around 9:30, the first sample at the time of deployment. In 2011, N2O monitoring started immediately after tillage, and two N2O flux measurement campaigns were conducted in the week between tillage and fertilization; the collars were then temporarily removed for manure application and incorporation, and seeding. Since 2011 showed no significant N2O emissions prior to fertilization, the first N2O flux measurement campaign in 2012 took place on the day of seeding. Three N2O flux measurement campaigns were then carried out during the first week, followed by weekly campaigns until mid-June. Nitrous oxide concentrations in the gas samples were determined using an Agilent 7890 GC system with a CTC CombiPal autosampler. The gas chromatograph had a 2-m back-flushed pre-column with Hayesep P, and a 2-m main column with Porapak Q connected to an electron capture detector. The carrier gas was N2 at a flow rate of 45 mL min\u22121, and Ar-CH4 (95/5%) at a flow rate of 40 mL min\u22121 was used as make-up gas. Temperatures of the injection port, column and detector were 80, 80, and 325\u00b0C, respectively. From the time of N fertilization, and then weekly until the end of N2O monitoring, soil samples were collected adjacent to the micro-plots used for N2O flux measurements. Ten subsamples were taken from each field plot and pooled. Subsamples (10 g) were extracted in 1 M KCl, and filtered extracts were frozen at \u221220\u00b0C until analyzed for mineral N. In the model for relative gas diffusivity, Dp and D0 are gas diffusivity in soil and air, respectively (m2 s\u22121), \u03a6 is total porosity (m3 m\u22123 soil), \u03b5 is volumetric air content (m3 m\u22123 soil), and \u03b5100 is volumetric air content at \u2212100 cm H2O. After the final N2O emission measurement campaign in June of each year, two 250 cm3 soil samples were collected from 0 to 10 cm depth for molecular analyses within each of the permanently installed collars used for N2O monitoring. These samples were sieved and mixed separately, and subsamples frozen at \u221220\u00b0C until DNA isolation. Microbial genomic DNA was isolated from the soil samples using a Genomic Spin Kit following a modified protocol. A 500-mg soil sample was added to a tube containing small glass beads, followed by 1 mL extraction buffer (A&A Biotechnology). Cells in the soil were lysed using a FastPrep instrument for 30 s at a speed of 5.5, followed by centrifugation at 14,000 \u00d7 g for 1 min, and then the supernatant was transferred to a sterile 1.5-mL Eppendorf tube. Ammonium acetate (5 M) was added to the tube to a final concentration of 2 M, and the tube was incubated on ice for 5 min after vortexing. Then, the tube was centrifuged at 16,000 \u00d7 g for 10 min at 4\u00b0C, and the supernatant was transferred to a 9-mL plastic tube. Two mL of guanidine HCl (7 M) was added to the tube and mixed by vortexing, and then 900 \u03bcL of the mixture was transferred to a spin column and centrifuged at 14,000 \u00d7 g for 15 s. After centrifugation, the catch tube was emptied, and the process was repeated with another 900 \u03bcL until the entire sample had run through the spin column. Finally, the spin column was washed, and the DNA was eluted according to the manufacturer's instructions. The extracts were analyzed by 1% (w/v) agarose gel electrophoresis, and the bands containing genomic DNA were cut out for DNA recovery using a SpinPrep Gel DNA Kit. The quantities of extracted DNA were determined using Qubit dsDNA BR assays. After quantification, the DNA was diluted to 10 ng \u03bcL\u22121 and kept at \u221220\u00b0C until used for downstream analysis. Quantitative real-time PCR (qPCR) was performed using a Bio-Rad CFX96 Real-Time System. Prior to gene quantification, the presence of potential PCR inhibitors in each soil DNA extract was tested by quantifying a known amount of the pGEM-T plasmid using plasmid-specific T7 and SP6 primers in the presence of extracted DNA or water. The 15 \u03bcL mixture for the inhibition test contained 1 \u00d7 DyNAmo Flash SYBR Green qPCR Master Mix, 1 \u03bcg bovine serum albumin, 0.25 \u03bcM of each primer, 1 \u00d7 10^5 copies of the plasmid, and 2 \u03bcL of either soil DNA (20 ng) or water. No inhibition was observed with the amount of DNA used. Standards ranging from 1 \u00d7 10^2 to 10^8 gene copies \u03bcL\u22121 were prepared from linearized pGEM plasmids with insertions of fragments of the target genes. The genes nirK and nirS were amplified, with primers F1aCu/R3Cu used for nirK; the qPCR mixtures contained 0.25 \u03bcM (for nirK) or 0.8 \u03bcM (for nirS and nosZ) of each primer, and 2 \u03bcL (20 ng) of template. Primers and thermal cycling conditions are detailed in Table . Amplification efficiencies were determined for nirK, nirS, nosZ-I, and nosZ-II, respectively.
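Gene copy numbers in unknown extracts are read off a standard curve of Cq against log10 copies built from dilution series like the one above, with amplification efficiency estimated from the slope. A sketch under idealized assumptions (a perfectly efficient assay loses about 3.32 cycles per 10-fold dilution); the Cq values are invented, not study data:

```python
# Sketch: qPCR standard curve relating Cq to log10 gene copies, as built
# from dilution standards spanning 1e2-1e8 copies uL^-1. Cq values are
# idealized for illustration; they are not data from the study.

import math

def fit_standard_curve(copies, cq):
    """Least-squares fit of Cq = slope*log10(copies) + intercept."""
    x = [math.log10(c) for c in copies]
    n = len(x)
    mx, my = sum(x) / n, sum(cq) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, cq))
    slope = sxy / sxx
    intercept = my - slope * mx
    efficiency = 10.0 ** (-1.0 / slope) - 1.0  # 1.0 corresponds to 100%
    return slope, intercept, efficiency

def copies_from_cq(cq, slope, intercept):
    """Invert the standard curve for an unknown sample."""
    return 10.0 ** ((cq - intercept) / slope)

standards = [10 ** k for k in range(2, 9)]          # 1e2 ... 1e8 copies
cqs = [35.0 - 3.32 * (k - 2) for k in range(2, 9)]  # idealized Cq values
slope, intercept, eff = fit_standard_curve(standards, cqs)
```

For this idealized series the slope is \u22123.32 and the efficiency is close to 100%; a Cq measured for a soil extract is then converted to copies per reaction via `copies_from_cq`.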
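The five headspace samples per chamber deployment described earlier are typically converted to a flux by regressing mixing ratio against time and scaling by the chamber air column. A hedged sketch assuming ideal-gas behavior; the temperature, pressure and mixing ratios are invented for illustration, and only the chamber height (20 cm) follows the text:

```python
# Sketch: N2O flux from one static-chamber deployment via linear
# regression of mixing ratio vs time. Example values are invented.

def n2o_flux(ppb, minutes, height_m=0.20, temp_c=10.0, pressure_pa=101325.0):
    """Flux in ug N2O-N m^-2 h^-1 from headspace mixing ratios (ppb)."""
    n = len(ppb)
    mx = sum(minutes) / n
    my = sum(ppb) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(minutes, ppb))
             / sum((x - mx) ** 2 for x in minutes))      # ppb min^-1
    # Moles of chamber air per m^2 of soil surface: n/A = P*h/(R*T)
    mol_air_per_m2 = pressure_pa * height_m / (8.314 * (temp_c + 273.15))
    # ppb min^-1 -> mol N2O m^-2 min^-1 -> ug N2O-N m^-2 h^-1
    # (28 g of N per mol N2O)
    return slope * 1e-9 * mol_air_per_m2 * 28.0 * 1e6 * 60.0

flux = n2o_flux([320.0, 335.0, 352.0, 366.0, 381.0], [0, 30, 60, 90, 120])
```

A constant concentration series gives a zero flux, and a steadily rising series like the example above gives a flux of a few \u03bcg N m\u22122 h\u22121 under these assumptions.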
Results were processed using Bio-Rad CFX Manager software version 3.1 with default settings. PCR for T-RFLP analysis was performed on a Bio-Rad C1000 Thermal Cycler. The same primers as those for qPCR were used for amplification, with the modification that the 5\u2032 ends of the forward primers were labeled with the fluorescent dye hexachlorofluorescein (HEX). The 40-\u03bcL PCR mixture contained 20 \u03bcL DreamTaq Green PCR Master Mix (Thermo Scientific), 4 \u03bcg bovine serum albumin, 0.25 \u03bcM (for nirK) or 0.8 \u03bcM (for nirS and nosZ) of each primer, and 20 ng of soil DNA. The thermal cycling conditions were identical to those used for qPCR, except that the data acquisition step and the melting curve analyses were excluded. Amplicons were analyzed by agarose gel electrophoresis to confirm successful amplification and correctness of fragment sizes. Amplicons of each gene were digested separately by two different restriction endonucleases to produce terminal restriction fragments (T-RFs): nirK amplicons were treated with HaeIII and HpyCH4IV, nirS with HaeIII and HhaI, nosZ-I with BstUI and Sau96I, and nosZ-II with HpyCH4IV and NlaIII. Enzyme digestions were performed according to the manufacturer's instructions. T-RFLP profiling was performed using a 3,730xl DNA Analyzer at Uppsala Genome Center, Uppsala University, Sweden, and data on peak positions and sizes were extracted using the Peak Scanner software (Applied Biosystems). Effects of rotation, catch crop, fertilization, and year on gene copy numbers were evaluated by multivariate analysis of variance, which was used for making inferences on the contrasts and for post-hoc analyses. The p-values implicitly used in the post-hoc analyses were adjusted for multiple comparisons using the false discovery rate (FDR). Soil properties, including Dp/D0 values, and cumulative N2O emissions were averaged over time with coefficients coinciding with the weights of the trapezoidal approximation of the respective integrals, as described in Duan et al. Non-metric multidimensional scaling of the T-RFLP profiles was performed with the metaMDS function of the vegan package. The abundances of T-RFs were expressed as relative peak areas, and then transformed using Wisconsin double standardization before being supplied to the metaMDS function. The ordination was performed using a random start for 100 runs, with 100 iterations in each run. The number of dimensions from one to six was tested, and three dimensions were selected for the final analysis with the assistance of scree plots. Following ordination, a test was conducted to find whether there was a correlation between T-RFLP profiles and soil properties. A matrix containing the soil properties was fit to the ordination using the envfit function with 1,000 permutation tests. Based on the p-values of the results, gradients of soil properties that had a significant effect (p < 0.05) were shown in the ordination plots using the ordisurf function. Ordination and fitting of environmental vectors were performed with T-RFLP profiles of the nitrite reduction genes (nirK and nirS) and the N2O reduction genes (nosZ-I and nosZ-II), as well as with a combined profile of all four denitrification genes. The weather in 2011 was generally warmer than in 2012 during the monitoring period, with average temperatures of 11.7\u00b0C in 2011 and 9.9\u00b0C in 2012. Precipitation was higher in 2012 compared to 2011, whereas WFPS and Dp/D0 were similar in 2011 and 2012 in the two organic rotations without a catch crop. Nitrogen input from fertilization, as well as from catch crop residues (+CC), was reflected in soil concentrations of mineral N, which were low in O4-CC-N, and in all treatments before N fertilization in 2011. The accumulation of mineral N was higher in treatments receiving mineral fertilizer or manure compared to those with crop residues only (O4+CC-N). This does not directly reflect the differences in N availability, since the retention time in soil before plant N uptake would have been shorter with a more gradual release of N from catch crop residues. In accordance with this, the N uptake with catch crop residues only (O4+CC-N) was greater than the uptake with digested manure only (O4-CC+N) in both years (Table ). The conventional system with NPK fertilizer (C4-CC+N) had higher plant N uptake than all four organic rotations. Rotations with catch crop incorporation in spring showed elevated N2O emissions in both years. In contrast, organic rotations without catch crop incorporation in spring (O4-CC+N and O4-CC-N) had low N2O footprints in both years, irrespective of fertilization with digested manure. The conventional rotation without catch crop (C4-CC+N) showed different patterns in the 2 years, with little or no N2O emission in 2011, but substantial emissions in 2012. In both years, the N2O emissions in all treatments had returned to the background level by the time of the last sampling. The temporary decline in N2O emission rates around DOY 125 in 2011, and DOY 130 in 2012, coincided with transient cold spells. Emission factors (EFs) were calculated with reference to N input in catch crop residues and N fertilization; emissions were corrected for background emissions, assumed to be represented by treatment O4-CC-N. For treatment O4+CC-N, with catch crop residues as the only N input, the area-based N2O EF was high in both years (1.7\u20132.3%) compared to the rotation with both catch crop residue incorporation and digested manure (O4+CC+N) at 0.4\u20130.7%. The EF for treatment O4-CC+N receiving digested manure was consistently low. In contrast, the N2O EFs for treatment C4-CC+N receiving mineral fertilizer differed in the 2 years, with no increase in N2O emissions in 2011 and an EF of 0.7% in 2012.
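The background-corrected, area-based emission factors above follow from trapezoidal integration of the campaign fluxes. A sketch of that arithmetic; the flux series, sampling days and N input are invented for illustration, and units follow common practice rather than the paper's exact computation:

```python
# Sketch: cumulative N2O emission by trapezoidal integration of campaign
# fluxes, and an area-based emission factor against the unamended
# reference (O4-CC-N). All numbers below are invented examples.

def cumulative_emission(days, flux):
    """Trapezoidal integral of fluxes (g N2O-N ha^-1 d^-1) over days."""
    total = 0.0
    for i in range(1, len(days)):
        total += 0.5 * (flux[i] + flux[i - 1]) * (days[i] - days[i - 1])
    return total  # g N2O-N ha^-1

def emission_factor(cum_treatment, cum_background, n_input_kg_ha):
    """Area-based EF (%): background-corrected emission over N input."""
    return 100.0 * (cum_treatment - cum_background) / (n_input_kg_ha * 1000.0)

days = [0, 7, 14, 21, 28]
treated = cumulative_emission(days, [5.0, 40.0, 25.0, 10.0, 5.0])
background = cumulative_emission(days, [2.0, 3.0, 2.0, 2.0, 1.0])
ef = emission_factor(treated, background, n_input_kg_ha=60.0)
```

With these example numbers the treatment loses about 0.5 kg N ha\u22121 above background, giving an EF a little under 1% of the assumed 60 kg N ha\u22121 input.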
Yield-scaled EFs were calculated with reference to the N content in plant biomass harvested in each of the experimental treatments in August 2011 and August 2012, respectively. There were no significant differences between treatments with respect to gene abundances within each year, as determined by multivariate analyses of variance. The average ratios of nir to nos gene copy numbers (nir/nos ratios) for all treatments were approximately 1.56 in both years, and there were no significant differences (p > 0.05) across treatments and/or years. The abundances of the nirK, nirS, and nosZ clade I and II genes were higher in the second year. The ordination of the combined T-RFLP profiles showed two distinct clusters, representing samples from 2011 and 2012, which reveals a shift in community structure between years. This shift was correlated with gradients of soil properties, as well as with cumulative N2O emissions (p = 0.035). Gradients of Dp/D0 also partly described this inter-annual variation; however, the correlation was not significant (p = 0.114). Samples were more scattered in 2011 compared to 2012, suggesting less overall heterogeneity in 2012. Separate ordination analyses of the T-RFLP profiles were also performed for the nitrite reduction genes. In contrast, no correlation between community structure and environmental variables was found for N2O-reducing communities, and there was no effect of management on the structure of any of the communities in either year. The conditions in 2012 indicated a lower soil O2 status compared to 2011 in the conventional rotation, and in the two organic rotations with catch crops. In accordance with this, the N2O emissions were also significantly higher in 2012 in treatments C4-CC+N and O4+CC+N, whereas the difference was not significant in O4+CC-N.
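The ordinations rest on the pre-processing described in the Methods: relative peak areas, Wisconsin double standardization, and a Bray-Curtis-type dissimilarity (the default distance in vegan's metaMDS). A small sketch of that pre-processing; the T-RF peak areas are invented:

```python
# Sketch: Wisconsin double standardization (divide by column maxima,
# then by row totals) and Bray-Curtis dissimilarity, mirroring the
# transformations applied before NMDS. Peak areas are invented examples.

def wisconsin(matrix):
    """Rows = samples, columns = T-RFs (relative peak areas)."""
    ncol = len(matrix[0])
    col_max = [max(row[j] for row in matrix) for j in range(ncol)]
    scaled = [[(row[j] / col_max[j]) if col_max[j] else 0.0
               for j in range(ncol)] for row in matrix]
    out = []
    for row in scaled:
        s = sum(row)
        out.append([(v / s) if s else 0.0 for v in row])
    return out

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two standardized profiles."""
    num = sum(abs(x - y) for x, y in zip(a, b))
    den = sum(x + y for x, y in zip(a, b))
    return num / den if den else 0.0

profiles = [[10.0, 0.0, 5.0],
            [8.0, 2.0, 5.0],
            [0.0, 9.0, 1.0]]
std = wisconsin(profiles)
d01 = bray_curtis(std[0], std[1])  # dissimilarity between samples 1 and 2
```

After Wisconsin standardization every sample profile sums to one, so the dissimilarities reflect community composition rather than total signal, which is what makes clusters in the NMDS interpretable as shifts in community structure.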
Generally, residue quality will determine the extent of net N mineralization from decomposing residues. The sums of plant-growing degree days were calculated according to L\u00e9on. Together, these observations suggest that N transformations, and hence N2O emissions, are not exclusively derived from the most recent input of manure and residues, and that long-term effects of management influence N2O emissions via net N mineralization, nitrification and denitrification. Positive effects of catch crops on yields are normally seen when access to mineral N in fertilizers or manure is suboptimal (Li et al.). A genetic potential for net N2O emissions was indicated by nir/nos gene copy ratios > 1 across all treatments and both years. The increase in N2O emissions in 2012 was corroborated by a significant increase in the abundances of nirK and nirS genes, suggesting that the size of the community matters (Hallin et al.). However, there was no significant difference in nir/nos ratios between the 2 years, and no correlation was found between nir/nos ratios and N2O emissions. This lack of correlation indicates a more complex regulation of the N2O balance than mere gene copy numbers, and that subsequent regulation of gene transcription and enzymatic activities is important in the shorter term (R\u00f6ling). Expression of nosZ may be impaired by low pH (Liu et al.), and N2O reductase loses activity if exposed to O2 (Thomson et al.); however, the soil O2 supply was probably lower, not higher, in 2012 compared to 2011 (cf. Figure ), which would be expected to lower the N2O:N2 product ratio (Benckiser et al.). The increase in N2O emissions from 2011 to 2012 was also associated with changes in the collective communities carrying nir and nos genes (Figure ). Although the N2O emissions were the result of a balance between N2O production and consumption, the inter-annual shift observed for the collective denitrifier communities was only found for communities carrying nirS, but not nirK nor nosZ, genes (Figure ), suggesting that nirS-type denitrifiers accounted for the higher N2O emissions in 2012. The different responses of nirS- and nirK-type denitrifiers are consistent with the concept that the two variants respond differentially to environmental factors (Hallin et al.). Organic or mineral N fertilizers, and catch crop residue decomposition, have the potential to modify denitrifier communities through effects on soil O2 availability and metabolizable carbon (Hallin et al.). In 2012, the soil O2 supply was reduced because of higher precipitation. Under such conditions, with more anoxic periods and a fluctuating soil O2 status, denitrifiers have an advantage compared to obligate aerobic microorganisms. The higher N availability in the first month after tillage and fertilization, and the availability of metabolizable C, probably together stimulated the activity and growth of nirS denitrifiers in 2012 compared to 2011, leading to the inter-annual shift in community composition and elevated N2O emissions. Hence, the pressure caused by the year-to-year differences in abiotic parameters was stronger than the selective pressure from management for these functional groups. This suggests that climatic factors, rather than management, could impact future N2O emissions from denitrification and climate feedbacks. Both area-based and yield-scaled N2O emission factors increased in all treatments between 2011 and 2012, although treatments and cropping histories were identical. The annual application of 100 kg N ha\u22121 or more in digested manure resulted in no or barely measurable emissions of N2O in both 2011 and 2012, whereas N2O emissions in treatments with catch crop residue incorporation were high in both years despite lower N input (cf. Tables ). Such patterns are not captured by a single default N2O emission factor of 0.01. However, the patterns of N2O emissions and soil characteristics observed here across five experimental rotations and 2 years suggest that there may be scope for better predictions of N2O emissions by taking site-specific conditions into account. This should include soil physical properties and precipitation, but also the amount and quality of organic C input as a potential driver for denitrification in organic hotspots. Given that catch crop residues, by the inclusion of above-ground parts, will often have a higher degradability and lower C:N ratio compared to roots and stubble of harvested crops (Trinsoutrot et al.), there is a need to further study their effects on N2O emissions, and to search for mitigation options. Rotations with a catch crop during winter had significantly higher N2O emissions after spring incorporation than rotations without a catch crop, and this stimulated N2O emissions more consistently than addition of N, either as mineral fertilizer or digested manure. Contrary to our original hypothesis, there was limited evidence for a positive interaction between crop residues and N fertilizer application, whereas the importance of rainfall for N2O emissions from mineral N fertilizer was confirmed. This indicates an important role of crop residues in regulating N2O emissions from sandy soils, where transformations of residue-derived N probably took place in organic hotspots with O2 limitation caused by intense turnover of degradable residue carbon. The abundance of denitrifier genes increased from 2011 to 2012, and the inter-annual shift in community composition was associated with higher N2O emissions in 2012 compared to 2011. However, management differences between the five rotations had limited effect on the abundance and structure of nitrite- and N2O-reducers. Together, these results suggest that rotations with catch crops significantly stimulated N2O emissions from agricultural soil, but had limited effect on the genetic potential for denitrification and N2O reduction. SP designed the study and organized the field experiment. Y-FD performed molecular analyses in collaboration with SH and AP. CJ developed the R package used for T-RFLP alignment. RL provided consultation on statistical analysis of N2O emission data. Y-FD and SP wrote the first draft of the manuscript. All authors contributed to the development of the manuscript. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."} +{"text": "Assessing patients\u2019 experience with primary care complements measures of clinical health outcomes in evaluating service performance. Measuring patients\u2019 experience and satisfaction are among Malawi\u2019s health sector strategic goals. The purpose of this study was to investigate patients\u2019 experience with primary care and to identify associated patients\u2019 sociodemographic, healthcare and health characteristics. This was a cross-sectional survey using questionnaires administered in public primary care facilities in Neno district, Malawi. Data on patients\u2019 primary care experience and their sociodemographic, healthcare and health characteristics were collected through face-to-face interviews using a validated Malawian version of the primary care assessment tool (PCAT-Mw). Mean scores were derived for the following dimensions: first contact access, continuity of care, comprehensiveness, community orientation and total primary care. Linear regression models were used to assess associations between primary care dimension scores and patients\u2019 characteristics. From 631 completed questionnaires, first contact access, relational continuity and comprehensiveness of services available scored below the defined minimum. Sex, geographical location, self-rated health status, duration of contact with facility and facility affiliation were associated with patients\u2019 experience with primary care.
These factors explained 10.9% of the variance in total primary care scores, 25.2% in comprehensiveness of services available, and 29.4% in first contact access. This paper presents results from the first use of the validated PCAT-Mw. The study provides a baseline indicating areas that need improvement. The results can also be used alongside clinical outcome studies to provide a comprehensive evaluation of primary care performance in Malawi. Measuring patients\u2019 experience with care should be part of the process of establishing services and delivering primary care that users need. Malawi\u2019s health sector strategic plan for 2017 to 2022 is based on principles of primary health care and aspires to patient satisfaction. Malawi does not have a specific primary care policy that defines the gate-keeping role of primary care. However, patients enter the public health system through a primary care level staffed by nurses and mid-level provider medical assistants. Primary care facilities refer patients to district hospitals where, in addition to the mid-level providers, there are two to three physicians, typically without any specialization. Tertiary hospitals are located in four regions of the country. Neno is a rural district with an estimated population of 170,000. The district is supported by the international non-governmental organization Partners In Health (PIH) to develop a model of district health services. There are two hospitals and seven health centers under the Ministry of Health, four health centers under a faith-based organization, and one health center largely for employees of an electricity generation company. Faith-based health facilities charge user fees. With support from PIH, Neno has the highest per capita health funding in Malawi, at nearly 66 US$. The aim of this study was to evaluate the performance of primary care in Neno based on patients\u2019 experience of services.
Specifically, the study measured the performance of primary care in Neno through total primary care and dimension mean scores and assessed the association between the scores and patients' sociodemographic, healthcare and health characteristics. Within primary health care research, the US Primary Care Assessment Tool (PCAT) has been widely adapted and used in patient surveys in many countries. The development and validation of the Malawian version of the primary care assessment tool (PCAT-Mw) has been documented in another paper. Items are scored on a 4-point Likert scale, with 1 indicating "definitely not," 2 indicating "probably not," 3 representing "probably," and 4 representing "definitely." For consistency with methods used in PCAT studies in other countries, a mid-scale value of 2.5 is assigned to "not sure" answers, while the mean item score is used for missing data. A face-to-face administered cross-sectional study was carried out in August-September 2016 in outpatient clinics of ten facilities - the two hospitals and eight health centers in Neno district. Facilities were selected purposefully to include all the public health facilities in the district. One of the faith-based health centers was included as it had signed a memorandum of understanding with the authorities to remove the user fees and run as a public facility. Patients had to be at least 18 years of age, must have used the facility for at least six months and must have visited the facility at least three times. Acutely ill, frail-looking or severely mentally ill patients were excluded so that they could receive urgent medical attention. As this study's data collection was part of the validation of the PCAT-Mw through metric analyses, the sample size was calculated based on similar studies using at least a 5:1 subject-to-item ratio.
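The item-scoring rules described above (4-point Likert items, a mid-scale value of 2.5 for "not sure" answers, mean-item imputation for missing data, and dimension scores derived from item means) can be sketched as follows. This is a minimal illustration with hypothetical items and responses, not the actual PCAT-Mw instrument or data.

```python
# Sketch of PCAT-style dimension scoring (hypothetical items/responses).
# Rules from the text: items scored 1-4; "not sure" -> 2.5; missing answers
# effectively imputed with the item mean; dimension score = mean of item means.

def item_means(responses):
    """responses: list of per-patient answer lists; None = missing, 'ns' = not sure."""
    n_items = len(responses[0])
    means = []
    for i in range(n_items):
        vals = []
        for patient in responses:
            v = patient[i]
            if v == 'ns':
                vals.append(2.5)          # mid-scale value for "not sure"
            elif v is not None:
                vals.append(float(v))     # missing values are left out, which
        means.append(sum(vals) / len(vals))  # equals imputing the item mean
    return means

def dimension_score(responses):
    means = item_means(responses)
    return sum(means) / len(means)        # sum of item means / number of items

patients = [
    [4, 3, 'ns'],   # patient 1
    [2, None, 3],   # patient 2: item 2 missing
    [3, 3, 4],
]
score = dimension_score(patients)
print(round(score, 2), "acceptable" if score >= 3 else "poor")
```

The final comparison applies the paper's cut-off, where a dimension mean of at least 3 counts as "acceptable to good performance".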
Six interviewers were trained to conduct the PCAT-Mw survey. A pilot study showed that the questionnaire would take about 45 min to administer. Each interviewer was therefore expected to interview seven patients per day. The sampling frame was 50-60 patients waiting to be seen on each working day. The sampling interval (n) was calculated by dividing the number of waiting patients by seven. A random starting point was obtained using a smart phone random number generator. Every 'n'th patient was then asked for consent to participate in the study. Independent variables were sex, age, education, geographical location, duration of contact with facility, reason for attending (chronic or acute condition), distance to facility measured through time taken to walk to the facility, cost of travel to the facility, waiting time, individual health facility affiliation and self-rated health status. Data were entered into and analyzed using the IBM SPSS Statistics 24.0.0 (2016) package. Dimension mean scores were derived by dividing the sum of the item means by the number of items in the dimension. A score ≥ 3 was considered 'acceptable to good performance' and < 3 'poor performance'. Next, independent sample T tests were done to compare dimension means and total primary care scores between the sexes. Multiple linear regression models were used to assess association between sociodemographic, health care and health characteristics and total primary care scores after adjusting for sex and age. Further, stepwise exclusion regression models were used to identify independent variables that accounted for significant variances in patients' experiences with regard to total primary care and individual dimension mean scores.
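The systematic sampling procedure described above (interval computed from the waiting queue, random start, every n-th patient approached) can be sketched as below; the function name and parameters are illustrative, not from the paper.

```python
import random

def systematic_sample(n_waiting, per_day=7, seed=None):
    """Systematic sampling as described in the text: the interval is the queue
    size divided by the daily interview target, the start is random within the
    first interval, and every n-th waiting patient is then approached."""
    rng = random.Random(seed)
    interval = max(1, n_waiting // per_day)   # sampling interval 'n'
    start = rng.randrange(interval)           # random starting point
    return list(range(start, n_waiting, interval))

# Example: a queue of 56 waiting patients and a target of 7 interviews per day.
picked = systematic_sample(56, per_day=7, seed=1)
print(picked)   # 7 queue positions, 8 apart
```

With 50-60 waiting patients and a target of seven interviews, the interval works out to 7-8, matching the design in the text.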
For all tests, confidence intervals of 95% and a p-value less than 0.05 were used as thresholds of statistical significance. A total of 649 patients were approached and 18 (2.8%) declined to participate in the study. This paper presents results from 631 completed questionnaires. Missing data accounted for approximately 1.9% of all data. Scores differed significantly between the sexes in total primary care (p = 0.01), first contact access (p = 0.021), relational continuity (p = 0.044) and comprehensiveness of services available (p = 0.017). Increasing self-rated health status was associated with 0.8 points higher scores for good health and 0.9 points for very good to excellent health; duration of contact with the facility of more than 4 years was associated with scores 1.1 points higher, while acute presentation was associated with 0.6 points lower scores. At the individual facility level, patients from the health centers scored significantly below the reference outpatient clinic at the district hospital by points ranging from 0.6 to 2.0. Level of education, distance to the facility, cost of travel to the facility and waiting time were not associated with total primary care scores. To our knowledge, this is the first time primary care performance has been measured based on patients' experience in Malawi. The study shows poor performance in relational continuity, comprehensiveness of services available and first contact access. Acceptable performance was achieved in community orientation, comprehensiveness of services provided, and communication continuity of care. The study shows that more primary care visits were from female patients, who also tended to have lower levels of education, similar to findings in a South African study. Most public primary care facilities in Malawi serve a geographically recognizable catchment population. This provides opportunity for relational continuity of care and population-based primary care approaches.
Population management, stable patient-team partnership, and continuity of care are known building blocks of effective primary care systems. Most patients' reason for their primary care visit in this study was care for acute conditions. However, care for chronic conditions was associated with better overall experience. Chronic care patients were given appointments for their visits and were usually attended by the same team. Community health workers also followed up patients when they missed their appointments. Further prospective studies should be carried out to assess if these processes of care would explain the differences and if the primary care experience of patients presenting with acute conditions would improve when offered the same management. Health centers play an important gate-keeping role that is essential to well-functioning health systems. This is not clearly defined in Malawi's district health system, although patients are expected to first report to their public primary care facilities by virtue of proximity. In this study, health centers were scored lower than the outpatient clinics at the hospitals with regard to total primary care, first contact access and comprehensiveness of services available. A study in several African countries showed that staffing levels, experience of providers and facility management were associated with the quality of care provided. Users who rated their health status as 'good' or 'very good' also rated primary care experience better than those who rated their health as 'poor'. Similar findings have been reported in the Korean and South African PCAT studies. Education, age, distance to facility and cost of travel were not associated with total primary care scores. A lack of association between socioeconomic factors and patients' experience of primary care has also been reported in other studies.
Low scores noted in first contact access, comprehensiveness of services available and relational continuity of care are similar to findings in other studies. The factors that were significantly associated with patients' experience of primary care accounted for much higher variances in the first contact access and comprehensiveness of services available dimensions, 29.4 and 25.2% respectively. This underscores the importance of access and availability of services as the core factors on which the other dimensions of primary care depend. Utilization, continuity, coordination and service provision will take place successfully only when people have effective access to the facilities and services that they need, which is an important objective of universal health coverage. Strengths of the study include use of a globally accepted tool that had been culturally adapted and validated for use in Malawi. The study had a number of limitations. First, because this was a cross-sectional study, causal inferences are not possible. Second, liability to several types of bias is noted: recall, response and selection. The face-to-face interview partly minimized recall bias through clarifying questions whenever that was necessary. Potential for response bias was possible because data collection was done during the clinic visit. Selection bias might have resulted from excluding those who were acutely ill, frail or had severe mental illness and from interviewing only patients who attended clinics, who might have had better experience than the patients excluded. The study was also carried out in one district only. In a subsequent study, we have included multiple sites to improve generalizability of results. Third, the factors identified accounted for 10.9% of total primary care score variances, 25.2% in the comprehensiveness of services available and 29.4% in the first contact access.
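The "variance explained" figures quoted above are coefficients of determination (R²) from the regression models. A minimal numpy sketch of how such a figure is computed, using entirely synthetic predictors and scores (none of the variable names or values come from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic predictors (e.g. sex, self-rated health) and a primary care score.
n = 500
X = rng.normal(size=(n, 2))
y = 2.5 + 0.4 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(scale=0.8, size=n)

# Ordinary least squares with an intercept, then R^2 from the residuals.
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ beta
r2 = 1 - resid.var() / y.var()
print(f"R^2 = {r2:.3f}")   # share of score variance explained by the predictors
```

An R² of 0.109 corresponds to the 10.9% of total primary care score variance reported in the text.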
Potential unmeasured factors such as the actual quality of services provided and health care workers' skills, attitudes and behaviors might confound the results. Fourth, this was a study of patient experiences of primary care and not of health outcomes. Further studies could assess correlations between clinical outcomes and patient experiences of care and the extent to which patient experiences predict later health outcomes. This paper presents results from the first use of the validated PCAT-Mw to assess patients' experience of primary care and associated sociodemographic, health care and health factors in a rural district in Malawi. Patients reported acceptable levels of performance in the primary care dimensions of communication continuity of care, comprehensiveness of services provided and community orientation. Poor performance was reported in first contact access, comprehensiveness of services available and relational continuity of care. Our experience indicates that the PCAT-Mw can be used alongside clinical health outcome studies to provide comprehensive evaluation of primary care performance in Malawi. The areas of poor patient experience need further research to evaluate possible explanations and to inform appropriate interventions."} +{"text": "Moderate intake of total LC ω-3 PUFA (approximately 0.5-1 g/day) was significantly associated with a lower prevalence of depression. Conclusion: In our study, moderate fish and LC ω-3 PUFA intake, but not high intake, was associated with lower odds of depression, suggesting a U-shaped relationship. Background: The aim of this analysis was to ascertain the type of relationship between fish and seafood consumption, omega-3 polyunsaturated fatty acids (ω-3 PUFA) intake, and depression prevalence. Methods: Cross-sectional analyses of the PREDIMED-Plus trial. Fish and seafood consumption and ω-3 PUFA intake were assessed through a validated food-frequency questionnaire.
Self-reported life-time medical diagnosis of depression or use of antidepressants was considered as the outcome. Depressive symptoms were collected by the Beck Depression Inventory-II. Logistic regression models were used to estimate the association between seafood products and ω-3 PUFA consumption and depression. Multiple linear regression models were fitted to assess the association between fish and long-chain (LC) ω-3 PUFA intake and depressive symptoms. Results: Out of 6587 participants, there were 1367 cases of depression. Total seafood consumption was not associated with depression. The odds ratios (ORs) (95% CI) for the 2nd, 3rd, and 4th quintiles of consumption of fatty fish were 0.77 (0.63-0.94), 0.71 (0.58-0.87), and 0.78 (0.64-0.96), respectively. Unipolar depression is identified as one of the leading causes of burden of disease worldwide, measured in adjusted life years. In addition, according to the International Society for Nutritional Psychiatry Research, although the growth in scientific research related to nutrition in psychiatry may be recent, it is now at a stage where it can no longer be ignored. The aims of this study were (1) to cross-sectionally analyze the association between the consumption of different types of seafood products and ω-3 PUFA and depression, (2) to establish the shape of the dose-response curve and the potential existence of a non-linear threshold effect for ω-3 PUFA and, finally, (3) to ascertain if these associations differed by sex, the presence of cardio-metabolic disorders, or several life-style habits in the PREDIMED-Plus trial (http://predimedplus.com/).
This study was registered at the International Standard Randomized Controlled Trial registry with number 89898870 (registration date: 24 July 2014). This study was based on the cross-sectional analysis of baseline data within the frame of the PREDIMED-Plus trial, a six-year ongoing multicenter, randomized, parallel-group clinical trial conducted in Spain to assess the effect of an intensive weight-loss intervention based on an energy-restricted traditional Mediterranean diet, physical activity promotion, and behavioral support on hard cardiovascular events, in comparison with a control group receiving usual care intervention only with energy-unrestricted Mediterranean diet recommendations. A more detailed description of the PREDIMED-Plus study is available at http://predimedplus.com/. A total of 6874 participants were recruited and randomized in 23 recruitment sites from different universities, hospitals, and research institutes of Spain. The eligible participants were community-dwelling adults with overweight/obesity [body mass index (BMI) ≥ 27 and < 40 kg/m2] who met at least three components of the metabolic syndrome (MetS) according to the updated harmonized criteria of the International Diabetes Federation and the American Heart Association and National Heart, Lung, and Blood Institute. For the present analysis, participants who were outside of predefined limits for baseline total energy intake (n = 259), and participants with missing data in smoking status (n = 28), were excluded from the analyses. Finally, 6587 participants were analyzed. Intakes of fish, seafood and ω-3 fatty acids were collected through six items. Consumption of flaxseed and canola oils was not considered because these oils are not consumed in Spain. Nutrient intakes were computed using Spanish food composition tables. Depression was collected at baseline and was defined as a self-reported life-time medical diagnosis of depression or the habitual use of antidepressants by the participant.
The use of self-reported medical diagnosis of depression collected through a questionnaire has been validated in another Spanish study showing adequate validity. Information about socio-demographic and lifestyle-related variables was obtained from the baseline questionnaire. Anthropometric variables were determined by trained staff and in accordance with the PREDIMED-Plus operations protocol. Weight and height were measured with calibrated scales and a wall-mounted stadiometer, respectively. BMI was calculated as the weight in kilograms divided by the height in meters squared. Leisure-time physical activity was assessed using the short form of the Minnesota Leisure Time Physical Activity Questionnaire validated in Spain. Adherence to the Mediterranean diet was assessed with the score proposed by Trichopoulou and was based on the consumption of eight items. Models were additionally adjusted for BMI (kg/m2, continuous), presence of several diseases at baseline, and total energy intake and adherence to the Mediterranean Diet. Tests of linear trend across increasing quintiles of exposures were conducted by assigning the medians to each quintile and treating them as continuous variables. Logistic regression models were fitted to assess the relationship between the energy-adjusted consumption of different types of fish and seafood products and intake of ω-3 PUFA (in quintiles) and the prevalence of depression. Odds ratios (ORs) and their 95% CIs were calculated considering the lowest quintile as the reference category.
To control for potential confounding factors, the results were adjusted for sex, age, marital status (married/other), educational level, smoking, physical activity during leisure time (quintiles of METs/min-w), BMI and depressive symptoms assessed through the Beck Depression Inventory-II. Finally, in order to assess the possible effect modification by sex, type 2 diabetes prevalence, adherence to the MDS, or smoking, product-terms were introduced in the different multivariable models. In addition, p-values for the interaction were calculated with the log-likelihood ratio test. We identified 1367 participants at baseline with life-time prevalence of depression. There was a significant inverse association restricted to intermediate categories of consumption of fatty fish (approximately 10-25 g/day) and the prevalence of depression, with no linear trend across quintiles (p for trend = 0.759), suggesting a U-shaped relationship. As compared with the reference category, the ORs (95% CI) for the consecutive quintiles of consumption of fatty fish were 0.77 (0.63-0.94), 0.71 (0.58-0.87), 0.78 (0.64-0.96), and 0.84 (0.69-1.03), respectively. Similarly, moderate consumption of lean fish (approximately 20 g/day) was also associated with lower depression prevalence, with an OR for the 3rd vs. the 1st quintile of 0.77 (0.63-0.94). In an ancillary analysis, we excluded all depression cases in which the age at depression diagnosis was not available (n = 25) or in which the diagnosis date was very remote. In this sub-sample the results were no longer significant, although the magnitude of effect was quite similar to that observed in the overall sample; the ORs and 95% CI for successive quintiles of fatty fish consumption were: 1 (ref.), 0.85 (0.60-1.20), 0.86 (0.61-1.20), 0.89 (0.63-1.25), and 0.90 (0.64-1.26).
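The quintile-based odds ratios with Wald 95% confidence intervals described above can be sketched with a small logistic regression fitted by Newton-Raphson in numpy. The data below are entirely synthetic (the assumed baseline odds and U-shaped OR pattern only mimic the qualitative shape reported, and no covariate adjustment is included), so this is a sketch of the estimation technique, not a reproduction of the paper's models.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data (not the PREDIMED-Plus data): an intake quintile per participant
# (0 = lowest, the reference) and a binary depression outcome.
n = 4000
quintile = rng.integers(0, 5, size=n)
true_or = np.array([1.0, 0.8, 0.7, 0.75, 0.85])      # assumed U-shaped pattern
logit_p = np.log(0.25) + np.log(true_or[quintile])   # assumed baseline odds 0.25
y = rng.random(n) < 1 / (1 + np.exp(-logit_p))

# Design matrix: intercept + dummy indicators for quintiles 2-5 (quintile 1 = reference).
X = np.column_stack([np.ones(n)] + [(quintile == q).astype(float) for q in range(1, 5)])

# Maximum-likelihood logistic regression via Newton-Raphson iterations.
beta = np.zeros(X.shape[1])
for _ in range(25):
    mu = 1 / (1 + np.exp(-(X @ beta)))               # fitted probabilities
    H = X.T @ (X * (mu * (1 - mu))[:, None])         # observed information matrix
    beta += np.linalg.solve(H, X.T @ (y - mu))       # Newton step

se = np.sqrt(np.diag(np.linalg.inv(H)))              # Wald standard errors
for q, (b, s) in enumerate(zip(beta[1:], se[1:]), start=2):
    print(f"Q{q} vs Q1: OR = {np.exp(b):.2f} "
          f"(95% CI {np.exp(b - 1.96 * s):.2f}-{np.exp(b + 1.96 * s):.2f})")
```

Exponentiating a dummy coefficient gives the odds ratio of that quintile versus the reference quintile, and exponentiating the coefficient ± 1.96 standard errors gives the 95% CI, the same quantities reported in the results.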
In the case of LC ω-3 PUFA, we found the following estimates for the association between quintiles of intake and depression: 1 (ref.), 0.83 (0.60-1.15), 0.83 (0.59-1.15), 0.71 (0.51-1.00), and 0.82 (0.59-1.13). Similarly, none of the product terms was statistically significant in the analysis of LC ω-3 PUFA. In this cross-sectional analysis of the PREDIMED-Plus trial, we observed a non-linear inverse association only between moderate levels of fish consumption (particularly fatty fish consumption) and LC ω-3 PUFA intake and life-time prevalence of depression and depressive-symptom intensity, but not for the highest levels of intake (U-shaped relationship). The results were not modified by sex, by the presence of type 2 diabetes, or by several life-style factors such as smoking. A recent study found a dose-response inverse relationship between fish consumption and depression, with an OR of 0.52 (95% CI: 0.37-0.74) for those participants with the highest consumption of fish (≥4 times/week) as compared to those with a consumption lower than 1 time/week. Several possible explanations may contribute to a better understanding of this non-linear association between fish consumption and depression. One of the more feasible explanations is the intake of other nutrients that could counteract the effect of fish or LC ω-3 PUFA intake on depression, including ω-6 PUFA intake, as some authors have suggested. The results observed specifically for fatty and lean fish were not reproduced for other kinds of seafood products or for canned fish. The content in LC ω-3 PUFA could differ regarding fish/seafood sources, being higher in fatty fish and lower in other sea products such as mollusks, crayfish, or octopus.
Although the level of LC ω-3 PUFA in canned fish (mainly tuna and anchovy in Spain) seems to be similar to that found in fresh fish, the association was not reproduced for canned fish. The role of LC ω-3 PUFA in depression has been extensively evaluated as a coadjuvant in antidepressant treatment. A large number of studies have reported the possible role of inflammation in depression through mechanisms such as activation of the hypothalamic-pituitary-adrenal axis, tryptophan depletion, neurotransmitter transport and metabolism disturbances, and decrease in brain-derived neurotrophic factor availability. The strengths of this study are its large sample size, the exposed populations consisting of people of both sexes, the adjustment for a wide array of potential confounders, and the use of validated tools to assess information. Some potential limitations of our study also need to be mentioned. The cross-sectional design of the study does not afford us the possibility of establishing any causal association between fish consumption and depression, and the presence of a possible reverse causality bias cannot be excluded. In fact, the presence of a depressive disorder could lead to less healthy dietary habits, including lower fish consumption. Contrarily, participants with depression could also increase their fish or ω-3 PUFA intake to improve their depressive condition. Another possible caveat might be that our participants are not representative of the general Spanish population. Our participants were aged between 55 and 75 years, were overweight or obese, and all met criteria for metabolic syndrome. Nevertheless, the lack of representativeness does not preclude the establishment of associations.
In conclusion, the findings from the current study support the idea that moderate intake of fish and LC ω-3 PUFA (U-shaped relationship) may protect against depression independently of sex differences, the presence of cardiometabolic disturbances, or life-style habits. More studies with longitudinal designs are needed to confirm the reported results and definitively establish the role of seafood products and ω-3 PUFA in depression development."} +{"text": "The latter predicts that the microcavities are made almost completely of SiO2, implying fewer photon losses in the structure. The theoretical photonic-bandgap structure and localized photonic mode location showed that the experimental spectral peaks within the UV photonic bandgap are indeed localized modes. These results support that our oxidation process is very advantageous to obtain complex photonic structures in the UV region. Obtaining silicon-based photonic structures in the ultraviolet range would expand the wavelength bandwidth of silicon technology, where it is normally forbidden. Herein, we fabricated porous silicon microcavities by electrochemical etching of alternating high and low refraction index layers; these were carefully subjected to two stages of dry oxidation, at 350 °C for 30 minutes and at 900 °C with different oxidation times. In this way, we obtained oxidized porous silicon that induces a shift of a localized mode in the ultraviolet region. The presence of Si-O-Si bonds was made clear by FTIR absorbance spectra. High-quality oxidized microcavities were shown by SEM, where their mechanical stability was clearly visible. We used an effective medium model to predict the refractive index and optical properties of the microcavities. The model can use either two or three components (Si, SiO2, and air). One-dimensional photonic crystals of this kind can be formed by interchanging a sequence of different dielectric layers, each one with a different refractive index.
The refractive indices are obtained by applying different current pulses, which change the porosity in each layer. Some optoelectronic devices such as sensors9, Light Emitting Diodes10 and photodetectors11 have been developed as well. Unfortunately, natural oxidation of the PS pore walls occurs, and PS is contaminated by impurities when in contact with air13; therefore, it is an unstable material. In past years, the thermal oxidation of Crystalline Silicon (Si) was intensively investigated, mainly focusing on the influence of different oxidation characteristics, such as the major role of pre-oxidation approaches in the strategy of thermal growth of high-grade oxides on Si14. Therefore, the PS optical parameters can be stabilized by inducing its oxidation. This oxidation process improves PS transparency at short wavelengths of the VIS spectrum15 because silicon dioxide (SiO2) is a transparent material with low polarizability16. The porous texture in p+ and p− Si substrates is very sensitive to heat treatment; even at low temperatures (around 400 °C) a thickening of the texture is observed, which reduces the surface area and the reactivity of PS to oxidation; this effect increases with temperature17. The thermal oxidation process does not alter the morphology of the porous layers; only the pore size decreases after oxidation; however, the pore surface density is conserved18. Due to the low difference in thermal expansion coefficients between Si and SiO2, the oxide formation inhibits the PS skeleton relaxation19. V. Agarwal proposed a method that modifies the photonic bandgap (PBG) of PS structures by introducing sub-mirrors coupled with MCs to explore three different wavelength bandwidths from ultraviolet (UV) to near infrared (NIR). In order to stabilize these mirrors, they were partially oxidized with dry oxidation20.
Gelloz used High-Pressure Water Vapor Annealing (HWA) for the stabilization of BRs obtained at low anodization temperatures (−20 °C) using p-type Si; HWA was conducted at pressures from 1.3 to 2.6 MPa, at 260 °C and for three hours; this method improves the transparency of PS layers with an efficient response in the UV region due to a high oxidation of the PS structures21. F. Morales has manufactured BRs at room temperature; to stabilize the optical parameters of the BRs in the UV range, dry oxidation was performed22. BRs based on Oxidized Porous Silicon (OPS) and TiO2 were manufactured by Christian R. Ocier. In the first stage, PS BRs with a stopband at 530 nm were fabricated and then thermally oxidized. After oxidation, the stopband shifted to 440 nm. In the second stage, the OPS BRs were infiltrated with TiO2, and the stopband red-shifted to 492 nm; at maximum infilling with TiO2, the stopband had a transmission of 2% (at 620 nm)23. M. Ghulinyan and C. J. Oton reported MCs centered in the infrared region, where PS is almost transparent; absorption losses play a much less important role than other loss mechanisms, such as light scattering by the pores and the interfaces between layers24. Light scattering is one of the drawbacks for photon transmission in PS due to the disorder typically occurring within it25. Specifically, photon losses occurring within the near infrared interval are dominantly connected to Rayleigh scattering26. Irrespectively, scattering losses in PS waveguides can be diminished by oxidation. For instance, propagation losses in the visible and near infrared spectra were measured, but when the PS waveguides were oxidized the losses decreased27. In another study, Vorozov et al. achieved a 75% reduction of the scattering losses in PS waveguides after oxidation28. Porous Silicon (PS) is a material used in the manufacture of one-dimensional Photonic Crystals: Bragg Reflectors (BRs)29.
The second stage was performed at a high oxidation temperature of 900 °C. This oxidation process transforms the PS MCs almost completely into porous SiO2 MCs, as indicated by our three-component effective medium approximation, the high visible-light transparency of the oxidized MCs and the presence of prominent Fourier-Transform Infrared Spectroscopy (FTIR) Si-O-Si peaks. Hence, this oxidation transformation induces a UV shift of the MCs' localized mode and a decrease of the optical losses within the MCs. In order to assure that the porous multilayer structure was preserved after oxidation, we used Scanning Electron Microscopy (SEM). Furthermore, we theoretically fitted the experimental transmission and reflection spectra before and after dry oxidation, and the theoretical bandgap structure and localized mode location were calculated. The experimental localized mode was found inside the forbidden PBG and close to the theoretical localized mode prediction. The MCs' optical losses were qualitatively assessed via the absorbance spectrum, whose amplitude decreased by more than 50% in the UV light range and almost disappeared within the visible light range after oxidation. We also followed the changes of the localized mode transmission peak, whose amplitude and bandwidth are modified by optical losses. A modified Breit-Wigner equation was used to get the dispersion in the localized mode due to absorption and scattering losses30; from this equation we estimated photon loss rates, which include both types of losses, Rayleigh scattering and light absorption, whereby the lifetime of photons and the photon loss can be defined at the localized mode wavelength. Taking into consideration the aforementioned Si and PS pre-oxidation and oxidation concepts, hereunder we present the manufacturing of porous silicon MCs with optical response in the UV range, which is a two-fold process.
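At the peak itself, the linewidth of a Breit-Wigner (Lorentzian) localized mode reduces to a quality factor and a photon lifetime, whose inverse is the total photon loss rate. A minimal sketch with hypothetical numbers (the mode wavelength and FWHM below are illustrative, not the paper's fitted values):

```python
import math

def photon_lifetime(lambda0_nm, fwhm_nm):
    """Quality factor Q = lambda0 / FWHM; photon lifetime tau = Q / omega0,
    with omega0 = 2*pi*c / lambda0. A Lorentzian (Breit-Wigner) lineshape is
    assumed, so 1/tau lumps together absorption and scattering losses."""
    c = 2.998e8                        # speed of light, m/s
    Q = lambda0_nm / fwhm_nm
    omega0 = 2 * math.pi * c / (lambda0_nm * 1e-9)
    tau = Q / omega0                   # photon lifetime in seconds
    return Q, tau

Q, tau = photon_lifetime(380.0, 5.0)   # hypothetical UV localized mode
print(f"Q = {Q:.0f}, tau = {tau * 1e15:.1f} fs, loss rate = {1 / tau:.2e} s^-1")
```

A narrower localized-mode peak after oxidation therefore translates directly into a longer photon lifetime and a lower loss rate, which is how the transmission-peak bandwidth tracks the optical losses in the text.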
First, we fabricated porous silicon microcavities consisting of two Bragg reflectors with a defect layer between them, with optical response in the blue range. The MCs had a localized mode inside their PBG. Second, the MCs were subjected to two stages of dry oxidation. The first stage is a pre-oxidation at a low temperature of 350 °C, which is necessary to stabilize the silicon structure and to avoid the coalescence of the pores during further treatments at higher temperature. Finally, we demonstrate here that it is possible to obtain highly transparent MCs within the UV range thanks to the dry oxidation carefully carried out in two stages. This result opens up the possibility of novel PS-based photonic devices. The MCs were fabricated by electrochemical etching of a p+ Si substrate. Alternating quarter-wave layers with high refractive index (nH) and low refractive index (nL) were fabricated to create two BRs with a defect layer (refractive index nd) between them. The Bragg Reflectors had a PBG in the blue band, approximately between 420 nm and 560 nm. The layers with low porosity can be observed in light gray, and layers with high porosity are displayed in dark gray. For the as-prepared layers we used the effective medium approximation of Maxwell-Garnett for two components: Si and air. The PS-MCs after dry oxidation are no longer a mixture of Si and air; a third component (SiO2) is formed to get oxidized porous silicon (OPS). We used the model proposed by J. E. Lugo for a system of three components (Si, SiO2 and air) to obtain the porosity (pox), the Si fraction (fSi), the SiO2 fraction (fox), and the complex refractive index of the oxidized layers31. The complex refractive index and porosity of OPS decrease when SiO2 is present in the porous layers. This decrease is attributed to the dry oxidation, where a Si fraction and an air fraction are occupied by SiO2 after oxidation.
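The two-component effective medium estimate for a porous layer can be sketched as below, treating air pores as inclusions in a silicon host via the Maxwell-Garnett mixing rule. The Si refractive index used is a single illustrative value (the real index is wavelength-dependent and complex), and the three-component Lugo model is not reproduced here.

```python
import numpy as np

def maxwell_garnett(eps_host, eps_incl, f):
    """Maxwell-Garnett effective permittivity for inclusions of volume fraction f
    in a host: (e_eff - e_h)/(e_eff + 2 e_h) = f * (e_i - e_h)/(e_i + 2 e_h)."""
    beta = f * (eps_incl - eps_host) / (eps_incl + 2 * eps_host)
    return eps_host * (1 + 2 * beta) / (1 - beta)

# Illustrative value n_Si ~ 4.2 + 0.05j in the blue (assumed, not from the paper).
eps_si, eps_air = (4.2 + 0.05j) ** 2, 1.0 + 0j
for porosity in (0.5, 0.7):
    n_eff = np.sqrt(maxwell_garnett(eps_si, eps_air, porosity))
    print(f"porosity {porosity:.0%}: n_eff ~ {n_eff.real:.2f} + {n_eff.imag:.3f}j")
```

Higher porosity lowers the effective index, which is how the alternating current pulses produce the nH/nL layer pairs; replacing part of the Si and air fractions with SiO2 after oxidation lowers the index further, as described above.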
SiO2 has a refractive index lower than that of Si but slightly higher than that of air. The value of nL indicated that the dL layers are wholly oxidized, while the nH value indicates that some unoxidized PS remained in the dH layers; this was confirmed by measurements of photoluminescence32. A study on dry oxidation of an asymmetrical BR structure based on PS with 20 periods has been reported by G. Amato. The BRs presented a blue shift caused by a decrease in the refractive index value. There the dL layers (nL) contain none or very small Si fractions, and in the dH layers (nH) the Si fractions are small, ranging from 0% to 4.95%. The complete oxidation of PS layers depends primarily on the quantity of Si in the PS layers and their pore surface area; therefore, they are oxidized differently. We found similar theoretical results. Oxidation leads to the appearance of Si-O-Si vibration bands: the bending vibration mode34 and the Si-O-Si symmetric stretching mode (1015 cm−1)36. The peak at 2359 cm−1 corresponds to CO2 bonds37, which is always present in the measurements. As the oxidation time increases from 30 to 120 minutes, there is a slight increase in both peak amplitudes at the wavenumbers representing the symmetric stretching and bending vibration modes of Si-O-Si bonds. These FTIR results support the fact that PS was transformed into OPS, thereby changing the optical properties of the material. The Transfer Matrix Method is very well-known38. It was used to calculate the theoretical spectra of transmission and reflection in MCs based on PS and OPS. Transmission and reflection spectroscopies were employed to obtain the experimental spectra of MCs based on either PS or OPS. A UV-Vis-NIR spectrophotometer working in the wavelength range from 200 to 800 nm was used.
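The transfer-matrix calculation mentioned above can be sketched for normal incidence using the standard 2x2 characteristic matrices. The layer indices, period count and design wavelength below are illustrative (lossless and non-dispersive), not the paper's fitted values.

```python
import numpy as np

def stack_reflectance(n_layers, d_layers, wavelength, n_in=1.0, n_out=3.5):
    """Normal-incidence reflectance of a multilayer via 2x2 characteristic matrices:
    each layer contributes [[cos d, i sin d / n], [i n sin d, cos d]]."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2 * np.pi * n * d / wavelength        # phase thickness
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_out])                 # n_out: substrate index (assumed)
    r = (n_in * B - C) / (n_in * B + C)               # amplitude reflection coefficient
    return abs(r) ** 2

# Quarter-wave Bragg mirror designed for 420 nm (illustrative indices nH=2.2, nL=1.4).
lam0, nH, nL = 420.0, 2.2, 1.4
ns = [nH, nL] * 8                       # 8 high/low periods
ds = [lam0 / (4 * n) for n in ns]       # quarter-wave physical thicknesses, nm
print(f"R(420 nm) = {stack_reflectance(ns, ds, lam0):.3f}")   # high inside the stopband
print(f"R(650 nm) = {stack_reflectance(ns, ds, 650.0):.3f}")  # typically lower outside
```

Inserting a half-wave defect layer between two such mirrors opens the narrow transmission dip/peak of the localized mode inside the stopband, which is the quantity fitted against the measured spectra.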
It is approximately positioned at a wavelength of 480\u2009nm; Fig.\u00a038.The theoretical and experimental reflection spectrum of five unoxidized MCs are shown in Fig.\u00a0All MCs were designed to have a localized mode at 420\u2009nm. However, from Fig.\u00a040 has a strong absorption in the VIS and UV ranges. Figure\u00a0Figure\u00a0Moreover, the comparison between theoretical and experimental transmission spectra is shown in Fig.\u00a012.We can see a small variation of the localized mode position in the eight MCs shown in Fig.\u00a018. PS structures usually are oxidized to reduce optical losses. It has been reported that when obtaining PS structures by electrochemical anodization the sample surfaces are roughened; additionally, when such structures are oxidized by dry oxidation, they showed a surface roughness decrease that was a function of the oxidation temperature41.These loss factors mainly depend on the size of the pores, roughness between the interfaces and the intrinsic absorption coefficient of the Si2, within PS layers, that causes a refractive index decrease and a PS thickness increase. Therefore, the growth of SiO2 in PS films obeys the law valid for a Si film without pores42 and the combination of Si with oxygen increases the volume occupied by the solid base of the OPS film. This volume expansion occurs because the density of SiO2 is slightly less than that of Si44.The primary purpose of subjecting MCs to dry oxidation was to obtain MCs in the UV range and thus stabilize their optical parameters such as the refractive index. In Fig.\u00a0nH) and low refractive index (nL) whereby the microcavity is constituted.The MCs results displayed in Fig.\u00a0The amplitude of the reflection spectrum in the MCs is preserved, except for MC5, as it shows a larger amplitude in the UV range because it was oxidized for a longer time; there the presence of oxide has made the microcavity more reflective. 
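A minimal normal-incidence transfer-matrix sketch, with assumed lossless index values rather than the measured dispersion data, reproduces the qualitative behavior described above (high mirror reflectance, with a dip at the cavity resonance):

```python
import cmath

def tmm_reflectance(layers, lam, n_in=1.0, n_sub=3.5):
    """Normal-incidence reflectance of a lossless multilayer stack via the
    characteristic (transfer) matrix method. layers is a list of (n, d)
    pairs, with d in the same length units as lam."""
    m00, m01, m10, m11 = 1, 0, 0, 1
    for n, d in layers:
        delta = 2 * cmath.pi * n * d / lam
        c, s = cmath.cos(delta), cmath.sin(delta)
        a00, a01, a10, a11 = c, 1j * s / n, 1j * n * s, c
        m00, m01, m10, m11 = (m00 * a00 + m01 * a10, m00 * a01 + m01 * a11,
                              m10 * a00 + m11 * a10, m10 * a01 + m11 * a11)
    num = n_in * m00 + n_in * n_sub * m01 - m10 - n_sub * m11
    den = n_in * m00 + n_in * n_sub * m01 + m10 + n_sub * m11
    return abs(num / den) ** 2

# Quarter-wave Bragg mirrors around a half-wave defect (assumed index values)
lambda0, n_H, n_L = 480.0, 2.0, 1.4
dH, dL = lambda0 / (4 * n_H), lambda0 / (4 * n_L)
mirror = [(n_H, dH), (n_L, dL)] * 5
cavity = mirror + [(n_L, 2 * dL)] + mirror[::-1]
print(tmm_reflectance(mirror, lambda0))   # high mirror reflectance (~0.97)
print(tmm_reflectance(cavity, lambda0))   # dip at the localized mode (~0.31)
```

At the design wavelength the half-wave defect makes the whole stack optically absentee, so the cavity reflectance collapses to the bare air/Si interface value, which is why the dip appears exactly at the localized mode.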
All MCs have a localized mode in the UV range, and the transmission spectra of three oxidized microcavities (OMs) placed on quartz substrates were measured. The effects of dry oxidation on BRs have been well studied within the VIS-NIR range, where a refractive index decrease due to SiO2 growing inside the Si matrix has been reported. Other authors reported a mixture of OPS and titanium dioxide (TiO2) to form transparent BRs in the VIS range, where the oxidation process eliminates optical losses across the VIS range. Several studies on BRs have been made in the UV range using different Si substrates and applying two oxidation processes; there the oxidation was used for stabilization of the refractive index, but the authors did not compare experimental and theoretical results: they only supported their results with the experimental reflection spectrum, confirming a decrease of light absorption and keeping the same reflection amplitude level in the VIS and UV ranges. We also obtained BRs within the UV range in this work; our experimental and theoretical results were compared and found to agree (see supporting information). Several studies of free-standing MCs and coupled MCs in the NIR region have been reported because Si is considered transparent at these wavelengths. A study on an OM in the NIR band has been reported; there the effect of absorption of PS was not considered in the simulation because the extinction coefficient is very small; however, the experimental reflection spectrum did not show good agreement with its theoretical counterpart. In another study, the localized mode slightly blue shifted over time as a result of microcavity aging; a difference in the response of localized modes can also be expected due to the doping inhomogeneity of the wafers. If a reliable MC theoretical model is desired, absorption losses at short wavelengths have to be taken into account, because such losses usually increase in that range. 
MCs were oxidized at different temperatures for 5 minutes, and a shift of the localized mode toward shorter wavelengths was observed. We compared one MC experimental transmission spectrum with its theoretical counterpart obtained with three effective medium theory (EMT) approximations, namely Bruggeman, Lugo, and Looyenga, for the three components Si, SiO2, and air. The results showed that the model proposed by J. E. Lugo (black line) fitted the experimental (blue line) transmission spectrum of the MC6 microcavity much better over the whole explored wavelength interval. The other two models predicted more absorption in the UV range; that is, Bruggeman and Looyenga overestimated the amount of Si in the OPS layers, although over the whole VIS-NIR range the three effective medium approximations gave similar agreement between the experimental and theoretical transmission spectra. In the PBG simulation and the calculation of the theoretical MC transmission spectra on quartz substrates, we considered the incident light as perpendicular to the multilayer plane; for the PBG simulation and the calculation of the theoretical MC reflection spectra on Si substrates, the incident light had an angle of 20 degrees off the perpendicular to the multilayer plane. Our theoretical calculations of bandgap structures and of transmission and reflection spectra approximated the experimental results well. A microcavity exhibits many photonic modes in its transmission spectrum; some of them are known as localized modes and others as extended modes. The localized mode frequency always lies inside the PBG. In our work, the MCs were designed to present a localized mode within the UV-VIS range. The variational method placed the localized mode 16 nm off with respect to the experimental result, while the transfer matrix method predicted the same position as the experimental result. 
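For readers who want to reproduce the flavor of such an EMT comparison, here is a minimal two-component sketch of the Maxwell-Garnett, Bruggeman, and Looyenga mixing rules with real, dispersionless dielectric constants; the paper's analysis uses complex, wavelength-dependent data and Lugo's three-component extension, so this is only an illustrative toy:

```python
# Effective-index estimates for a Si/air layer under three two-component
# effective-medium approximations. Dielectric constants are illustrative
# (real, dispersionless); p is the porosity (air fraction).
def looyenga(p, eps_si=12.0, eps_air=1.0):
    return (p * eps_air ** (1 / 3) + (1 - p) * eps_si ** (1 / 3)) ** 3

def bruggeman(p, eps_si=12.0, eps_air=1.0):
    # Solve p*(e_a-e)/(e_a+2e) + (1-p)*(e_s-e)/(e_s+2e) = 0 by bisection.
    f = lambda e: (p * (eps_air - e) / (eps_air + 2 * e)
                   + (1 - p) * (eps_si - e) / (eps_si + 2 * e))
    lo, hi = eps_air, eps_si
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def maxwell_garnett(p, eps_si=12.0, eps_air=1.0):
    # Air inclusions (fraction p) embedded in a Si host.
    k = p * (eps_air - eps_si) / (eps_air + 2 * eps_si)
    return eps_si * (1 + 2 * k) / (1 - k)

p = 0.7  # assumed porosity of a high-porosity layer
for model in (maxwell_garnett, bruggeman, looyenga):
    print(model.__name__, model(p) ** 0.5)  # effective refractive index
```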
The microcavity structure was antisymmetric. The wavelength of the localized mode depends on the value of the refractive index in the defect layer; the width of the peaks can be narrowed, and the steepness of the transmission curve increased, by extending the length of the defect segment or by increasing the number of periods in the reflectors. The comparison between the theoretical and experimental reflection spectra within the UV range of five MCs on Si substrates shows that the localized modes are also found in the transmission spectra, and all the theoretical localized modes agree with the experimental ones. The localized modes are useful for sensor applications. Photonic structures presenting localized modes have been exposed to various compounds such as alcohols, wines, and deionized water; these compounds affect the defect region by changing its refractive index due to the absorption of molecules inside the pores. Commonly, the localized mode peaks shift toward lower frequencies (longer wavelengths) because the solvents used have a higher refractive index than air. MCs centered in the infrared and the visible red region have been reported before, where PS is almost transparent; there, absorption losses play a much less significant role than other loss mechanisms, such as light scattering or dispersion by the pores and the interfaces between layers. However, in the UV-VIS range both loss mechanisms have to be considered. We studied the absorption and Rayleigh scattering losses in UV microcavities. First, both the theoretical and experimental absorbance spectra in the UV-VIS range were calculated using the Beer-Lambert law, accounting for the presence of SiO2 inside the PS layers. 
The oxidation process modifies the localized mode position and also sharpens the microcavity resonance bandwidth. Second, a theoretical analysis of absorption and Rayleigh scattering losses at the localized mode wavelength was carried out in this work. We used the Breit-Wigner equation modified by Miller, which fits the experimental transmission spectrum as a function of energy, to obtain the photon lifetime and the photon losses at the localized mode wavelength. We found that the photon lifetime in the VIS range is smaller than the photon lifetime in the UV range. How much of these losses are due to Rayleigh scattering only? And is the level of Rayleigh scattering higher in the visible or in the UV range? These are relevant questions because the scattering should become more pronounced in the UV range, as the ratio between the size of the porous structure and the wavelength increases; this scattering may therefore limit the ultimate performance of UV components made with the proposed approach. We can estimate photon losses due to Rayleigh scattering before and after oxidation in our samples. We used a quantum mechanical model of scattering that takes into account the disorder of the porous structure, which allows the estimation of the total rate of Rayleigh scattering from the fundamental microcavity mode. This model treats the porous silicon structure as a conglomeration of crystalline silicon wires with typical radius a⊥ and typical length a||, whose branches fluctuate and deviate from a cylindrical shape; because of these fluctuations in the dielectric constant, Rayleigh scattering is an important energy-loss channel. In this model, the parameters a⊥ and a|| should be smaller than the period of the Bragg mirrors, that is, a⊥, a|| < dH, dL (see Methods for details). Estimating Rayleigh scattering in the UV range is straightforward: since the fraction of crystalline silicon is less than 1% in samples MC6, MC7, and MC8, we can infer that practically all optical losses α in the UV range are due to Rayleigh scattering. After using the quantum mechanical model, we find that Rayleigh scattering levels are on average 20% lower in the UV range than in the visible range, and that Rayleigh scattering contributes on average up to 55% of the losses in the visible range; all results are summarized in the corresponding tables. From these results, we can conclude that dry oxidation helps to reduce absorption and scattering losses in the UV region. As SiO2 forms on the surface of the filaments throughout the porous structure, the oxide penetrates the pores completely; this is possible because of the small size of the oxygen molecule (276 pm). The Si filaments thicken because part of the Si is converted into SiO2, and the physical thickness of each PS layer grows; the thickness of the SiO2 grown on the PS filaments depends on the temperature and the oxidation time. SiO2 is transparent at both short and long wavelengths, so its extinction coefficient is almost negligible. The refractive index change of oxidized PS structures is the result of the formation of SiO2 inside the Si matrix: Si, with a refractive index of about 3.5, is consumed to form OPS, and part of the air, with a refractive index of 1, is replaced by SiO2, reducing the pore size. The values of the refractive index of Si and SiO2 are reported in the literature. One specific application is to use porous Si-SiO2 microcavities to modulate the responsivity of a broadband photodetector in the UV region; the advantage of this kind of application is that the photodetector becomes more selective to specific UV wavelengths due precisely to the narrowness of the photonic bandgap. 
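A hedged geometric sketch of how the wire-model dimensions rescale on complete oxidation: the assumption here, mine rather than the paper's, is that the planar consumption rule carries over to the filament cross-section, so the area scales by 1/0.45 and the radius by its square root, while a|| is rescaled by the 27% period increase quoted elsewhere in the text. Starting sizes are hypothetical.

```python
# Hypothetical rescaling of the silicon-wire parameters a_perp and a_par
# after complete oxidation. AREA_FACTOR assumes the planar rule (an oxide of
# thickness x0 consumes 0.45*x0 of Si) applies to a cylindrical cross-section.
import math

AREA_FACTOR = 1.0 / 0.45        # SiO2/Si volume ratio implied by the 0.45 rule

def oxidized_radius(a_perp_nm):
    return a_perp_nm * math.sqrt(AREA_FACTOR)   # ~1.49x the initial radius

def oxidized_length(a_par_nm):
    return a_par_nm * 1.27                      # 27% period increase

print(oxidized_radius(10.0), oxidized_length(100.0))
```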
An example of this application is the use of a porous Si-SiO2 microcavity as a filter. The maximum transmission, due to the localized mode of sample MC7, dominates over other wavelengths within the UV range and induces a photocurrent maximum in the UV-A range. In this way, the photo-response of a photodetector in the UV was narrowed by using a porous Si-SiO2 microcavity filter. We designed and fabricated free-standing membrane microcavities (MCs) on quartz substrates and MCs on Si substrates in the UV range; all MCs were initially manufactured using porous silicon. Their photonic band structures were measured employing reflection and transmission spectroscopies and calculated theoretically using the transfer matrix method, and the agreement between experiment and theory was quite good. From the experimental reflection and transmission spectra, a localized mode can be observed within the PBG, and from SEM the spatial presence of a defect layer is clearly noticeable. We theoretically confirmed the very nature of such modes: they are indeed localized modes. Dry oxidation was used to obtain MCs in the UV band. The presence of silicon dioxide was confirmed by FTIR absorbance spectra: the characteristic bands with main peaks at 795 and 1015 cm−1 correspond to the bending and stretching of Si-O-Si bonds, respectively, and the position and shape of the main Si-O-Si vibrational band at 1015 cm−1 might indicate a stoichiometric composition. Moreover, a carbon impurity vibrational band is also observed in the FTIR spectra, whose peak is smaller than the main Si-O-Si peak. The SiO2 grows within the porous structure; the excellent quality of the silicon dioxide grown on the structure modifies the optical path and porosity. Thus, the refractive index and photon losses decreased, and the localized mode position shifted towards shorter wavelengths. Additionally, the transmission and reflection spectra showed a maximum and a minimum peak, respectively, at the localized mode wavelength, for both the VIS and UV bands. Besides the shifts, we found an increase in the transmission amplitude at the localized mode wavelength of MCs in the UV range when compared with that found in the VIS range. The change of the refractive index was determined using a three-component effective medium approximation model. Three MCs presented a transmission maximum of 5% in the blue range and 67% in the UV at the localized mode wavelength. The reflection minimum at the localized mode wavelength of five MCs on Si substrates was approximately 40% in the blue and UV ranges, and it reached a value of 55% for one OM (oxidation time of 120 minutes). Furthermore, the two-stage dry oxidation process presented here has proven to be very effective for obtaining complex photonic structures such as MCs in the UV region. The importance of the pre-oxidation stage in the strategy of thermal growth of high-grade oxides on Si was made evident by the high-quality OMs shown by SEM, where the layers' mechanical stability was preserved after the second oxidation stage at high temperature. This is quite impressive because the MC layers initially made of PS were transformed almost completely into SiO2 layers. Notwithstanding, one might note that the refractive index contrast of porous SiO2 is much more limited than that of porous silicon: the reported effective refractive index for porous SiO2 ranges only from 1.35 to 1.59. 
The small index contrast may limit the achievable performance of the optical components, as indicated by the small photonic bandgap (only ~20 nm) of the demonstrated microcavities. Whether a broad or narrow refractive index contrast is needed depends on the type of application desired. For instance, for sensor applications or light-emitting devices, a significant refractive index contrast is not necessary; many sensing techniques using photonic devices rely on localized photonic modes. Nonetheless, if a significant refractive index contrast is needed, some experimental approaches could be implemented to increase this difference. First, it is known that as the ratio nH/nL increases, the photonic bandgap widens. The refractive index contrast of porous Si-SiO2 could therefore be enlarged, because our microcavities were initially manufactured from porous silicon: porous silicon can have refractive indices from values close to 1 up to 3.5, and consequently, after oxidation, the refractive index ranges from values close to 1 up to 1.59. Indeed, in this work the porous silicon refractive index contrasts were small, and that is why we obtained a slight index contrast for porous Si-SiO2. Second, to overcome the small refractive index contrast limitation, some authors have proposed a method to expand the photonic bandgap in porous silicon structures by introducing Bragg reflectors and coupled multiple microcavities designed at different wavelengths from the VIS to the infrared; in this way, the photonic bandgaps of each porous silicon structure overlap, achieving a broader photonic bandgap. The same method could be applied to porous Si-SiO2 structures in the UV range. Notably, another way to obtain a broader optical response from these kinds of photonic structures is to use chirping techniques. 
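For intuition on how the nH/nL ratio sets the gap, the standard idealized quarter-wave-stack estimate can be evaluated; this textbook formula is not the one used in the paper, and the index pairs below are assumptions spanning the ranges quoted in the text:

```python
# Idealized relative photonic-bandgap width of a quarter-wave stack:
# dlambda/lambda0 = (4/pi) * asin((nH - nL) / (nH + nL)).
# Index pairs are assumptions, not fitted values from this work.
import math

def gap_width_nm(lambda0_nm, n_h, n_l):
    return lambda0_nm * (4 / math.pi) * math.asin((n_h - n_l) / (n_h + n_l))

g_oxide = gap_width_nm(420.0, 1.59, 1.35)   # porous SiO2 contrast: tens of nm
g_psi = gap_width_nm(420.0, 2.5, 1.5)       # porous Si contrast: much wider
print(g_oxide, g_psi)
```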
Third, small refractive index contrasts are found in other photonic structures, for example in photonic crystal fibers. Such fibers are made from undoped fused silica. The core region of the fiber is formed by silica with a higher index (nHcore = 1.45) than the average index of the cladding, which consists of a large number of air holes embedded within a silica background; the region of pure silica forms a waveguiding core. Waveguiding occurs because the “holey” fiber cladding effectively has a lower refractive index (nLcl ≈ 1) than the pure silica core, resulting in total internal reflection at the core-cladding interface; the light is thus confined to the central core by reflection from the cladding that surrounds it. All conventional fibers guide light by total internal reflection (TIR), which requires that the core have a higher refractive index than the cladding. Despite the small index contrast, these technologies have been proven to work in the telecommunications industry. Fourth, we have given a specific example of how to benefit from the small index contrast of the porous Si-SiO2 microcavity to narrow down the responsivity of a broadband photodetector in the UV region, and the filtering process was successful. Another important point to discuss is the advantage of our approach compared to physical vapor deposition (PVD), because one can use PVD to deposit materials that have high refractive indices in the UV range, such as hafnium dioxide (HfO2), with a refractive index of n ~ 2.25, and zirconium dioxide (ZrO2), with a refractive index of n ~ 2.35 (reported for bases with diameters between 13 and 100 mm), as well as low-index materials such as SiO2 (n ~ 1.57). The deposition rate for HfO2 and ZrO2 layers is about 1 nm/min, which is very slow compared with our fabrication process; besides, physical vapor deposition infrastructure is expensive. The advantages of our approach compared to physical vapor deposition are, among others, that the manufacturing time is short, the cost is low, and the samples are easy to fabricate. In contrast, physical vapor deposition requires temperatures ranging from 52 °C to 750 °C and strongly reduced pressures. If a microcavity of 1.6 µm thickness were manufactured by physical vapor deposition, the deposition process to form the structure would take more than 24 hours, whereas our manufacturing process takes 3.5 minutes to create the microcavity plus 2 more hours to oxidize it. Recently, the fabrication of Bragg reflectors based on a mixture of different materials (HfO2 + ZrO2 / SiO2) has been reported. The multilayer structure had 11 periods, and a maximum reflectance peak of almost 100% was observed at 355 nm. A mixture of HfO2 + ZrO2 was used as the high refractive index layer and SiO2 as the low refractive index layer, and the refractive index contrast is slightly broader than the one reported in our work. However, the deposition of these materials was done by plasma ion-assisted deposition, which is a costly and slow technique. Moreover, HfO2 and ZrO2 absorb light in the UV range, where the imaginary part of the refractive index (the extinction coefficient) is not negligible, limiting the manufacture of photonic structures of good optical transparency; that is why mixtures are preferred, because of their possibly better UV transparency compared to pure hafnia layers. Another point to consider with these high-dielectric oxide films is that they do not show sufficient thermal stability, because the structure of the oxide films easily converts from amorphous to polycrystalline and reacts with the Si substrates. 
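The time comparison in the preceding paragraph follows directly from the quoted figures:

```python
# Back-of-the-envelope comparison from the figures quoted in the text:
# PVD at ~1 nm/min for a 1.6-um stack versus anodization (3.5 min) plus
# two hours of dry oxidation.
pvd_minutes = 1600 / 1.0            # 1.6 um at 1 nm/min -> 1600 min
ours_minutes = 3.5 + 2 * 60         # anodization + oxidation -> 123.5 min
print(pvd_minutes / 60)             # ~26.7 h, i.e. more than a day
print(pvd_minutes / ours_minutes)   # roughly 13x faster overall
```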
In summary, scattering should indeed become more pronounced in the UV range as the ratio between the size of the porous structure and the wavelength increases, but only when there is no phase change, which is not the case here, because a porous structure based on crystalline silicon is transformed almost entirely into a porous structure based on silicon dioxide. Moreover, the use of p+ substrates also helps to reduce scattering losses. It has been reported that scattering depends strongly, first, on the layer thickness (the size of the porous structure) and, second, on the doping level of the substrate. During the formation of porous silicon, the porous layers develop a roughness that is responsible for the observed light scattering; scattering at the interface between PS and air has been found to be negligible (<1%) compared to the scattering in the volume of the porous silicon layer on a p-type substrate, whereas on a p+ substrate the scattered light level is lower. Besides, the volume scattering loss can always be reduced by applying thermal oxidation: AFM measurements have shown a roughness decrease after oxidation, so thermal oxidation has a smoothening effect on porous silicon layers. In the future, this oxidation process could enable other kinds of photonic structures such as waveguides, Fibonacci filters, multi-cavities, and rugate filters. Among other possible applications, the UV microcavities can be used to modulate the optical response of photodetectors such as gallium nitride (GaN) and zinc oxide (ZnO) to achieve a much more selective photoresponse in the UV. 
They could also be used as vibration sensors owing to the known piezoelectric properties of SiO2. This procedure expands the field of research of silicon-based photonic structures in the UV range. The porous silicon MCs were obtained by electrochemical anodization of highly doped p+-type c-Si. A wafer piece of 1.5 cm per side was put in a Teflon cell with an etching area of 1 cm², in which the electrochemical process was carried out. An aqueous electrolyte of 40% HF and 99.7% ethanol, with a volume ratio of 1:1, was then placed in the Teflon cell. A ring-shaped tungsten electrode immersed in the electrolyte was used as the cathode, and an aluminum plate contacting the unpolished backside of the c-Si wafer was used as the anode. A power supply (Keithley 2460) controlled by a laptop was used to deliver the MC current profile. The current profile consisted of interchanging two different current pulses: the first pulse of 5 mA/cm² (low porosity) for 4.1 seconds and the second pulse of 80 mA/cm² (high porosity) for 1.1 seconds; finally, a current pulse of 80 mA/cm² (high porosity) for 2.2 seconds was used to form the defect layer. After each current pulse, a pause of 3 seconds was introduced to allow the electrolyte to flow and prevent porosity gradients. The defect is an essential characteristic of MCs; it is created between the two BRs, and ours presents a localized photonic mode. The microcavity was self-supported on a quartz substrate with dimensions of 1.6 cm × 1.6 cm. The MC lift-off was carried out with an electrolyte of 40% HF and 99.7% ethanol, with a volume ratio of 1:1, in a Teflon cell, applying a high current pulse of 450 mA/cm² for 2 s to the c-Si substrate. When the manufacturing process was finished and the MCs were self-supported, they were rinsed with ethanol and dried in ambient conditions. The obtained MCs were subjected to two stages of dry oxidation. The first stage was a low-temperature pre-oxidation at 350 °C for 30 minutes, which prevents the PS layers from collapsing during the additional heat treatment at high temperature. The second stage was a high-temperature oxidation at 900 °C, applied to grow an oxide layer thicker than the one obtained in the preceding stage and thicker than the natural native oxide grown in the environment, thus leading to the consolidation of SiO2. At this stage the oxidation temperature was constant, and the oxidation time ranged from 30 up to 120 minutes. The samples were removed from the oxidation system once it reached room temperature. Characterization of the MCs was performed by SEM, FTIR, and UV-Vis-NIR spectroscopy before and after the dry oxidation. Scanning electron microscopy images (JEOL JSM7600F) were obtained to examine the geometrical characteristics of the MCs. The optical characterization was carried out with a Varian (Agilent) UV-Vis-NIR spectrophotometer at normal incidence and at an incidence angle of 20 degrees, and FTIR measurements (Varian 660 IR) were performed in attenuated total reflection mode in the spectral range 390-4000 cm⁻¹. 
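The Si-consumption bookkeeping behind the oxide growth (an oxide of thickness x0 consumes 0.45x0 of Si, per the constants 0.55/0.45 used in the model below) can be sketched as follows; the x0 value is an assumed example:

```python
# Planar oxide-growth bookkeeping quoted in the text: growing an oxide layer
# of thickness x0 consumes 0.45*x0 of silicon, so the solid thickness
# expands by 0.55*x0. The x0 value is an assumed example.
def oxide_budget(x0_nm):
    si_consumed = 0.45 * x0_nm
    net_expansion = x0_nm - si_consumed     # 0.55 * x0
    return si_consumed, net_expansion

consumed, growth = oxide_budget(10.0)       # 10 nm of grown SiO2
print(consumed, growth)                     # 4.5 nm Si used, 5.5 nm net gain
```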
The effective medium theory was used to obtain the refractive index of layers with two and three components (Si, SiO2, and air). The literature discusses different effective medium approximations, such as those developed by Looyenga, Maxwell-Garnett, and Bruggeman, to obtain the refractive index of PS layers. In this case, we considered Maxwell-Garnett's equation to calculate the theoretical refractive index of PS, treated as a homogeneous medium with an effective complex dielectric function. In the two-component equation, P (porosity) is the air fraction of the non-oxidized PS layers, εair is the dielectric constant of air, εSi is the dielectric constant of Si, and εPS represents the effective dielectric constant of PS. The model proposed by J. E. Lugo, which takes into account three components (Si, SiO2, and air), was used to obtain the refractive index of OPS. This model is an extension of Maxwell-Garnett; it considers the presence of SiO2 and the network expansion that occurs within the porous structure due to the growth of SiO2 in the porous matrix of PS. In the three-component equation, x is a dimensionless oxidation parameter and Pox is the porosity after the dry oxidation; the parameter β and the porosity of the three-component system are expressed in terms of these quantities. The upper limit of the oxidation parameter, xL, that a given PS layer with a specific porosity can have is found by solving the corresponding expression, and the oxide fraction (fox) after dry oxidation is obtained from the constants 0.55 and 0.45 (the growth of an oxide layer of thickness x0 consumes a Si layer 0.45x0 thick). For the localized mode calculation, a frequency-dependent function κ(ω) is defined, where kd is the z component of the wavevector for the first (defect) layer, the z-axis is taken along the normal to the layers, and the intersection with h(ω) gives the localized mode frequency. 
For the localized mode calculation, we used the wavelength average of the absolute value of the complex refractive index. Photon tunneling in MCs follows Bose-Einstein statistics, where some of the photons can be scattered and lost; it is similar to electron tunneling. A photon tunnels through the microcavity from the first BR, through the defect region, and finally through the second BR. One theoretical study on the optical losses at localized mode wavelengths of free-standing MCs has been reported: the localized mode appears as a resonant transmission peak whose amplitude is modified by optical losses. The Breit-Wigner equation modified by Miller describes the dispersion of the localized mode due to absorption and scattering losses; it can be related to the transmission spectrum close to the localized mode wavelength. In this equation, Γp represents the photon loss rate in the microcavity, whereby the lifetime of photons at the localized mode wavelength is defined as τ = 1/Γp; this includes the Rayleigh scattering losses and the absorption losses, which are given by α = 1/(τc), where c is the speed of light. The photon losses in the infrared range are mainly related to Rayleigh scattering on the rough structure of PS, and in the VIS range to absorption and scattering. We used the Beer-Lambert law to obtain the absorbance spectrum of MCs on quartz substrates; in this expression, T represents the transmission, so the equation is related to the light intensity that passes through the microcavity, where part of the light is absorbed and part is transmitted. In the Rayleigh scattering estimate, p is the mean porosity of the porous structure, taking values between 0 and 1, and ε1 and ε2 are the minimum and maximum dielectric constant bounds of the porous and solid-phase regions, respectively. 
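The loss bookkeeping above (τ = 1/Γp, α = 1/(τc)) and the Beer-Lambert step can be sketched together. Two caveats: converting an energy linewidth to a rate via ħ is my assumption about units (the paper fits Γp directly), and the numeric inputs are assumed examples, with the transmission values reusing the 5% and 67% localized-mode maxima quoted in the text purely as illustrations.

```python
# tau = 1/Gamma_p and alpha = 1/(tau*c), plus Beer-Lambert A = -log10(T).
# The linewidth and transmission values are assumed examples.
import math

HBAR_EV_S = 6.582119569e-16      # hbar in eV*s
C_CM_S = 2.99792458e10           # speed of light in cm/s

def photon_lifetime(gamma_ev):
    """Photon lifetime (s) for a localized-mode energy linewidth in eV."""
    return HBAR_EV_S / gamma_ev

def loss_coefficient(gamma_ev):
    """Total loss alpha = 1/(tau*c), in cm^-1."""
    return 1.0 / (photon_lifetime(gamma_ev) * C_CM_S)

def absorbance(T):
    """Beer-Lambert absorbance from a transmission value 0 < T <= 1."""
    return -math.log10(T)

print(photon_lifetime(0.05), loss_coefficient(0.05))   # assumed 50 meV width
print(absorbance(0.67), absorbance(0.05))              # UV vs blue maxima
```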
For PS, ε1 = 1 and ε2 = 12; for porous SiO2, ε1 = 1 and ε2 = 3.9. If the porosity is neither very small nor very high, the volume-averaged fluctuation of the porous silicon dielectric constant is well approximated by the effective quantity ε*. The Rayleigh Scattering Loss rate (RSL) in the microcavity is obtained from ω0, the angular frequency of the localized mode, c, the speed of light, and D(ω0), the density of photon states in the Bragg reflector; here dH is the high-porosity layer thickness, dL is the low-porosity layer thickness, Λ is the period (Λ = dH + dL), and εH and εL correspond to the dielectric constants of the porous silicon layers. It is known that the growth of an oxide layer of thickness x0 will consume a layer of silicon 0.45x0 thick, which modifies the radius a⊥ taken before oxidation; if all crystalline silicon becomes silicon dioxide, a final size for a⊥ results. Since practically all optical losses α in the UV range are due to Rayleigh scattering, α ≈ αRSL, and the experimental values for α in the UV range can be used directly. Now we can also obtain the oxidized a|| values by considering that, after oxidation, the sample period increased by 27%. The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request."} +{"text": "Our previous microarray analysis indicated that miR-550-1 was significantly downregulated in AML. The specific biological roles of miR-550-1, and its indirect interactions with and regulation of m6A in AML, however, remain poorly understood. In the present study, we found that miR-550-1 was significantly downregulated in primary AML samples from human patients, likely owing to hypermethylation of the associated CpG islands. 
When miR-550-1 expression was induced, it impaired AML cell proliferation both in vitro and in vivo, thus suppressing tumor development. When ectopically expressed, miR-550-1 drove G0/G1 cell cycle arrest, differentiation, and apoptotic death of affected cells. We confirmed mechanistically that the WW-domain containing transcription regulator-1 (WWTR1) gene is a downstream target of miR-550-1. Moreover, we also identified Wilms tumor 1-associated protein (WTAP), a vital component of the m6A methyltransferase complex, as a target of miR-550-1. These data indicated that miR-550-1 might mediate a decrease in m6A levels via targeting WTAP, leading to a further reduction in WWTR1 stability. Using gain- and loss-of-function approaches, we determined that miR-550-1 disrupted the proliferation and tumorigenesis of AML cells at least in part via direct targeting of WWTR1. Taken together, our results provide direct evidence that miR-550-1 acts as a tumor suppressor in the context of AML pathogenesis, suggesting that efforts to bolster miR-550-1 expression in AML patients may be a viable clinical strategy to improve patient outcomes. Acute myeloid leukemia (AML) is a form of cancer that arises when hematopoietic stem cells (HSCs) undergo oncogenic mutations. In the United States, 19,940 new AML cases were expected to be diagnosed, and 11,180 AML-associated deaths were expected to occur, in 2020. While there have been countless efforts to develop novel therapeutic strategies suited to the treatment of AML, the majority of patients still suffer from poor outcomes, with recent reports estimating a 5-year survival rate of 40% among AML patients. In our previous research, we have identified a set of miRNAs with specific regulatory roles in the context of the proliferation, differentiation, and apoptosis of AML cells. 
These miRNAs include miR-9, miR-22, miR-26a, miR-150, miR-495, miR-181, miR-126, miR-196b and the miR-17-92 cluster. Even so, the specific role of these miRNAs in AML is not completely understood. MicroRNAs (miRNAs) are small 20-24 nucleotide non-coding RNA molecules that exhibit endogenous biological functionality via targeting specific downstream mRNAs. WW-domain containing transcription regulator-1 (WWTR1), also known as the transcriptional co-activator with PDZ-binding motif (TAZ), was first detected based on its status as a 14-3-3 interacting protein. WWTR1 and the paralogous Yes-associated protein (YAP) serve as central downstream regulatory factors in the Hippo signaling pathway, which modulates a wide range of cellular processes pertaining to cellular energy status, hypoxia, osmotic stress, tissue organ size, regeneration, homeostasis, and tumorigenesis. Previous studies have found LATS1 and LATS2 to be downregulated in leukemia as a consequence of their hypermethylation, and reduced LATS2 expression has been found to be associated with worse outcomes among leukemia patients. This suggests the possibility that a reduction in LATS1/2 activity may underlie the alterations in YAP/WWTR1 stabilization and activation in the context of leukemia. However, clarity is still needed regarding the mechanisms governing increased WWTR1 activity in AML. AML cell lines were cultured in medium containing 10% heat-inactivated FBS, 1% penicillin-streptomycin and 1% HEPES (Sigma-Aldrich). Murine progenitor cells were cultured in RPMI 1640 containing 10 ng/ml interleukin 3 (IL-3), 10 ng/ml IL-6 (Peprotech), 100 ng/ml stem cell factor (SCF) (Peprotech), 55 nM 2-mercaptoethanol (BME) (Sigma-Aldrich), 1% HEPES, 10% FBS, and 1% PS. The AML patients' samples were acquired from the First Affiliated Hospital of Zhejiang University and the University of Chicago Hospital with informed consent.
The study was approved by the institutional review boards of both hospitals' ethics committees. An MTT assay was used to measure viability based on provided directions. Briefly, MV4-11 and Kasumi-1 cells were plated into 96-well plates (10000 cells/100 \u03bcL), with dye solution added to wells at the indicated time points. After a 4-hour incubation at 37\u00b0C, stop buffer was added and cell absorbance was assessed the following day at 570 nm. A BD LSRII Flow Cytometer was used in all analyses, and FlowJo v10 was used for data analysis. For measurements of apoptosis, 0.5\u00d7106 cells were stained with an Annexin V-APC Apoptosis Detection Kit based on provided directions. For cell cycle analyses, 0.5\u00d7106 cells were fixed overnight at 4\u00b0C in 75% ethanol, washed thrice in PBS, and stained using propidium iodide for 20 minutes. For immunophenotyping analyses, BM, PB, and spleen cells (1\u00d7106) were collected, washed thrice using PBS, and stained at 4\u00b0C with antibodies specific for CD11b, CD117, CD45.1, CD45.2, and Gr-1 (BD Biosciences) for 20 minutes. After two additional washes, cells were then fixed in a fixation buffer prior to analysis. Cells were lysed in buffer supplemented with PMSF, EDTA, and protease inhibitors. Samples underwent 30 min of centrifugation at 12000\u00d7g, after which supernatants were isolated and loaded in equal protein amounts (30-50 \u00b5g) onto gels for SDS-PAGE analysis. After separation, proteins were transferred onto PVDF membranes which were then blocked with 5% skim milk in TBST for 1 h, followed by probing overnight at 4\u00b0C with anti-\u03b2-ACTIN (#3700), anti-WWTR1 (#83669), anti-PARP (#9532), anti-AKT (#4685), anti-p-AKT (#4060), anti-CDK6 (#13331), anti-Rb (#9309), anti-p-Rb (#8516), anti-E2F1 (#3742), anti-CCND1 (#2978), anti-BCL-2 (#15071), anti-p27 (#3686), or anti-c-myc antibodies.
A peroxidase-conjugated secondary antibody was then applied to the blot for 1 h, followed by four washes with TBST, after which chemiluminescence was used to detect protein bands. A miRNeasy kit was used for extracting total RNA from a total of 5\u00d7106 cells based on provided directions. cDNA was then synthesized from 1 \u00b5g of this RNA via M-MLV reverse transcriptase. A 7900HT real-time PCR system was employed for qPCR analyses, with SYBR Green used to set up triplicate reactions assessing relative mRNA expression. For miRNA expression, TaqMan qPCR was conducted according to provided directions (Applied Biosystems). The 2-\u0394\u0394Ct method was used to calculate miRNA and mRNA relative expression, which was normalized to endogenous levels of U6 and GAPDH, respectively. The pri-miR-550-1 sequence was amplified by PCR from healthy human BM mononuclear cells (MNCs), and primers with mutated sequences (Table ) were used to generate the mutant construct. For the WWTR1-CDS vectors, the WT sequence was amplified from healthy human BM MNCs prior to insertion into the pCDH vector. The MSCVneo-MLL-AF9 plasmid was kindly provided by Dr. Scott Armstrong. One day prior to transfection, 5\u00d7105 HEK293T cells were plated into 60-mm dishes. Retroviruses were then produced via transfecting cells with vector DNA and a packaging vector (PCL-Eco or PCL-Ampho) with the Effectene Transfection Kit (Qiagen). The WWTR1-overexpressing lentivirus was generated via co-transfection of the WWTR1-pCDH plasmid and packaging lentivirus vectors. At 48 and 72 h post-transfection, cellular supernatants were harvested and filtered through a 0.45 \u03bcm cellulose acetate filter prior to storage. miRNA target gene predictions were made through the use of the PITA (http://genie.weizmann.ac.il/pubs/mir07/), miRBase Targets (http://microrna.sanger.ac.uk), TargetScan (http://www.targetscan.org), and miRanda (http://www.microrna.org) miRNA-target gene prediction databases.
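The 2-\u0394\u0394Ct normalization named above is a standard calculation; as an illustrative sketch (the Ct values below are hypothetical, not from the study), it can be written as:

```python
# Illustrative sketch of the 2^-ddCt relative-expression calculation.
# Ct values are hypothetical examples, not data from the study.

def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Return 2^-ddCt: fold change of the target in the sample relative
    to the control, normalized to a reference gene (e.g., GAPDH for
    mRNA, U6 for miRNA)."""
    delta_ct_sample = ct_target_sample - ct_ref_sample
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Target crosses threshold 2 cycles later (after normalization) in the
# sample than in the control -> fold change 0.25, i.e. downregulated.
fold = relative_expression(26.0, 18.0, 24.0, 18.0)
print(fold)  # 0.25
```

A later Ct means less starting template, which is why the exponent is negated: each extra cycle halves the inferred relative expression.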
We conducted dual luciferase reporter and mutagenesis assays based on a modified version of a previously reported protocol. A mutated version of the WWTR1 3'-UTR fragment was generated using primers bearing the mutant sequence. A total of 6,000 HEK293T cells were plated per well of a 24-well plate in triplicate, and following overnight culture these cells were co-transfected with pMIR-REPORT-WWTR1 or mutant pMIR-REPORT-WWTR1 vectors and MSCV-PIG-miR-550-1, mutant MSCV-PIG-miR-550-1, or MSCV-PIG empty vectors (20 ng each). The \u03b2-galactosidase vector (1 ng) (Ambion) was additionally transfected into all experimental cells, and after a 48 h incubation all cells were lysed. Relative luciferase activity was then measured via a Dual-Light Combined Reporter Gene Assay System (Applied Biosystems). Colony formation assays were performed in accordance with a modified version of a previously reported protocol. Cells were maintained in fresh media, and this was then repeated the following day. Thereafter, 2 \u00d7 104 cells were plated in methylcellulose medium containing 10 ng/ml IL-3, IL-6, granulocyte-macrophage colony-stimulating factor (GM-CSF), 30 ng/ml SCF, and 2 \u03bcg/ml puromycin and/or 1 mg/ml G418, as appropriate. After incubating for 7 days, colony formation in each of the experimental groups was assessed, and these colonies were then replated. For BMT assays, donor mice were B6.SJL (CD45.1+), and recipient mice were C57BL/6 (CD45.2+). All animal experiments were approved by the local Institutional Animal Care and Use Committee (IACUC). Donor BM cells were transduced with the MSCVneo-MLL-AF9 + MSCV-PIG (as control), MSCV-PIG-miR-550-1 + MSCVneo-MLL-AF9, or mutant MSCV-PIG-miR-550-1 + MSCVneo-MLL-AF9 combinations. The resultant donor cells were then mixed with helper cells at a ratio of 3 \u00d7 105 to 1 \u00d7 106 per recipient mouse. These cells were then injected into the tail vein of an 8-week old lethally irradiated (960 rads) recipient mouse.
In the primary BMT assays, BM cells from healthy donor mice (B6.SJL) were transduced with the indicated retrovirus combinations. In secondary BMT assays, lethally irradiated recipient mice were injected with leukemic BM cells that had been isolated from the initial primary recipient mice, and no helper cells were added. PB samples were collected via tail bleeding in order to establish whole blood counts. Engraftment was evaluated via flow cytometry based on CD45.1 expression in PB samples. Moribund mice were euthanized, and liver, thymus, and spleen weights were determined. BM cells were collected from euthanized animals and prepared for cytospin slides, which then underwent Wright-Giemsa staining. We isolated total RNA with the miRNeasy kit and quantified RNA levels. Next, RNA samples were spotted using a Bio-Dot Apparatus on Amersham Hybond-N+ membranes, and a UV cross-linker was then used to cross-link them to this membrane. Membranes then underwent two washes using Milli-Q, followed by treatment for 10 min using 0.02% methylene blue (Sigma-Aldrich). Membranes were then rinsed until the dye was washed away from background regions, and dots of methylene blue were then imaged. Next, 5% nonfat dry milk was used for membrane blocking for 1 h, after which an antibody against m6A was used to probe blots at 4\u00b0C overnight. Membranes were then washed thrice in TBST and probed at room temperature using HRP-conjugated goat anti-rabbit IgG for 1 h, prior to visualization with an ECL system. SPSS v16 was used to compare all experimental results via Student's t-tests or two-way ANOVAs. Data are given as means \u00b1 standard deviations (SDs) from at least three repeat experiments. The Kaplan-Meier approach was used to assess overall survival. P<0.05 was the threshold of statistical significance. miR-550-1 expression was significantly reduced in AML samples bearing translocations, inv(16), or mixed lineage leukemia (MLL) rearrangements, relative to normal controls (NC) (P=0.003), as assessed using qPCR (P=0.010).
Promoter methylation is known to be a key regulator of many miRNAs in the context of AML, including miR-126 and miR-375. To this end, MSCVneo-MLL-AF9 + MSCV-PIG (as control), MSCVneo-MLL-AF9 + MSCV-PIG-miR-550-1, MSCVneo-MLL-AF9 + MSCV-PIG-miR-550-1 mutant, MSCVneo-AE9a, MSCVneo-AE9a + MSCV-PIG-miR-550-1, or MSCVneo-AE9a + MSCV-PIG-miR-550-1 mutant vectors were separately co-transduced into normal murine BM progenitor cells prior to replating on methylcellulose medium. Following a 7 day incubation period, the media was then replaced and equivalent culture conditions were maintained for each group. We observed a marked reduction in colony formation capabilities for BM progenitor cells induced with the MLL-AF9 or AE9a fusion proteins following induction of miR-550-1 expression (Fig. A-H). In order to assess the role of miR-550-1 in the context of AML biology, we next used the MV4-11 and Kasumi-1 human AML cell lines to conduct gain-of-function experiments. We found that forced ectopic miR-550-1 expression led to a clear reduction in viability and proliferation for both of these cell lines (Fig. A-F). We further assessed whether miR-550-1 suppressed leukemogenesis in vivo as it did in vitro by using a primary BMT assay system. Briefly, we co-transduced murine BM progenitor cells with the MSCVneo-MLL-AF9 + MSCV-PIG (as control), MSCVneo-MLL-AF9 + MSCV-PIG-miR-550-1, or MSCVneo-MLL-AF9 + MSCV-PIG-miR-550-1 mutant vectors. Transduced cells were then injected into the tail veins of recipient mice. By flow cytometry, we found all mice in the MLL-AF9+miR-550-1 group displayed an apparent decline in the proportion of c-Kit+ blast cells in the BM, spleen (SP), and PB compared to the control or MLL-AF9+miR-550-1 mutant groups. WWTR1 has been linked to proliferation and mitochondrially-induced apoptosis in a range of cancer types. We therefore assessed WWTR1 mRNA expression in AML patient samples, revealing it to be significantly upregulated in these patients' samples relative to NC samples.
We additionally examined the expression of miR-550-1 and WWTR1 in 90 AML samples in our cohort, again revealing a significantly negative correlation between these two factors. We reasoned that miR-550-1 targeting could lead to alterations in mRNA stability as a function of m6A modification, and we found that WTAP was a direct miR-550-1 target gene. miR-550-1 suppressed reporters bearing the wild-type WWTR1 3'-UTR, whereas it had no effect on those bearing the mutated WWTR1 3'-UTR. Similarly, breast cancer patients with elevated WWTR1 expression also exhibit poorer outcomes, and elevated WWTR1 expression has been associated with poorer prognosis, increased tumor invasion, and metastasis in those with gastric cardia adenocarcinoma. We observed elevated WWTR1 mRNA expression in AML, and this expression was negatively correlated with that of miR-550-1. YAP and TAZ have been reported to be transcriptional coactivators capable of recognizing cognate cis-regulatory elements via interactions with additional transcription factors, such as TEA domain family members (TEAD). When WWTR1 lacking the 3'-UTR region was overexpressed, this significantly rescued the miR-550-1-induced G0/1-phase arrest observed in vitro. To date, the function of WWTR1 in AML was not previously determined, but our study thus highlights for the first time that WWTR1 is a key mediator related to the anti-leukemic activity of miR-550-1. In order to explore the specific mechanisms governing the link between miR-550-1 and reduced leukemia severity, we first conducted an analysis of its downstream targets. m6A is the most common methylation event modifying mRNA molecules in mammals, regulating a range of processes such as heat shock, differentiation, DNA damage responses, tissue development, and miRNA processing, and m6A is linked to the pathogenesis of AML. WTAP is a vital m6A methyltransferase complex component, regulating m6A methyltransferase activity.
Recent work indicates that WTAP is able to improve CDK2 stability via binding to the 3'UTR region, thereby enhancing cell proliferation in renal carcinoma. Our data suggest that miR-550-1 reduces WWTR1 stability by targeting WTAP. Although we found that impairing WWTR1 mRNA stability, rather than promoting its degradation, was the primary regulatory role of miR-550-1 in MV4-11 cells, further clarification is needed to determine why this effect was not identical in both cell lines. Whether miR-550-1/YAP/WWTR1 interact in a manner so as to form a negative feedback loop in AML remains unknown. Interestingly, a report by Chaulk et al. described regulation of WWTR1 through its 3'-UTR. Ultimately, our findings both identify a novel tumor-suppressor miRNA and characterize previously unknown regulatory pathways governing WWTR1 expression in AML. Our results thus demonstrate that miR-550-1 is a latent factor which suppresses AML, and as such, enhancing expression of this miRNA may be a valuable therapeutic strategy in those with AML. In summary, our study reveals the following: (1) elevated miR-550-1 expression is a favorable prognostic indicator in AML, and in AML patients it is at least partially dysregulated due to promoter hypermethylation; (2) miR-550-1 is able to promote apoptosis and inhibit proliferation via regulation of the WTAP/WWTR1/BCL-2 and WTAP/WWTR1/CDK6/Rb/E2F1 pathways in AML; (3) m6A modifications are important for regulating the ability of miR-550-1 to target WWTR1. Supplementary figures and tables are available."} +{"text": "Information-intensive transformation is vital to realize the Industry 4.0 paradigm, where processes, systems, and people are in a connected environment. Current factories must combine different sources of knowledge with different technological layers. Taking into account data interconnection and information transparency, it is necessary to enhance the existing frameworks.
This paper proposes an extension to an existing framework, which enables access to knowledge about the different data sources available, including data from operators. To develop the interoperability principle, a specific proposal to provide a (public and encrypted) data management solution to ensure information transparency is presented, which enables semantic data treatment and provides an appropriate context to allow data fusion. This proposal is also designed considering the Privacy by Design (PbD) option. As a proof of application case, an implementation was carried out regarding the logistics of the delivery of industrial components in the construction sector, where different stakeholders may benefit from shared knowledge under the proposed architecture. The recent advances in Information Technology, Internet of Things (IoT) and Cyber-Physical Systems (CPS), among other fields, have enabled digitization and automation of production processes and led to the definition of the fourth industrial revolution, also known as Industry 4.0 (I4.0). Human factors, such as fatigue indicators, have significant effects on product quality and factory productivity in manufacturing activities. It is common to rely on reference frameworks such as the Reference Architecture Model for Industry 4.0 (RAMI 4.0). Requests specify the period of interest, from [DateTime] to [DateTime], and proof of worth for accessing the data. When people or agents with the right of access request the web service to access specific data according to the above structure, the web service will access the DLT repository under the specific criteria provided and verify the ownership of the requested information. This verification will be based on consuming the smart-contract to obtain access to the symmetric encryption key used by the node, thus being able to decrypt it and compare signatures. Then, such data can be aggregated (according to the requested period of time) and be sent to the requester by the web service after encrypting it using the provided public key of the user/agent.
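The decrypt-and-re-encrypt step just described can be sketched in a toy form. This is a minimal illustration of the envelope pattern only: a hash-based XOR keystream stands in for the real ciphers (the text describes symmetric encryption plus RSA), and all key names and the record payload are illustrative assumptions, not artifacts of the study.

```python
# Toy sketch of the access flow: the service decrypts the node's
# symmetrically encrypted record, then re-encrypts it for the
# requester. A SHA-256-based XOR keystream stands in for a real
# cipher (AES/RSA in practice); keys and payload are illustrative.
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR `data` with a keystream derived from `key` (self-inverse)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

node_key = b"node-symmetric-key"          # held by the data-owning node
requester_key = b"requester-session-key"  # agreed with the requester

record = b'{"sensor": "crane-ips", "x": 12.4, "y": 3.1}'
stored = keystream_xor(node_key, record)        # as kept encrypted

plain = keystream_xor(node_key, stored)         # service decrypts
for_requester = keystream_xor(requester_key, plain)  # re-encrypts

# Only a holder of the requester's key recovers the record.
assert keystream_xor(requester_key, for_requester) == record
```

Because XOR with the same keystream is self-inverse, the same function serves for encryption and decryption, which keeps the flow of the sketch visible in a few lines.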
The data consumers will receive back the data and handle it using their private key, in such a way that only they themselves are allowed to consume it. The whole schema can be seen in the corresponding figure. This type of solution can help companies from at least three different perspectives. They can redesign their adopted business models related to the IIoT and its integration into the management dimension of the organization. However, in the case of public messages, there are no requirements for encryption. Under the restricted approach, the encryption certificate can be shared among the interested stakeholders, in such a way that they can collect the information on their own. Not only due to the transparency provided, but also because such data streams are delivered automatically by the different digital twins, the trust of shareholders and consumers will increase immediately. In addition, such knowledge will support the management needs of owners, more consistent with its potential value when shared. To validate the proposed architecture, a data-handling system based on a DLT (IOTA) and domain ontologies was designed to improve data interoperability, with integrated privacy criteria through message encryption and data sharing under the Industry 4.0 paradigm. The implemented system was adopted in a practical industrial scenario. The aim was to verify that our proposed architecture has practical potential in real-world applications, leveraging the implemented system to demonstrate its feasibility. The approach, which adopted a positivist view of research, relied on the literature and empirical data coming from the case itself, as well as on the insights of the researcher to build incrementally more powerful theories. The application case was conducted in a Spanish factory manufacturing steel rebars to reinforce concrete in the construction sector. The general workflow is as shown in the corresponding figure. Consider the truck loading process as an example.
The requirement is that the right rebar bundles are loaded into the truck in a specific disposal sequence, which needs to be well managed as the bundles are delivered and distributed to different sites. As a lack of specific items could impact the delivery dates, an improper loading disposal sequence may negatively affect the scheduled work in the unloading process. In the present situation, the responsibility for such a decision relies on the crane operators, who take charge of loading the rebar bundles into the truck buffer using a crane machine. To effectively assess this internal logistics, different data sources should be taken into consideration. Such an analysis requires knowledge about the order and manufacturing sequences, which are handled by the ERP system, MES, or PLM. Logistics information, such as where and when the items are loaded into the truck, is also required in order to understand whether the items were loaded and well placed in a proper storage buffer in the truck.
The movement trajectory/speed need to be considered, as analysis can be made to understand whether the loading process requires excessive physical efforts.When different stakeholders need to have access to specific process-related information, non-tampered data are required due to certification principles; therefore, an open system is a convenient tool, reducing the IT barriers and which is robust against facility ownership changes.The system prototype will adapt the proposal presented in Production related information from ERP/MES/PLM system;Ultra wide band (UWB) indoor positioning system (IPS) to track crane position and crane operator movements, to better define the location for rebar bundles; andSmart band to monitor crane operator\u2019s heart rate and blood pressure.Different devices, systems, and data sources composed the configured prototype, as indicated below:The MES manages sequence planning and the bottom-up data flow on the shop-floor . It provhttps://tracktio.com/) was used to track the crane hook and crane operators. Rebar bundle locations in different buffer areas of production or positions in the truck for customer delivery were derived from the dynamic behavior of the process. The position of crane operator was tracked as a reference for understanding the movement trajectory and speed. The IPS had its own data repository; the data were acquired from a web service.The UWB indoor positioning system (IPS) from the Tracktio\u2122 company ; then, it was reprocessed and transmitted to a MongoDB database located in the local cloud.For data interoperability, all data sources in this industrial scenario were semantically modeled, fostering their linkage to other domain knowledge. To this end, different existing domain ontologies were reused in this study. The data sources were in the three aforementioned sectors: production system, IPS, and individual wearable devices. 
The existing and shared/published ontologies for each sector were collected, and one was selected to model each data source in this study, according to the mapping of data structure. The details of available ontologies for reusability are listed in the corresponding table. To map the generated data sources in this industrial scenario, three ontologies were chosen for data modeling. The selection criteria were based on the degree of matching between the ontological schema and the data source structure. After deeper analysis of each ontology\u2019s schema, as listed in the corresponding table, the final selections were made. The mapping tool Karma (https://usc-isi-i2.github.io/karma/) provides a graphical user interface which automates the semantic modeling process. Karma learns to recognize the mapping of data to the chosen ontology classes and proposes a model that can generate JavaScript Object Notation for Linked Data (JSON-LD) for large data sets in a batch mode. JSON-LD is a lightweight Linked Data format, designed around the concept of \"context\", linking object properties in JSON to concepts in an ontology. The JSON-LD data type was selected as it is lightweight and interoperable at Web scale, while also providing embedded semantics. The data format is based on JSON and ontological schemas, maintaining a common space of understanding and supporting the evolution of schemas over time without requiring data consumers to change format. The data source, the applied ontology schema, and the transformed JSON-LD format (containing semantic annotation) are listed in the corresponding table. A public node (https://nodes.thetangle.org:443) was selected for data submission to the IOTA Tangle. The message sent to the IOTA Tangle is encrypted. A python script that implements the RSA (Rivest\u2013Shamir\u2013Adleman) public-key cryptosystem was used for encryption and validation of messages. PyOTA, an IOTA python API library, was used to implement data sending and retrieval to and from the IOTA Tangle. The source code can be found at . Data are stored in the IOTA Tangle.
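The JSON-LD \"context\" idea described above can be sketched with the standard library alone. The ontology IRIs and property names below are illustrative assumptions, not the ontologies actually used in the study.

```python
# Minimal sketch of the JSON-LD "@context" mechanism: plain JSON keys
# are linked to ontology concepts via IRIs. All IRIs here are
# illustrative placeholders, not the study's actual ontologies.
import json

message = {
    "@context": {
        "heartRate": "http://example.org/vitalsign#HeartRate",
        "position": "http://example.org/positioning#Position",
        "timestamp": "http://www.w3.org/2001/XMLSchema#dateTime",
    },
    "@type": "http://example.org/mes#ProcessEvent",
    "heartRate": 82,
    "position": {"x": 12.4, "y": 3.1},
    "timestamp": "2020-06-01T10:15:00Z",
}

payload = json.dumps(message)   # serialized string, e.g. for the Tangle
restored = json.loads(payload)  # a consumer recovers keys plus semantics
print(restored["@context"]["heartRate"])
```

Because the context travels inside the message, a consumer that only understands JSON still gets usable data, while a semantically aware consumer can resolve each key to its ontology concept.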
Related transactions could be looked up and fetched by tags or bundles. After data retrieval, a data message should be validated and decrypted by stakeholders, where the semantic meaning of the data is encoded in the message itself. This design fosters data reusability by learning processes and growing models, as they can select the appropriate meaning of data sets or ask for automatic preprocessing before accomplishing further transformations. The needs of the near-future dimensions of I4.0 require an increasing level of integration of data from different sources, including data from not only workers, but also from different providers and sources. This enforces PbD requirements, due to both company security rules and regulatory requirements. A major aspect of the GDPR is the so-called legal grounds for lawfully processing personal data, one of which is consent. In several IoT applications where consent is used, it may even need to be explicit consent. There still remain, after the GDPR, some problems with personal data, as follows: data consistency and the use of different sources of information and/or different time periods or geographical positions; data storage, as although properly scrambled, masked, or blurred, the related persons probably are not aware that such pieces of information exist about them; and problems related to EU citizens when requesting products or services outside of the EU. Thus, there are two types of accountability: accountability regarding users, but also accountability within the organization. The harder problem is the second one, mainly due to the absence of accountability: as a user never knows who is gathering information about them, they cannot ask for access, erasure, or modification. In our validation case, the delivery process to the customer has limited information from the process itself, as truck loading is mainly a human-driven activity.
Therefore, it risks exposure to significant time variability and to errors such as missed items or items wrongly included inside the truck. From a business perspective, adding transparency by maximizing the amount of automation involved is worthwhile for stakeholders. Furthermore, current private and centralized data management approaches limit the high-level integration of multi-modal data sources, as well as data ownership and re-usability. The deployed architecture, as proof of concept, was able to provide additional information, integrated in a convenient way in order to help to understand the type of items frequently creating problems, configurations of truck layout consistent with specific delivery sequences, the movements used to load particular item shapes, and so on. There exist potential contributions in the direction of empowering operators of I4.0. Obviously, to have significance in such an analysis, large periods of data collection and close integration are required. Indeed, path dependency does exist. Another critical enabling technology for data interoperability adopted in this study is semantic modelling and ontology engineering. Ontologies provide standardized definitions for different data sources and make reasoning and autonomous decision-making possible by formalizing the structure of the knowledge. Existing ontologies, i.e., the vital sign ontology, the positioning ontology, and the MES ontology, are reused in this study to accelerate the development. To obtain a higher level of interoperability, such ontologies can be further exploited and adapted to keep aligned with widely recognized upper-level ontologies such as BFO. As one of the fundamental enabling technologies of I4.0, the digital twin concept is briefly discussed in this study but not explored in detail, as it is not the main focus of this paper. Nevertheless, the proposed data integration and handling approaches are an important basis for the implementation of digital twins.
For example, similar approaches appear in some existing digital twin studies. In this paper, our main contributions were to extend and complement the I4.0 reference model LASFA, with reference to some more general ones, by adding additional data flows to complete the production process description and including different provider add-ons for digital twin construction. The proposed update enables higher-level integration and more efficient process description. By considering the PbD design for exchange of information over public DLT systems, it also accounts for information transparency and data ownership for related stakeholders. The adopted configuration, which enables the semantic enrichment of data, also makes it possible to implement rules enforcing interaction mechanisms and to derive new properties related to the employed ontologies. Although such a direction was not implemented in the proof of concept carried out, it may be useful for stakeholders accessing the data. The presented proposal benefits from the integration of semantically annotated data from different sources, including human health-related wearable device data, in an industrial context. When information related to people has been included, a signature layer is used to provide a PbD solution, in full accordance with the GDPR regulations and to enable positive-sum outcomes. The core idea of the proposed architecture is to enable the sharing of different obfuscated data streams over a public immutable network-oriented database with quick-answer capabilities and good scalability characteristics. The proposed architecture enables extensions, in order to facilitate higher levels of control and empowerment of data owners by integrating an explicit delegation of access to specific data sets and time windows through the use of web services. Considering a simple application case, the benefits from a business perspective appear evident, with the integration of different dimensions not previously considered.
There also exists additional value in the possibility of further performance analysis of procedures, when the proposed architecture is combined with artificial intelligence (AI) techniques. The theoretical implications of this research allow for a convenient framework providing a high level of control over the data produced by each agent, while also providing a platform to share data coherently with different business models. It also provides a flexible way to interconnect semantically powered data streams, which is a significant contribution for data-driven ML/AI applications. This paper also makes a point from a practical application perspective; in particular, by providing an interesting implementation of data handling strategies aligned with Industry 4.0 principles, enabling the integration of data coming from workers. Such tools can have a significant impact on advanced lean manufacturing and lean management implementations, as they go a step beyond classical digitization approaches. This could be the case for the standardization of formal communications under Lean Management (CPD). There are several aspects that provide us with further research paths, as we require long data collection approaches and the definition of consistent key performance indicators for individual applications. Furthermore, the definition of related business models for micro (smart contract-based) data monetization and traceability applications also provides avenues for further research. A clear limitation of the existing research is the adoption of a single and specific DLT technology; namely, IOTA. The proposal is not conceptually dependent on IOTA; it simply served as a driver to implement our architecture in the proof of concept.
However, some other technologies, such as Obyte, may be considered potential substitutes. Another limitation is the digital infrastructure required in the companies to successfully implement the digital twin concepts, which must be capable of managing the requirements of both internal and external stakeholders. Future research steps will be to deeply analyze the meaning of the integrated data in terms of value for business, looking to identify behavioral patterns or to create forecasting models, and to better adjust operation times and, most importantly, timely delivery. Another interesting perspective to be considered is that wearable devices, such as smart watches and smar"} +{"text": "Critically appraised topics (CATs) are evidence syntheses that provide veterinary professionals with information to rapidly address clinical questions and support the practice of evidence-based veterinary medicine (EBVM). They also have an important role to play in both undergraduate and post-registration education of veterinary professionals, in research and knowledge gap identification, literature scoping, preparing research grants and informing policy. CATs are not without limitations, the primary one relating to the rapid approach used which may lead to selection bias or restrict information identified or retrieved. Furthermore, the narrow focus of CATs may limit applicability of the evidence findings beyond a specific clinical scenario, and infrequently updated CATs may become redundant. Despite these limitations, CATs are fundamental to EBVM in the veterinary profession. Using the example of a dog with osteoarthritis, the five steps involved in creating and applying a CAT to clinical practice are outlined, with an emphasis on clinical relevance and practicalities. Finally, potential future developments for CATs and their role in EBVM, and the education of veterinary professionals are discussed.
This review is focused on critically appraised topics (CATs) as a form of evidence synthesis in veterinary medicine. It aims to be a primary guide for veterinarians, from students to clinicians, and for veterinary nurses and technicians. Additionally, this review provides further information for those with some experience of CATs who would like to better understand the historic context and process, including further detail on more advanced concepts. This more detailed information will appear in pop-out boxes with a double-lined surround, to distinguish it from the information core to producing and interpreting CATs and from the boxes with a single-lined surround, which contain additional resources relevant to the different parts of the review. Evidence-based veterinary medicine (EBVM) can be defined as the application of scientifically generated evidence to clinical veterinary practice, whilst synergistically incorporating the expertise of the veterinary professional, the specific features of the patient and the values of the owner. Most people will have heard of “literature reviews” or “narrative reviews.” They are typically written by experts who summarize a number of information sources, often peer-reviewed articles, on a particular area of interest and offer conclusions. They rarely control for bias or follow a specific methodology for identifying and selecting the sources that are included. Without these standards, the review may not cover the topic inclusively and the conclusions may support a specific agenda or view. Evidence syntheses (also known as “research syntheses” or “knowledge syntheses”) collect and summarize the available evidence using structured, transparent methods. Critically appraised topics (CATs) use the principles of systematic reviews (SRs) to minimize bias in gathering and appraising evidence, but do so much more quickly.
Evidence synthesis methods exist along a spectrum of brevity and detail; CATs are the quickest, SRs the lengthiest and most thorough, and other types fall in between. Publications describe the different types of evidence synthesis methods that have been used in health-related research. The CAT concept was developed by a group of internal medicine fellows at McMaster University, Canada, and later refined. The “quick and dirty” applied approach of a CAT makes it versatile and practical to translate to other disciplines, including physiotherapy, occupational therapy, dermatology, urology, radiology and nursing. CATs are primarily used in veterinary clinical practice to answer clinical queries resulting from specific cases or conundrums. CATs are also used in veterinary undergraduate and post-registration education. Other uses in veterinary medicine for CATs are those relevant to any structured review of the literature, including identification of knowledge/research gaps. For clinicians, it is useful to think of the CAT process in sequential steps or stages. The CAT process is explained in the steps below, using an example to highlight key points, with an overall summary of the example demonstrated below. Transforming a clinical question into a searchable query can be daunting. The PICO format is often illustrated as: In [patient group] does [intervention and comparator] result in [outcome]. The following clinical scenario will demonstrate the steps of the CAT process. You have been treating Miley, a 12-year-old Doberman, for osteoarthritis for the past two years. Her owners bring her in for a check-up. On clinical examination you find further reduction in her range of movement, and some signs of pain when you manipulate both of her hind limbs. She is currently on carprofen. Miley's owner asks about meloxicam, as one of the dogs at the park where he walks Miley receives it for a similar problem.
You wonder whether Miley may show a greater improvement in clinical signs if she is treated with meloxicam instead of carprofen. In this clinical scenario, the PICO question might be: P = Patient group (dogs with osteoarthritis); I = Intervention (meloxicam); C = Comparator (carprofen); O = Outcome (greater clinical improvement). In [dogs with osteoarthritis] does [meloxicam compared with carprofen] result in [greater clinical improvement]? It is possible that further defining the patient group (e.g. forelimb osteoarthritis vs. osteoarthritis) and the outcome would permit the evidence to be evaluated for applicability more specifically to the clinical case in front of the veterinary professional. By converting the scenario to a structured PICO format, a search strategy can be focused to answer the question, and appraisal of the evidence (see section below) can focus on the applicability as it relates to the specific question. For further information about searching see the box entitled “General references for defining a question.” General references for defining a question: De Brun C, Pearce-Smith N. Searching Skills Toolkit: Finding the Evidence. Oxford, UK: Wiley-Blackwell (2009). ISBN: 9781118463130. EBVM Learning “Ask” module (http://www.ebvmlearning.org/ask/). EBVM Toolkit 1. PICOvet website (https://pico.vet/index.html). Truncation symbols can also be used in the middle of terms to search for different spellings (e.g. “sterili$ation” could be used to represent both the English “sterilisation” and American “sterilization” spellings); this is termed a wild card. Consult the help documentation for each database searched for guidance. Another technique to help with searching inclusivity is truncating or stemming a search term.
This is indicated by the addition of a non-letter character, often a symbol such as an asterisk, to the stem of the word. Whilst it is important to identify outcome terms for the PICO, as these will assist in determining which of the results are most appropriate, they are often not included in the search. Results from a search of the Patient, Intervention, and Comparison typically yield a sufficiently small number of results that are easily and quickly assessed. Additionally, outcomes may not be clearly defined, it may be difficult to identify all relevant terms for outcomes, and the more concepts that are combined, the greater the risk of excluding a relevant article. Being as specific as possible with the “O” or outcome in the PICO is nonetheless useful and important in the appraisal phase of evidence reviews. “OR” is used when combining alternative terms within components, whilst “AND” is used when combining separate components (e.g. patient and intervention term lists) to assure that each component is present in the search results. Capitalizing “OR” and “AND” to denote them as search commands is best practice because it can affect the results returned in some search interfaces. Although the CAT methodology is quite structured, there is a degree of choice and flexibility in how the search is carried out, depending on the timespan available and the anticipated amount of evidence. To create a search that is broad (“sensitive”) yet relevant (“specific”), terms must be combined in an appropriate way. An additional consideration centres on the differing opinions as to whether the intervention and comparator components should be combined using the Boolean “OR” term. This permits citations to be identified if only one of the two components is mentioned in the abstract. Information specialists, or librarians, have specialist training and are highly skilled in generating searches that optimize the chances of identifying all relevant publications.
It is best practice to seek guidance from them, whether for training to conduct your own searches or as collaborators. The search strategy for the above scenario might appear as follows: (dog OR dogs OR canine OR canines OR Canis) AND (osteoarthritis OR osteoarthritic OR OA OR arthritis OR arthritic OR joint disease OR joint diseases OR Degenerative Joint Disease OR DJD) AND ((meloxicam OR Loxicom OR Metacam OR Inflacam OR Rheumocam OR Meloxidyl) OR (carprofen OR Rimadyl OR Canidryl OR Carprodyl OR Rimifin OR Carprieve OR Novox OR Vetprofen)). Use of AND allows papers to be identified that contain terms from all components of the search, identifying the most relevant citations. Once a search strategy has been created, searching can commence within a literature or bibliographic database. These differ from searching the internet using a search engine (e.g. Google or Google Scholar) in two important ways. Bibliographic databases contain journal articles that are not generally available online or accessible via internet search engines. Coverage by internet search engines is not transparent and changes frequently. A number of bibliographic databases exist. Research suggests at least two databases should be searched, including CAB Abstracts, since it is the most comprehensive database for veterinary topics. For those employed at a university or corporation, check with your information specialists or librarians to find the databases available to you. For those not affiliated with an institution, collaboration with individuals at universities, or obtaining practice or individual subscriptions to databases (e.g. VetMed Resource), is useful. Search results may be improved by the inclusion of standardized terms in the search.
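A minimal sketch of how such a strategy can be assembled (OR within each component, AND between components); the helper function and term lists here are illustrative only, not a tool described in this review:

```python
# Illustrative sketch: build a Boolean search string from PICO term lists.
# Synonyms are joined with OR inside each component; components are joined
# with AND so every concept must appear in the results.
def boolean_query(*components):
    grouped = ["(" + " OR ".join(terms) + ")" for terms in components]
    return " AND ".join(grouped)

patient = ["dog", "dogs", "canine", "canines", "Canis"]
condition = ["osteoarthritis", "osteoarthritic", "OA"]
drugs = ["meloxicam", "carprofen"]

print(boolean_query(patient, condition, drugs))
# -> (dog OR dogs OR canine OR canines OR Canis) AND
#    (osteoarthritis OR osteoarthritic OR OA) AND (meloxicam OR carprofen)
```

Real database interfaces layer field tags, truncation symbols and subject headings on top of this basic OR/AND structure.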
These standardized terms are known as subject headings. General references about using subject headings: EBVM Toolkit 2 (https://knowledge.rcvs.org.uk/document-library/ebvm-toolkit-2-finding-the-best-available-evidence/); EBVM Learning Acquire (http://www.ebvmlearning.org/acquire/); PubMed for Veterinarians (https://www.tamucet.org/product/pubmed-for-veterinarians/). Citations may be excluded if they do not contain evidence of research methodology (e.g. narrative reviews) or are carried out in a non-applied setting. If few results are returned, the search can be broadened to “widen the net.” If this is not successful, the process of a traditional CAT ends here. Some published CATs include searches that don't return any citations, to demonstrate evidence gaps. In the example scenario, a MEDLINE search returned one relevant citation; other papers were narrative reviews, conference proceedings or in vitro research, and 333 were excluded because they did not meet all components of the PICO question. A CAB Abstracts search returned 412 citations, one of which was relevant (the same paper as in the MEDLINE search). One paper was excluded as it was not in English, nine as they were narrative reviews, conference proceedings or related to in vitro research, and 401 were excluded because they did not meet all components of the PICO question. This left a total of one relevant paper from the two database searches, Moreau et al. Resources for appraisal: CEVM website; EBVM toolkit, RCVS Knowledge (https://knowledge.rcvs.org.uk/evidence-based-veterinary-medicine/ebvm-toolkit/). Dean RS. How to read a paper and appraise the evidence. In Practice. (2013) 35:282–5. doi: 10.1136/inp.f1760. Downes MJ, Brennan ML, Williams HC, Dean RS. Development of a critical appraisal tool to assess the quality of cross-sectional studies (AXIS). BMJ Open. (2016) 6:e011458. doi: 10.1136/bmjopen-2016-011458. Moberly HK. How to read and appraise veterinary articles. Texas Vet. (2019) 81:54. uri: 1969.1/178285. Pinchbeck GL, Archer DC. How to critically appraise a paper. Equine Vet Educ. (2020) 32:104–9. doi: 10.1111/eve.12896.
Medicine: Centre for Evidence Based Medicine (https://www.cebm.net/2014/06/critical-appraisal/); CASP (https://casp-uk.net/casp-tools-checklists/); Joanna Briggs Institute; How to read a paper series, British Medical Journal (https://www.bmj.com/about-bmj/resources-readers/publications/how-read-paper). Crombie IK. The Pocket Guide to Critical Appraisal. Greenhalgh T. How to Read a Paper: the Basics of Evidence-Based Medicine. 5th ed. Chichester, UK: Wiley-Blackwell (2014). While specific questions need to be answered based on the study's design, there are key, easy questions that should be asked of all study types. These are: Does this study address a clearly focused question? Did the study use valid methods to address this question? Are the valid results of this study important? Are these valid, important results applicable to my patient or population? The Centre for Evidence-Based Medicine takes a time-efficient approach to these questions, saying that if the answer is no to any of them, clinicians should avoid reading the rest of the paper as it is not relevant. Veterinary professionals can worry that appraisal will be too difficult and may need advanced understanding of statistics. In reality, critical appraisal relies on the application of common sense in conjunction with an appraisal template, with much of the focus on the study design, not the statistics. For example, of the 27 questions posed in the randomised controlled trial (RCT) critical appraisal sheet developed by the Centre for Evidence-based Veterinary Medicine (CEVM), only four relate to statistical calculations. In the given scenario, the paper that was identified was a randomised controlled trial. Prior to the study commencing: ∘ There was no assessment of how many animals would be required. Once the study had commenced: ∘ The study focused on dogs weighing more than 20 kg and older than 18 months of age, with radiographic evidence of osteoarthritis in a range of joints.
Subjects were excluded if there was a history of other types of musculoskeletal comorbidities. ∘ Outcomes measured were owner activity and pain scores, clinician orthopedic examination score, ground reaction force gait analysis and biochemical, haematological and faecal assessments. ∘ Baseline characteristics and clinical characteristics of the subjects were not reported. ∘ Aggregated results were reported for most but not all parameters; it was difficult to determine basic results as a consequence. ∘ There were no statistically significant improvements in owner score compared to pre-treatment scores, except in a subset of dogs with stifle disease in the Metacam group (n = 6) who showed an improvement at day 30 only (not at day 60), and in selected ground reaction force measures compared to pre-treatment scores. There was no statistically significant difference between the performance of the two treatments. The last part of the process is an overall assessment of all the evidence appraised. There is no standard way of amalgamating results from appraisals in the CAT format. In the given scenario, the study weaknesses were felt to be substantial enough to conclude it was not possible to answer the clinical question. The clinical bottom line was that there was insufficient evidence to demonstrate a difference in relation to the greatest clinical improvement between the performance of meloxicam or carprofen in dogs with osteoarthritis. For an overall summary of the example CAT provided here, refer back to the summary given earlier. Production of the CAT can be carried out by more than one author, to increase the robustness of the process. There are a number of excellent examples of CATs and resources available to help facilitate the construction of CATs, both in the medical and veterinary fields.
This section will focus on published examples of CATs, collections of existing CATs, and website resources that can be utilized to construct CATs. The applied nature of CATs means that many of the most useful “how to” resources are not published in peer-reviewed journals, but on university webpages, open-access online tutorials or online databases. Over time there have been a number of medical CAT databases in existence; in 2005 there were at least 13 different places where medical CATs appeared. One long-standing example is BestBETs (www.bestbets.org). This database was constructed by emergency clinicians working at the Manchester Royal Infirmary in the UK, in response to a lack of high-quality evidence for some of what was seen regularly in emergency care. The formats differ in how the “review” component of each occurs, but they essentially follow the same process. The advantage for veterinary professionals is that there are several CAT collections available to utilise for decision making in clinical practice. The collections of veterinary CATs available at the time of article preparation are listed alphabetically in the accompanying table. Published examples of veterinary CATs: There are several good examples of veterinary CATs that have been published in the literature. Two can be seen here, both of which are free to view. These examples demonstrate a contrast in relation to the types of question and approaches that can be used under a CAT format. Finka LR, Ellis SLH, Stavisky J. A critically appraised topic (CAT) to compare the effects of single and multi-cat housing on physiological and behavioral measures of stress in domestic cats in confined environments. BMC Vet Res. (2014) 10:73. doi: 10.1186/1746-6148-10-73. This CAT contributed to the development of welfare guidelines for unowned cats. Olivry T, Mueller RS, Prelaud P. Critically appraised topic on adverse food reactions of companion animals (1): duration of elimination diets. BMC Vet Res. (2015) 11:3. doi: 10.1186/s12917-015-0541-3. Useful web sources: Medicine: “How to” resources: Centre for Evidence Based Medicine CATmaker (https://www.cebm.net/2014/06/catmaker-ebm-calculators/); Physiopedia; Healthy Feet website; BMC adverse food reaction CATs. If different terms are used to describe diseases/conditions/procedures, then it is more likely a CAT author from a different part of the world may miss a relevant publication. For example, the term “tup” can be used to describe a male sheep in the UK; in other countries this term is not generally used. The majority of known CAT collections in veterinary medicine are published in English, and to the authors' knowledge, none of the reviews in these databases go to the extent of searching for non-English publications for inclusion. This is a distinct limitation. For relevant studies to be identified, published research must be indexed correctly. Information specialists rely on authors identifying the most appropriate key words for their publication and ensuring the most important terms are included in the title and abstract. It also depends on the terminology used to describe disease conditions or procedures. Additionally, depending on the database in question, some of the indexing of veterinary-related publications is done by personnel who may not necessarily be familiar with some of the conditions that afflict animals. Automated indexing systems can both omit relevant subject headings from a record, which can impact retrieval, and include erroneous subject headings. All of the above can impact whether specific publications are returned after a structured search has been performed. There are sometimes misconceptions by veterinary professionals in relation to these clinically relevant reviews of the literature, analogous to those held by some medical professionals in relation to clinical guidelines.
Excluding educational purposes, the role of the CAT in medicine appears to have been superseded by SRs, which are often used as the basis for clinical guidelines for medical practitioners. With the creation of more CAT collections in the veterinary sphere, professionals have a growing evidence base to draw on. For busy practitioners, having numerous different CAT collections to search across is suboptimal. In the future it may be that provision of software, such as the “CAT crawler,” would overcome this. The CAT framework is still a current and useful process for veterinary professionals to use primarily for evidence-based clinical decision making and for undergraduate and post-registration training. With the provision of new CAT collections that can be utilized often at no cost, there are good options available for those in clinical practice who do not yet have the skills to generate CATs themselves. All veterinary professionals, with regular practice, have the ability to successfully navigate the CAT process. However, time must be given to those in clinical practice for the development of these skills so that more CATs can be generated, facilitating excellent evidence-based care of clients and their animals. MB, LC, HD, and LM were involved in creating the framework for the manuscript. MB, SA, ZB, LB, LC, HD, VF, DG, HM, LM, JS, and CW contributed to acquisition of data (publications) for the work. MB wrote the draft manuscript. MB, SA, ZB, LB, LC, HD, VF, DG, HM, LM, JS, and CW contributed to editing the manuscript, and read and approved the final manuscript. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest."}
{"text": "The aim of the present study was to determine the effect of dietary hybrid barley and/or wheat on production parameters and selected biochemical parameters of blood serum characterizing health status in fattening pigs.
The use of hybrid barley as the basic ingredient of diets for fattening pigs provided production parameters similar to those obtained with wheat. No significant differences were noted in performance results or meatiness of fatteners. However, a high dietary level of hybrid barley decreased the levels of total cholesterol and the low-density lipoprotein (LDL) fraction in blood, meaning that barley had a beneficial effect on blood lipid indices. The aim of the study was to determine the effect of dietary hybrid barley and/or wheat on production parameters and selected biochemical parameters of blood serum characterizing health status in fattening pigs. In group I, hybrid barley constituted 80% of the feed; in group II, wheat and hybrid barley were each used at 40% of the feed; in group III, the feed contained 80% wheat. No significant differences were noted in performance results or meatiness of fatteners. All estimated biochemical indices determined in serum were within the normal range. Usage of 80% hybrid barley decreased the concentration of total cholesterol, the low-density lipoprotein (LDL) fraction, and triglycerides in blood (p < 0.05). However, high-density lipoprotein (HDL) fraction content increased (p < 0.01) up to 1.04 mmol·dm−3, compared to the group with 80% wheat (0.84 mmol·dm−3). Summarized, the diet with a high level of barley had a beneficial effect on blood lipid indices, which indicates a good health status of all animals. Therefore, development of new swine breeds, as well as changes in the preferences of the meat processing industry, prompted feed science to evaluate hybrid grains in the feeding of fattening pigs. Barley is a basic and, to a large extent, indispensable ingredient of feeds for fattening pigs. Wider barley use in stock feeding is above all a consequence of improvement in its yields.
Breeding works have led to the development of hybrid barley characterized by higher and more stable yields under different cultivation and environmental conditions, and by better technological quality parameters. Field observations show that under good weather conditions, the yield of hybrid barley is even 14% higher compared to conventional varieties. Moreover, hybrid varieties are characterized by a higher content of crude protein. They also have an only slightly lower energy value than wheat, because hybrid barley contains less crude fiber than conventional varieties. The nutritional experiment was carried out at a private pig farm. The study was conducted on 144 fattening pigs (Polish large white × Polish landrace crossbreds), trying to keep a 1:1 sex ratio. Fattening lasted 78 days, from an average body weight of 55 kg to 120 kg. During the experiments, the animals were kept in group pens equipped with nipple drinkers. A semi ad libitum feeding system was used. Hybrid varieties of barley (Hyvido™) were used in the study. The animals were randomly assigned to three experimental groups of equal size, differing in the type of complete diet used for feeding, which was prepared based on hybrid barley and/or wheat. In group I, the diet contained 80% hybrid barley. Group II was given equal amounts of wheat and hybrid barley (40% each). The diet for group III contained 80% wheat. The experimental design is illustrated in the accompanying scheme. Before preparation of the experimental diets, chemical analysis of hybrid barley and wheat was performed in order to determine their nutritional value and amino acid index. The energy value of the cereals was calculated based on digestibility coefficients listed in the Nutrient Requirements for Swine. The following production data were collected during the course of the experiment: body weight (BW) of fatteners (at the beginning and the end of the experiment), feed intake (FI), and animal losses.
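From the body weight and feed intake records just listed, the derived production measures are computed with the standard formulas below; the worked numbers are illustrative values close to those reported later, not the study's raw data:

```python
# Standard production formulas (sketch); input values are illustrative only.
def average_daily_gain(initial_bw_kg, final_bw_kg, days):
    """ADG in grams per day over the fattening period."""
    return (final_bw_kg - initial_bw_kg) * 1000 / days

def feed_conversion_ratio(total_feed_kg, initial_bw_kg, final_bw_kg):
    """FCR: kg of feed consumed per kg of body-weight gain."""
    return total_feed_kg / (final_bw_kg - initial_bw_kg)

# Fattening from 55 to 120 kg over 78 days, with ~182 kg feed per pig:
print(round(average_daily_gain(55, 120, 78)))        # -> 833
print(round(feed_conversion_ratio(182, 55, 120), 2)) # -> 2.8
```

These magnitudes match the results reported below (ADG above 800 g; FCR between 2.77 and 2.82).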
Then average daily gains (ADG) and feed conversion ratio (FCR) were calculated. The lean meat content in the pig carcass was measured on the slaughter line with the IM-03 apparatus as part of the post-slaughter classification of pork. The measurement was taken at one point, between the 3rd and 4th ribs. Meatiness was determined from the fat and muscle thickness data. The slaughter yield was calculated as the percentage of meat content in the carcass relative to total BW. On the last day of the fattening period, blood was collected from the external jugular vein (vena jugularis externa) directly into Sarstedt-type tubes embedded with a clotting activator. Subsequently, the samples were centrifuged using the MPW-223e laboratory centrifuge. The blood samples were collected from 10 randomly selected animals from each group in order to determine the contents of total cholesterol, high-density (HDL) and low-density lipoprotein (LDL) fractions, and triglycerides. The ABX PENTRA Cholesterol test, based on an enzymatic photometric Trinder's reaction, was used for quantitative diagnostic determination of cholesterol (CP reagent), high-density lipoprotein cholesterol (HDL Direct CP reagent), and low-density lipoprotein cholesterol (LDL Direct CP reagent). The ABX PENTRA Triglycerides test (CP reagent) was used for quantitative determination of triglycerides based on an enzymatic colorimetric assay. In addition, biochemical parameters characterizing animal health status were analyzed. The serum proteinogram was obtained according to standard methods. The BCA (bicinchoninic acid) Protein Assay Kit was used for quantitation of total protein (TP), and its fractions were determined using filter paper electrophoresis. Serum glucose and urea levels were measured with an enzymatic test using Biosystem S.A. reagents. Differences were considered significant at p < 0.05 and p < 0.01.
For the analysis, the following experimental model was used: y_ij = µ + a_i + e_ij, where y_ij is the value of the observed dependent variable, µ is the population mean, a_i is the influence of the treatment, and e_ij is the influence of random factors. All numerical data, as mean values for each pen, were evaluated statistically by one-factor ANOVA using the Statistica 12 programme. The chemical composition of wheat and hybrid barley is presented in the corresponding table. The amino acid index of hybrid barley amounted to 65 and was higher than that of wheat, which reached a value of 59. The levels of the most important exogenous amino acids were comparable. Hybrid barley contained 0.4 g·kg−1 more tryptophan, while wheat was higher in crude protein by 9 g·kg−1 and in lysine content by 0.35 g·kg−1. On the other hand, the high barley content in the diet resulted in increased fiber and tryptophan levels. The amino acid levels satisfied the requirements of fattening pigs. The diets were characterized by a balanced, correct lysine-to-metabolizable-energy ratio. In addition, the contents of exogenous amino acids relative to lysine (assumed to be 100) agreed with nutritional recommendations. The nutritional value of the diet is presented in the corresponding table. ADG in all the groups amounted to above 800 g (±12.77) and was not statistically significantly different between treatments (p = 0.085). In the present experiment, FI (per one fattening pig) was similar and ranged from 179 to 184 kg per head; the obtained result was not dependent on the applied diet. FCR per 1 kg of body weight gain (BWG) was from 2.77 to 2.82 kg and did not differ significantly (p > 0.05). The fattening pigs fed the diet containing 80% wheat showed a slightly lower (ca. 2%) FCR. The lower FCR can be attributed to a higher energy concentration in the diet. The level of nutrition has a significant impact on the animals' weight gains.
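As an illustration of the one-factor ANOVA model described above (y_ij = µ + a_i + e_ij), the F statistic can be computed by hand; the pen-mean values below are invented for demonstration and are not the study's data:

```python
# Minimal one-way ANOVA F statistic for the model y_ij = mu + a_i + e_ij.
def one_way_anova_f(groups):
    n = sum(len(g) for g in groups)              # total observations
    k = len(groups)                              # number of treatments
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)            # treatment effect a_i
    ss_within = sum((x - sum(g) / len(g)) ** 2
                    for g in groups for x in g)  # random error e_ij
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented pen-mean ADG values (g/day) for the three diet groups:
diet_80_barley = [820, 805, 812, 798, 815]
diet_40_barley_40_wheat = [810, 825, 808, 819, 804]
diet_80_wheat = [830, 812, 826, 809, 821]

f_stat = one_way_anova_f([diet_80_barley, diet_40_barley_40_wheat, diet_80_wheat])
print(round(f_stat, 2))
```

The computed F would then be compared against the F distribution with (k - 1, n - k) degrees of freedom at the chosen significance levels (p < 0.05, p < 0.01).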
Analysis of production performance data showed no significant differences between treatments (only slightly greater gains (ca. 2%), slightly lower FCR (ca. 2%) and slightly higher meatiness (ca. 0.5%) with increasing wheat content in the diet). In the present study, relatively high meatiness was achieved at a slaughter weight of ca. 120 kg. Serum lipid indices are presented in the corresponding table. Total cholesterol concentration was lowest in group I (p < 0.05), compared with groups II and III, in which it ranged from 2.37 to 2.49 mmol·dm−3. The low-density lipoprotein (LDL) fraction also varied significantly between groups I and III (p < 0.05): in groups I and II it ranged from 0.50 to 0.59 mmol·dm−3, while in group III it reached 0.72 mmol·dm−3. The obtained results indicate conspicuous relationships resulting from the use of different basic energy sources, i.e., hybrid barley and wheat, in pig feeding. Serum concentration of triglycerides also decreased significantly with increasing hybrid barley content in the diet; the highest level of this component in serum was noted in group III, at 1.18 mmol·dm−3, while in group I it amounted to 1.05 mmol·dm−3. In the present study, HDL fraction content increased with a higher share of hybrid barley in the feed dose; in group I the HDL level reached 1.04 mmol·dm−3, compared with 0.84 mmol·dm−3 in the group with 80% wheat (p < 0.01). For the remaining indices, no significant differences were noted between the different basic energy sources (p > 0.05). The nutritional values of hybrid barley and wheat were within the standard range recommended by the Nutrient Requirements for Swine. The amino acid (AA) content showed that the tryptophan (Trp) concentration was higher compared with wheat grains. Tryptophan is an indispensable AA that often limits pig growth. Dietary Trp is a precursor of serotonin synthesis, which is responsible for feed intake. The prepared diets were characterized by parameters complying with the experimental objectives.
The nutritional value of the diets remained in agreement with valid nutritional recommendations. Different cultivars and agronomic conditions cause variation in the nutrient composition of wheat and barley grain, which could affect growth performance in pigs. Complete replacement of wheat by barley in pig diets has generally reduced growth and feed intake; in one report (p < 0.001), gilts fed a wheat-based diet tended to grow 11% faster (p = 0.008) than gilts fed a barley diet. In the present experiment, no significant differences were noted between treatments with respect to ADFI and FCR. The use of a hybrid barley-based diet in pigs can be designed to obtain performance results comparable to those of animals fed a wheat-based diet. Wheat grains are usually used for the production of a feed mixture of very high nutritional value. This means that by replacing wheat with hybrid barley it is also possible to reach a high-quality nutritional value of the feed mixture. The slight difference in energy level between the diets (12.7 vs. 13.2 MJ ME) did not affect FCR, although it is known that a lower FCR can be attributed to a higher energy concentration in the diet, as found for example with corn-based feeds. Analysis of the production performance data showed no differences in weight gain and meatiness with increasing content of wheat in the diet; similar results were reported by others, including a feeding trial by Banaszkiewicz et al. This could indicate that keeping the Lys/ME ratio, which was at the same level in the present study, allowed similar pig growth performance to be maintained.
In spite of a slightly lower level of crude protein in the diet with 80% hybrid barley compared with 80% wheat grain (155 vs. 164.5 g\u00b7kg\u22121), using feed mixtures of high quality and high nutritional value, irrespective of the grain source, allowed good meatiness to be obtained without any difference between groups. A comparable relationship was observed by other authors.

Total blood cholesterol content depends mostly on genetic traits, diet, and endogenous synthesis of this compound in the liver. The application of barley in swine diets usually increases the dietary fiber content. Barley contains \u03b2-glucans (a soluble dietary fiber fraction), which are easily fermented by the gut microflora and stimulate the production of butyrate, the main source of energy for colonocytes, which moreover affects the health status of the animals. Studies in humans and animals have revealed that barley specifically reduces total cholesterol and LDL lipoprotein content. The collected results showed that a high level of hybrid barley in the diet had a beneficial effect on the lipid indicators in blood serum, and the diet containing 80% hybrid barley indicated good animal health status. This means that the level of cholesterol and its fractions can be modified by the diet and is correlated with the amount of dietary barley.

All estimated biochemical indices determined in serum were within the normal range. Animals fed reduced energy and protein levels in feed have shown a tendency towards lower total protein, albumin, and globulin content. Dietary \u03b2-glucans may also reduce protein fermentation.

The use of 80% hybrid barley as the basic ingredient of diets for fattening pigs (from 55 to 120 kg body weight) provided production parameters similar to those obtained with 80% wheat. Data collected in all the experimental groups were similar (daily gain of ca. 800 g, feed conversion of 2.81 kg\u00b7kg\u22121, and meatiness of ca. 54%). All the animals were characterized by good health status. In addition, the diet with a high level of barley had a beneficial effect on blood lipid indices. Therefore, hybrid barley can be recommended for use as the only cereal ingredient in diets for fattening pigs."} +{"text": "Online teaching for medical students is not an unusual tool in medical education: alongside clinical placements, medical students are familiar with online teaching platforms used by various members of the faculty. However, examining medical students in their own homes during the Covid-19 pandemic is a new and necessary approach. It is vital that medical students continue to be examined, as this establishes the attainment of the curriculum learning outcomes. As medical students in our penultimate year of study, our teaching program has been significantly impacted by the global pandemic.

Following the cancellation of clinical placements, medical students were forced to adjust quickly to learning entirely from home. Medical schools were quick to implement online lectures and teaching opportunities despite the global pandemic, and clinicians were able to continue teaching using online platforms such as Zoom. However, there was apprehension among medical students, and questions were raised regarding how examinations were to take place.

Various factors had to be considered regarding the medical school examinations. In addition to establishing whether the medical students had attained the curriculum learning outcomes, the marks were to be used to calculate the Educational Performance Measure (EPM) and were of important value to those wishing to apply for an Academic Foundation Programme (AFP).
Furthermore, any previously arranged Objective Structured Clinical Examinations (OSCEs) had to be cancelled or postponed due to the social distancing measures. The lack of assessment of clinical skills, in addition to the lack of clinical exposure, has led to rising student anxiety and an eagerness to return to placement when it is safe to do so. For now, we can only predict how this will impact the future preparedness of junior doctors.

At King\u2019s College London, our medical examination went ahead as planned, but was completed as an open-book examination (OBE) from home via an online system. This format paralleled that of Imperial College London, whose final-year medical students had recently sat their final exams during the Covid-19 pandemic.

Online OBEs are something that students across the world are preparing for during the Covid-19 pandemic. It is an unusual feeling to sit an important examination from your own home. Nonetheless, we felt the exam tested our knowledge and closely simulated the reality of a patient presenting to you: there was not enough time to process all the information given and look up every question. The exam was well designed to assess your ability to assimilate all the information given to you and to reach a conclusion, thereby testing your knowledge and problem solving and not your ability to Google."} +{"text": "It is suggested that programming of the immune system starts before birth and is shaped by environmental influences acting during critical windows of susceptibility in human development. Prenatal and perinatal exposure to physiological, biological, physical, or chemical factors can trigger permanent, irreversible changes to the developing immune system, which may be reflected in the cord blood of neonates.
The aim of this narrative review is to summarize the evidence on the role of the prenatal and perinatal environment, including season of birth, mode of delivery, exposure to common allergens, a farming environment, pet ownership, and exposure to tobacco smoking and pollutants, in shaping the immune cell populations and cytokines at birth in humans. We also discuss how reported disruptions in the immune system at birth might contribute to the development of asthma and related allergic manifestations later in life.

Non-communicable diseases (NCDs) are predominantly chronic diseases that include metabolic and cardiovascular diseases, cancer, autoimmune conditions, neurological disorders, chronic lung disease, asthma, and other allergic diseases. Typically, these diseases share common features: early-life exposure to environmental agents, chronic low-grade inflammation, and immune disturbance during development. Growing evidence shows that environmentally induced disruption of normal immune system development may play a significant role in the current global epidemic of NCDs, whose prevalence has increased dramatically worldwide in recent decades.

In this review, we summarize the current evidence on the role of the prenatal and perinatal environment in shaping the human immune system at birth, including the influences of physiological, biological, physical, and chemical factors. In particular, we focus on neonatal immune cell populations and cytokines and discuss how reported disruptions at birth might contribute to the development of asthma and related allergic manifestations later in life.

The immune system is a complex network of cells, proteins, tissues, and organs that defends the host against microbes and molecules recognized as foreign, and ultimately protects against disease.
In humans, there are two principal subsystems: the innate and the adaptive (or acquired) immune system, each of which comprises cellular and humoral components to perform its functions. Leukocytes, the cellular component of the immune system, are divided into myeloid and lymphoid cells. Myeloid leukocytes are the main cellular components of the innate system, which includes granulocytes, monocytes, macrophages, mast cells, and dendritic cells (DC).

Five major maturational periods of immune system development are described as critical windows of vulnerability. A complex interplay between environmental exposures acting during these critical windows of development in early life and genetic susceptibility likely contributes to the occurrence of asthma and allergy.

Season of birth may influence the occurrence of asthma and allergic manifestations later in life: children born in autumn have a higher risk of asthma compared with those born in spring. Mode of delivery has also been identified as an important factor in the occurrence of asthma and allergy. Cesarean section, whose rate has increased in parallel with the prevalence of childhood asthma over the past decades, is related to an increased risk of wheeze up to school age. Prenatal and postnatal early-life exposure to common allergens and mold has also been implicated. Maternal smoking during gestation is associated with an increased risk in the offspring of respiratory infections and wheezing, as has in utero exposure to toxic metals. Prenatal exposure to persistent organic pollutants (POPs) such as organochlorine pesticides has been associated with an increased risk of wheeze and asthma later in life. Finally, growing evidence has shown prenatal exposure to air pollutants, primarily NO2 and particulate matter (PM2.5 and PM10), to be associated with the occurrence of wheezing.
Such prenatal exposures have also been associated with asthma and eczema during childhood.

A literature search was conducted by two independent reviewers (AMG-S and EM) in MEDLINE (via PubMed) through September 2020. The search strategy used the following keywords: outcome terms (\u201cleukocytes\u201d OR \u201clymphocytes\u201d OR \u201cimmune system\u201d OR \u201cTh1\u201d OR \u201cTh2\u201d OR \u201ccytokines\u201d OR \u201cIgE\u201d) combined with \u201ccord blood\u201d and with keywords for the exposures. Limits: Human, English. Identification and first screening of the articles were performed using the information available in the title and abstract. Potentially relevant studies were retrieved in full text and assessed for eligibility; any discrepancies were resolved by discussion between the two independent researchers or with a third review author.

The selection criteria were: (a) article written in English; (b) original research article based on an epidemiologic study performed in human individuals; (c) outcome assessment included phenotyping of immune system cells or cytokine profile patterns assessed in cord blood of newborns; and (d) assessment of season of birth, mode of delivery, farming exposures, pets, common allergens, tobacco smoking, persistent and non-persistent pollutants, toxic metals, and outdoor air pollution as exposures during pregnancy or around birth. After screening of the retrieved articles, 78 articles met our inclusion criteria, including 49 cohort studies, 28 cross-sectional studies, and 1 retrospective study conducted between 1979 and 2020.

Immune cell distributions and cytokines in cord blood have been examined in relation to season of birth. Some studies have reported that neonates born during spring or summer had significantly lower production and response of cytokines, including TNF-\u03b1, IFN-\u03b3, IL-5, and IL-10.
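The boolean structure of the search strategy described above can be written out explicitly. A minimal sketch: only the outcome terms quoted in the text are used, and the exposure keywords, which are not reproduced in this excerpt, are left as a placeholder:

```python
# Assemble the MEDLINE/PubMed boolean query described in the methods:
# (outcome terms ORed) AND "cord blood" AND (exposure terms ORed).

outcome_terms = ["leukocytes", "lymphocytes", "immune system",
                 "Th1", "Th2", "cytokines", "IgE"]
exposure_terms = ["..."]  # placeholder: the exposure keywords are elided here

def or_group(terms):
    """Quote each term and join with OR inside parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = " AND ".join([or_group(outcome_terms), '"cord blood"',
                      or_group(exposure_terms)])
print(query)
```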
An increased number of leukocytes in cord blood of neonates is associated with vaginal delivery. In addition, mode of delivery may alter cytokine patterns at birth: compared with those delivered by cesarean section, neonates born by vaginal delivery had an enhanced innate response, showing higher IL-1\u03b2, IL-6, and IL-8 levels but lower granulocyte colony-stimulating factor (G-CSF) levels.

The impact of farming-related exposures, pets, and indoor allergens on the immune system at birth has been investigated in diverse studies. Several studies have investigated cord blood immunoglobulin E (IgE), a key feature in allergic manifestations, in relation to prenatal exposure to common allergens; total IgE levels decreased in cord blood from newborns whose mothers had been exposed to dogs. A prenatal farm environment is related to increased proinflammatory cytokines (TNF-\u03b1 and IL-6) in cord blood of neonates. In addition, farm environments could reduce a Th2-related response in newborns: prenatal exposure to stables was related to a decreased dust mite allergen-induced IL-5 response.

Exposure to maternal tobacco smoking during pregnancy has been associated with a reduction in neutrophils. Diverse studies have also investigated the effects of organic pollutants on prenatal immune system development. Furthermore, prenatal exposure to organic pollutants has been associated with impaired immunoglobulin contents in cord blood plasma of neonates, including decreased IgM and increased IgG levels.
Several studies have associated exposure to organic compounds during pregnancy with changes in cord blood cytokine patterns.

Diverse epidemiological studies have examined the effects of prenatal exposure to toxic metals on immune system biomarkers in cord blood of neonates. Effects of prenatal exposure to metals on cytokine patterns at birth have also been investigated; in utero exposure to lead and chromium has been associated with higher IL-13 levels in cord blood.

Studies have examined the relationship between prenatal exposure to outdoor air pollutants and distributions of immune cells in cord blood of neonates. Higher NO2 concentrations 14 days before delivery have been associated with decreased counts of leukocytes, neutrophils, and monocytes. Moreover, higher levels of NO2 derived from traffic during the first trimester of pregnancy were associated with decreased counts of leukocytes, lymphocytes, monocytes, and basophils.

Associations between prenatal exposure to outdoor air pollutants and the distribution of NK cells in cord blood are inconsistent. Some studies have shown reduced NK cells in cord blood associated with exposure to PAHs and PM2.5 during early gestation (first trimester), to NO2 and PM2.5 in early and late gestation, and to PM10 during the first trimester of pregnancy; however, other studies found no such associations for PM2.5, NO2, and O3.

As for T lymphocytes, results are also inconsistent. Reduced T cells have been found in newborns living in a highly PM-polluted area, and in association with exposure to PM2.5 in late pregnancy, NO2 in early pregnancy, PM10 during the whole pregnancy, and PM10 and NO2 15 days before delivery.
Most studies have reported a reduction in Tc cells in newborns associated with prenatal exposure to higher levels of outdoor air pollutants, including short-term exposure in later pregnancy to PAHs and PM2.5, and to PM2.5 derived from traffic; however, no such association was found for NO2 concentrations during the first and second trimesters of pregnancy.

The influence of prenatal exposure to air pollutants on cytokines in cord blood of neonates has been poorly examined. Reported findings include altered cytokine levels in relation to higher PM10 concentrations before birth, decreased levels of IL-10 in relation to higher PM10 concentrations in the last trimester of pregnancy, higher NO2 levels associated with increased IL-33 and thymic stromal lymphopoietin (TSLP) concentrations in cord blood of neonates, and increased IgE levels among female neonates exposed to higher PM2.5 levels during pregnancy.

In summary, season of birth, mode of delivery, and prenatal exposure to common allergens and chemicals, including tobacco smoking, organic pollutants, metals, and outdoor air pollutants, may impair distributions of immune system cells, as well as alter immunoglobulins and cytokine patterns in cord blood of neonates. Winter birth is associated with increased leukocytes, NK cells, and activated Th cells. Cord blood of neonates delivered vaginally shows increased leukocytes and some of their subsets. A prenatal farming environment is related to a lower IgE response to seasonal allergens, among other changes. Maternal smoking during pregnancy impacts both the innate and adaptive immunity of neonates, which could attenuate or exacerbate pathogenic immune responses against infections in early life.
A reduced number of leukocytes and lymphocytes, including T helper CD4+ cells, and a detrimental IFN-\u03b3 response may impair the infant defense system against respiratory viral infections in early life, which likely contributes to the increased incidence and severity of these infections.

Although more evidence is warranted, current evidence suggests that prenatal exposure to POPs might contribute to a failed innate immune response, and induce lymphocyte activation (increased B cells and IgG production) and a Th2-related response in neonates. Prenatal exposure to metals may impair the innate immune response at birth through changes in T, Treg, and Th memory cells.

In utero exposure to outdoor air pollutants impairs leukocyte and lymphocyte distributions in neonates; early and late gestation seem to be potential developmental windows of higher susceptibility. Given the role of Tc cells in the defense against virus infections, in utero exposure to outdoor air pollutants may result in a higher risk of respiratory tract infections in childhood. Evidence on the cytokine effects of NO2 and PM remains limited.

This review has some limitations. Studies with small sample sizes were included; however, our aim was to summarize all the evidence to date on the impact of prenatal and perinatal environmental influences on the immune system at birth. Publication bias cannot be discarded, because studies with no significant findings are less likely to be published. Only articles published in English were included.
Finally, heterogeneity between studies in exposure assessment, as well as the small number of studies for any given exposure\u2013outcome relationship, currently makes the combination of studies for meta-analysis impossible; nevertheless, this review summarizes the current evidence and may guide future studies.

The prenatal and perinatal periods seem to represent crucial biological windows of opportunity for environmental influences to shape the neonatal immune system. Identified disturbances in immune cell populations and cytokines at birth could lead to a higher susceptibility to respiratory infections, asthma, and allergic manifestations later in life. Although some associations and mechanisms warrant further investigation, overall, promoting proper immune system development during early life should be recognized as a major element of the public health agenda to prevent NCDs, especially asthma and allergic manifestations."} +{"text": "A series of novel 1,4-dioxane analogues of the muscarinic acetylcholine receptor (mAChR) antagonist 2 was synthesized. The 6-cyclohexyl-6-phenyl derivative 3b, with a cis configuration between the CH2N+(CH3)3 chain in the 2-position and the cyclohexyl moiety in the 6-position, showed pKi values for mAChRs higher than those of 2 and a selectivity profile analogous to that of the clinically approved drug oxybutynin. The study of the enantiomers of 3b and the corresponding tertiary amine 33b revealed that the eutomers are (\u2212)-3b and (\u2212)-33b, respectively. Docking simulations on the M3 mAChR-resolved structure rationalized the experimental observations. The quaternary ammonium function, which should prevent the crossing of the blood\u2013brain barrier, and the high M3/M2 selectivity, which might limit cardiovascular side effects, make 3b a valuable starting point for the design of novel antagonists potentially useful in peripheral diseases in which M3 receptors are involved.

Muscarinic acetylcholine receptors (mAChRs) are proteins with seven transmembrane domains separated by intracellular and extracellular loops. Acetylcholine binds to the extracellular region of mAChRs and thereafter activates GTP-binding regulatory proteins in the intracellular compartment. The mAChR family consists of five closely related members: M1, M3, and M5 mAChRs are associated with Gq/11 proteins to trigger phospholipase-C activation. Their activation increases neuronal excitability through the opening of nonspecific cation channels, mobilization of intracellular Ca2+, or inhibition of small-conductance Ca2+-activated K+ channels. M2 and M4 subtypes couple to Gi/o proteins, inhibiting adenylate cyclase and reducing the levels of intracellular adenosine 3\u2032,5\u2032-cyclic monophosphate (cAMP). mAChRs mediate several functions in the central nervous system (CNS), where they play a crucial role in cognitive functions and pain circuits. Moreover, in the periphery, M2 and/or M3 subtypes are involved in smooth muscle contraction, cardiovascular function, and glandular secretion. Acetylcholine is not only a neurotransmitter but can also act on non-neuronal cells, and the muscarinic system is involved in the regulation of stem and cancer cells, in immunity and inflammation, and in the mucocutaneous epithelial barrier. Moreover, muscarinic signals have been demonstrated to be transmitted by mesenchymal stem cells (MSCs) from different tissues.

The 1,4-dioxane nucleus has been demonstrated to be a versatile scaffold for the development of compounds interacting with different receptor systems. In this series, aromatic rings characterized potent antagonists, such as the 6,6-diphenyl derivative (S)-2, which efficaciously reduced the volume-induced contractions of the urinary bladder. In analogy with oxybutynin (1), an antagonist of the M3 subtype potentially useful for the treatment of overactive bladder (OAB), the diphenyl group in the 6-position of compound 2 has been replaced by different lipophilic groups.
To elucidate the binding mode of the described compounds and to rationalize the biological results, docking simulations on the M3 mAChR-resolved structure were performed. Moreover, considering the pivotal role played by stereochemistry in the interaction of both 1,4-dioxane agonists and antagonists with the five mAChR subtypes, the enantiomers of the most interesting derivatives were also studied.

Compounds 3\u20138 were synthesized following the procedure reported in the scheme. Ketone 20 was converted into the corresponding oxirane 22 by reaction with sodium hydride and trimethylsulfonium iodide in dimethyl sulfoxide (DMSO), according to the procedure reported by Corey and Chaykovsky. The opening of oxiranes 21, 22, and 23 with allyl alcohol and subsequent treatment with an aqueous solution of potassium iodide and iodine afforded the iodo derivatives 28\u201330. The cis and trans isomers of 28 and 30 were separated by column chromatography, while attempts to obtain the pure diastereomers of 29 failed. The iodo derivatives 27a and 27b were synthesized as previously reported. The phenyl thioethers 30a and 30b were oxidized with meta-chloroperbenzoic acid (m-CPBA) to give the sulfoxides 31a and 31b (after 30 min at room temperature (r.t.) with one equivalent of m-CPBA) or the sulfones 32a and 32b (after 2 h at r.t. with 2 equivalents of m-CPBA). Concerning the sulfoxide derivatives 31a and 31b, a further center of chirality was introduced into the molecule; in both cases, only one of the two diastereomers was obtained. The amination of the intermediate iodo derivatives 27\u201332 with dimethylamine afforded the corresponding free amines 33\u201338, which were transformed into the methiodides 3\u20138 by treatment with methyl iodide.

The relative configuration of 3a and 3b was determined by X-ray diffraction analysis. The CH2I protons of diastereomer 28b are deshielded compared with the same protons of diastereomer 28a (3.22 ppm), the precursor of methiodide 4a. This deshielding effect for the CH2I protons of diastereomer 28b suggests an axial position for the side chain, as already evidenced in 1,4-dioxane analogues bearing a CH2I chain, and, consequently, the relationship between the biphenyl substituent and the chain is trans.

A diol intermediate, whose primary hydroxyl group was selectively protected with tert-butyldimethylsilyl chloride (TBDMSCl), gave compound 46, which was treated with allyl bromide in the presence of NaH, affording olefine 47. The cleavage of the silyl ether with tetrabutylammonium fluoride (TBAF) yielded the corresponding primary alcohol 41. The intermediates 48 and 49 were obtained as previously described in the literature. The novel compounds 50\u201354 were obtained starting from olefins 40\u201344 under the same reaction conditions used for the preparation of 28\u201330, and the diastereomers were separated by column chromatography. The thioethers 54a and 54b were oxidized to give the sulfoxides 55a and 55b, respectively, and the sulfones 56a and 56b, as described above for 31 and 32. Similarly to what was observed for 31a and 31b, also for 55a and 55b only one of the two diastereomers was obtained. The amination of 48\u201356 with dimethylamine afforded the corresponding amines 57\u201365, which were transformed into the methiodides 9\u201317 by treatment with methyl iodide.

The CH2I protons of diastereomer 52a are deshielded compared with the same protons of diastereomer 52b (3.19 ppm), precursor of methiodide 13b. This deshielding effect for the CH2I protons of diastereomer 52a suggests an axial position for the side chain, as also evidenced in 1,4-dioxane analogues bearing a 2-CH2I chain, and, therefore, a cis configuration between the chain and the biphenyl substituent. Similar considerations can be made for diastereomers 51b, 53b, and 54b, demonstrating a trans configuration between the substituents in the 1,4-dioxane nucleus for 51a, 53a, and 54a.

The relative configuration between the 2-substituent and the 5-substituents of 11a and 11b was assigned by 1H NMR analysis.
In particular, evident NOEs were observed between the axial proton in the 3-position and the hydrogen atoms of the phenyl ring in the 5-position, and between the axial protons in the 2- and 6-positions of 11a, indicating that the 5-phenyl nucleus and the 2-side chain are axially and equatorially oriented, respectively. Therefore, the relative configuration between the 2-side chain and the 5-phenyl substituent is cis in 11a and, consequently, trans in 11b.

In the 1H NMR spectrum of 18c, the axial hydrogen atom in the 3-position at \u03b4 3.69 ppm showed two large coupling constants (J = 11.2 Hz and J = 10.0 Hz), one with the geminal equatorial hydrogen atom and the other with the axial hydrogen atom in the 2-position. Hence, the chain in the 2-position is equatorially orientated. Moreover, NOEs were observed between the axial proton in the 3-position and the proton in the 5-position (at 3.69 and 5.22 ppm, respectively) and between the axial proton in the 2-position (at 4.24 ppm) and the phenyl ring in the 6-position, indicating that the 2-side chain is trans oriented with respect to both phenyl substituents.

In the effort to obtain the fourth diastereomer, in which the stereochemical relationship among the three substituents is cis, the olefine 66 was subjected to the same reaction sequence; the amination with dimethylamine and subsequent reaction with methyl iodide, however, yielded the same diastereomer (18c) obtained following the previously described procedure.

The diastereomers 19a and 19b were prepared following the procedure described in the scheme, starting from intermediate 72, obtained by reaction of the \u03b1-allyloxy ketone 71. In the 1H NMR spectrum of 75a, the axial proton in the 3-position showed two large coupling constants, one with the geminal equatorially located proton and one with the axial proton in the 2-position. Hence, the CH2N(CH3)2 fragment in the 2-position assumes the equatorial position.
Analogously, as shown by the 1H NMR spectrum of 75b, precursor of 19b, the CH2N(CH3)2 fragment in the 2-position is equatorial, because the axial proton in the 3-position at \u03b4 3.58 ppm showed two large coupling constants (J = 11.5 Hz and J = 10.3 Hz), one with the geminal equatorially positioned hydrogen atom and one with the axially oriented hydrogen atom in the 2-position. Moreover, the proton in the 5-position of 75b (5.82 ppm) is deshielded compared with the same proton of 75a (4.95 ppm). The observation that, in the 1H NMR spectra of the cis and trans diastereomers of 5-phenyl-1,4-dioxane-2-carboxylic acid and 6-phenyl-1,4-dioxane-2-carboxylic acid, whose structures had previously been determined by NOE measurements, the equatorially oriented protons are deshielded compared with the axially oriented protons allows us to hypothesize that the proton in the 5-position is axially oriented in 75a and equatorially oriented in 75b. Therefore, the relative configuration between the 2-CH2N(CH3)2 chain and the 5-phenyl ring is trans in 19a and cis in 19b.

The enantiomers (+)-3b and (\u2212)-3b were separated by preparative HPLC performed on the intermediate amine (\u00b1)-33b, using a Regis Technologies Whelk-O 1 H (25 cm \u00d7 2 cm) column as the chiral stationary phase and n-hexane/2-propanol 85/15 v/v as the mobile phase at a flow rate of 18 mL/min. The enantiomeric excess (e.e.), determined by analytical HPLC using a Regis Technologies Whelk-O 1 H (25 cm \u00d7 0.46 cm) column as the chiral stationary phase and n-hexane/2-propanol 85/15 v/v as the mobile phase at a flow rate of 1 mL/min, proved to be >99.5% for both enantiomers. The absolute configuration of the enantiomers of 33b was determined by quantum mechanical simulations of ECD.
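The enantiomeric excess quoted above follows from the two enantiomer peak areas of the chiral-HPLC run. A minimal sketch; the peak areas below are invented for illustration, and only the >99.5% threshold comes from the text:

```python
def enantiomeric_excess(area_major: float, area_minor: float) -> float:
    """e.e. (%) from the chiral-HPLC peak areas of the two enantiomers."""
    return 100.0 * (area_major - area_minor) / (area_major + area_minor)

# An e.e. above 99.5% means the minor enantiomer is under ~0.25% of the total.
print(round(enantiomeric_excess(99.8, 0.2), 1))  # prints 99.6
```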
The ECD spectra of the two enantiomers of the tertiary amine 33b were compared with those of (S)-2 and (R)-2, whose absolute configuration is known. Time-dependent density functional theory (TDDFT) calculations have been shown to be a practical means to simulate the CD spectra of this series of ligands. Conformational analysis of (2R,6R)-33b led to only two populated conformers at r.t., and TDDFT calculations were run with several different DFT functionals and basis sets for (+)-33b and for (\u2212)-33b.

The pharmacological profile of methiodides 3\u201319 was assessed by radioligand binding assays with human recombinant hM1\u2013hM5 receptor subtypes stably expressed in Chinese hamster ovary (CHO) cell lines, using [3H]N-methylscopolamine ([3H]NMS) as a radioligand to label mAChRs, following previously described protocols. The affinities, expressed as pKi, are shown in the table, together with those of 2, oxybutynin, and trospium, which are included for useful comparison.

The analysis of the data reveals that, among all the modifications, the replacement of one of the two phenyl rings of 2 with a cyclohexyl group, affording 3, proved to be the most favorable for the interaction with mAChRs. In particular, the diastereomer 3b, with a cis configuration between the CH2N+(CH3)3 chain in the 2-position and the cyclohexyl fragment in the 6-position of the 1,4-dioxane ring, shows pKi values for all mAChR subtypes, except for M2, higher than those of the 6,6-diphenyl derivative 2. Compound 3b displays a selectivity profile analogous to that of the clinically approved drug oxybutynin, with affinities for M1, M3, and M4 higher than those for M2 and M5 subtypes. Interestingly, the M3/M2 selectivity ratio of 3b (14.5) is significantly higher than those of the lead 2 and trospium.
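Because affinities are reported as pKi (the negative decadic logarithm of Ki), a subtype selectivity ratio is simply the antilog of a pKi difference. A minimal sketch; the pKi pair below is hypothetical, chosen only so that the difference reproduces the M3/M2 ratio of 14.5 quoted for 3b:

```python
def selectivity_ratio(pki_high: float, pki_low: float) -> float:
    """Ki-based selectivity ratio from two pKi values: 10**(pKi_high - pKi_low)."""
    return 10 ** (pki_high - pki_low)

# Hypothetical pKi values differing by log10(14.5) ~ 1.16 log units.
pki_m3, pki_m2 = 8.16, 7.00
print(round(selectivity_ratio(pki_m3, pki_m2), 1))  # prints 14.5
```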
The M3/M2 selectivity profile of 3b is noteworthy becausethe presence of a quaternary ammonium head, enhancing the charge transferinteractions that it elicits with the surrounding aromatic residues,generally increases the pKi values forall muscarinic subtypes at the expense of the selectivity ratios.Indeed, these aromatic side chains, and in particular four tyrosineresidues, represent a structural signature which is completely conservedby all mAChR subtypes.The analysis of data reveals that among all the modifications,the replacement of one of the two phenyl rings of 3a is detrimental for the bindingaffinity for all the mAChR subtypes, confirming that stereochemistryplays a crucial role in the interaction of 1,4-dioxane derivativeswith the mAChRs.30The trans configuration between the substituentsin 2- and 6-positionsof the diastereomer 2 witha para-biphenyl group, affording the diastereomers 4a and 4b, induces a dramatic decrease in affinityfor all the mAChR subtypes. The higher flexibility of the terminalphenyl group of 4 obtained by introducing a methylenebutton (mixture 5a/b) or a sulfur atom (diastereomers 6a and 6b) between the two phenyl groups doesnot improve mAChR affinity. Similar results are obtained by oxidizingthe sulfur atom of 6 to sulfoxide and sulfone, affordingcompounds 7 and 8, respectively. In thepairs of diastereomers 6a/6b, 7a/7b, and 8a/8b, the trans isomers show pKi values slightly higher than those of the corresponding cis isomers.The replacement of the 6,6-diphenyl group of 2, affording compound 10, is also detrimental for the binding to the five mAChR subtypes.The removal of one aromatic group of 10, obtaining thediastereomers 9a and 9b, further decreasesthe mAChR affinity. 
Similar to what was observed for the 6-substituted ligands, the replacement of an aromatic group of 10 with a cyclohexyl ring is favorable for the binding to the five mAChRs. In this case, stereochemistry seems not to play a role in the binding at mAChRs, both diastereomers 11a and 11b showing similar pKi values, with a preference for the M1 subtype. The increased distance between the diphenyl lipophilic moiety and the ammonium head of 10, yielding the diastereomers 12a and 12b, decreases the pKi values for all the mAChRs. Analogous to what was observed for the corresponding 6-substituted derivatives, all the other modifications performed on the diphenyl group of 10, affording 13–17, are detrimental for the affinity for mAChRs. Though with low affinity, the diphenylsulfone 17a shows selectivity for M2 over the other subtypes. This selectivity profile agrees with what was reported for other muscarinic derivatives bearing the diphenylsulfone moiety.51 The shift of the phenyl group from the 6- to the 5-position of the 1,4-dioxane ring was also explored. Compared to the 5-mono-phenyl derivatives 9a and 9b and the previously described 6-mono-phenyl derivatives,24 the presence of a phenyl substituent in both the 5- and 6-positions of the 1,4-dioxane ring seems to be advantageous, especially when the two phenyl groups are in a cis stereochemical relationship (18c). Instead, the insertion of a phenyl substituent in the 5-position of the 6,6-diphenyl derivative 2, affording 19a and 19b, markedly reduces the binding affinities.
The well-established influence of chirality on the biological activity of mAChR ligands30 prompted us to prepare and study the enantiomers of the most interesting ligand 3b. Moreover, considering that the basic function of mAChR antagonists can also be a tertiary amine,24 the racemic 33b and its enantiomers were included in this study. The pKi values of (±)-3b, (±)-33b, and their enantiomers (+)-3b and (−)-3b and (+)-33b and (−)-33b are reported together with those of 2 and its enantiomers (R)-(+)-2 and (S)-(−)-2. The tertiary amine (±)-33b shows high affinity for all mAChRs, though with pKi values slightly lower than those of the corresponding ammonium salt (±)-3b. Moreover, it maintains the interesting selectivity for the M3 over the M2 subtype (M3/M2 = 7.0) already observed with the methiodide (±)-3b (M3/M2 = 14.5). Between the enantiomers of the tertiary amine [(+)-33b and (−)-33b] as well as those of the quaternary ammonium salt [(+)-3b and (−)-3b], the eutomers are the ones in which the absolute configuration of the carbon atom in position 2 is S. Such a configuration is the same as that of the eutomer (S)-(−)-2, suggesting that these derivatives bind to the same mAChR sites.
The eudismic ratios (ERs) between the enantiomers of the tertiary amine are significantly higher than those between the corresponding enantiomers of the methiodide for all mAChR subtypes, especially for M3, for which the eutomer (−)-33b shows an affinity 195-fold higher than that of the distomer (+)-33b. To investigate the factors influencing the observed enantioselectivity of the three pairs of enantiomers (R)-2/(S)-2, (+)-3b/(−)-3b, and (+)-33b/(−)-33b, docking simulations were carried out on the human M3 mAChR structure in complex with a selective antagonist (PDB Id: 5ZHP).52 The pose of (2S,6S)-3b, endowed with the highest affinity, reveals the following set of interactions: (a) the charged ammonium head is engaged by a set of contacts comprising the key ion pair with Asp147^3.32 plus several charge transfer interactions with surrounding aromatic side chains; (b) the O4 dioxane atom is involved in a key H-bond with Asn507^6.52, while the O1 atom is shielded by the close ammonium head and cannot elicit significant interactions; (c) the phenyl ring can stabilize π–π stacking interactions with a set of surrounding aromatic residues such as Tyr148^3.33, Trp199^4.57, and Trp503^6.48; (d) the cyclohexyl ring is accommodated within a subpocket in which it can contact alkyl side chains such as Leu225^ECL2, Ala235^5.43, and Ala238^5.46.
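The eudismic ratio used here is likewise just the antilog of the pKi gap between eutomer and distomer. A short sketch with hypothetical pKi values, chosen only so their difference reproduces the roughly 195-fold M3 ratio reported for the enantiomers of 33b:

```python
def eudismic_ratio(pki_eutomer: float, pki_distomer: float) -> float:
    """Eudismic ratio ER = Ki(distomer) / Ki(eutomer).

    Since pKi = -log10(Ki), ER = 10 ** (pKi_eutomer - pKi_distomer);
    ER > 1 quantifies how much more potent the eutomer is.
    """
    return 10 ** (pki_eutomer - pki_distomer)

# Hypothetical pKi values: a 2.29 log-unit gap corresponds to the
# ~195-fold M3 eudismic ratio reported for (-)-33b vs (+)-33b.
er_m3 = eudismic_ratio(8.29, 6.00)
print(f"ER(M3) = {er_m3:.0f}")  # ~195
```

Computing ERs subtype by subtype from the pKi table makes it easy to see that the tertiary amine pair is more enantioselective than the methiodide pair at every subtype.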
On these grounds, one may argue that the observed enantioselectivity can be ascribed to four moieties, the arrangement of which is influenced by the chiral centers: (a) the O4 dioxane atom, a feature which involves all three pairs of enantiomers; (b) the cyclohexyl and (c) the phenyl rings, which concern only the compounds 3b and 33b; and (d) the ammonium head, which seems to play a marginal role for 2 and 3, reasonably due to the symmetry of the trimethylammonium group, while the need to properly arrange the proton toward Asp147^3.32, and the N-methyl groups toward the aromatic residues, may impact the enantioselectivity of 33b. The eutomer (S)-2 is able to establish a strong H-bond with Asn507^6.52, while the distomer (R)-2 less suitably arranges the O4 atom, which, therefore, only weakly contacts Asn507^6.52. Inspection of the poses of the enantiomers of 3b shows that both of them are able to conveniently accommodate the dioxane ring and the ammonium head but unavoidably differ in the arrangement of the two rings in the 6-position. Indeed, while the eutomer (−)-3b properly accommodates the phenyl and the cyclohexyl rings as described above, the distomer (+)-3b is constrained to approach the phenyl ring toward the alkyl side chains, with the cyclohexyl ring completely surrounded by aromatic residues. Notably, the capacity of both enantiomers of 3b to stabilize similar H-bonds with Asn507^6.52 suggests that the greater flexibility of the cyclohexyl ring with respect to the phenyl one allows the distomer (+)-3b to minimize the configurational effects on the pose of the dioxane ring. Similarly, inspection of the enantiomers of 33b highlights that they differ in the arrangement of both the O4 dioxane atom and the cyclohexyl/phenyl rings.
In detail, while the eutomer (−)-33b can elicit the key H-bond with Asn507^6.52 and insert the cyclohexyl and phenyl rings within the suitable subpockets, the distomer (+)-33b cannot contact Asn507^6.52 and accommodates the two rings in the 6-position within the wrong subpockets. Notably, the unique difference between 3b and 33b involves the ammonium head, which is a quaternary salt only in the former. Both enantiomers of 33b are able to properly arrange the ammonium head, even though the lack of the symmetric trimethyl group in 33b increases the relevance of the C2 configuration and can explain why the enantiomers of 33b are constrained to differ in the arrangement of the O4 dioxane atom, while both enantiomers of 3b are able to properly accommodate the dioxane ring by minimizing the effects of the C2 configuration. Again, the combination of both factors (dioxane and cyclohexyl/phenyl rings) reveals a synergistic effect, the ER value for 33b being markedly higher than those of 2 and 3b. Such a synergistic effect can be explained at an atomic level by considering that, while both enantiomers of 2 are able to stabilize the H-bond with Asn507^6.52, even though the distomer elicits weaker interactions, the 33b distomer is substantially unable to approach Asn507^6.52, thus missing this key interaction. Similar trends can also be seen when analyzing the corresponding affinity values and, in particular, the affinities of the distomers. Indeed, while the eutomers show comparable affinity values, with 3b and 33b revealing slightly higher values, probably due to the favorable hydrophobic interaction stabilized by the cyclohexyl ring, the distomers show greater differences in affinity, which are ascribable to their reduced interactions. Hence, (+)-3b, which only fails in properly arranging the rings in the 6-position, reveals the greatest affinity, followed by (R)-2, which elicits a weak H-bond with Asn507^6.52.
The lowest affinity is shown by (+)-33b, which does not stabilize the mentioned H-bond and unsuitably arranges the rings in the 6-position. These observations find encouraging confirmation in the reported ERs, thus allowing for some meaningful considerations. First, the observed differences in the dioxane arrangement exert a conceivably greater impact on affinity compared to those in the cyclohexyl/phenyl rings. For completeness, and even though the affinity values of the single enantiomers were not measured, docking simulations also involved the other proposed derivatives, focusing attention on those with pKi values on the M3 mAChR greater than 6. While avoiding systematic analyses, the docking results allow for some general considerations. The lower affinity values of the ligands bearing cyclohexyl/phenyl rings in the 5-position can be ascribed to the steric hindrance exerted by these rings on the O4 dioxane atom, which weakens the key H-bond with Asn507^6.52. In contrast, the reduced steric hindrance exerted on the O1 dioxane atom allows this atom to be engaged in additional H-bonds, as seen for 11a with Tyr148^3.33. The low affinity of ligands bearing a 4-(phenylthio)phenyl moiety (15a and 15b) and similar diphenyl groups is explainable by considering that these bulky substituents constrain the ligands to assume inconvenient poses, where even the ammonium head assumes suboptimal arrangements, without adding any additional contacts. Finally, the lower affinity values of the ligands with substituents in both the 5- and 6-positions are ascribable to the same factors affecting the binding of the compounds substituted only in the 5-position, namely, the greater steric hindrance on the O4 dioxane atom, which weakens the H-bond with Asn507^6.52. Considering also the pluripotent MSC nature and the contribution of MSCs to bone, blood, and systemic homeostasis,53 viability studies on MSCs were performed to determine the functional profile of 3b, the most interesting compound in this series.
Namely, the effect of this compound was similar to that of the well-known mAChR antagonist atropine, because it was able to down-regulate MSC viability when used at high concentration (10^-4 M), while it increased cell viability when used at low concentration (10^-10 M) (Figure 11A). In the present study, the 6,6-diphenyl structural element of 2 was replaced by lipophilic substituents in the 5- and/or 6-position of the 1,4-dioxane nucleus. Among the novel compounds, the 6-cyclohexyl-6-phenyl derivative 3b, with a cis configuration between the CH2N+(CH3)3 chain in the 2-position and the cyclohexyl ring in the 6-position, showed pKi values for all mAChR subtypes, except for M2, higher than those of 2. Moreover, its selectivity profile is similar to that of the therapeutically used drug oxybutynin, with pKi values for the M1, M3, and M4 subtypes higher than those for the M2 and M5 subtypes. The study of the enantiomers of 3b and those of the corresponding tertiary amine 33b, whose absolute configuration was determined by quantum mechanical simulations of ECD, provided useful information about the role played by chirality in the interaction with mAChRs. In particular, the absolute configuration of the carbon atom in the 2-position of the eutomers (−)-3b and (−)-33b is the same as that of (S)-(−)-2, suggesting that these derivatives bind to the same mAChR sites. The ERs between the enantiomers of the tertiary amine 33b proved to be higher than those between the corresponding enantiomers of the methiodide 3b for all mAChR subtypes, especially for M3. Docking studies on the resolved M3 mAChR structure allowed us to shed light on the binding mode of the proposed compounds.
In particular, while the enantiomers of 33b differ in the arrangement of the O4 dioxane atom, both enantiomers of 3b are able to properly accommodate the dioxane ring by minimizing the effect of the C2 configuration. Finally, the assays on MSCs from mouse bone marrow showed for 3b a functional profile similar to that of the mAChR antagonist atropine, concerning both the dose–response effect produced on the metabolic activity of viable MSCs and the effect in contrasting the increase of carbachol-induced MSC viability. Compared to the tertiary amine drugs clinically used for the treatment of OAB, 3b presents a quaternary ammonium function that should prevent crossing of the BBB, minimizing central anticholinergic activity and, therefore, limiting CNS side effects. The prediction by SwissADME that 3b is a potential P-gp substrate makes the profile of such a compound even more interesting.54 Moreover, the transformation into a quaternary amine markedly enhances the metabolic stability of this compound: the metabolic prediction based on a similarity analysis using the MetaQSAR database on the tertiary amine indicates oxidation alpha to the N atom as a highly probable metabolic reaction, which is largely inhibited by the presence of a permanent positive charge.55 In addition, the M3/M2 selectivity ratio of 3b (14.5), which is significantly higher than those of the quaternary ammonium compounds 2 and trospium, might limit cardiovascular side effects. Therefore, the methiodide 3b might represent a valuable lead compound for the design of novel antagonists potentially useful in peripheral diseases in which M3 receptors are involved. 1H NMR and 13C NMR spectra were recorded on Varian GEM200, Varian Mercury AS400, or Bruker 500 MHz instruments, and chemical shifts (ppm) are reported relative to tetramethylsilane.
Spin multiplicities are given as s (singlet), d (doublet), dd (double doublet), t (triplet), or m (multiplet). IR spectra were recorded on a PerkinElmer 297 instrument; spectral data were obtained for all compounds reported and are consistent with the assigned structures. Microanalyses were recorded on a FLASH 2000 instrument (Thermo Fisher Scientific); the elemental composition of the compounds agreed to within ±0.4% of the calculated values. Optical activity was measured at 20 °C with a PerkinElmer 241 polarimeter. Analytical chiral HPLC was performed on a Shimadzu chromatography system using a Regis Technologies Whelk-O 1 (25 cm × 0.46 cm) column. Preparative chiral HPLC was performed on a Shimadzu chromatography system using a Regis Technologies Whelk-O 1 (25 cm × 2 cm) column. Mass spectra were obtained using a Hewlett Packard 1100 MSD instrument utilizing electrospray ionization (ESI). The compounds were detected, and a purity of >95% was confirmed, by UV absorption at 220 nm. All reactions were monitored by thin-layer chromatography using silica gel plates, visualizing with ultraviolet light. Chromatographic separations were performed on silica gel columns by flash chromatography. Compounds were named following IUPAC rules as applied by ChemBioDraw Ultra (version 11.0) software for systematically naming organic chemicals. The purity of the novel compounds was determined by combustion analysis and was ≥95%. Melting points (mp) were taken in glass capillary tubes on a Büchi SMP-20 apparatus and are uncorrected. A solution of 33a in Et2O (10 mL) was treated with an excess of methyl iodide and left at r.t. in the dark for 24 h. The solid was filtered and recrystallized from EtOH (91% yield); mp 270–271 °C. 1H NMR (DMSO): δ 0.42–1.91 , 3.02–3.56 3, CH2N, dioxane), 3.79 , 4.62 , 7.18–7.40 .
13C NMR (DMSO): \u03b4 26.0, 26.4, 26.6, 27.0, 28.5, 37.5(cyclohexyl); 54.1 (N(CH3)3); 63.6, 66.8, 67.9,70.0, 78.9 (CH2N and dioxane); 126.0, 127.4, 127.8 (ArH);140.5 (Ar). ESI/MS m/z: 318.2 [M]+, 763.4 [2M + I]+. Anal. Calcd (C20H32INO2) C, H, N.A solution of 33b following the procedure described for 3a: a white solid was obtained, which was recrystallized from2-PrOH (87% yield); mp 245\u2013246 \u00b0C. 1H NMR (DMSO):\u03b4 0.62\u20131.94 , 2.98\u20133.61 3, CH2N, dioxane), 3.86, 4.62 ,7.21\u20137.54 . 13C NMR (DMSO): \u03b426.5, 27.3, 47.6 (cyclohexyl); 54.2 (N(CH3)3); 64.9, 65.9, 68.0, 69.2, 80.2 (CH2N and dioxane); 127.7,128.3, 128.7 (ArH); 139.8 (Ar). ESI/MS m/z: 318.2 [M]+, 763.4 [2M + I]+. Anal.Calcd (C20H32INO2) C, H, N.This compound was prepared startingfrom S,6S)-(\u2212)-33b followingthe procedure described for 3a: a white solid was obtained,which was recrystallized from 2-PrOH (88% yield). [\u03b1]D20 = \u221242.5; mp and 1H NMR spectrum were identicalto those of racemic compound (\u00b1)-3b. Anal. Calcd(C20H32INO2) C, H, N. C, 53.94; H,7.24; N, 3.14. Found: C, 54.06; H, 7.41; N, 3.29.This compound was prepared starting from -(+)-33b following the proceduredescribed for 3a: a white solid was obtained, which wasrecrystallized from 2-PrOH (85% yield). [\u03b1]D20 = +42.9 ; mp and 1H NMR spectrum were identical to those ofracemic compound (\u00b1)-3b. Anal. Calcd (C20H32INO2) C, H, N.This compound was prepared starting from ; mp 242\u2013243 \u00b0C. 1H NMR (DMSO):\u03b4 3.03\u20133.58 3, dioxane), 3.76 , 3.95 , 4.45 , 4.89 , 7.24\u20137.75 . ESI/MS m/z: 312.2 [M]+, 751.3 [2M +I]+. Anal. Calcd (C20H26INO2) C, H, N.This compound was prepared startingfrom 34b following the procedure described for 3a: a white solid was obtained, which was recrystallized fromMeOH (89% yield); mp 256\u2013257 \u00b0C. 1H NMR (DMSO):\u03b4 3.15 3), 3.43\u20133.95, 4.30 , 4.51 , 5.16 , 7.32\u20137.73 . ESI/MS m/z: 312.2 [M]+, 751.3 [2M + I]+. Anal. 
Calcd (C20H26INO2) C, H, N.This compound was prepared startingfrom 35a/b followingthe procedure described for 3a: a white solid was obtained,which was recrystallized from MeOH (89% yield); mp 228\u2013232\u00b0C. 1H NMR (DMSO): \u03b4 2.94\u20133.98 3, CH2N, CH2Ar and dioxane), 4.19\u20134.53 ,4.79 , 5.06 , 7.08\u20137.38 . ESI/MS m/z: 326.2 [M]+. Anal. Calcd(C21H28INO2) C, H, N.This mixture of cis/trans (6:4)diastereomers was prepared starting from 36a following the procedure described for 3a: a white solid was obtained, which was recrystallized fromMeOH (89% yield); mp 196\u2013198 \u00b0C. 1H NMR (DMSO):\u03b4 3.12 3), 3.18\u20133.52, 3.71 , 3.93 , 4.42 , 4.82, 7.23\u20137.48. ESI/MS m/z: 344.2[M]+. Anal. Calcd (C20H26INO2S) C, H, N, S.This compound was prepared startingfrom 36b following the procedure described for 3a: a white solid was obtained, which was recrystallized fromEtOH (89% yield); mp 197\u2013198 \u00b0C. 1H NMR (DMSO):\u03b4 3.02\u20133.98 3, CH2N, dioxane), 4.22 , 4.48 , 5.11 , 7.21\u20137.42 . ESI/MS m/z: 344.2 [M]+. Anal. Calcd(C20H26INO2S) C, H, N, S.This compound was prepared startingfrom 37a following the procedure described for 3a: a white solid was obtained, which was recrystallized fromEtOH (82% yield); mp 166\u2013167 \u00b0C. 1H NMR (DMSO):\u03b4 3.03\u20133.59 3 and dioxane), 3.78 , 3.93 , 4.45 , 4.88, 7.42\u20137.81. ESI/MS m/z: 360.2[M]+. Anal. Calcd (C20H26INO3S) C, H, N, S.This compound was prepared startingfrom 37b following the procedure described for 3a: a white solid was obtained, which was recrystallized fromEtOH (79% yield); mp 173\u2013174 \u00b0C. 1H NMR (DMSO):\u03b4 3.10 3), 3.24\u20133.98, 4.22 , 4.46 , 5.15 , 7.42\u20137.80 . ESI/MS m/z: 360.2 [M]+. Anal. Calcd (C20H26INO3S) C, H, N, S.This compound was prepared startingfrom 38a following the procedure described for 3a: a white solid was obtained, which was recrystallized fromEtOH (87% yield); mp 128\u2013129 \u00b0C. 
1H NMR (DMSO):\u03b4 3.00\u20133.55 3, dioxane), 3.74 , 3.94 , 4.43 , 4.93 , 7.51\u20138.02 . ESI/MS m/z: 376.2 [M]+. Anal. Calcd(C20H26INO4S) C, H, N, S.This compound was prepared startingfrom 38b following the procedure described for 3a: a white solid was obtained, which was recrystallized fromEtOH (86% yield); mp 231\u2013232 \u00b0C. 1H NMR (DMSO):\u03b4 3.11 3), 3.38\u20133.72, 3.78 , 3.92 , 4.26 , 4.51 , 5.21 , 7.52\u20137.99 . ESI/MS m/z: 376.2 [M]+. Anal. Calcd(C20H26INO4S) C, H, N, S.This compound was prepared startingfrom 57a following the procedure described for 3a: a white solid was obtained, which was recrystallized fromEtOH (92% yield); mp 204\u2013205 \u00b0C. 1H NMR (DMSO):\u03b4 3.12 3), 3.42 , 3.70\u20133.96 , 4.13\u20134.42, 4.66 , 7.31\u20137.47 . ESI/MS m/z: 236.2 [M]+, 599.2 [2M + I]+. Anal.Calcd (C14H22INO2) C, H, N.This compound was prepared startingfrom 57b following the procedure described for 3a: a white solid was obtained, which was recrystallized fromEtOH (93% yield); mp 216\u2013217 \u00b0C. 1H NMR (DMSO):\u03b4 3.17 3), 3.36\u20133.62, 3.89 , 4.29 , 4.57 ,7.26\u20137.42 . ESI/MS m/z: 236.2 [M]+, 599.2 [2M + I]+. Anal. Calcd(C14H22INO2) C, H, N.This compound was prepared startingfrom 58 following the procedure described for 3a:a white solid was obtained, which was recrystallized from 2-PrOH (93%yield); mp 255\u2013256 \u00b0C. 1H NMR (DMSO): \u03b43.08 3), 3.18\u20133.34 , 3.75 , 3.86 ,4.38 , 4.85 , 7.19\u20137.57 . ESI/MS m/z: 312.2 [M]+, 751.3 [2M + I]+. Anal. Calcd (C20H26INO2) C, H,N.This compound was prepared startingfrom 59a following the procedure described for 3a: a white solid was obtained, which was recrystallized from2-PrOH (75% yield); mp 161\u2013162 \u00b0C. 1H NMR (DMSO):\u03b4 0.53\u20131.82 , 2.98\u20133.21 3, CH2N, dioxane), 3.54, 3.88 , 4.21 , 4.66 , 7.21\u20137.44 .ESI/MS m/z: 318.2 [M]+, 763.4 [2M + I]+. Anal. 
Calcd (C20H32INO2) C, H, N.This compound was prepared startingfrom 59b following the procedure described for 3a: a white solid was obtained, which was recrystallized fromEtOH (79% yield); mp 175\u2013176 \u00b0C. 1H NMR (DMSO):\u03b4 0.40\u20132.24 , 3.02\u20133.82 3, CH2N, dioxane), 4.18, 4.45 ,7.18\u20137.42 . ESI/MS m/z: 318.2 [M]+, 763.4 [2M + I]+. Anal. Calcd(C20H32INO2) C, H, N.This compound was prepared startingfrom 60a following the procedure described for 3a: a white solid was obtained, which was recrystallized fromEtOH (74% yield); mp 218\u2013219 \u00b0C. 1H NMR (DMSO):\u03b4 2.90\u20133.68 3, CH2N, dioxane), 3.92 , 4.28 2), 4.56 , 7.05\u20137.62 .ESI/MS m/z: 326.2 [M]+ Anal. Calcd (C21H28INO2) C, H,N.This compound was prepared startingfrom 60b following the procedure described for 3a: a white solid was obtained, which was recrystallized fromMeOH (81% yield); mp 266\u2013267 \u00b0C. 1H NMR (DMSO):\u03b4 2.95\u20133.46 3, CH2N, dioxane), 3.70 , 3.91 2), 4.12 , 4.39 , 7.08\u20137.46 . ESI/MS m/z: 326.2 [M]+. Anal. Calcd(C21H28INO2) C, H, N.This compound was prepared startingfrom 61a following the procedure described for 3a: a white solid was obtained, which was recrystallized fromEtOH (91% yield); mp 257\u2013259 \u00b0C. 1H NMR (DMSO):\u03b4 3.02\u20133.54 3, CH2N), 3.71\u20134.01 , 4.19 , 4.40 , 4.63 , 7.28\u20137.78 . ESI/MS m/z: 312.2 [M]+. Anal. Calcd (C20H26INO2) C, H, N.This compound was prepared startingfrom 61b following the procedure described for 3a: a white solid was obtained, which was recrystallized fromMeOH (91% yield); mp 300\u2013301 \u00b0C. 1H NMR (DMSO):\u03b4 3.05\u20133.68 3, CH2N, dioxane), 3.91 , 4.31 ,4.60 , 7.25\u20137.72. ESI/MS m/z: 312.2[M]+, 751.3 [2M + I]+. Anal. Calcd (C20H26INO2) C, H, N.This compound was prepared startingfrom 62a following the procedure described for 3a: a white solid was obtained, which was recrystallized fromMeOH (88% yield); mp 161\u2013162 \u00b0C. 
1H NMR (DMSO):\u03b4 3.00\u20133.92 3, CH2N, dioxane), 3.94 , 4.12 , 4.38 , 4.60 , 7.08\u20137.42. ESI/MS m/z: 326.2[M]+. Anal. Calcd (C21H28INO2) C, H, N.This compound was prepared startingfrom 62b following the procedure described for 3a: a white solid was obtained, which was recrystallized fromMeOH (83% yield); mp 206\u2013207 \u00b0C. 1H NMR (DMSO):\u03b4 3.02 3), 3.30\u20133.62, 3.78\u20133.88 ,3.93 , 4.29 , 4.50 , 7.09\u20137.34 . ESI/MS m/z: 326.2 [M]+. Anal. Calcd (C21H28INO2) C, H, N.This compound was prepared startingfrom 63a following the procedure described for 3a: a white solid was obtained, which was recrystallized fromEtOH (83% yield); mp 168\u2013169 \u00b0C. 1H NMR (DMSO):\u03b4 3.02\u20133.29 3, CH2N), 3.65\u20133.97 , 4.14 , 4.39 , 4.66 , 7.20\u20137.44 . ESI/MS m/z: 344.2 [M]+, 815.3 [2M + I]+. Anal. Calcd (C20H26INO2S) C, H, N, S.This compound was prepared startingfrom 63b following the procedure described for 3a: a white solid was obtained, which was recrystallized fromEtOH (91% yield); mp 173\u2013175 \u00b0C. 1H NMR (DMSO):\u03b4 3.01\u20133.62 3, CH2N, dioxane), 3.90 , 4.27 , 4.58 , 7.20\u20137.46 . ESI/MS m/z: 344.2 [M]+. Anal. Calcd(C20H26INO2S) C, H, N, S.This compound was prepared startingfrom 64a following the procedure described for 3a: a white solid was obtained, which was recrystallized fromEtOH (91% yield); mp 78\u201380 \u00b0C. 1H NMR (DMSO):\u03b4 2.95\u20133.94 3, CH2N, dioxane), 4.10 , 4.38 , 4.72 , 7.38\u20137.83 . ESI/MS m/z: 360.2 [M]+. Anal. Calcd(C20H26INO3S) C, H, N, S.This compound was prepared startingfrom 64b following the procedure described for 3a: a white solid was obtained, which was recrystallized fromEtOH (91% yield); mp 175\u2013176 \u00b0C. 1H NMR (DMSO):\u03b4 2.91\u20133.59 3, dioxane), 3.86 , 4.25 ,4.60 , 7.40\u20137.81. ESI/MS m/z: 360.2[M]+. Anal. 
Calcd (C20H26INO3S) C, H, N, S.This compound was prepared startingfrom 65a following the procedure described for 3a: a white solid was obtained, which was recrystallized fromEtOH (88% yield); mp 218\u2013219 \u00b0C. 1H NMR (DMSO):\u03b4 2.98\u20133.48 3, CH2N), 3.67 , 3.75\u20133.92 ,4.08 , 4.39 , 4.79 ,7.48\u20138.07 . ESI/MS m/z: 376.2 [M]+. Anal. Calcd (C20H26INO4S) C, H, N, S.This compound was prepared startingfrom 65b following the procedure described for 3a: a white solid was obtained, which was recrystallized fromEtOH (90% yield); mp 192\u2013193 \u00b0C. 1H NMR (DMSO):\u03b4 2.96\u20133.60 3, CH2N, dioxane), 3.79\u20134.00 , 4.29 , 4.64 ,7.48\u20138.00 . ESI/MS m/z: 376.2 [M]+. Anal. Calcd (C20H26INO4S) C, H, N, S.This compound was prepared startingfrom 69a following the procedure described for 3a: a white solidwas obtained, which was recrystallized from 2-PrOH (90% yield); mp190\u2013191 \u00b0C. 1H NMR (DMSO): \u03b4 3.07\u20133.683, CH2N, dioxane),3.90 , 4.60 , 5.06 , 6.85\u20137.37 . ESI/MS m/z: 312.2 [M]+, 751.3 [2M +I]+. Anal. Calcd (C20H26INO2) C, H, N.This compound was prepared startingfrom 69b following the procedure described for 3a:a white solidwas obtained, which was recrystallized from EtOH (81% yield); mp 239\u2013240\u00b0C. 1H NMR (DMSO): \u03b4 3.02\u20133.68 3, dioxane), 3.97 , 4.47 , 4.62 , 4.78 , 6.92\u20137.32 .ESI/MS m/z: 312.2 [M]+, 751.3 [2M + I]+. Anal. Calcd (C20H26INO2) C, H, N.This compound was prepared starting from 69c following the procedure described for 3a: a white solidwas obtained, which was recrystallized from EtOH (80% yield); mp 193\u2013194\u00b0C. 1H NMR (DMSO): \u03b4 2.97 3), 3.40\u20133.61 , 3.69 , 4.12 , 4.24 , 5.22 , 5.39 , 7.05\u20137.58 . ESI/MS m/z: 312.2 [M]+, 751.3 [2M +I]+. Anal. Calcd (C20H26INO2) C, H, N.This compound was prepared startingfrom 75a following the procedure described for 3a: a white solid was obtained, which was recrystallized from2-PrOH (80% yield); mp 275\u2013276 \u00b0C. 
1H NMR (DMSO):\u03b4 2.89\u20133.51 3, CH2N), 3.62\u20134.18 , 5.01 ,6.62\u20137.62 . ESI/MS m/z: 388.2 [M]+. Anal. Calcd (C26H30INO2) C, H, N.This compound was prepared startingfrom 75b following the procedure described for 3a: a white solid was obtained, which was recrystallized from2-PrOH (82% yield); mp 262\u2013263 \u00b0C. 1H NMR (DMSO):\u03b4 3.01\u20133.52 3, CH2N), 3.81\u20134.20 , 6.15 ,6.87\u20137.80 . ESI/MS m/z: 388.2 [M]+. Anal. Calcd (C26H30INO2) C, H, N.This compound was prepared startingfrom 27a. An oil was obtained (90%yield). 1H NMR (CDCl3): \u03b4 0.60\u20131.92, 2.33 2),2.34\u20132.54 , 3.21 ,3.40 , 3.89 , 4.19 ,4.46 , 7.21\u20137.32 .A solution of 27a . 1H NMR (CDCl3): \u03b4 0.61\u20131.89 , 2.15\u20132.462), 3.28 , 3.60\u20133.85 ,4.52 , 7.20\u20137.52. The free base was transformed into the oxalate salt,which was crystallized from EtOH: mp 142\u2013143 \u00b0C. 1H NMR (DMSO): \u03b4 0.65\u20131.83 ,2.76 2), 2.90\u20133.11 , 3.16 ,3.55 , 3.78 , 4.62 , 7.21\u20137.48 , 8.21 . 13C NMR (DMSO): \u03b4 26.4, 26.4, 26.5,26.5, 27.2, 44.1 (cyclohexyl); 47.6 (N(CH3)2); 57.8, 65.5, 68.2, 69.5, 80.0 (CH2N and dioxane); 127.4,128.3, 128.5 (ArH); 140.0 (Ar); 164.5 (COOH). ESI/MS m/z: 304.2 [M + H]+, 326.2 [M + Na]+ Anal. Calcd (C19H29NO2.C2H2O4) C, H, N.This compound was prepared starting from 27b followin33b were separated by chiralHPLC by using a Regis Technologies Whelk-O 1 H column;mobile phase: n-hexane/2-propanol 85/15% v/v; flowrate 18 mL/min; detection was monitored at a wavelength of 220 nM.Retention times: 5.6 min for compound (\u2212)-33b and11.4 min for compound (+)-33b. ee >99.5% for bothenantiomers.The enantiomers of(\u00b1)-S,6S)-(\u2212)-33b: [\u03b1]D20 = \u221231.2 . The 1H NMR spectrumwas identical to that of racemic compound (\u00b1)-33b. The free base was transformed into the oxalate salt, which wasrecrystallized from EtOH: [\u03b1]D20 = +47.7 , mp 142\u2013143\u00b0C. Anal. Calcd (C21H31NO6)C, H, N.-(+)-33b:[\u03b1]D20 = +31.5 . 
The 1H NMR spectrumwas identical to that of racemic compound (\u00b1)-33b. The free base was transformed into the oxalate salt, which wasrecrystallized from EtOH: [\u03b1]D20 = +46.9 , mp 142\u2013143\u00b0C. Anal. Calcd (C21H31NO6)C, H, N.(228a followingthe procedure described for 33a: an oil wasobtained (91% yield). 1H NMR (CDCl3): \u03b42.32 2), 2.45 , 3.36\u20133.47 , 3.85\u20133.95 ,4.72 , 7.34\u20137.62 .This compound was prepared starting from 28b following the procedure described for 33a: an oil wasobtained (93% yield). 1H NMR (CDCl3): \u03b42.29 2), 2.63 , 3.66\u20134.00 , 4.89 , 7.32\u20137.62.This compound was prepared starting from 29a/b following the procedure describedfor 33a: an oil was obtained (91% yield). 1H NMR (CDCl3): \u03b4 2.28 2), 2.32 2),2.47 , 2.67 , 3.28\u20134.05 , 4.65 , 4.80 , 7.08\u20137.39.This mixture of cis/trans (6:4) diastereomers was preparedstarting from 30a following the procedure described for 33a: an oil wasobtained (90% yield). 1H NMR (CDCl3): \u03b42.32 2), 2.43 , 3.32 , 3.80\u20134.05 , 4.66 7.20\u20137.38.This compound was prepared starting from 30b following the proceduredescribed for 33a: an oil wasobtained (92% yield). 1H NMR (CDCl3): \u03b42.28 2), 2.62 , 3.64\u20133.98 , 4.81 7.27\u20137.40 .This compoundwas prepared starting from 31a following the procedure described for 33a: an oil wasobtained (90% yield). 1H NMR (CDCl3): \u03b42.28 2), 2.41 , 3.23 , 3.80\u20134.00 , 4.68, 7.38\u20137.65 .This compound was prepared starting from 31b following the procedure described for 33a: an oil wasobtained (93% yield). 1H NMR (CDCl3): \u03b42.22 2), 2.58 , 3.60\u20133.98 , 4.82 , 7.41\u20137.68.This compound was prepared starting from 32a followingthe procedure described for 33a: an oil wasobtained (90% yield). 1H NMR (CDCl3): \u03b42.28 2), 2.41 , 3.20\u20133.40 , 3.80\u20134.01 ,4.72 , 7.42\u20138.00 .This compound was prepared starting from 32b following the procedure described for 33a: an oil wasobtained (90% yield). 
1H NMR (CDCl3): \u03b42.24 2), 2.60 , 3.61\u20133.98 , 4.86 , 7.44\u20137.99.This compound was prepared starting from 48a(33a: an oil wasobtained (95% yield). 1H NMR (CDCl3): \u03b42.31 2), 2.38, 2.85 , 3.68\u20133.99, 4.62 , 7.25\u20137.46 .This compound wasprepared starting from 48a followin48b(33a: an oil was obtained (85% yield). 1H NMR (CDCl3): \u03b4 2.19\u20132.53 2), 3.54 , 3.75\u20134.03 , 4.58 , 7.30\u20137.40 .This compound was prepared starting from 48b followin49 following the proceduredescribed for 33a: an oil was obtained (75% yield). 1H NMR (CDCl3): \u03b4 1.98\u20132.42 2), 3.30 ,3.68 , 3.87 , 4.61 ,7.12\u20137.52 .This compoundwas prepared starting from 50a following the procedure described for 33a: an oil wasobtained (80% yield). 1H NMR (CDCl3): \u03b40.58\u20131.88 , 2.00\u20132.40 2), 3.28 ,3.55 , 3.71\u20133.91 , 4.60 , 7.22\u20137.42 .This compound was prepared starting from 50b following the proceduredescribed for 33a: an oil was obtained (85% yield). 1H NMR (CDCl3): \u03b4 0.58\u20131.87 , 2.18\u20132.80 2, CH2N), 3.58\u20133.82 , 4.42 , 7.21\u20137.40 .This compoundwas prepared starting from 51a followingthe procedure described for 33a: an oil wasobtained (85% yield). 1H NMR (CDCl3): \u03b42.19\u20132.73 2), 3.57\u20133.78 , 4.30 , 4.422), 7.14\u20137.40 .This compound was prepared starting from 51b following the procedure described for 33a: an oil wasobtained (82% yield). 1H NMR (CDCl3): \u03b42.10\u20132.47 2), 3.38 , 3.60\u20133.92 , 4.28, 7.12\u20137.40 .This compound was prepared starting from 52a following the procedure described for 33a: an oil wasobtained (85% yield). 1H NMR (CDCl3): \u03b42.28\u20132.85 2), 3.72\u20134.08 , 4.69 , 7.31\u20137.63.This compound was prepared starting from 52b followingthe procedure described for 33a: an oil wasobtained (80% yield). 
1H NMR (CDCl3): \u03b4 2.18\u20132.56 2), 3.55 , 3.83 , 4.00 , 4.62 , 7.30\u20137.62 . This compound was prepared starting from 53a following the procedure described for 33a: an oil was obtained (86% yield). 1H NMR (CDCl3): \u03b4 2.30\u20132.92 2), 3.68\u20134.02 , 4.62 , 7.17\u20137.38 . This compound was prepared starting from 53b following the procedure described for 33a: an oil was obtained (81% yield). 1H NMR (CDCl3): \u03b4 2.18\u20132.52 2), 3.52 , 3.81\u20134.02 , 4.54 , 7.16\u20137.36 . This compound was prepared starting from 54a following the procedure described for 33a: an oil was obtained (86% yield). 1H NMR (CDCl3): \u03b4 2.32\u20132.88 2), 3.65\u20133.99 , 4.62 , 7.20\u20137.40. This compound was prepared starting from 54b following the procedure described for 33a: an oil was obtained (87% yield). 1H NMR (CDCl3): \u03b4 2.17\u20132.52 2), 3.50 , 3.72\u20134.02 , 4.56 , 7.22\u20137.38 . This compound was prepared starting from 55a following the procedure described for 33a: a solid was obtained (84% yield). 1H NMR (CDCl3): \u03b4 2.22\u20132.82 2, CH2N), 3.60\u20133.95 , 4.62 , 7.41\u20137.69. This compound was prepared starting from 64b following the procedure described for 33a: a white solid was obtained (84% yield); mp 96\u201399 \u00b0C. 1H NMR (CDCl3): \u03b4 2.16\u20132.47 2), 3.44 , 3.77 , 3.92 , 4.55 , 7.38\u20137.65. This compound was prepared starting from 56a following the procedure described for 33a: an oil was obtained (85% yield). 1H NMR (CDCl3): \u03b4 2.28\u20132.81 2), 3.60\u20133.98 , 4.66 , 7.46\u20137.99. This compound was prepared starting from 56b following the procedure described for 33a: an oil was obtained (84% yield). 1H NMR (CDCl3): \u03b4 2.16\u20132.50 2), 3.50 , 3.70\u20134.02 , 4.60 , 7.43\u20137.95 . This compound was prepared starting from 68a. The organic layers were washed with 2 N HCl (15 mL), saturated NaHCO3 solution (15 mL), and H2O (15 mL) and then dried over Na2SO4.
Evaporation of the solvent afforded the intermediate tosyl derivative, which was used in the next step without further purification. Dimethylamine (10 mL) was added to a solution of the tosyl derivative in dry benzene (20 mL), and the mixture was heated in a sealed tube for 72 h at 110 \u00b0C. After evaporation of the solvent, the residue was dissolved in CHCl3, which was washed with 2 N NaOH and dried over Na2SO4. The solvent was concentrated in vacuo to give a residue, which was purified by column chromatography, eluting with CHCl3/CH3OH (9.5:0.5). An oil was obtained (85% yield). 1H NMR (CDCl3): \u03b4 2.32 2), 2.52 , 3.69 , 4.05\u20134.20 , 4.37 , 4.52 , 6.96\u20137.23 . Tosyl chloride was added to a stirred solution of 68a . 1H NMR (CDCl3): \u03b4 2.38 2), 2.82\u20133.12 , 3.98\u20134.18 , 4.40 , 4.67 , 6.97\u20137.25 . This compound was prepared starting from 68c following the procedure described for 69a: an oil was obtained (82% yield). 1H NMR (CDCl3): \u03b4 2.14 2), 2.18\u20132.46 , 2.99 , 3.74 , 3.98 , 4.19 , 5.11 , 5.20 , 7.10\u20137.38 . This compound was prepared starting from 74a following the procedure described for 69a: an oil was obtained (72% yield). 1H NMR (CDCl3): \u03b4 2.22 2), 2.48 , 3.78 , 3.98 , 4.22 , 4.95 , 6.71\u20137.60 . This compound was prepared starting from 74b following the procedure described for 69a: an oil was obtained (75% yield). 1H NMR (CDCl3): \u03b4 2.23\u20132.72 2), 3.40 , 3.58 , 3.99 , 5.82 , 6.87\u20137.72 . This compound was prepared starting from 56. The calculations were run, with default grids and convergence criteria, on the N-protonated forms of 2 and 33b (charge +1). Conformational searches were run with the Monte Carlo algorithm implemented in Spartan\u201918 using MMFF.
All structures thus obtained were first optimized with the DFT method using the \u03c9B97X-D functional and 6-31G(d) basis set in vacuo and then re-optimized using the \u03c9B97X-D functional and 6-31+G(d) basis set, first in vacuo and then using the SMD solvent model for acetonitrile. TDDFT calculations were run using several combinations of functionals and basis sets (def2-SVP and def2-TZVP), either in vacuo or using the IEF-PCM solvent model for acetonitrile; they included at least 16 excited states (roots). Boltzmann populations were estimated at 300 K from internal energies. ECD spectra were generated using the program SpecDis,58 by applying a Gaussian band shape with 0.25 eV exponential half-width, shifted by 15 nm and scaled by a factor of 2, from dipole-length rotational strengths. Merck molecular force field (MMFF) and DFT calculations were run with Spartan\u201918 , with standard parameters and convergence criteria. DFT and TDDFT calculations were run with Gaussian\u201916 . CHO cells stably transfected with the human muscarinic receptor subtypes (M1\u20135) were grown in Dulbecco\u2019s modified Eagle\u2019s medium (DMEM) with nutrient mixture F12 , containing 10% fetal bovine serum, penicillin (100 U/mL), streptomycin (100 U/mL), l-glutamine (4 mM), and geneticin, at 37 \u00b0C in a 5% CO2 humidified incubator. In order to harvest the cells, the culture medium was removed; the cells were washed with PBS and then trypsinized by trypsin\u2013EDTA treatment for 2\u20133 min. Serum (0.7 mL) was added to inactivate the trypsin, and the cells were spun down by centrifuging at 300g for 5 min. The cells were then resuspended in ice-cold 25 mM sodium phosphate buffer containing 5 mM MgCl2, pH 7.4 (binding buffer), and homogenized using a cell disrupter . The homogenate was sedimented by centrifugation . The supernatant was discarded, and the resulting membrane pellets were resuspended with an Ultra-Turrax in the same buffer to give a final protein concentration of 1\u20132 mg/mL.
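The Boltzmann weighting step described in the computational section above (populations estimated at 300 K from internal energies) amounts to a few lines of arithmetic; a minimal sketch with hypothetical conformer energies, not the Spartan/Gaussian workflow itself:

```python
import math

R_KCAL = 0.0019872041  # gas constant in kcal/(mol*K)

def boltzmann_populations(rel_energies_kcal, temp_k=300.0):
    """Fractional conformer populations from relative energies (kcal/mol)."""
    e_min = min(rel_energies_kcal)
    # Weight each conformer by exp(-dE/RT), referenced to the lowest energy
    weights = [math.exp(-(e - e_min) / (R_KCAL * temp_k)) for e in rel_energies_kcal]
    total = sum(weights)
    return [w / total for w in weights]

# Example: three hypothetical conformers at 0.0, 0.5 and 1.2 kcal/mol
pops = boltzmann_populations([0.0, 0.5, 1.2])
```

With these example energies the lowest-energy conformer dominates; the exponential half-width, wavelength shift and scaling applied in SpecDis are separate, purely spectral post-processing steps.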
The protein content was determined by the method of Bradford (1976) with bovine serum albumin (Sigma) as a standard, and the membranes were stored at \u221280 \u00b0C. CHO-K1 cells stably transfected with the human muscarinic receptor subtypes were incubated with radioligand and unlabeled test compounds for 2 h at r.t. Bound and free radioactivity were separated by filtering the assay mixture through UniFilter GF/B plates using a FilterMate Cell Harvester . The filter-bound radioactivity was counted by a TopCount NXT Microplate Scintillation Counter . Data (cpm) were normalized to percentage-specific binding and analyzed using a four-parameter logistic equation in GraphPad Prism 5.02; IC50 values were determined, and Ki values were calculated.59 Inhibition radioligand binding assays were conducted as previously described. Docking simulations involved the ligands with pKi values on M3 mAChR greater than 6 and the recently resolved M3 mAChR structure in complex with a selective antagonist (PDB Id: 5ZHP).52 The protein structure was completed by adding hydrogen atoms, and the ionizable groups were set to be compatible with physiological pH using the VEGA suite of programs.60 The prepared structure was finally minimized by using the NAMD program61 while keeping the backbone atoms fixed to retain the experimental folding. The structure of the considered ligands was optimized by PM7-based semi-empirical calculations.62 Docking simulations were performed by PLANTS63 by focusing the searches within an 8.0 \u00c5 radius around the bound resolved antagonist. The simulations were carried out using the ChemPLP primary score with speed equal to 1, and 10 poses were generated for each ligand. The obtained complexes were optimized by using NAMD, keeping fixed all atoms outside a 10 \u00c5 radius sphere around the docked ligand, and then rescored by ReScore+.64 The in vitro studies were performed by using bone marrow MSCs as a model.
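The binding-data analysis above (a four-parameter logistic fit of percentage-specific binding, then Ki values from IC50, presumably via the standard Cheng-Prusoff correction cited as ref 59) can be sketched as follows; the numerical values are hypothetical, and the actual fitting was done in GraphPad Prism:

```python
def four_param_logistic(x, bottom, top, ic50, hill):
    """Four-parameter logistic curve for competition binding data."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

def cheng_prusoff_ki(ic50, radioligand_conc, kd):
    """Cheng-Prusoff: Ki = IC50 / (1 + [L]/Kd); concentrations in the same units."""
    return ic50 / (1.0 + radioligand_conc / kd)

# Hypothetical example: at x = IC50 the curve sits halfway between top and bottom,
# and an IC50 of 10 nM with 1 nM radioligand (Kd = 1 nM) gives Ki = 5 nM.
halfway = four_param_logistic(10.0, 0.0, 100.0, 10.0, 1.0)
ki = cheng_prusoff_ki(10.0, 1.0, 1.0)
```

The correction matters because the measured IC50 shifts with the radioligand concentration used in the assay, while Ki is the concentration-independent affinity estimate.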
Male BALB/c mice were kept in a laminar-flow cage under standardized environmental conditions. Food and water were supplied ad libitum. Mice were sacrificed by CO2 narcosis and cervical dislocation in accordance with the recommendations of the Italian Ethical Committee and under the supervision of authorized investigators. Long bones (femurs and tibiae) were dissected and cleaned of skin, muscle, and connective tissue as much as possible. Bones were placed in a culture dish containing sterile PBS. Then, the bone cavity was flushed with DMEM using a syringe in order to collect the bone marrow cells into a 50 mL sterile tube. The procedure was repeated until all marrow was removed. The cell suspension was filtered through a cell strainer (70 \u03bcm size) to remove cell clumps and bone debris. Then, bone marrow cells were plated in 100 mm culture dishes in DMEM containing 10% heat-inactivated fetal calf serum (HIFCS), penicillin, and streptomycin. In order to obtain a population of bone marrow MSCs, the protocol by Solimani and Nadri65 was followed. Cells were incubated at 37 \u00b0C with 5% CO2 in a humidified chamber. After 3 h, the nonadherent cells that accumulated on the surface of the dish were removed by changing the medium and replacing it with fresh complete medium. After 8 h of culture, the medium was again replaced with fresh complete medium. The last step was repeated every 8 h for up to 72 h of initial culture. Then, the adherent cells were washed with sterile PBS and given fresh medium every 3\u20134 days. After 2 weeks of culture, cells were washed with PBS, detached by trypsinization, counted, and plated at a density of 5,000 cells/well in 96-well culture plates in DMEM containing 10% HIFCS, penicillin, and streptomycin. The MSCs were treated with compound 3b (from 10\u20134 to 10\u201310 M) for 24 h. Control cultures were performed by incubating the cells with vehicle only (DMSO) or by leaving the cells untreated.
Parallel other cultures were incubated with 3b from 10\u20134 to 10\u201310 M for 1 h, and then the culture medium was replaced with fresh medium. The MSCs were maintained in the presence of carbachol at 10\u201310 M for 24 h. At the end of each procedure, the MSC viability was measured by MTS assay. Specifically, cells were incubated with Cell Titer 96 Aqueous One Solution Reagent for 2 h in a humidified 5% CO2 atmosphere. The quantity of the formazan product was directly proportional to the number of living cells in culture. The colored formazan was measured by reading the absorbance at 490 nm using a plate reader."} +{"text": "This allyl-functionalized macrocycle features a deeper cavity compared to the previously reported trianglamine host molecules. Solid\u2013vapor sorption experiments verified the successful separation of 1-He from an equimolar mixture of 1-He and trans-3-He. Single-crystal structures and powder X-ray diffraction patterns suggest that this selective adsorption arises from the formation of a thermodynamically stable host\u2013guest complex between 1-He and P-TA. A reversible transformation between the nonporous guest-free structure and the guest-containing structure shows that 1-He separation can be carried out over multiple cycles without any loss of performance. Significantly, P-TA can separate 1-He directly from a liquid isomeric mixture, and thus P-TA modified silica sieves (SBA-15) showed the ability to selectively separate 1-He when utilized as a stationary phase in column chromatography. This capitalizes on the prospects of employing macrocyclic hosts as molecular recognition units in real-life separations for sustainable and energy-efficient industrial practices. The separation of \u03b1-olefins and their corresponding isomers continues to be a major challenge for the chemical industry due to their overlapping physical properties and low relative volatility.
Herein, pillar[3]trianglamine (P-TA) macrocycles were synthesized for the molecular-sieving-like separation of 1-hexene (1-He) selectively over its positional isomer trans-3-hexene (trans-3-He). Crystalline allyl-functionalized trianglamine macrocycles (P-TA) that show a pillar-like cavity were successfully prepared and employed for the robust molecular sieving of 1-He from vapor and liquid (in solution) isomeric mixtures. A report by Huang and co-workers highlighted the impact of the macrocyclic cavity and hydrogen acceptors in the sorting of positional isomers using pillar[5]arenes.32 We thus ventured to design and prepare a new generation of triangular macrocycles with a deeper cavity for higher alkene separation. Herein, we present the synthesis of crystalline allyl-functionalized trianglamine macrocycles (P-TA) that showed a pillar-like cavity and can be employed for the robust molecular sieving of 1-He from vapor and liquid isomeric mixtures. Treatment of SBA-15 with P-TA in CH2Cl2 followed by slow evaporation, washing and drying provided P-TA modified SBA-15 that can be readily used in column chromatography (as a stationary phase) for 1-He separation. To the best of our knowledge, this is the first example of a selective adsorption of 1-He over trans-3-He employing a host\u2013guest tailored molecular sieving technique in solution. Recently, we reported a series of trianglimine and trianglamine macrocycles for the adsorptive separation of hydrocarbons and haloalkanes. The structure crystallized in the P31 space group, and the asymmetric unit contained one unit each of the macrocycle and dichloromethane (DCM); this was confirmed using PLATON SQUEEZE,50 which was consistent with the presence of 0.5 [C6H12] per formula unit. Notably, 1-He was calculated to have stronger noncovalent interactions with P-TA (C\u2013H\u22ef\u03c0 and C\u2013H\u22efO interactions) compared to trans-3-He. After 8 h, the uptake of trans-3-He was suppressed by competitive adsorption (ca.
0.2 molecule per trianglamine), implying a good adsorptive selectivity for 1-He. Upon modification, the surface area and pore volume decreased from 1064.9 m2 g\u22121 to 424.6 m2 g\u22121 and from 1.55 cm3 g\u22121 to 0.76 cm3 g\u22121, respectively. Towards the selective separation of linear hexene isomers, the activated P-TA could selectively capture 1-He over trans-3-He with over 84% selectivity. The host\u2013guest complex formed between 1-He and P-TA is the thermodynamically more stable one. Compared to the parent TA, the introduction of the allyl groups in P-TA expanded its space and reshaped the cavity to selectively host longer-chain hydrocarbons. Although molecular adsorbents such as P-TA show slow kinetics in solid\u2013vapor sorption experiments, their stability and recyclability make it possible for them to be used directly as solid\u2013liquid adsorbents. Most importantly, these host macrocycles can be used as smart recognition units to improve the molecular sieving of SBA-15. The stability, recyclability and ease of fabrication and tuning of P-TA make this class of macrocycles ideal hosts for molecular-sieving-like separations of a wide range of industrially valuable isomeric compounds. Characterization data including NMR, TGA and crystal data are included in the ESI file.\u2020 Y. D. performed the major experiments. L. A., B. H., A. D., and P. Y. contributed to the characterization experiments. J. D. helped with the computational calculations. N. M. K. supervised the work and finalized the paper. There are no conflicts to declare. SC-013-D2SC00207H-s001 SC-013-D2SC00207H-s002"} +{"text": "Peripheral blood mononuclear cells (PBMCs) have shown promise as a tissue sensitive to subtle and possibly systemic transcriptomic changes, and as such may be useful in identifying responses to weight loss interventions.
The primary aim was to comprehensively evaluate the transcriptomic changes that may occur during weight loss and to determine if there is a consistent response across intervention types in human populations of all ages. Included studies were randomised controlled trials or cohort studies that administered an intervention primarily designed to decrease weight in any overweight or obese human population. A systematic search of the literature was conducted to obtain studies, and gene expression databases were interrogated to locate corresponding transcriptomic datasets. Datasets were normalised using the ArrayAnalysis online tool and differential gene expression was determined using the limma package in R. Over-represented pathways were explored using the PathVisio software. Heatmaps and hierarchical clustering were utilised to visualise gene expression. Seven papers met the inclusion criteria, five of which had raw gene expression data available. Of these, three could be grouped into high responders (HR) and low responders (LR). No genes were consistently differentially expressed between high and low responders across studies. Adolescents had the largest transcriptomic response to weight loss, followed by adults who underwent bariatric surgery. Seven pathways were altered in two out of four studies following the intervention, and the pathway \u2018cytoplasmic ribosomal proteins\u2019 (WikiPathways: WP477) was altered between HR and LR at baseline in the two datasets with both groups. Pathways related to \u2018toll-like receptor signalling\u2019 were altered in the HR response to the weight loss intervention in two out of three datasets. Transcriptomic changes in PBMCs do occur in response to weight change. Transparent and standardised data reporting is needed to realise the potential of transcriptomics for investigating phenotypic features. PROSPERO: CRD42019106582. The online version contains supplementary material available at 10.1186/s12263-021-00692-6.
There has been a global increase in the prevalence of obesity over the last 40 years. This rise has been challenging to abate, despite the development of a variety of specific interventions at the individual level and attempts to shift food and exercise patterns at the public health level . Our lack of understanding of how subtle differences in physiology could play a role in treatment response is a barrier to personalising approaches that would optimise an individual\u2019s outcome . Peripheral blood mononuclear cells (PBMCs) have shown promise as a tissue of exploration in obesity research as they are exposed to a range of metabolites from the diet and resulting from physiological changes in multiple tissues . This review sought to explore global gene expression changes in PBMCs before and after a weight loss intervention in human populations of all ages. This review considered randomised controlled trials and cohort studies that administered an intervention primarily designed to decrease weight in any human population with overweight or obesity. The primary aim was to comprehensively evaluate the transcriptomic changes that may occur during weight loss and to determine if there is a systematic response across intervention types. Secondly, by locating primary data, we aimed to assess the gene expression differences between participants who respond differently to the intervention (high versus low weight loss) to elucidate any potential patterns of transcriptomic response that differ between high and low responders. This review was prospectively registered with PROSPERO (No. CRD42019106582). A literature search was conducted in July 2019 with no date limits. The interventions must have included a weight loss component and measured global gene expression (either through microarray technology or RNA sequencing) in PBMCs at baseline and after the intervention as an outcome.
Inclusion of a control group was not required. Studies were included that reported original research conducted in humans of any age, classified as having overweight or obesity (by BMI or an equivalent measure). Where datasets were not publicly available, the corresponding authors were contacted to provide the raw data. If the raw gene expression data were not available, normalised individual-level gene expression data were obtained. Studies were excluded from the quantitative synthesis if they had < 50% of the sample-level gene expression data available as either raw or normalised gene expression data, or were completed > ten years ago. Authors were contacted to provide individual-level data relating to weight outcomes so that individuals could be re-grouped into HR and LR for each intervention. HR were defined as individuals who lost \u2265 5% body weight over the intervention period . The available raw microarray data underwent quality control checks and were normalised using ArrayAnalysis, a standardised pipeline . Significantly differentially expressed transcripts were determined by linear modelling of normalised data using the limma package in R , with significance defined as an adjusted p value < 0.05. Paired analysis was utilised to compare baseline and post-intervention gene expression levels within participants, and unpaired analysis was utilised to compare baseline HR and LR gene expression levels. Heatmaps and hierarchical clustering were generated using Euclidean distance . Overrepresented pathways were identified using the PathVisio software (version 3.3.0), which utilises the WikiPathways database , 20. Pathways were considered overrepresented when the z-score was \u2265 1.96 and at least five genes within the pathway had an unadjusted p value < 0.05 . One included study reported pathways related to cardiomyopathy altered in HR compared to LR; in that study, LR were defined as maintaining or increasing BMI standard deviation score (BMI SDS) over the intervention period.
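The pathway-calling rule stated above (a z-score of at least 1.96 with at least five member genes at unadjusted p < 0.05) reduces to a simple predicate; a sketch only, not the PathVisio implementation:

```python
def pathway_overrepresented(z_score, member_gene_pvalues,
                            z_cut=1.96, p_cut=0.05, min_sig_genes=5):
    """A pathway counts as over-represented when its z-score reaches z_cut
    and at least min_sig_genes member genes have unadjusted p < p_cut."""
    n_sig = sum(1 for p in member_gene_pvalues if p < p_cut)
    return z_score >= z_cut and n_sig >= min_sig_genes

# Hypothetical p values: five nominally significant genes pass, four do not
call_a = pathway_overrepresented(2.10, [0.01, 0.02, 0.03, 0.04, 0.049, 0.2])
call_b = pathway_overrepresented(2.10, [0.01, 0.02, 0.03, 0.04, 0.6])
```

Requiring several nominally significant genes in addition to the z-score guards against a single extreme gene driving the pathway call.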
Of the seven included articles, six reported significant changes in PBMC gene expression levels in response to a weight loss intervention , 23\u201327. After searching online data repositories and contacting authors, raw gene expression data were obtained for three studies , 24, 26. After correction for multiple testing, no transcripts were significantly differentially expressed in PBMCs after the intervention in Harvie et al. or one further dataset (adj p > 0.05). Of the two remaining datasets, 828 transcripts were significantly differentially expressed (adj p < 0.05) in PBMCs after the intervention in Rendo-Urteaga et al. and 28 transcripts in the other , 27. Two articles provided individual-level weight change data, and individual subjects were then grouped into high (n = 13) and low (n = 17) responders , 24. For Rendo-Urteaga et al., LEPR (leptin receptor) expression levels were lower in PBMCs of HR compared with LR . There were no significantly differentially expressed transcripts between HR and LR at baseline for Harvie et al. . Harvie et al. reported changes in TNF, IL1B, IL6, CCL3 and CCL4 in HR, but a small, non-significant upregulation of TLR genes . The adolescent cohort showed a larger transcriptomic response compared with adults 6 months after bariatric surgery (mean weight loss: \u2212 28.8 kg) , 24. A stunted metabolic response to a stimulus such as weight loss demonstrates an inadequate ability to respond in a systematic and coordinated way . Differentially expressed genes and pathways were considered separately, using different criteria, as they provide different levels of information.
Robust cut-offs are required to establish whether the transcription levels of a single gene significantly change with weight loss or between responder groups, whereas the clustering of genes on a given pathway is less likely to happen by chance, and their cumulative change, whilst small in each instance, may collectively be biologically meaningful. The pathways \u2018toll-like receptor signalling\u2019 and \u2018regulation of toll-like receptor (TLR) signalling\u2019 (WP1449 and WP75, respectively) were enriched after the interventions in HR for Harvie et al. and Rendo-Urteaga et al. , 24. The pathway \u2018cytoplasmic ribosomal proteins\u2019 (WP477) was differentially expressed between HR and LR in both Rendo-Urteaga et al. and Harvie et al., with genes relating to this pathway generally decreased in HR in Harvie et al. and gene activity more varied in Rendo-Urteaga et al.\u2019s HR , 24. Differences in intervention response both within and across studies highlight inconsistencies in gene expression responses to weight loss interventions. This raises the question of whether an HR to one intervention would necessarily be an HR in another. In order to work towards utilising these data for therapeutic use, we need to standardise biological material collection, reporting and data pooling. One limitation of pathway analysis is the presence of pathways with overlapping or similar functions, allowing genes to be represented on multiple pathways, which may lead to over-representation of genes of interest. Nevertheless, modulation of functionally similar pathways can indicate shifts in expression of broader biological functions. The lack of commonality in response across studies may partially be explained by the high heterogeneity amongst participants, study designs and differences in the number of subjects in each subgroup.
Whilst study design differences introduce variability, they also allow for the exploration of whether transcriptomic changes in PBMCs in response to weight loss are conserved across a range of individuals and intervention designs, which is a strength of this review. It appears in the included studies that transcriptomic changes with weight loss are not consistently conserved. High heterogeneity amongst included studies has also allowed for the exploration of transcriptomic data within nutrition research in such a way that accommodates the inevitable variability within datasets, and this approach could be applied with the inclusion of future studies. This is critical given the known high individual variability in response to dietary interventions. As was demonstrated, gene expression responses to the intervention were different between HR and LR, which may mask effects when assessing the transcriptomic response of the group as a whole. Obesity itself is a complex and heterogeneous condition with the potential for complications to arise in any tissue with only partially overlapping pathophysiology . PBMCs are a heterogeneous cell population, which in itself introduces variability . Transcriptomic analysis shows promise in investigating phenotypic features that could be used to develop group-specific strategies. To achieve this, data reporting must be transparent and standardised. For example, a limitation in this review is that not all included transcriptomic datasets could be analysed alongside participant-level weight data. The use of multiple datasets is important as it has enabled the capture of variability and commonality across studies which cannot be seen when assessing single studies. Next steps involve meta-analytic techniques that require raw gene expression data together with relevant phenotypic data, in order to address variability in transcriptomic responses.
There is, therefore, a need for reporting standards for nutrigenomic studies that include detailed guidelines on reporting for collection, analysis and open-access availability of raw data and phenotypic outcomes. The recent OBEDIS guidelines take a first step towards standardisation of obesity research with the core variables required for weight loss interventions and are a stepping stone towards international standardisation within obesity research, including omics technologies such as transcriptomics . In conclusion, this review shows that transcriptomic shifts in PBMCs do occur in response to weight loss. These shifts appear to be variable and, to date, present an inconsistent picture; however, variability itself may be a useful indicator of metabolic health, and further exploration of this is needed. An integral part of moving this area of research forward lies in developing reporting standards that require transparency in method reporting and open access to transcriptomic and phenotypic data. Any move towards personalised weight management needs to be underpinned by a comprehensive understanding of the biological variation of obesity and treatment response. Additional file 1: Quality and risk of bias assessment for included studies. Quality assessment tool and risk of bias assessment of included studies. If \u201cYes\u201d was answered for eight or more questions, studies were designated Positive; if eight or more answers were \u201cNo\u201d, studies were designated Negative; otherwise, studies were designated Neutral. Additional file 2: Table of excluded studies. Table of studies excluded from the systematic review after full-text screening, with reasons for exclusion. Additional file 3: Significant gene lists for included studies. Tables of genes significantly differentially expressed in response to the intervention and between HR and LR at baseline for the studies which yielded significant genes. Additional file 4: Pathway analysis tables.
Pathways overrepresented in included studies when comparing differentially expressed genes (unadjusted p < 0.05) in PBMCs between baseline and post-intervention and when comparing high and low responders at baseline (HR = reduction in body weight of \u226510%). Table S1. Pathways overrepresented when comparing differentially expressed genes (unadjusted p < 0.05) between baseline and post-intervention. Table S2. Pathways overrepresented when comparing differentially expressed genes (unadjusted p < 0.05) between high and low responders at baseline (HR = >10% body weight loss over the intervention period). Table S3. Pathways overrepresented in HR and LR responses to the intervention. Additional file 5: PathVisio diagrams of the pathway \u201ccytoplasmic ribosomal proteins\u201d for the comparisons in which this pathway was significantly enriched. Figure S1. Harvie et al.\u00a0Gene expression differences for the wikipathway \u2018cytoplasmic ribosomal proteins\u2019 at baseline compared to post-intervention. Figure S2. Harvie et al. Gene expression differences for the wikipathway \u2018cytoplasmic ribosomal proteins\u2019 in high versus low responders at baseline. Figure S3. Rendo-Urteaga et al. Gene expression differences for the wikipathway \u2018cytoplasmic ribosomal proteins\u2019 in high versus low responders at baseline. Additional file 6: PRISMA checklist for systematic literature reviews. Description of data: PRISMA checklist for standardised reporting and minimum requirements for systematic literature reviews."} +{"text": "In recent years, there has been a clear trend toward personalized therapy procedures in patients with thyroid cancer, with the aim to avoid unnecessary overtreatment of patients and to ensure an improved quality of life. We confirmed that early diagnostic control at 6 months after initial radioiodine therapy shows no significant disadvantages compared to a delayed control after 9 months.
Further, it was observed that patients stimulated by hormone withdrawal before radioiodine therapy had significantly better outcomes compared to patients stimulated exogenously with recombinant human thyroid-stimulating hormone (rhTSH). However, early diagnostic control after TSH stimulation represents the most balanced solution for the patient, specifically regarding hypothyroidism symptoms after hormone withdrawal. Background: The aim was to assess ablation success after initial radioiodine (RAI) therapy in early-stage PTC patients and compare outcomes of first diagnostic control after 6 and 9 months (6m/9m-DC) to examine whether timing could possibly avoid unnecessary overtreatment. Methods: There were 353 patients who were matched regarding age, sex, and tumor stage and divided into two groups depending on the time of first DC (6m- and 9m-DC). Therapy response was defined as a thyroglobulin level <0.5 ng/mL, no pathological uptake in the diagnostic I-131 whole-body scintigraphy (WBS), and no further RAI therapy courses. The 6m-DC group was further divided into endogenously and exogenously TSH-stimulated patients before RAI therapy and compared regarding outcome. Results: No significant differences were found between 6m-DC vs. 9m-DC regarding I-131 uptake in WBS (p = n.s.), Tg levels (p = n.s.), re-therapy rates (p = n.s.), and responder rates (p = n.s.). Significantly less relevant pathological I-131 uptake was found in WBS (p = 0.006) in endogenously compared to exogenously stimulated 6m-DC patients, resulting in lower re-therapy (p = 0.028) and higher responder rates (p = 0.001). Conclusion: DC at 6 months after RAI therapy and stimulation with recombinant human thyroid-stimulating hormone (rhTSH) represent the most balanced solution. Particularly regarding quality of life and mental relief of patients, early DC with rhTSH represents a sufficient and convenient assessment of ablation success. Differentiated thyroid carcinoma (DTC) is the most common malignant endocrine neoplasm, with increasing incidence worldwide . Papillary thyroid carcinoma (PTC) is its most common subtype. To evaluate ablation success, a diagnostic control, including an I-131 whole-body scintigraphy (WBS) under TSH stimulation, neck ultrasound, and laboratory examination, is recommended, which should be performed 6 to 12 months after RAI therapy (first diagnostic control (DC)), depending on the respective guidelines. The optimal time to perform DC to evaluate ablation success after initial radioiodine therapy in DTC patients is widely discussed among experts and results in different recommendations in the corresponding German, British, and American guidelines ,6. The aim of our study was to assess ablation success after initial RAI therapy in early-stage PTC patients receiving a \u201clow-dose\u201d radioiodine therapy of approx. 2000 MBq (54 mCi) I-131 and to compare the outcome of first diagnostic control after 6 months and after 9 months in order to examine whether timing could possibly avoid unnecessary overtreatment of patients. We retrospectively reviewed 1693 consecutive patients with DTC from our institutional database. Only patients with early-stage PTC (pT1a/pT1b/pT2, N0, R0, M0) were included in this study, who underwent total thyroidectomy followed by RAI therapy according to German guidelines . All patients either received rhTSH (Thyrogen\u00ae, Sanofi Genzyme, Cambridge, MA, USA) i.m. on 2 consecutive days or underwent hormone withdrawal prior to RAI therapy to achieve TSH levels \u226530 \u00b5U/mL, according to current guideline recommendations . Follow-up examinations, including a cervical ultrasound and laboratory examination in patients with inconspicuous follow-up, were usually performed every 3 months in the first year, every 6 months in the second year, and annually thereafter.
At the first diagnostic control (DC) at 6 or 9 months after initial RAI therapy, in addition to laboratory measurements, a diagnostic I-131 whole-body scintigraphy (WBS) was performed approximately 72 h after application of approx. 370 MBq I-131 (10 mCi) in hypothyroidism (TSH levels \u2265 30 \u00b5U/mL) or after administration of rhTSH i.m. on 2 consecutive days. I-131 uptake in the WBS was assessed retrospectively by experts; four experienced nuclear medicine physicians re-evaluated I-131 uptake in the WBS without knowledge of the medical reports. Tg and Tg recovery were measured under stimulation, respectively, 3 days after the last rhTSH injection. Neck ultrasonography was performed in all patients but not further evaluated in this study. Patients were classified as responders to adjuvant RAI therapy if stimulated Tg levels were lower than 0.5 ng/mL and no pathological cervical or distant I-131 uptake was seen in the WBS. In contrast, if Tg was >0.5 ng/mL or pathological uptake was detected in the WBS leading to an additional RAI therapy, the first adjuvant RAI therapy was considered inadequate.

To compare the outcome of ablation success in PTC patients at different time points, patients were first divided into two groups depending on the time of first DC. Patients of an historical, institutional collective receiving first DC at 6 months after initial RAI therapy (6m-DC) were compared to a newer patient cohort receiving first DC at 9 months after initial RAI therapy (9m-DC). Patients were matched regarding age, sex, and tumor stage. Because patients were not homogeneously distributed with regard to TSH stimulation at initial RAI therapy in the 6m-DC group, a subgroup analysis was performed comparing endogenous vs. exogenous TSH stimulation (6m-DC-endo vs. 6m-DC-exo).
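The responder definition above is a simple conjunction of two criteria. A minimal sketch in Python, with illustrative function and argument names (the 0.5 ng/mL threshold is the one stated in the text):

```python
def classify_rai_response(stimulated_tg_ng_ml, pathological_uptake):
    """Classify ablation success at first diagnostic control (DC).

    Mirrors the criteria stated in the text: a patient counts as a
    responder when the stimulated Tg level is below 0.5 ng/mL AND the
    I-131 whole-body scintigraphy (WBS) shows no pathological cervical
    or distant uptake; otherwise the first adjuvant RAI therapy is
    considered inadequate.
    """
    if stimulated_tg_ng_ml < 0.5 and not pathological_uptake:
        return "responder"
    return "non-responder"
```

For example, `classify_rai_response(0.2, False)` yields `"responder"`, while any pathological uptake or a Tg of 0.5 ng/mL or more yields `"non-responder"`.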
Since the newer patient cohort was exclusively stimulated by rhTSH for RAI therapy, the 9m-DC (9m-DC-exo) group consisted only of patients receiving the initial RAI therapy under exogenous stimulation, whereas the 6m-DC group consisted of both endogenously (6m-DC-endo) and exogenously stimulated patients (6m-DC-exo).

All continuous variables were expressed as mean \u00b1 standard deviation (SD). The Mann\u2013Whitney U test was used to compare metric variables. A Chi-squared test was used to compare categorical variables. All analyses were performed using SPSS software .

In this study, 353 patients were included. The mean age at diagnosis was 48.5 \u00b1 13.7 years. Most patients had pT1a(m)-stage or pT1b-stage, followed by pT2-stage. The histological subtype was classical PTC in 69% of patients (n = 245/353) and the follicular variant of PTC in 31% (n = 108/353). Multifocality was observed in 46% of patients (n = 161/353), whereas in 52% of patients only one lesion was detected (n = 185/353); in seven patients, no information on multifocality was found. Mean tumor size was 13.81 \u00b1 8.29 mm; data on tumor size were missing for 12 patients. Most of the patients were stimulated with rhTSH.

In 204/353 (58%) patients, DC was evaluated at 6 months after initial RAI therapy (6m-DC) and in 149/353 (42%) patients at 9 months after initial RAI therapy (9m-DC). Both groups were matched regarding age (p = 0.181), sex (82% in 6m-DC (167/204) vs. 73% in 9m-DC (109/149), p = 0.050), and tumor stage (T1a(m)/T1b-stage: 80% in 6m-DC (163/204) vs. 78% in 9m-DC (116/149), p = 0.640). All 9m-DC patients were stimulated with rhTSH at initial RAI therapy, whereas only around half of the 6m-DC patients received rhTSH at initial RAI therapy (56% in 6m-DC (114/204) vs. 100% in 9m-DC (149/149), p = 0.001). Stimulated Tg level before RAI therapy was comparable in both groups . Patient characteristics of the group analysis are demonstrated in
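The two tests named above can be reproduced outside SPSS. A hedged sketch using SciPy: the age lists are invented, and the 2 × 2 table reuses the re-therapy counts quoted in the text purely as an illustration, not as a re-analysis:

```python
from scipy.stats import mannwhitneyu, chi2_contingency

# Invented example data: a metric variable (e.g. age) in two groups,
# compared with the Mann-Whitney U test as described in the text.
ages_6m = [45, 52, 38, 60, 47, 55]
ages_9m = [50, 41, 58, 49, 62, 44]
u_stat, u_p = mannwhitneyu(ages_6m, ages_9m)

# 2x2 contingency table [no additional RAI, additional RAI] per group,
# compared with a chi-squared test (counts taken from the text).
table = [[176, 28], [118, 31]]
chi2, chi_p, dof, expected = chi2_contingency(table)

print(round(u_p, 3), round(chi_p, 3))
```

Both calls return a test statistic and a p-value; `chi2_contingency` additionally returns the degrees of freedom and the expected counts under independence.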
At initial DC, no significant differences were found among groups regarding Tg responder rates after TSH stimulation (90% in 6m-DC (184/204) vs. 88% in 9m-DC (131/149)). Furthermore, non-pathological cervical or distant I-131 uptake in the WBS based on expert opinion was somewhat lower in the 9m-DC group, but without reaching statistical significance (84% in 6m-DC (172/204) vs. 78% in 9m-DC (116/149), p = 0.112). Overall, in the clinical routine, an additional RAI therapy cycle was not performed in nearly the same proportion of patients in both groups, albeit with a slight trend toward a worse outcome in the 9m-DC group (86% in 6m-DC (176/204) vs. 79% in 9m-DC (118/149), p = 0.078), resulting in similar overall responder rates (75% in 6m-DC (152/204) vs. 67% in 9m-DC (100/149), p = 0.129).

In the subgroup analysis, 90/204 (44%) patients were included in the 6m-DC-endo group and 114/204 (56%) patients in the 6m-DC-exo group. Subgroups were matched regarding age (p = 0.920), sex (81% in 6m-DC-endo (73/90) vs. 83% in 6m-DC-exo (94/114), p = 0.804), and tumor stage (pT1a(m)/pT1b-stage: 79% in 6m-DC-endo (71/90) vs. 81% in 6m-DC-exo (92/114), p = 0.748). Stimulated Tg level before RAI therapy was comparable in both groups . Patient characteristics of the subgroup analysis are demonstrated in

Non-pathological I-131 uptake in the WBS was significantly more frequent in the 6m-DC-endo group compared to the 6m-DC-exo group (92% in 6m-DC-endo (83/90) vs. 78% in 6m-DC-exo (89/114), p = 0.006). The same was observed for re-therapy rates: 6m-DC-endo patients needed further RAI therapy cycles significantly less often (no re-therapy in 92% in 6m-DC-endo (83/90) vs. 82% in 6m-DC-exo (93/114), p = 0.028). The overall responder rates at the first DC were consequently significantly higher in the 6m-DC-endo group compared to the exogenously stimulated group (87% in 6m-DC-endo (78/90) vs.
65% in 6m-DC-exo (74/114), p = 0.001). At initial DC, Tg responder rates after TSH stimulation were comparable in both groups (94% in 6m-DC-endo (85/90) vs. 87% in 6m-DC-exo (99/114)).

To our knowledge, this is the first study comparing ablation success (DC) after initial RAI therapy at 6 months versus 9 months in early-stage PTC patients. Due to the lower activity doses applied in early-stage PTC patients, we hypothesized that early DC could reveal tissue not yet fully ablated within the short period of time, resulting in higher non-responder rates after initial RAI therapy, and that DC should therefore be delayed. However, our results showed no significant differences at first DC between both groups (6m-DC vs. 9m-DC) regarding I-131 uptake in the WBS, Tg levels, re-therapy rates, and overall responder rates. This finding is crucial in the clinical routine, since the possibility to perform after-care examinations at an earlier point in time enables patients to gain emotional and mental relief earlier. Furthermore, for women of childbearing age, early completion of therapy may have a significant impact due to the strict contraception required for a certain period after RAI therapy.

Due to the non-homogeneous distribution of patients regarding TSH stimulation at initial RAI therapy in the 6m-DC group, we performed a subgroup analysis comparing endogenously vs. exogenously stimulated TSH before initial RAI therapy. We found significantly fewer patients with a relevant I-131 uptake in the WBS in the group undergoing thyroid hormone withdrawal (6m-DC-endo) compared to patients with exogenously stimulated TSH (6m-DC-exo) before RAI therapy, resulting in significantly lower re-therapy and consequently higher responder rates in these patients.

As the effectiveness of RAI therapy is presumably dependent on the TSH level, TSH stimulation increases the I-131 uptake and, therefore, plays a major role in RAI therapy. The use of rhTSH to stimulate the uptake of I-131 in non-metastasized DTC patients was already established and approved by the European Medicines Agency (http://www.ema.europa.eu, accessed on 4 January 2022) in 2005 and by the United States Food and Drug Administration in 2007; its comparability to hormone withdrawal was already shown in previous studies .

vs. high-dose (3.7 GBq) RAI therapy, though with higher success rates compared to our results in both the rhTSH-stimulated group (87%) and the hormone withdrawal group (87%) . This fi

A study by Carvalho et al. reported only a marginal risk of recurrence in 1420 DTC patients if patients showed negative WBS and Tg levels at 12 months after RAI therapy .

However, thyroid hormone withdrawal is associated with many adverse effects and can significantly affect quality of life ,19. Poss

Regarding Tg responder rates, in the subgroup analysis comparing 6m-DC-endo with 6m-DC-exo, our results showed comparable rates, though a trend toward higher Tg responder rates in the endogenously stimulated group was observed (6m-DC-endo 94% vs. 6m-DC-exo 87%). However, long-term data with an observation period of 10 years demonstrated equal survival time regardless of TSH stimulation by rhTSH or thyroid hormone withdrawal.

In this study, only patients with early-stage PTC were included, and therapy was performed according to German guidelines including total thyroidectomy and RAI therapy . It must

There are some limitations to this study. Firstly, there may have been a selection bias because of the retrospective design. Secondly, patients of an historical, institutional collective were compared to a newer patient cohort; therefore, a corresponding endogenously stimulated patient cohort with DC at 9 months after RAI therapy is consequently missing.
Furthermore, it should be emphasized that the results of this early-stage patient cohort cannot be transferred to patients with high-risk PTC.

Thyroglobulin antibodies are only present in a minority of the patients and, therefore, could not be included in the analysis in a convincing way. This is indeed a limitation of the study, which cannot be overcome. However, although the recovery is less sensitive than the direct measurement of the antibodies, we are convinced that an undisturbed recovery adds confidence to the validity of the measured thyroglobulin. The diagnostic impact of cervical ultrasound was not assessed.

Our overall results led to the conclusion that early DC at 6 months after RAI therapy with rhTSH represents the most balanced solution, weighing the possible additional information provided by DC after withdrawal of thyroid hormone against the symptoms of hypothyroidism. Particularly with regard to quality of life and mental relief of patients, early DC after stimulation with rhTSH represents a sufficient and convenient assessment of ablation success."} +{"text": "Introduction
Breast cancer is the most common cancer among women worldwide and one of the main causes of death in the female sex. Genetic polymorphisms in the mu-opioid receptor (OPRM1) and catechol-o-methyltransferase (COMT) genes have been shown to increase breast cancer risk. Variants in these genes may carry a prognostic impact in breast cancer.
Long follow-up intervals are critical to adequately analyze prognosis in diseases with prolonged survival times and late relapses.

Objective
To analyze the impact of genetic polymorphisms on the survival of a cohort of breast cancer patients with very long follow-up.

Methods
This was a retrospective study of patients treated at the Portuguese Oncology Institute of Porto (IPO Porto), a Portuguese comprehensive cancer center, with invasive carcinoma of the breast and very long follow-up, with analysis of the genetic polymorphisms OPRM1 rs1799971 and COMT rs4680 on biological samples. Statistical analysis of survival was performed using the Kaplan-Meier method, log-rank test, and Cox regression method.

Results
A total of 143 patients with invasive breast cancer were included, with a median follow-up of 21.5 years. There was a statistically significant difference in overall survival (OS) at 30 years according to the OPRM1 polymorphism, with lower survival in patients with the AA genotype (p<0.05). The difference in OS according to the COMT polymorphism was also statistically significant, with worse survival in patients carrying the T allele (p<0.05). The genetic variants were not associated with patient age, stage at diagnosis, or tumor grade.

Discussion
The genetic polymorphisms of OPRM1 and COMT affected the overall survival of breast cancer patients, in concordance with previous research. Further investigation is needed in order to clarify the prognostic impact of these genetic alterations on breast cancer.

Breast cancer is the most common type of cancer diagnosed in women worldwide, with 2.3 million new diagnoses in 2020. It still represents one of the main causes of death in females. In 2020, there were 7.8 million breast cancer patients globally, thus representing the most prevalent tumor in the world -2.

Prognostic definition in breast cancer is essential for the accurate choice of therapy and in order to inform patients\u2019 expectations.
Prognosis is\u00a0the sum of clinical aspects such as patient age, lymph node status, and tumor size; histological features, namely grade and lymphovascular invasion; and division into molecular subtypes by hormone receptor status, HER2 status, and proliferative index (Ki67). More recently, tumoral genetic profiling has added prognostic information to traditional classifications .

The interaction between our genes and the environment is recognized as modulating many aspects of biology, but many of the implications of this interdependence are not yet understood. Several genetic alterations affect cancer susceptibility and may have predictive and prognostic value in cancer.

There are three subtypes of opioid receptors\u00a0to which exogenous and endogenous opioids bind: mu, delta, and kappa. The mu-opioid receptor is fundamental in pain modulation, feelings of reward in drug abuse, and adverse effects of opioid drugs\u00a0. It is e

The enzyme catechol-o-methyltransferase (COMT) plays a crucial role in the catecholaminergic system, metabolizing several fundamental substances, such as dopamine and estrogen, and regulating pain perception . Older s

Little is known about how individual genetic polymorphisms affect the oncologic outcome in breast cancer or about the clinical relevance of key gene variants. The polymorphisms of the OPRM1 and COMT genes may influence survival and treatment response in this disease. A previous study found a lower mortality rate in patients with the OPRM1 polymorphism carrying the G allele . Prior r

Earlier detection and more effective treatments\u00a0have increased the life expectancy of breast cancer patients. Long-term follow-up studies add valuable information on the survival of diseases with long overall survival and disease-free survival,\u00a0such as breast cancer. Additionally, late relapses of breast cancer are frequent, with relevant recurrence rates 10 years after diagnosis.
Follow-up intervals of over 20 years may be key for establishing the prognostic factors in this disease .

The goal of the present study was to analyze the impact of these genetic polymorphisms on the survival of a cohort of breast cancer patients with a very long follow-up.

Population
A retrospective study was carried out, selecting consenting adult patients of a Portuguese comprehensive cancer center with histologically confirmed invasive carcinoma of the breast and over 12 years of follow-up (diagnosis between 1979 and 2009), with analysis of clinical records (paper and electronic). All patients were treated at the Portuguese Oncology Institute of Porto (IPO Porto), Portugal.

Exclusion criteria consisted of the absence of informed consent, lack of access to clinical records, and unavailability of biological samples.

Patients' clinical characteristics and data on treatments, response, relapse, and survival were obtained from medical records. The staging was made uniform through the American Joint Committee on Cancer (AJCC) 5th edition system, in accordance with the data available.

Biological samples and genotype selection
Peripheral venous blood samples of the patients were obtained using a standard technique and collected in ethylenediaminetetraacetic acid (EDTA)-containing tubes. Genomic DNA was extracted using the Qiagen\u00ae QIAmp DNA Blood MiniKit, as indicated by the manufacturer\u2019s procedure.

All samples were obtained with the informed consent of the participants prior to their inclusion in the study, according to Helsinki Declaration principles and after approval of the study by the Portuguese Institute of Oncology ethics committee (CES-IPO: 233/2017).

The most common genetic polymorphisms of OPRM1 and COMT according to the available literature were analyzed, namely, OPRM1 rs1799971 and COMT rs4680.
Genotyping of the selected genetic variants was conducted using the TaqMan\u00ae Allelic Discrimination methodology in a real-time polymerase chain reaction system . The procedures for real-time PCR reactions and amplifications were conducted according to the manufacturer\u2019s protocol. To guarantee the quality of SNP genotyping, two negative controls were included in each amplification reaction to prevent false positives, and double sampling was conducted in at least 10% of randomly chosen samples, with an accuracy above 99%. The genotyping results were individually validated by two researchers with no previous knowledge of the patients\u2019 clinicopathological data.

Statistical analysis
Assessment of the association between genetic polymorphisms and patients\u2019 clinicopathological characteristics was performed using the chi-square test (\u03c72) for categorical variables.

The overall survival (OS) and OS at 30 years were defined as the time from the date of diagnosis to the date of death and the percentage of patients alive 30 years after diagnosis, respectively. The disease-free survival (DFS) time was defined from the date of diagnosis to the date of disease recurrence. Patients without disease relapse or those lost to follow-up were censored at their last date of record. Statistical analysis was carried out with SPSS software . Analysis of survival was performed using the Kaplan-Meier method and the log-rank test, as well as the Cox regression method to calculate the hazard ratio (HR) and 95% confidence intervals (CI) for the association between the genotypes and risk of death, with adjustments according to previous treatment with endocrine therapy. A level of p<0.05 was considered statistically significant.

In this study, 143 patients were included, all adult women\u00a0diagnosed with invasive carcinoma of the breast between 1979 and 2009, with a median follow-up of 21.5 years.
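The Kaplan-Meier method named above can be sketched in a few lines of plain Python. This is an illustration of the product-limit estimator under censoring, not the authors' SPSS analysis, and the follow-up data are invented:

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) estimate of the survival function.

    times  -- follow-up time for each patient
    events -- 1 if death was observed at that time, 0 if censored
    Returns a list of (event_time, survival_probability) pairs.
    """
    event_times = sorted({t for t, e in zip(times, events) if e == 1})
    surv, curve = 1.0, []
    for t in event_times:
        # patients still at risk just before time t
        at_risk = sum(1 for ti in times if ti >= t)
        deaths = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        surv *= 1.0 - deaths / at_risk
        curve.append((t, surv))
    return curve

# Invented follow-up data (years): three deaths, two censored patients.
print(kaplan_meier([1, 2, 2, 3, 4], [1, 1, 0, 1, 0]))
```

At each observed death time the running survival probability is multiplied by (1 - deaths / at-risk), which is exactly how censored patients contribute to the risk set without counting as events.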
Patient characteristics are described in Table .

Tumor staging was compatible with T1 in 31%, T2 in 38%, T3 and T4 in 7% each, and not reported in 17%; 48% had no affected nodes and 46% were N+; nodal involvement was not reported in 6%. Tumor grade was G1 in 7%, G2 in 23%, G3 in 14%, and not reported in 55%. Only 3% of patients received neoadjuvant chemotherapy; 48% of patients were submitted to adjuvant chemotherapy, of which 6% was based on anthracycline and taxane. Three patients with known HER-2-positive disease were treated with adjuvant trastuzumab. Adjuvant radiotherapy was carried out in 54% of patients. Endocrine therapy was not employed in 35% of patients, whereas 33% were treated with adjuvant tamoxifen, 11% with an aromatase inhibitor, and 8% with tamoxifen followed by a switch to an aromatase inhibitor.

Relapse was identified in 50% of patients in this cohort: 13% consisted of local relapse, 34% of distant metastasis, and 4% were diagnosed with both. Treatment was multimodal in most cases, including hormone therapy.\u00a0To date, 57% of patients have died, 64% of them due to breast cancer. Median OS was 257 months (21.4 years); OS at 30 years was 51.7%. Median DFS was 239 months (19.9 years).\u00a0There was no association between the genotypes and tumor grade, stage at diagnosis, or patient age at diagnosis (p>0.05).

In our study, there were no statistically significant differences in disease-free survival according to any of the polymorphisms (p>0.05).\u00a0There was a statistically significant difference in OS at 30 years according to the OPRM1 polymorphism, with lower survival in patients with the AA genotype (p=0.006) and an HR of 3.30 . The difference in OS according to the COMT polymorphism was also statistically significant, with HR = 2.1 (Figure ).

In our study, the genetic polymorphisms of OPRM1 and COMT affected the overall survival of breast cancer patients.
The AA genotype for OPRM1 conferred a 3.3 times higher risk of death at 30 years, and the presence of the T allele increased the risk of death 2.1 times at 30 years, regardless of adjuvant hormone therapy. The genotypes were not associated with patient age, stage at diagnosis, or tumor grade. These findings demonstrate the current extent of our ignorance on the prognostication of breast cancer, despite all the previous advances.

Due to the increasing survival rates of breast cancer patients, longer follow-up studies are required to shed light on prognostic data. Our paper represents a unique study\u00a0that travels back in time through biological samples and unveils data on patients with long follow-up times. Real-world data from different geographic locations are necessary in order to fully grasp the implications of genetic variants in diverse populations. Our study population, of European ancestry, contrasts with a predominance of genetic studies of these polymorphisms on patients of Asian descent.

The present result is consistent with a previous paper describing the survival impact of the OPRM1 rs1799971 genotype in a sample of over 2000 women . Opioids

Prior research confirms the effect of the COMT rs4680 polymorphism on survival, in accordance with the present study ,21. Mats

There is a growing interest in the characterization of genetic alterations in the oncologic population, paving the way for personalized cancer treatment. Prognostic scores could be developed to guide the tailoring of therapy to individual risk and according to enzymatic levels of key metabolizers such as COMT.

Currently, staging, endocrine receptor\u00a0and HER-2 status, and Ki-67 and tumor grade define adjuvant treatment choices, occasionally supported by genetic signature tools such as OncotypeDx and Prossigna. This study reminds us of the potential of analyzing patient characteristics\u00a0instead of tumoral features.
Successful cancer treatment is certainly dependent on the interactions between the tumor, host, and environment; however, few advances have been made in this field.

Our main limitations are intimately associated with the problems of retrospective studies. Moreover, we ought to mention the heterogeneity of our study population, which derives from the historic nature of this retrospective study: the evolution of staging, tumoral biomarker identification,\u00a0and treatment is evident. In order to diminish this potential bias, we restaged all patients with the same system, the AJCC 5th Edition\u00a0of 1997, which was compatible with the lack of modern biomarker data at the time of diagnosis. The availability of biological samples of women treated decades ago was also a constraint for patient selection and population size. Bio-banks were not common in the past; therefore, we count ourselves fortunate to have these archived samples accessible for investigation.

Multicentric, international, retrospective, and prospective studies in this area of research are needed to expand the new perspectives on cancer survival this study has unlocked.

In the present study, OPRM1 and COMT polymorphisms demonstrated a prognostic impact in breast cancer, significantly affecting overall survival at 30 years. Genetic variants were not associated with age, staging, or tumor grade, thus potentially constituting a new independent prognostic variable. This knowledge could help individualize\u00a0oncologic treatment according to each patient's genetic risk.

Further research is needed, preferably with prospective trials, in order to clarify the present results. The importance of broad inter-institutional bio-banks must be underlined, to allow future investigation in this budding area."} +{"text": "We temporarily retract the paper by Groenewegen et al. and will provide a revised version.
The reason is that we made a serious mistake in recoding the dependent variables that form the task shifting scale. Instead of recoding \u2018not applicable (no nurse in my practice)\u2019 into \u2018no\u2019, as stated in the method section, we recoded it by mistake into \u2018yes\u2019. As a consequence, Figure\u00a01 and Table\u00a01 are incorrect. The tables presented in the Supplementary Material are correct. The analysis has to be done anew, and the conclusions will be partly different. For those countries that have a low number of practices with \u2018not applicable (no nurse in my practice)\u2019, the differences will be small, and this is the majority of countries. However, the results and discussion sections have to be rewritten.

The mistake was made by the first author (PG) and the statistician (PS). They take full responsibility. The paper had undergone the normal procedures of checks by the other authors, the internal peer review within Nivel, and external peer review by the journal. The mistake was found after a question by a reader who noted an inconsistency between Figure\u00a01 and the information in Supplementary Tables\u00a02-5. We thank her for pointing this out to us. We apologise to readers and to the journal.

The article PDF has been watermarked, and the abstract and HTML version have been amended to state that the article has been retracted."} +{"text": "JEV is one of the zoonotic pathogens that cause serious diseases in humans. JEV infection can cause abortion, mummified foetuses and stillbirth in sows, and orchitis and semen quality decline in boars, causing huge economic losses to the pig industry. In order to investigate the epidemiology of JEV in pigs in Sichuan province, a rapid and efficient fluorescent reverse transcription recombinase-aided amplification (RT-RAA) detection method was established.
Samples from aborted fetuses and from boars with testicular swelling were tested by RT-RAA in pigs in the mountain areas around the Sichuan Basin, and the detection rate of JEV was 6.49%. The positive samples were identified as JEV GI and GIII strains by sequencing analysis. We analyzed the whole genome sequence of a GI-positive sample. Phylogenetic analysis of the envelope protein (E protein) showed that it was distantly related to the Chinese vaccine strain SA14-14-2 and most closely related to the JEV GI strains SH17M-07 and SD0810 isolated in China. The results show that we established an efficient, accurate and sensitive method for clinical detection of JEV and that JEV GI strains are prevalent in the Sichuan area. This provides a reference for the prevention and control of JEV in Sichuan.

In human infections, most cases show mild clinical symptoms, such as headache, fever and lethargy. However, severe neurological disorders, such as paralysis, memory deficits and seizures, can sometimes occur, and JEV can kill up to 40 percent of patients with severe illness3. Swine are the main hosts of JEV among livestock and poultry. Pigs infected with JEV usually suffer from reproductive disorders4, which brings great economic losses to the pig industry. JEV is mainly transmitted by mosquitoes. The climate in Sichuan is warm and humid, and large areas of rice cultivation nurture large numbers of mosquitoes, forming a continuous viral cycle between mosquitoes and pigs and resulting in the widespread epidemic of JEV in Sichuan5. At present, there is no specific drug for the treatment of JEV infection; prevention and control of JEV can only be carried out through vaccination6. Vaccinating pigs not only provides specific protection to pigs, but also breaks the cycle of transmission, thereby reducing the threat to human health.
However, after the outbreak of African swine fever (ASF), the pig industry has rebounded, a large number of breeding sows and boars have been put into production, and the incidence of reproductive disorders has increased, which makes the formulation of JEV prevention and control strategies highly relevant.

Japanese encephalitis virus (JEV) is a zoonotic virus of the genus flavivirus that damages the central nervous system in both humans and animals. JEV is endemic around the world, including Russia, China, Japan, India, Australia and Southeast Asia, with about 68,000 reported cases each year, half of which occur in China7. The serum neutralization test (SNT) is the reference method for serological detection of JEV, but there may be cross-reactions, and it is necessary to test for other flaviviruses of the same genus to obtain correct results8. RT-PCR, RT-qPCR, ddPCR and other polymerase chain reaction-based detection methods also need 2\u20133\u00a0h to complete11. We have established a real-time RT-RAA method with simple operation, strong specificity and high sensitivity. RT-RAA is an emerging nucleic acid detection method. Its principle is to add recombinase, single-stranded DNA-binding protein and other elements into the amplification system, so that nucleic acids can be rapidly amplified at 39\u00a0\u00b0C and results can be obtained within 10\u201330\u00a0min. We established an RT-RAA method for rapid detection of JEV and used it to conduct an epidemiological investigation of JEV in pig farms in Sichuan. The epidemiological data were analyzed in order to provide a reference for the prevention and control of JEV in pig farms of Sichuan province in China.

Many methods have been used in the epidemiological investigation of JEV, including virus isolation, ELISA, RT-PCR, RT-qPCR, Droplet Digital PCR (ddPCR), etc. Virus isolation is time-consuming and labor-intensive, taking more than a week to complete.
In serological investigation using ELISA, it is difficult to analyze the results due to the cross-reaction between flaviviruses12. Clinical samples were collected from 185 aborted fetuses and testicular with swelling in mountain areas around Sichuan Basin. The Sichuan provincial laboratory management committee (LicenceNo: SYXK (chuan) 2019\u2013187) approval has been received. The \u201cGuidelines for Experimental procedure\u201d of the Ministry of Science and Technology were followed.JEV/SC/2016-1 strain was provided by Sichuan Zoology Biotechnology Co., LTDThe sequence between E960-1100 in the reference sequence was analyzed, and the homology between the JEV strains was 79.8\u2013100% was used to extract RNA from virus samples and clinical samples according to the kit instructions, and RNA was stored at \u2212\u00a080\u00a0\u00b0C.E.coli DH5\u03b1 receptor cells. After culture, the plasmid was identified by PCR and sent to Shanghai Sangong Bioengineering Technology Service Co., LTD for sequencing. Bacteria with correct sequencing results were expanded and cultured, and plasmids were extracted using a plasmid extraction kit. The plasmids were linearized and digested with mMESSAGE mMACHINE\u2122 T7 Transcription Kit for in vitro transcription to obtain standard plasmid transcription13. This product is purified and aliquoted and stored at \u2212\u00a080\u00a0\u00b0C.Extract the RNA of JEV-SC-1, use PrimeScript\u2122 RT Master Mix to reverse transcribe the cDNA, and then use RT-PCR primers to amplify.. 
The gel-purified product was recovered and ligated into the pMD 19-T Simple Vector. Fluorescent RT-RAA was performed using a fluorescent RT-RAA nucleic acid amplification kit, with the RT-RAA reaction system following the kit instructions. Porcine reproductive and respiratory syndrome (\u201cblue ear\u201d) virus (PRRSV), Getah virus (GETV), porcine epidemic diarrhea virus (PEDV) and transmissible gastroenteritis virus (TGEV) were tested with the established fluorescent RT-RAA method to evaluate its specificity. The recombinant plasmid transcript was serially diluted in 10-fold steps with ddH2O, and the diluted plasmid transcripts were detected by RT-RAA to evaluate the sensitivity of the established method. From September 2020 to September 2021, 185 samples from aborted fetuses and boars with testicular swelling were collected from mountain areas around the Sichuan Basin, Sichuan Province. Fluorescent RT-RAA and a Diagnostic Kit for JEV RNA (RT-PCR Fluorescence Probing; Guangzhou VIPOTION Biotechnology, Guangzhou, Guangdong, China) were used to test the samples in parallel, with the RT-qPCR system and reaction procedure following the kit instructions. Sequences were assembled and then analyzed [15]. In this study, we constructed a rapid RT-RAA detection method targeting the JEV E protein sequence; it can detect multiple genotypes of JEV, providing a wider range of JEV detection. The RT-RAA detection method established in this study adds reverse transcriptase directly into the reaction system, which is simpler and more convenient than conventional PCR detection, in which cDNA must first be prepared by reverse transcription before the PCR system is set up. The combination of recombinase, single-stranded DNA-binding protein and DNA polymerase in the RAA system enables rapid amplification of nucleic acids at a constant temperature, and the addition of a fluorescent probe enables real-time monitoring of the amplification reaction. 
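The 10-fold sensitivity dilution described here can be sketched numerically. The stock concentration below is a hypothetical stand-in (the study reports only the resulting detection limit of 5.5 copies/\u03bcL, not the starting titre); it is chosen so the series passes through that reported limit.

```python
# Sketch of a 10-fold dilution series used for sensitivity testing.
# The stock concentration (5.5e6 copies/uL) is hypothetical, chosen so
# the series ends at the study's reported detection limit of 5.5 copies/uL.

def dilution_series(stock_copies_per_ul, steps, factor=10.0):
    """Expected concentration (copies/uL) at each successive dilution step."""
    return [stock_copies_per_ul / factor**i for i in range(steps)]

for i, c in enumerate(dilution_series(5.5e6, 7)):
    print(f"10^-{i} dilution: {c:g} copies/uL")
```

The last step of this hypothetical series lands at 5.5 copies/\u03bcL, i.e. at the concentration the assay could still detect.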
For the JEV fluorescent RT-RAA detection method in this study, the amplification curve could be seen after reaction at 39\u00a0\u00b0C for 10\u00a0min, and the result could be judged after 30\u00a0min. By contrast, RT-LAMP and RT-qPCR take about 1\u00a0h to detect JEV, and RT-PCR takes longer than either [17]. The detection limit for JEV plasmid transcripts was 5.5 copies/\u03bcL, similar to that of JEV RT-LAMP and TaqMan RT-qPCR [16], slightly higher than that of SYBR Green I RT-qPCR, and 100 times higher than that of RT-PCR [10]. The fluorescent JEV RT-RAA detection method established in this study is rapid, sensitive and specific, and can be used for clinical diagnosis. RAA (recombinase-aided amplification) is a new isothermal amplification technology proposed in recent years; it has been widely used in pathogen detection because of its fast detection speed, strong specificity and high sensitivity [18]. Conditions that favor mosquitoes increase the risk of JEV transmission; in addition, under the influence of the ASF epidemic, farmers paid less attention to JEV and the vaccination rate declined, so the prevalence of JEV in pigs increased [19]. JEV is one of the main arboviruses in China and one of the main pathogens causing reproductive failure in pigs. To investigate the prevalence of JEV in Sichuan, we tested the 185 collected clinical samples with the established JEV RT-RAA assay and detected 12 JEV-positive cases, a positive rate of 6.49%. JEV still accounts for a high proportion of abortion cases. One reason is the unique geographical location of the mountain areas around the Sichuan Basin: because of water scarcity, a large number of reservoirs have been built to store water, and these reservoirs are good shelters for mosquitoes [20]. After 2000, the GI strain replaced the GIII strain and became the pandemic strain in China, Japan, South Korea, Vietnam and Thailand [25]. 
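The reported positive rate (12 of 185 samples, 6.49%) can be accompanied by a confidence interval to convey sampling uncertainty. The interval below is not given in the study; it is computed here purely for illustration, using the Wilson score method for a binomial proportion.

```python
import math

# The study reports 12 positives out of 185 samples (6.49%). The 95%
# Wilson score interval below is NOT from the study; it is computed
# here only to illustrate the uncertainty around that rate.

def wilson_ci(k, n, z=1.96):
    """Approximate 95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(12, 185)
print(f"positive rate {12/185:.2%}, 95% CI {lo:.1%} to {hi:.1%}")
```

With these numbers the interval spans roughly 4% to 11%, a reminder that a single-season sample bounds the true farm-level prevalence only loosely.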
After that, the number of clinical cases of JEV decreased because of the lower virulence of the GI strain. However, studies have also shown that GI and GIII viruses have similar infection rates in asymptomatic infected patients, indicating that GI and GIII are equally virulent; this conclusion has also been verified in mouse experiments [27]. Among the 12 positive samples, there were 11 JEV GI strains and 1 JEV GIII strain. The GI strain was first identified in the 1980s; it originated in Southeast Asia and then spread rapidly across the Asian continent [28]. It has also been found that the antibodies of some individuals vaccinated with GIII JEV vaccines have a reduced ability to neutralize different GI strains [29]. This has raised concerns about whether antigenic differences between strains of different genotypes could affect the effectiveness of the vaccine, and has also accelerated the development of new JEV vaccines. By analyzing the amino acid mutation sites of the E protein of the JEV-SC-2020-1 strain, we found that JEV-SC-2020-1 was completely consistent with the SA14-14-2 strain in the key epitope region of the E protein and carried a mutation relative to Beijing-1. It is speculated that vaccines based on the SA14-14-2 and Beijing-1 strains can provide protection against JEV-SC-2020-1, but the protective efficacy of SA14-14-2 may be higher than that of Beijing-1. Previous studies have held that GIII-strain vaccines provide a good immune protection effect against viruses of the same genotype. 
However, recent studies have found that JEV vaccines based on the GIII strain have reduced neutralization efficacy against GI-strain JEV."} +{"text": "Autism spectrum disorder (ASD) is a childhood neurodevelopmental disorder whose onset is generally within the first 3 years of age; it often leads to lifelong impairment of social and cognitive functions, which imposes significant mental pressure and economic burdens on the family and society.\u00a0In 2012, incidence data released by the World Health Organization showed that the global prevalence of ASD was ~0.625%. We collected questionnaire data and biological samples, including blood, stool, and urine. Specifically, the scales shown in Table S1 were applied to evaluate the core symptoms of ASD, cognition, and the symptoms of comorbidities. To measure the conditions of the participants\u2019 parents, we also asked the parents to complete the questionnaires listed in Table S1. Questionnaires about protective factors, such as folic acid intake and parenting style, were also administered. In the ASD high-risk cohort, the same measurements as for the ASD registry cohort were collected from participants recruited from the SBC & ELP when their offspring reached the age of 36 months, and from outpatient participants when they first visited Xinhua Hospital. At each visit during pregnancy, participants from the SBC & ELP underwent questionnaires, biophysical measurements, and biospecimen collections, covering demographic characteristics, environmental exposure, housing characteristics, chemical exposure, use of pesticides, occupational exposure, social support, health behavior, diet, medical history, health status, venous blood, urine, and environmental pollutants. Fetal head MRI imaging data were collected during the second and third trimesters, when brain ventricular enlargement was screened by ultrasound. 
After delivery, we conducted maternal and newborn medical chart abstraction, biophysical measurements, and physical examination of the child. Biospecimen and questionnaire data were collected, including the placenta, cord blood, mother\u2019s hair and nails, child\u2019s urine and blood, and information on feeding, diet, sleep, and environmental exposure. Up to March 2022, the ASD registry cohort, which started in 2015, had recruited 1,091 ASD participants (age range: 2.00\u201314.50 years) and 113 non-ASD participants (age range: 1.26\u201311.88 years). Data collection for the ASD high-risk cohort started in 2013 and has recruited 1,278 participants. Specifically, MRI imaging data were collected for most participants in the cohort, including T1-weighted and T2-weighted structural, resting-state functional, and diffusion-weighted MRI data. The structural images of each participant were inspected by two experienced radiologists from Xinhua Hospital, and no abnormalities were found in any participant\u2019s structural images. EEG and fNIRS data were also collected from children who could not tolerate MRI or for specific research purposes. Our cohort has several strengths. First, it is a single-center study with a large sample size, which makes the multi-modality data comparable. Second, the multi-modality data enable multiomics analysis, thus supporting research into the neurobiological basis of ASD. Third, this cohort is open and non-fixed, which allows adjustment based on new findings. Protocols are designed to establish multi-center studies with a unified design in the future, to identify the specific factors for ASD etiological screening, and to formulate prevention and control strategies in various regions. Finally, the pregnancy cohort allows us to determine the risk and protective factors, mechanisms, and markers associated with ASD. Comparisons between the SAED cohort and other ASD cohorts are presented in Table S2. Some limitations should also be stated. 
This cohort was limited mainly to residents of Shanghai, Zhejiang, and Jiangsu in order to maintain long-term follow-up. Although the cohort covers both suburban and urban populations, few participants living in rural settings have been enrolled due to the study location. Furthermore, the study population was recruited from outpatient clinics, which means the ASD-related population unwilling to seek medical care has not been included. These issues limit the cohort\u2019s ability to represent the entire ASD-related population. Below is the link to the electronic supplementary material: Supplementary file 1 (PDF 203 KB)."} +{"text": "Historically (and still today) the common dispensing practice for factor therapy (in countries without significant limitations to factor availability) is to round the prescribed dose up or down 10%, which is done due to the combination of weight-based dosing and limited vial size availability. This practice has served the community well, making the filling of prescriptions simpler and in some cases reducing wastage of a very expensive product. The study by Donners et\u00a0al. asks whether similar rounding can be applied to emicizumab. Before answering this question, a basic understanding of emicizumab dosing and dispensing is required. First, emicizumab dosing begins with 4 weekly bolus doses of 3 mg/kg, which are then followed by maintenance doses consisting of 3 strict weight-based options for patients of all ages and all weights. Second, emicizumab is only available in 4 vial sizes; however, the smallest vial size (30 mg) has a different concentration (30 mg/mL vs 150 mg/mL for the other 3) and cannot be combined in 1 syringe with the other vial sizes. So, what can be done about this in general, and what should prescribers consider given the data from this study? For the first question, it seems to me that only the manufacturer can address this. 
Options could, among other things, include offering several more vial sizes between the ones currently offered. Although this seems to be a simple fix, it is apparently quite a complicated manufacturing issue, and as I am no expert in this area, only Roche can address the community about this. Another option that would rely on the manufacturer is a precise dosing device that could dispense the exact amount of drug, but at this point in the life cycle of emicizumab, that seems even less likely than the prospect of more vial sizes. The other option is for academia (perhaps in partnership/sponsorship with the manufacturer) to conduct a phase 4 study evaluating different dosing approaches such as dose bands, i.e., all patients between 10 and 20 kg, 20 and 30 kg, etc. receive the same dose. Certainly, this would significantly simplify dosing and eliminate most if not all of the drug wastage. Regarding the second question of what prescribers can/should do now, again, there are different options. The first is to continue to prescribe and administer the drug according to the dose per the prescribing information with minimal rounding, such as in situations like the one described above where rounding is a few percent off the exact dose. In our institution, we have made a table of suggested dosing regimens by weight to minimize wastage; however, it may necessitate patients using a dosing frequency or injection volume they may not prefer. In conclusion, there is a clear unmet need regarding emicizumab dosing regimens and available vial sizes, resulting in wastage of the product as well as complicating the administration process, and simple and safe solutions are either not available or not optimal. 
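The vial-size arithmetic behind this wastage problem can be sketched as a small search over vial combinations. The dose (3 mg/kg every 2 weeks as one maintenance option) and the four vial sizes (30, 60, 105, 150 mg) follow the prescribing information as summarized in this commentary; the search itself and the example weights are illustrative only, not dosing advice.

```python
# Illustrative sketch (not dosing advice) of the vial-selection problem.
# Per the commentary's summary of the prescribing information, the
# 30 mg vial (30 mg/mL) cannot be combined in one syringe with the
# 150 mg/mL vials, so it is only considered on its own here.

def best_vials(dose_mg, max_each=4):
    """Smallest-total combination of vials covering the exact dose."""
    best = None
    if dose_mg <= 30:                       # a lone 30 mg vial may suffice
        best = (30,)
    for n60 in range(max_each + 1):
        for n105 in range(max_each + 1):
            for n150 in range(max_each + 1):
                total = 60 * n60 + 105 * n105 + 150 * n150
                if total >= dose_mg and (best is None or total < sum(best)):
                    best = (60,) * n60 + (105,) * n105 + (150,) * n150
    return best

for weight in (10, 20, 25, 40):             # kg, example patients
    dose = 3.0 * weight                     # mg, 3 mg/kg every 2 weeks
    vials = best_vials(dose)
    waste = sum(vials) - dose
    print(f"{weight} kg -> {dose:.0f} mg: vials {vials}, waste {waste:.0f} mg")
```

For a 25 kg patient the exact 2-weekly dose is 75 mg, yet the best available combination is a single 105 mg vial, so about 29% of the vial is discarded: precisely the wastage problem described above.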
Optimistically, newer agents currently in clinical trials are taking a different approach to dosing, including fixed dosing for fitusiran, with 1 dose for those over 12 years and another for those under 12 years."} +{"text": "Hallux valgus (HV) is a common clinical deformity of the forefoot, primarily a deformity of the 1st metatarsophalangeal joint in which the hallux is deflected laterally relative to the 1st metatarsal, often in combination with a medial bunion or pain at the head of the 1st metatarsal. For HV bunions that do not respond to conservative treatment, surgical intervention is required, which generally involves osteotomy of the first metatarsal or the first phalanx. However, the choice of fixation method after osteotomy is controversial. Most scholars choose screws or plates for internal fixation (IF) to achieve strong and reliable fixation, while some experts do not perform IF after osteotomy, but instead reposition the osteotomized end and perform external fixation (EF) with a figure-of-eight bandage between the 1st and 2nd toes. This approach has been advocated by some scholars because it requires only local anesthesia, is minimally invasive, needs no additional IF material, and has achieved good clinical results. It is therefore necessary to compare IF and EF after HV osteotomy, to evaluate whether there is a difference between the 2, and to conduct a meta-analysis to provide a reliable basis for clinical guidance. We will search articles in 7 electronic databases: the Chinese National Knowledge Infrastructure, Wanfang Data, Chinese Scientific Journals Database, SinoMed, PubMed, Embase, and the Cochrane Library. All publications will be searched without any restriction of language or status, from the establishment of each database to October 2022. 
We will apply the risk-of-bias tool of the Cochrane Collaboration for randomized controlled trials to assess methodological quality, and the Risk-of-Bias Assessment Tool for Non-randomized Studies will be used to evaluate the quality of comparative studies. Statistical analysis will be conducted using RevMan 5.4 software. This systematic review will evaluate the functional outcomes and radiographic results of internal versus external fixation (EF) after HV osteotomy. The findings of this study will provide evidence to determine whether IF or EF is more effective after HV osteotomy. The prevalence of HV can be as high as 35%, and approximately 30% of patients undergo surgery for foot pain and discomfort [3]. The gold standard for surgical treatment of HV has not yet been fully unified; the mainstream treatment option is osteotomy, mainly performed as various forms of osteotomy of the 1st metatarsal and 1st proximal phalanx, or as combined osteotomy of the metatarsal and phalanx, to correct the HV deformity. 
There are various methods of osteotomy and post-osteotomy fixation, but how to achieve the best outcome with minimal trauma has been the focus of HV research. Hallux valgus (HV) is a condition whose main manifestation is deformity of the first metatarsophalangeal joint; the unsightliness, pain and functional impairment caused by the deformity are the main reasons patients come to the clinic [7]. Internal fixation (IF) with hollow screws or plates after HV osteotomy is currently the mainstream approach. Most scholars believe that strong IF at the osteotomy end is required after correction of the deformity to avoid postoperative loss of the correction angle and to reduce the probability of postoperative recurrence [13]. However, some experts believe that strong IF is not always needed after osteotomy, and that external fixation (EF) between the 1st and 2nd toes after correction of the deformity is sufficient [15]. They hold that the figure-of-eight bandage used for EF can be adjusted for tightness and that, because no IF is needed, only a minimally invasive incision is required, which protects blood flow and avoids soft tissue damage; the bandage maintains the stability of the fracture at the osteotomy end while also allowing a moderate amount of micro-movement, preserving elastic fixation, and good clinical results have been reported [18]. In this study, we attempted to conduct a meta-analysis of related studies to evaluate and compare the mode of fixation of the osteotomy end after HV osteotomy, and to analyze the radiographic performance, function and complications after IF and EF of the osteotomy end, in order to provide high-level, evidence-based medical evidence to guide clinical decision-making and application. We have prospectively registered this research at the international Prospective Register of Systematic Reviews (PROSPERO), registration number CRD42022374125. 
This protocol was prepared based on the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) statement guidelines. Participants with HV deformity requiring osteotomy surgery will be included regardless of country, ethnicity, sex and occupation. In the experimental group, all patients received IF after HV osteotomy, including screws, plates, etc.; in the control group, all patients received EF after HV osteotomy, such as figure-of-eight bandage EF. Clinical and radiological outcomes will be assessed based on the following criteria. Clinical outcomes will be measured with \u2460 pre- and postoperative American Orthopaedic Foot and Ankle Society score and \u2461 pre- and postoperative visual analog scale score. Radiological outcomes will be measured with \u2460 pre- and postoperative HV angle and \u2461 pre- and postoperative inter-metatarsal angle. Duration of surgery, cost and complications will be defined as secondary outcomes. We will include comparative studies published in Chinese or English, such as randomized controlled trials, retrospective studies and cohort studies. Reviews, case reports, experimental studies, expert experience, animal studies and conference abstracts will be excluded. The Chinese National Knowledge Infrastructure, Wanfang, Chinese Scientific Journals Database, SinoMed, PubMed, Embase, and Cochrane Library databases will be searched, using the keywords \u201challux valgus,\u201d \u201cbunions,\u201d \u201costeotomy,\u201d \u201corthopedic,\u201d \u201cscarf,\u201d \u201cReverdin-Isham,\u201d \u201cChevron,\u201d \u201cakin,\u201d \u201cbandage,\u201d \u201cfixation,\u201d and \u201csurgery.\u201d The search strategy for PubMed is shown in the Table. Different researchers will separately screen the titles and abstracts of potentially eligible records retrieved from the electronic databases. 
The obtained literature will be managed with NoteExpress. Irrelevant and duplicate articles will be excluded by reading titles and abstracts; full-text screening and data extraction will then be conducted independently, and studies will finally be selected according to the inclusion criteria. Any disagreement will be resolved by discussion until consensus is reached or by consulting a third author. A PRISMA flowchart will be used to show the selection procedure (Fig.). The following data will be extracted: lead author, publication year, country of origin, study design, sample size, age, mode of fixation, outcome measures, and complications. Any differences of opinion will be resolved through group discussion or consultation with a third reviewer. When relevant data are not reported, we will contact the authors via email or other means to obtain the missing data. The PRISMA flow diagram will be completed after screening to provide specific information. Two independent investigators will evaluate the quality of the included studies. The Cochrane Collaboration Risk of Bias Tool will be used to evaluate the quality of the randomized controlled trials, and the methodological quality of non-randomized studies will be assessed using the Risk-of-Bias Assessment Tool for Non-randomized Studies. The level of evidence will be assessed according to the Oxford Centre for Evidence-based Medicine Levels of Evidence. Statistical analyses will be performed in RevMan (https://www.cochranelibrary.com/search). The mean difference will be used as the effect analysis statistic for continuous variables, while the risk ratio will be used as the effect analysis statistic for categorical variables. We will also calculate a 95% confidence interval for each statistic, and summarize statistical heterogeneity among summary data using the I2 statistic. Cases with I2\u2005\u2264\u200550% will not be considered to have significant heterogeneity, and thus a fixed-effects model will be applied for meta-analysis. 
In cases where there is statistical heterogeneity among studies, we will further analyze its source. A random-effects model will be used to pool the data after excluding any obvious source of clinical heterogeneity; where obvious clinical heterogeneity remains, the researchers will perform subgroup, sensitivity, or descriptive-only analyses. Study-specific and pooled estimates will be presented graphically using forest plots, and P\u2005<\u2005.05 will be considered statistically significant. Statistical analysis will be conducted using RevMan 5.4 software (Cochrane Collaboration). Subgroup analysis according to age, type of study and gender will be performed to find the source of heterogeneity when significant clinical heterogeneity is observed. Sources of heterogeneity will also be assessed by sensitivity analysis, excluding studies of low quality or small sample size: if the heterogeneity does not change significantly, the results are robust; otherwise, the excluded studies may be sources of heterogeneity [21]. If at least 10 studies are included, publication bias will be evaluated using a funnel plot; otherwise, Egger's regression test will be used. No ethical approval is required because the study will be a review of the literature and will not obtain data from individual patients. We will publish our findings through a peer-reviewed journal. The purpose of this study is to comparatively assess the final functional outcome and complications of the 2 fixation modalities after HV osteotomy. 
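The fixed-effect rule in this protocol (inverse-variance pooling, with a switch away from the fixed-effects model when I2 exceeds 50%) can be sketched outside RevMan as follows. The three effect sizes and standard errors are made-up numbers for illustration only.

```python
import math

# Minimal inverse-variance fixed-effect pooling with Cochran's Q and I^2,
# mirroring the protocol's rule of thumb (fixed-effects model when I^2 <= 50%).
# The example effects/SEs are invented; real values would come from the
# included studies.

def fixed_effect(effects, ses):
    """Pooled estimate, 95% CI and I^2 (%) under a fixed-effect model."""
    w = [1.0 / se**2 for se in ses]                    # inverse-variance weights
    pooled = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    se_pooled = math.sqrt(1.0 / sum(w))
    q = sum(wi * (yi - pooled)**2 for wi, yi in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled), i2

pooled, ci, i2 = fixed_effect([0.4, 0.2, 0.3], [0.10, 0.15, 0.12])
print(f"pooled MD {pooled:.3f}, 95% CI ({ci[0]:.3f}, {ci[1]:.3f}), I2 {i2:.0f}%")
```

With these illustrative inputs I2 is low, so the protocol's fixed-effects branch would apply; a high I2 would instead trigger the random-effects/subgroup path described above.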
As medical technology advances, patients seek the best outcomes with the least pain, and surgeons seek the best technology for their patients; this study hopes to provide useful, high-grade evidence-based medical evidence for patients and clinicians to inform better decisions. Conceptualization: Xing Zhou, Wei Dong, Rujie Zhuang. Data curation: Wei Guo. Formal analysis: Jiankun Chen. Funding acquisition: Jingfan Yang, Jinlei Li. Investigation: Wei Dong, Jingfan Yang. Methodology: Xing Zhou. Project administration: Jinlei Li. Software: Xing Zhou. Supervision: Jiankun Chen, Rujie Zhuang. Validation: Wei Guo, Hong Yin. Visualization: Liu Weitong. Writing \u2013 original draft: Wei Guo. Writing \u2013 review & editing: Rujie Zhuang."} +{"text": "The relationship between in-utero antiretroviral (ARV) drug exposure and child growth needs further study as current data provide mixed messages. We compared postnatal growth in the first 18 months of life between children who are HIV-exposed uninfected (CHEU) with fetal exposure to ARV drugs (prophylaxis or triple-drug therapy (ART)) and CHEU not exposed to ARVs. We also examined other independent predictors of postnatal growth. We analysed data from a national prospective cohort study of 2526 CHEU enrolled at 6 weeks and followed up 3-monthly until 18 months postpartum, between October 2012 and September 2014. Infant anthropometry was measured, and weight-for-age (WAZ) and length-for-age (LAZ) Z-scores calculated. Generalized estimation equation models were used to compare Z-scores between groups. Among 2526 CHEU, 617 (24.4%) were exposed to ART since before pregnancy (pre-conception ART), 782 (31.0%) to ART commencing post-conception, 879 (34.8%) to maternal ARV prophylaxis (Azidothymidine (AZT)), and 248 (9.8%) had no ARV exposure. In unadjusted analyses, preterm birth rates were higher among CHEU with no ARV exposure than in other groups. 
Adjusting for infant age, the mean WAZ profile was lower among CHEU exposed to pre-conception ART [\u22120.13] than in the referent AZT prophylaxis group; no differences in mean WAZ profiles were observed for the post-conception ART, no-ARV (none) and newly-infected groups. Mean LAZ profiles were similar across all groups. In multivariable analyses, mean WAZ and LAZ profiles for the ARV exposure groups were completely aligned. Several non-ARV factors, including child, maternal, and socio-demographic factors, independently predicted mean WAZ. These include child male versus female sex, higher maternal education (grade 7\u201312 and 12\u2009+) versus\u2009\u2264\u2009grade 7, employment versus unemployment, and household food security. Similar predictors were observed for mean LAZ. Findings provide evidence for initiating all pregnant women living with HIV on ART, as fetal exposure had no demonstrable adverse effects on postnatal growth. Several non-HIV-related maternal, child and socio-demographic factors were independently associated with growth, highlighting the need for multi-sectoral interventions. Longer-term monitoring of CHEU children is recommended. The online version contains supplementary material available at 10.1186/s12879-022-07847-9. Maternal triple antiretroviral therapy (ART) scale-up for prevention of vertical HIV transmission has improved maternal health and reduced new pediatric infections to\u2009<\u20091% in high-income countries [3]. In-utero ART exposure was assessed on infant dried blood spots. 
Infant HIV infection status was assessed using polymerase chain reaction (PCR) testing on the same dried blood spots. Infants whose mothers reported living with HIV, or infants with a positive 6-week HIV antibody test regardless of maternal self-reported HIV status, were eligible for recruitment into a prospective cohort study, nested within a national cross-sectional survey, to measure vertical transmission risk until 18 months postpartum. Recruitment ran from 29 October 2012 to 31 May 2013, with follow-up until September 2014. Detailed cohort methods are described elsewhere. Our primary exposure of interest was fetal exposure to maternal ARVs, based on self-reported maternal ARV drug use data obtained using a structured questionnaire. During the study period, the national PMTCT programme was implementing CD4 count criteria-based life-long maternal ART initiation (CD4 cell counts\u2009\u2264\u2009350 cells/mm3) and infant prophylaxis (PMTCT policy Option A: 1 April 2010\u201331 March 2013). Women with CD4 cell counts\u2009>\u2009350 cells/mm3 were given Azidothymidine (AZT) prophylaxis from 14\u00a0weeks gestation. In April 2013, the PMTCT programme transitioned to lifelong ART for all pregnant and lactating WLHIV (WHO PMTCT Option B+) [23]. We classified infants according to fetal ARV exposure accordingly. Study questionnaires also included questions on self-reported 24-h and 1-week child feeding practices; maternal (tuberculosis (TB), HIV, CD4 count, syphilis) and child morbidity and treatment; maternal obstetric history; socio-demographic characteristics; and peripartum community social support at each time point. Trained nurse data collectors collected anthropometric data using standardised procedures based on WHO guidelines. 
We estiAnthropometric measurements and Z-scores were flagged based on criteria or medians (inter quartile range) for continuous variables. Proportions were compared using Pearson chi-squared test while F-test was used for comparing means. Generalized estimation equations, with a gaussian distribution, were used for univariable and multivariable regression analyses to account for correlations between repeated anthropometric measurements within the same participant. Model covariates were selected based on literature \u00a0in the unadjusted and adjusted analyses Fig.\u00a0, Table 3Short and long-term health outcomes of CHEU require close monitoring, particularly in settings with high antenatal HIV prevalence and ART coverage such as South Africa. In this national prospective cohort study of CHEU, we found no evidence supporting a detrimental effect of fetal exposure to maternal ARV on birthweight, birthweight-for-gestational age Z-score and birth length. The PTB proportion tended to be higher among children born to women who had received no ART, particularly those who were thought to be newly-infected. Women who had not received ART also reported fewer ANC visits and more frequent home deliveries. The lower antenatal attendance and unmanaged HIV could have contributed to the higher PTB among this group. These findings emphasize the importance of closing HIV testing and ART initiation gaps in healthcare services.3 and AZT for those with higher CD4 cell counts [In unadjusted analysis, the\u00a0mean WAZ profile\u00a0was lower among children with fetal exposure to maternal ART from pre-conception than in children with foetal exposure to maternal AZT prophylaxis started in pregnancy. We hypothesize that the direction of this association may be explained by confounding by severity of disease and may not be a true effect of ART exposure. 
PMTCT policy during the study period recommended ART initiation only for women with CD4 cell counts\u2009\u2264\u2009350\u00a0cells/mm3. Our data highlight other important non-HIV-specific factors that are associated with the growth of CHEU. Our study has several strengths. First, this is the first study reporting on 18-month postnatal growth patterns of CHEU from a national sample in South Africa. This enabled us to assess the relationship between growth and individual-level factors and to describe growth across geographical locations. Second, study data were collected before universal ART roll-out, enabling us to compare the growth of children by fetal exposure to ART, AZT prophylaxis and unmanaged HIV. This comparison is important as the PMTCT programme still has HIV testing and treatment gaps that drive pediatric HIV infections and other adverse outcomes [48]. Our study has some limitations. First, adverse perinatal outcome risk may vary by ART drug combinations, and the lack of detailed regimen data limited such analyses. In conclusion, our national cohort study showed that postnatal growth of CHEU did not vary by fetal exposure to maternal ARVs. Together with data from other studies [36], these findings support initiating ART for all pregnant women living with HIV. Additional file 1: Box 1. Anthropometry data cleaning criteria. Figure S1. Directed Acyclic Graph representing the hypothesized relationships, 2012\u20132014, South Africa. Figure S2. Study cohort profile of HIV exposed uninfected infants from 6-weeks to 18-months postpartum, 2012\u20132014, South Africa. Table S1. Proportion of underweight children from 6-weeks to 18-months postpartum by in-utero antiretroviral exposure status, 2012\u20132014, South Africa. Table S2. Proportion of stunted children from 6-weeks to 18-months postpartum by in-utero antiretroviral exposure status, 2012\u20132014, South Africa. Table S3. 
Frequency of maternal antiretroviral treatment over time by baseline maternal antiretroviral categories, 2012\u20132014, South Africa"} +{"text": "A total of 72 CMEC patients admitted to the otolaryngology department of our hospital from January 2019 to January 2021 for surgical treatment are selected. According to the different intervention methods, a microscope group and an ear endoscopy intervention group are established, with 36 patients in each group. The patients in the microscope group are treated with a microscope for middle ear cholesteatoma surgery, and the patients in the ear endoscopy intervention group are treated with an ear endoscope for middle ear cholesteatoma surgery. The experimental results show that ear endoscopic intervention has better clinical efficacy for CMEC patients: it can effectively shorten the operation time, reduce the incidence of postoperative complications, and effectively improve the hearing of patients.Congenital middle ear cholesteatoma (CMEC) is not common in clinical practice, but with the continuous improvement of the technological level in the medical field and the improvement of otolaryngologists' awareness and diagnosis of this disease, clinical reports of CMEC cases are increasing, and the surgical treatment methods for this disease are constantly being optimized and improved. 
HIF-1\u03b1 (hypoxia-inducible factor-1\u03b1) expression, and the degree of bone damage are examined based on the recurrence mechanism of patients, aiming to provide an effective basis for clinical optimization and improvement of the diagnosis and treatment model.Otoendoscopic intervention is an important product of the continuous improvement of otorhinolaryngology treatment technology in the medical field in recent years, showing certain advantages in the treatment of middle ear lesions, but there are few studies on the clinical efficacy of this operation for CMEC. The rest of this paper is organized as follows.HIF-1\u03b1 is a functional unit of hypoxia-inducible factor-1, which can increase the expression products of hypoxia-response-related genes by regulating those genes and thus exert relevant biological effects.With the continuous improvement of technology in the medical field, the diversification of treatment techniques for middle ear cholesteatoma has been effectively promoted, but the efficacy and prognosis of different surgical methods also differ.In recent years, there have been many studies on HIF-1\u03b1 in neoplastic diseases, but few studies on HIF-1\u03b1 in middle ear cholesteatoma. ROS is distributed in various tissues and cells. Studies have found that when the body suffers from hypoxia or inflammatory stimulation, the mitochondria of cells can produce excessive ROS, which brings a series of changes to the biological functions of the body's cells and tissues.Ear endoscopic surgery is a newer type of surgical treatment in which the patient's lesions are observed with endoscopic assistance, in order to reduce damage at the lesion site and thus achieve the purpose of treatment. 
Compared with microscopic surgery, ear endoscopy can better expand the surgical field of vision, reduce the damage to patients, shorten the length of hospital stay, and reveal areas that cannot be seen under the microscope; it is more minimally invasive and more easily accepted by patients. In this study, HIF-1\u03b1 expression was detected in the epithelium of middle ear cholesteatoma and in the skin of the external auditory canal, and the expression of HIF-1\u03b1 in cholesteatoma was significantly higher than that in the skin of the external auditory canal. Previous studies have reported that chronic inflammation of the mastoid air chamber can lead to mucosal hypertrophy and accumulation of sticky secretions, which affects middle ear ventilation and drainage and leads to hypoxia in middle ear tissues, consistent with the elevated HIF-1\u03b1 expression. In addition, Spearman correlation analysis showed that the degree of bone destruction in cholesteatoma was positively correlated with the expression of ROS, P-Akt, and HIF-1\u03b1, which suggests that the degree of bone destruction in cholesteatoma is closely related to the expression of the above three indicators. These indicators can therefore be measured to evaluate the disease development of patients, which is of great significance for the subsequent development and improvement of effective diagnosis and treatment plans.In addition, this study focused on the occurrence and development mechanism of middle ear cholesteatoma and, combined with the analysis of previous clinical literature, suggests that it may form in a markedly hypoxic environment. The results of this study showed that ROS and P-Akt expression was higher in cholesteatoma than in normal skin, which suggests that ROS mainly plays a role in promoting cell proliferation during the formation of middle ear cholesteatoma. 
A total of 72 CMEC patients admitted to the otolaryngology department of our hospital from January 2019 to January 2021 for surgical treatment are selected. According to the different surgical intervention methods, the microscope group and the ear endoscopy intervention group are established, with 36 patients in each group. There are 16 males and 20 females in the microscope group, aged from 12 to 40\u2009years, including 4 children, with an average age of (23.15\u2009\u00b1\u20092.83) years. All patients in the microscope group have unilateral disease. In the ear endoscopy intervention group, there are 17 males and 19 females, aged from 11 to 38\u2009years, including 3 children, with an average age of (22.84\u2009\u00b1\u20093.03) years. All patients in this group have unilateral disease. There are no significant statistical differences in the baseline data of the two groups, including gender, age, and nature of the disease (P > 0.05), which confirms that the comparison between groups is scientific and reasonable. The patient inclusion criteria require that all patients included in this study meet the diagnostic criteria for middle ear cholesteatoma.The patient exclusion criteria are as follows: (1) a closed eustachian tube orifice; (2) patients with facial paralysis; (3) deafness; (4) poor clinical compliance or withdrawal from the study for various reasons.The patients in the microscope group are treated with a microscope for middle ear cholesteatoma surgery, and the specific steps are as follows: patients receive general anesthesia with tracheal intubation and are placed supine; routine disinfection and draping are performed; an arc-shaped incision is made behind the ear, approximately 0.5\u2009cm from and parallel to the postauricular sulcus; a square musculoperiosteal flap is cut and the subcutaneous tissue preserved, leaving the mastoid fully exposed up to the temporal line and down to the mastoid tip. 
After that, the mastoid cortex is fully excised to the body surface projection of the sigmoid sinus; the mastoid cavity, tympanic sinus, and epitympanic chamber are opened successively; and all lesions in the tympanic sinus, mastoid cavity, and tympanic chamber are completely removed. The ossicular chain is examined with a microscope and reconstructed with the corresponding treatment. The eardrum is repaired with temporalis muscle fascia and fixed with a gelatin sponge. The patients in the ear endoscopy intervention group are treated with ear endoscopy for middle ear cholesteatoma surgery, and the specific steps are as follows: the patient is placed supine under general anesthesia with the head turned to the contralateral side; the mucosa around the auricle is disinfected with iodine three times and the field is draped; under the endoscope, the needle is introduced at the gap between the external auditory canal cartilage and the conchal cavity, and local infiltration anesthesia of the subcutaneous tissue of the external auditory canal is performed; under the endoscope, an arc incision is made along the border of the skin and mucous membrane; the bone of the external auditory canal is exposed; the mucous membrane is peeled back to expose the tympanic ring; the bone ring is lifted to expose the tympanum; the posterior superior wall of the external auditory canal is chiseled away to expose the epitympanum; the tympanic sinus and the mastoid cavity are opened along the epitympanum, and the lesions in the epitympanum, tympanic sinus, and mastoid cavity are removed; the ossicular chain is explored and reconstructed; and the tympanum is repaired with muscle fascia. For reduction of the mucous membrane, if more bone has been removed, the mucous membrane can be affixed to the tympanic sinus and mastoid cavity, and an iodoform gauze strip is used to fill the surgical cavity. All patients receive an intravenous infusion of antibiotics after surgery, and the iodoform gauze strips are removed from the external auditory canal 3 weeks after surgery. 
All patients are followed up for 6 months.With the informed consent of all patients, 68 cholesteatoma specimens are collected intraoperatively, and a mild group (n\u2009=\u200936) and a severe group (n\u2009=\u200932) are established according to the degree of bone damage. Meanwhile, 30 cases of normal external auditory canal bone tissue are randomly selected as the control group.The tissue sections are washed in PBS 3 times, 5\u2009min each time. The tissue sections are placed in a repair box filled with EDTA antigen retrieval buffer (pH 8.0) for antigen retrieval in a microwave oven for 10\u2009min on low heat. After natural cooling, the slides are placed in PBS (pH 7.4) and washed by shaking on a decolorizing shaker 3 times, 5\u2009min each. After the sections are slightly shaken dry, a histochemical pen is used to draw circles around the tissues. The PBS is shaken off, and BSA is dropped on and the sections sealed for 30\u2009min. The sections are stained with the ROS fluorescent probe dihydroethidium. The sections are placed flat in a wet box and incubated at 4\u00b0C overnight. The slides are placed in PBS (pH 7.4) and washed by shaking on a decolorizing shaker 3 times, 5\u2009min each. The slices are briefly shaken dry and sealed with antifluorescence quenching sealing tablets. The sections are observed under a fluorescence microscope, and the collected images are analyzed semiquantitatively with Image-Pro Plus 6.0 image analysis software. The interpretation of the ROS frozen-section immunofluorescence results is that, after the slide is sealed, positive staining appears red under the fluorescence microscope.For lysis, single-detergent lysis solution (including PMSF) is added in a homogenizer; the sample is placed on ice after homogenization, homogenized again 5 minutes later, and this process is repeated several times to ensure that the tissue is crushed as thoroughly as possible. 
After lysis for 30\u2009min, the lysate is transferred to a 2\u2009mL centrifuge tube with a pipette and centrifuged at 12,000\u2009rpm at 4\u00b0C for 5\u2009min. The supernatant is aliquoted into 0.5\u2009mL centrifuge tubes and stored at \u221220\u00b0C until testing. According to the molecular weight of the target protein, a 10% SDS-PAGE gel is prepared for vertical electrophoresis, and the loading amount is adjusted according to the protein concentration, keeping the loading at 45\u2009\u03bcg of protein per well. The sample protein is concentrated at a voltage of 90\u2009V for 1\u2009h and then separated by electrophoresis at a voltage of 120\u2009V for 100\u2009min. After electrophoresis, the PVDF membrane is immersed in pure methanol for 10\u2009min and deionized water for 20\u2009min, and the proteins are transferred to the PVDF membrane at a constant voltage of 100\u2009V for 120\u2009min. The membrane is blocked with 5% skim milk powder on a shaker for 1.5\u2009h at room temperature and washed with PBS 3 times, 15\u2009min each time. Then, the P-Akt and internal reference GAPDH antibodies are diluted strictly according to the instructions and incubated at 4\u00b0C overnight. The membrane is washed with TBST solution 3 times, 15\u2009min each time, the secondary antibody is added, and the membrane is shaken at room temperature for 2\u2009h. ECL color-development solution is used for development, and fixer solution is used to terminate development. Each experiment is repeated 3 times.For protein extraction, a 100\u2009mg tissue sample is placed in a 1\u20132\u2009mL homogenizer, the tissue is cut into pieces as finely as possible with sterile scissors, and 400\u2009\u03bcL of lysis solution is added.The expression of HIF-1\u03b1 in cholesteatoma and normal tissues is detected. Positive cells are counted in 5 fields under 400x magnification, and the proportion of positive cells is calculated. 
Criteria: (\u2212): positive cells <10%; (+): 10%\u201325%; (++): 26%\u201350%; (+++): >50%.The tissue samples obtained as above are fixed in 10% formalin fixation fluid, paraffin-embedded, and serially sectioned. After drying, the sections undergo routine dewaxing, hydration, rinsing through a gradient ethanol series and distilled water, and soaking in PBS, followed by incubation in 3% hydrogen peroxide in deionized water, high-pressure antigen retrieval, and incubation with goat serum working solution. After incubation at room temperature, the sections are washed with PBS, and horseradish-peroxidase-labeled streptavidin working solution is added. After incubation at room temperature, the sections are washed with PBS, developed with DAB, counterstained with hematoxylin after rinsing with distilled water, cleared in xylene, and coverslipped. The expression of HIF-1\u03b1 in each group is compared. The correlation between ROS, P-Akt, and HIF-1\u03b1 expression and the degree of bone destruction in CMEC patients is analyzed.Intraoperative and postoperative indicators are compared between the two groups, including operation time, postoperative complications, and surgical success rate. The hearing changes of the two groups are compared before and 6 months after surgery. The expression of ROS, P-Akt, and HIF-1\u03b1 is measured.SPSS 26.0 software is used to process the data in the study. The measurement data are tested for normality and homogeneity of variance and meet the normal distribution. The mean\u2009\u00b1\u2009standard deviation is used to represent the data, and the t-test is performed. The Spearman correlation coefficient is used to analyze the correlation between the expression of ROS, P-Akt, and HIF-1\u03b1 and the degree of bone damage in CMEC patients. P < 0.05 indicates a statistically significant difference. 
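The comparisons described in this section (t-tests on pre-/post-operative values and the Spearman correlation between marker expression and bone-damage grade) can be illustrated with SciPy in place of SPSS. The data below are simulated and the effect sizes are invented purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical hearing thresholds (dB) before and 6 months after surgery
before = rng.normal(55, 5, 36)
after = before - rng.normal(15, 3, 36)

# Paired t-test comparing pre- and post-operative values within a group
t_stat, p_value = stats.ttest_rel(before, after)
print(f"paired t-test: t={t_stat:.2f}, p={p_value:.4f}")

# Spearman correlation between a marker expression score and a
# bone-damage grade (both hypothetical, n = 68 as in the study)
expression = rng.normal(1.0, 0.2, 68)
bone_damage = (expression + rng.normal(0, 0.2, 68)).round(1)
rho, p_rho = stats.spearmanr(expression, bone_damage)
print(f"Spearman rho={rho:.2f}, p={p_rho:.4f}")
```

As in the text, P < 0.05 would be read as statistically significant; a positive rho corresponds to the reported positive correlation with bone destruction.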
It can be observed that there is no statistically significant difference between the two groups before surgery (P > 0.05) and that the index values decrease significantly 6 months after surgery. The inter-group comparison shows that the index values of the ear endoscopy intervention group decrease more markedly than those of the microscope group.The ROS, P-Akt, and HIF-1\u03b1 expressions in each group are then compared. From the experimental results, it can be observed that the levels of ROS, P-Akt, and HIF-1\u03b1 in the severe and mild groups are significantly higher than those in the control group, and those in the severe group are significantly higher than those in the mild group (P < 0.05). The percentage of cholesteatoma cells with HIF-1\u03b1 expression graded (++) or (+++) increases significantly in the severe and mild groups, and is significantly higher in the severe group than in the mild group (P < 0.05).The correlation between ROS, P-Akt, and HIF-1\u03b1 expression and the degree of bone damage is then analyzed; the degree of bone damage is positively correlated with ROS, P-Akt, and HIF-1\u03b1 expression in patients (P < 0.05).Ear endoscopic intervention has better clinical efficacy for CMEC patients and is worthy of clinical application. At the same time, this study conducts in-depth research on the occurrence and development mechanism of middle ear cholesteatoma, confirming that the degree of bone destruction is significantly correlated with ROS, P-Akt, and HIF-1\u03b1, and suggesting that hypoxia can promote the proliferation of cholesteatoma cells. The disease progression of such patients can be determined by detecting the above indicators, and diagnosis and treatment plans can be developed and improved around them, thereby improving clinical efficacy. 
The study provides a new target and idea for the prevention and treatment of cholesteatoma.The clinical efficacy of ear endoscopic intervention in patients with CMEC is explored, along with the relationship between the expression of ROS, P-Akt, and HIF-1"} +{"text": "This study examined whether the behaviour of Internet search users obtained from Google Trends contributes to the forecasting of two Australian macroeconomic indicators: the monthly unemployment rate and the monthly number of short-term visitors. We assessed the performance of traditional time series linear regression (SARIMA) against a widely used machine learning technique (support vector regression) and a deep learning technique in forecasting both indicators across different data settings. Our study focused on the out-of-sample performance of the SARIMA, SVR, and CNN models in forecasting the two Australian indicators. We adopted a multi-step approach to compare the performance of the models built over different forecasting horizons and assessed the impact of incorporating Google Trends data in the modelling process. Our approach supports a data-driven framework, which reduces the number of features prior to selecting the best-performing model. The experiments showed that incorporating Internet search data in the forecasting models improved the forecasting accuracy and that the results were dependent on the forecasting horizon, as well as the technique. To the best of our knowledge, this study is the first to assess the usefulness of Google search data in the context of these two economic variables. An extensive comparison of the performance of traditional and machine learning techniques on different data settings was conducted to enable the selection of an efficient model, including the forecasting technique, horizon, and modelling features. Forecasting the trends of economic indicators is crucial for policy makers and investors to make informed decisions. 
However, the official release of the indicators suffers from an information time lag because of the time and effort needed to collect the required data. To address this issue, researchers have aimed to nowcast and forecast the economic indicators.The unemployment rate is one of the key indicators due to its direct connection to the economic cycle and its influence on decision-makers. Several researchers have attempted to improve the forecasting of the unemployment rate for various developed and developing countries. While some authors have applied different machine learning techniques to forecast unemployment, others have focused on incorporating additional data sources.In addition to the unemployment rate, we selected another indicator for our experiments: the number of short-term travellers visiting Australia. Being a destination for millions of tourists, the tourism industry in Australia is directly linked to its economic wellbeing. Forecasting the number of incoming travellers will assist investors in making their investment decisions and government agencies in properly allocating their resources to accommodate the number of travellers. Researchers have used online search data for different applications within the tourism industry. While some have focused on forecasting the hotel demand for particular cities or countries, others have forecast visitor numbers.The selection of the two indicators analysed in this study, which are released monthly by the Australian Bureau of Statistics, was based on their ability to reflect the behaviour of Google users across different geographical locations. While Google Trends data collected within Australia were used to forecast the monthly unemployment rate, we employed globally searched keywords via the Google engine to forecast the number of travellers visiting Australia. This approach enabled us to assess the applicability of Google Trends data for two distinct settings and evaluate the forecasting horizon associated with the behaviours of both local and international users. 
Furthermore, we present a novel forecasting framework that selects the optimally performing model from two families of techniques suitable for forecasting time series data, namely traditional linear techniques (SARIMA and SARIMAX) and machine learning techniques (SVR). The framework also incorporates feature selection techniques, which play a crucial role in the forecasting process. It should be noted that the prior literature has not explored this aspect to the extent presented in this paper.In our paper, we examined the predictive power of Google Trends data using support vector regression (SVR) and convolutional neural networks (CNNs) against traditional linear regression techniques such as SARIMA and SARIMAX in forecasting the two selected time series indicators. The remainder of the paper is organised as follows.Over the last few years, several attempts have been made to explore the potential benefits of using Internet search data in forecasting economic variables.Forecasting unemployment has become an area of interest for researchers. There are two areas of focus to improve its accuracy: incorporating additional data sources (mainly Internet search data) and using non-traditional techniques.Incorporating online search data, in particular Google Trends, in unemployment forecasting has drawn researchers\u2019 attention. Evidence can be found for research applied in different countries, including the U.S.These studies did not establish whether Internet data can replace or complement traditional methods. Some authors obtained better results when combining both kinds of data in their model.There is another set of research focused on using alternative techniques to forecast unemployment. 
Researchers have compared several machine learning techniques, such as artificial neural networks (ANNs) and support vector regression.Considering that researchers have not tested the impact of search data when forecasting unemployment using traditional and machine learning techniques, we were interested in assessing the efficacy of Google Trends in Australia, where Google search is widely used. For this forecasting purpose, we employed SARIMA, SVR, and CNNs on an expanded list of search keywords related to Australia.The real-time characteristics of Internet search data have motivated researchers to examine their predictive power in the tourism and hospitality industry. The scope of past research varies from forecasting hotel demand to the number of visitors to cities and countries.Several research works have successfully employed Internet data to forecast the demand for hotel rooms and flights over different forecasting horizons. While fewer studies have forecast the number of visitors on a macro level, there has not been an assessment of the benefit of using search data together with historical visitors\u2019 data using machine learning techniques. In our paper, we used the same approach applied to the unemployment data to evaluate the SARIMA and SVR results in forecasting the number of short-term visitors coming to Australia. A similar comparison was performed recently by Botta et al.The initial stage of our research involved data collection. We utilised two main sources of data: Australian economic indicators and Google Trends data. For the economic indicators, we extracted the historical data of two key indicators for the Australian economy from the Australian Bureau of Statistics\u2019 website: the monthly unemployment rate and the monthly number of short-term visitors arriving in the country. These indicators represent essential aspects of the economy, and forecasting their future values would offer significant insights for policymakers and economic stakeholders. 
Those figures are often calculated by conducting surveys and collecting data from different agencies, leading to a delay in publishing the most recent numbers. Monthly unemployment rate data are available from February 1978, while the number-of-visitors data cover the period starting in January 1991.Australia has a stable economy. The unemployment rate has not surpassed the 10% mark since 1994 and has since fluctuated between 4% and 6%. As can be seen in the data, Australian unemployment is seasonal in nature, with the same trend repeated each year. For example, there has always been an increase in the unemployment rate after December, and this is expected to continue in the future. Since we used the SARIMA model, the parameter m that indicates the cycle of the trend was set to 12.In parallel, we collected Google Trends data related to the aforementioned economic indicators. Google Trends data, offered by Google since 2014, provide the search frequency of keywords: the ratio of the search volume of a certain keyword to the total search volume of all keywords in a certain period of time, further normalised into the interval [0, 100], which avoids changes driven purely by growth in the number of users. They represent a rich source of insights about public interest in various topics over time. By selecting search terms related to the economic indicators, we could gauge public interest in these topics and examine the potential predictive power this interest holds for future economic conditions.In this paper, we searched for keywords related to each of the two target indicators. For unemployment, the process of selecting search keywords began by considering what Internet users would search for if they became, or were about to become, unemployed. 
There were some limitations associated with selecting search keywords relevant to unemployment. Centrelink offers several services other than unemployment benefits; therefore, a change in its trend does not necessarily reflect changes in demand for those benefits. Additionally, there are certain job vacancies, such as those in the construction industry, that might not be posted on the \u201cSeek\u201d website; an increase in unemployment in the construction industry might not lead to an increase in visits to that popular job search website. Furthermore, there are other popular platforms, such as LinkedIn, that can be accessed via a mobile application or directly through the website to look for job vacancies.Given these limitations of Google Trends data, we used the extracted time series as a proxy for changes in the labour market rather than as an accurate reflection of changes in the Australian unemployment rate.The selected Google indicators used in forecasting the number of short-term visitors are shown in the corresponding table.One of the limitations of using keywords looked up all over the world is that this includes the searches of users within Australia. Searches for those keywords by Australian residents do not contribute to the number of tourists visiting Australia. For this exercise, we assumed that searches for these terms within Australia did not create any noise, as there were no noticeable changes in the search trends. Additionally, the search results were limited to the Google engine and did not include the usage of people residing in China due to the restriction on using Google in China; Chinese nationals make up a large proportion of tourists visiting Australia.After the data collection, we proceeded to the feature-engineering phase. The goal was to transform the collected data into a format that could be more effectively utilised by our predictive models. 
This involved creating new variables based on our raw data that better represent the underlying trend patterns for the predictive models. This process was applied in our study to increase the predictive performance of our models.For our dataset from the Australian Bureau of Statistics (ABS) and Google Trends, the original data were augmented by creating time-based features. These features were designed to capture the dynamic behaviour and trends in the data over time. These included lagged values of the indicators themselves and derived statistics such as moving averages.We created lag features for each dataset, specifically for the 12 previous months. The assumption here was that the current month\u2019s value of a given economic indicator or Google Trends value would have some correlation with its past values. For instance, if the unemployment rate was high last month, it could likely be high in the current month as well, barring any substantial changes in the economic environment.Lagged features were derived by shifting the time series data by one period (month) to create a new feature (Lag-1), by two periods to create another feature (Lag-2), and so on, up to twelve periods (Lag-12). This was carried out because it is plausible that both the dependent variables and the Google Trends indicators could have monthly seasonality that last up to a year, and we wanted our models to capture this potential seasonal effect.We also created moving average features, which represent the mean of the data points over a specified period. These were calculated for the last 3, 4,\u2026, and 12 months. 
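The lag and moving-average construction described above might look as follows in pandas. This is an illustrative sketch; the series and column names are hypothetical, not those used in the study.

```python
import numpy as np
import pandas as pd

# Illustrative monthly series (e.g., an indicator or a Google Trends keyword)
idx = pd.date_range("2010-01-31", periods=36, freq="M")
df = pd.DataFrame({"unemployment_rate": np.linspace(5.0, 6.0, 36)}, index=idx)

# Lag-1 ... Lag-12: shift the series by k months so each row carries
# the values observed 1..12 months earlier.
for k in range(1, 13):
    df[f"lag_{k}"] = df["unemployment_rate"].shift(k)

# Moving averages over the last 3..12 months smooth out volatility.
for w in range(3, 13):
    df[f"ma_{w}"] = df["unemployment_rate"].rolling(window=w).mean()

# A 'month' feature (1-12) captures potential seasonality.
df["month"] = df.index.month

# Rows whose lags reach back before the start of the sample contain NaNs.
df = df.dropna()
print(df.shape)  # -> (24, 24): 12 initial rows lost to the 12-month lag
```

Dropping the first twelve rows is the price of the longest lag; in practice this simply shortens the usable training window by one year.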
The rationale for creating these features is that, while individual data points (such as a spike in search interest or a dip in unemployment) can be quite volatile, the average value over a certain period can provide a smoother representation of the underlying trend in the data.In addition to the lag and moving average features, a \u201cmonth\u201d feature was created to capture any potential seasonal effects. This feature represents the month of the year (a number between 1 and 12) at each data point. This is particularly important for data such as tourism, which can show substantial variation depending on the time of year. The high-volatility components associated with time series data are often very difficult to model successfully; hence, a scaling and/or transformation process is usually performed on the series prior to implementing the actual experiments.Since we wished to be able to correctly predict the direction of movement of the number of short-term visitors, we applied a data transformation to the data series, which would result in better performance. To achieve a logarithmic transformation of our short-term visitors\u2019 data, the following equation (natural logarithm) was applied: y_t = ln(p_t), where y_t is the transformed number of visitors and p_t is the original value.In recent years, many feature-selection methods have been proposed. These methods can be categorised into three groups: filter, wrapper, and embedded methods.Filter methods calculate the score of each feature and rank them accordingly without dependency on the model. They are simple to implement, easy to interpret, and work effectively with high-dimensional data. Filter methods are fast strategies that provide good results in classification tasks.After engineering a wide range of features from the target variables and Google Trends indicators, we applied different feature-selection methods that incorporated recursive feature elimination (RFE) with mutual information (MI) and the f_test. 
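One way to realise the RFE-plus-filter selection just introduced is with scikit-learn: RFE with a decision tree keeps half of the features, and the survivors are then ranked by the f_test and mutual information. The synthetic data are a stand-in for the engineered feature matrix, and merging the two filtered sets by union is our assumption, since the paper does not spell out how they are combined.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor
from sklearn.feature_selection import RFE, f_regression, mutual_info_regression

# Synthetic stand-in for the engineered feature matrix (lags, moving
# averages, Google Trends indicators): 40 features, 10 informative.
X, y = make_regression(n_samples=120, n_features=40, n_informative=10,
                       noise=0.1, random_state=0)

# RFE with a decision tree as estimator keeps the top 50% of features.
rfe = RFE(DecisionTreeRegressor(random_state=0),
          n_features_to_select=X.shape[1] // 2)
rfe.fit(X, y)
kept = np.where(rfe.support_)[0]

# Score the surviving features with the f_test and mutual information,
# then keep the top 10% by f_test and the top 25% by MI.
f_scores, _ = f_regression(X[:, kept], y)
mi_scores = mutual_info_regression(X[:, kept], y, random_state=0)

n_f = max(1, int(0.10 * len(kept)))
n_mi = max(1, int(0.25 * len(kept)))
top_f = kept[np.argsort(f_scores)[::-1][:n_f]]
top_mi = kept[np.argsort(mi_scores)[::-1][:n_mi]]
selected = np.union1d(top_f, top_mi)  # union of the two sets (our assumption)
print(len(selected), "features selected")
```

Using a tree inside RFE lets the wrapper stage pick up feature interactions, while the two univariate filters stay cheap to compute, which mirrors the rationale given in the text.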
These methods provided us with a robust and diverse perspective on feature importance. For the exogenous variables derived from Google Trends, we used the Pearson correlation to determine the most-relevant variables, which were used to train the SARIMAX model.The wrapper method, RFE, uses a machine learning algorithm to rank features by importance and recursively eliminates the least-important features. This method can capture interactions between features since it uses a machine learning model for ranking.The filter methods, the f_test and mutual information, rank features based on their individual predictive power. The f_test checks the correlation between each feature and the target variable, while mutual information measures the dependency between the feature and the target. A higher mutual information means a higher dependency.By using these methods together, we obtained the benefits of both: the power of a machine learning model to capture complex relationships and the speed and simplicity of univariate statistics.The filter feature selection approach used for the SVR and CNN models proceeds as follows: (1) create a training dataset; (2) perform RFE using a decision tree as an estimator; (3) select the top 50% of the features from RFE; (4) compute the mutual information value (MIV) and the f_test for the remaining features; (5) filter out features based on the f_test and MIV, selecting the top 10% of features based on the f_test and the top 25% based on the MIV.The approach of using the Pearson correlation as a feature-selection method for our SARIMAX model is a straightforward yet effective one, given that SARIMAX is not capable of modelling non-linear relationships.The Pearson correlation coefficient measures the linear relationship between two datasets. It ranges from \u22121 to 1. 
A correlation of −1 indicates a perfect negative linear relationship; a correlation of 1 indicates a perfect positive linear relationship; a correlation of 0 indicates no linear relationship. In our experiment, we selected only those exogenous variables with an absolute correlation value greater than 0.4, which are considered to have a moderate to strong linear relationship with the dependent variable. This helps reduce the dimensionality of our data and may improve the interpretability and performance of our models. In summary, we chose a combination of feature selection and reduction techniques in our experiments to highlight the importance of incorporating such techniques into the modelling process to improve model accuracy. A comparison of different techniques is out of scope for this paper; however, the selected techniques can detect different kinds of relationships between the created features and the target variable. The seasonal auto-regressive integrated moving average (SARIMA) model is an extension of the ARIMA model. ARIMA models are a subset of linear regression models that use past observations of the target variable to forecast future values. The “S” in SARIMA stands for seasonal: it adjusts the model to deal with repeated patterns. Seasonal data can be easily identified by looking at repetitive spikes over the same period of time. Those spikes are consistently cyclical and easily predictable, which suggests that we should look past the cyclicality and adjust for it. Since SARIMA can only use past values of the target variable, SARIMAX is used to incorporate exogenous variables. When using SARIMAX, the input data include parallel time series variables that serve as weighted inputs to the model. To find the optimal SARIMA and SARIMAX models, a grid search over the parameter values was performed. 
The best model found will have the lowest Akaike information criterion (AIC) and Bayesian information criterion (BIC). SARIMA and SARIMAX were used as the baseline models for the time series forecasting of the two Australian indicators of interest: the monthly unemployment rate and the monthly number of short-term visitors. Since the SARIMAX model can only detect linearity between the target variable and the past values of the input data, we employed SVR and CNNs to check whether there was non-linearity between the input features and the target variable and, therefore, whether the forecasting performance could be improved over that of SARIMAX. SVR was introduced by Drucker et al. Convolutional neural networks (CNNs) were introduced by Yann LeCun, Yoshua Bengio, and others in the 1990s. To carry out non-linear regression using SVR and CNN, it is necessary to create a higher-dimensional feature space from the time series data. To evaluate the performance of the SARIMA, SVR, and CNN models on out-of-time sample data, we used two different metrics: the mean-squared error (MSE) and the symmetric mean absolute percentage error (SMAPE). The MSE corresponds to the expected value of the squared error or loss. If ŷ_i is the predicted value of the i-th sample and y_i is the corresponding true value, then the MSE estimated over n samples is defined as: MSE = (1/n) Σ_{i=1..n} (y_i − ŷ_i)^2. The SMAPE is an accuracy measure based on percentage (or relative) errors, defined as: SMAPE = (1/n) Σ_{t=1..n} |F_t − A_t| / ((|A_t| + |F_t|)/2), where A_t is the actual value and F_t is the forecast value. The absolute difference between A_t and F_t is divided by half the sum of the absolute values of A_t and F_t; this quantity is summed over every fit point t and divided again by the number of fit points n. 
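The two metrics just defined translate directly into code (SMAPE is shown here as a fraction; multiply by 100 for a percentage):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean-squared error: the average of the squared residuals."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean((y_true - y_pred) ** 2)

def smape(actual, forecast):
    """Symmetric MAPE: |F_t - A_t| over half the sum of |A_t| and |F_t|, averaged."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs(f - a) / ((np.abs(a) + np.abs(f)) / 2))

a = np.array([100.0, 200.0, 300.0])   # actual values
f = np.array([110.0, 190.0, 300.0])   # forecasts
print(mse(a, f), smape(a, f))         # approx. 66.667 and 0.0488
```

A perfect forecast gives 0 under both metrics; unlike the MSE, the SMAPE is scale-free, which makes it easier to compare across series of different magnitudes.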
A perfect SMAPE score is 0.0, and a higher score indicates a higher error rate. Further statistical significance testing was applied to evaluate the performance of the different techniques and to determine whether there were significant differences among them. One approach is to use analysis of variance (ANOVA) on the predicted values generated by multiple models. ANOVA assesses the variation between the predicted values of different models and compares it to the overall variation in the data. The goal was to determine whether there are statistically significant differences in the performance of the models. After performing ANOVA, if significant differences are detected, further analysis can be conducted using post hoc tests to identify the specific pairs of models that differ significantly from each other. One commonly used post hoc test is Tukey’s honestly significant difference (HSD) test. The Tukey HSD compares all possible pairs of models and determines whether the differences in their predicted values are statistically significant. This statistical significance approach helps with comparing and ranking the models based on their performance and with identifying the models that significantly outperform or underperform others. It provides a quantitative and objective measure of the statistical differences between the techniques, allowing for informed decision-making in selecting the most appropriate model for time series forecasting tasks. In this paper, we sought to examine the out-of-sample forecast performance of the SARIMA, SVR, and CNN models with a focus on two key Australian indicators: the unemployment rate and the monthly number of short-term visitors. The design of our experiments was intended to assess the influence of the COVID-19 pandemic on the correlation between our chosen indicators and Google Trends data. 
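The ANOVA-then-Tukey procedure described above can be sketched with scipy and statsmodels; the per-model error samples below are synthetic stand-ins for the real prediction errors.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
errors = {                                   # illustrative per-model absolute errors
    "SARIMAX": rng.normal(1.0, 0.2, 30),
    "SVR":     rng.normal(0.8, 0.2, 30),
    "CNN":     rng.normal(1.4, 0.2, 30),
}

# One-way ANOVA across the three models' error samples.
f_stat, p_value = stats.f_oneway(*errors.values())
print(f"ANOVA: F={f_stat:.2f}, p={p_value:.4f}")

# Post hoc Tukey HSD, run only when the ANOVA is significant.
if p_value < 0.05:
    values = np.concatenate(list(errors.values()))
    labels = np.repeat(list(errors.keys()), 30)
    print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```

The Tukey table reports, for every model pair, the mean difference and whether the null hypothesis of equal means is rejected, which is what allows the models to be ranked pairwise rather than only jointly.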
By intentionally omitting data from the last three years and focusing on the pre-pandemic period, we evaluated whether the dynamics between the indicators and Google Trends differed during a relatively more economically stable period. In each experimental setup, we constructed four iterations of each of our 12 models. In this section, we present an overview of the results for each individual set of experiments. Additional comparisons between Experiments 1 and 2 and between Experiments 3 and 4 were conducted to highlight the differences in performance between the models and in the features selected for the different data-driven settings influenced by COVID-19. Given the large number of experiments and the comprehensive statistical significance tests for the built models, only comparisons of results using the MSE are presented. Experiment 1: The first experiment revealed significant differences in performance among the models across the four forecasting horizons. The SARIMAX_ALL model outperformed all others at the 3-, 6-, and 12-month horizons, indicating its strong predictive power in the short to mid term. Interestingly, the SARIMA_HIST model, which utilises historical data without exogenous variables, performed better at the 24-month horizon, hinting at its efficacy in capturing long-term trends and cycles. Compared to SARIMA_RECENT, which takes into account only recent data, SARIMA_HIST’s superior performance at the 24-month horizon suggested that a broader historical context enhances long-term forecasting. 
SARIMAX_ALL’s outperformance of SARIMA_HIST and SARIMA_RECENT at shorter horizons demonstrated the value of integrating all available features, including exogenous variables, into time series models for short-term forecasts. Experiment 2: In the second experiment, the superiority of the SARIMAX_ALL model continued at the 6- and 12-month horizons, but it faced competition from the CNN_TARGET_GI_FS model at the 3-month horizon. This indicates that deep learning models like CNN_TARGET_GI_FS can capture intricate data patterns more effectively in the short term. At the 24-month horizon, however, the SARIMA_HIST model again outperformed, reaffirming the notion that simpler models utilising a broader historical context fare better in long-term forecasting. Feature-selection models, such as SARIMAX_FS and CNN_TARGET_GI_FS, performed comparably to their all-feature counterparts at shorter horizons, suggesting that narrowing down the feature set does not necessarily impair short-term predictive capacity. Experiment 3: The third experiment introduced a new dominant model: SVR_TARGET_GI_FS. This machine learning model with feature selection demonstrated the best performance at the 3- and 6-month horizons, outperforming both SARIMA variants and SARIMAX_ALL. This suggested that machine learning techniques coupled with feature selection can excel in short-term forecasts. However, the SARIMAX_ALL model still held its ground at the 12-month horizon, and SARIMA_HIST regained superiority at the 24-month horizon. Again, feature-selection models showed strong performance: the SVR_TARGET_GI_FS model’s superiority over SARIMAX_ALL at shorter horizons indicated that feature selection can even outperform all-feature models in certain situations. Experiment 4: In the final experiment, the deep learning model CNN_TARGET_GI_FS excelled at the 3- and 6-month horizons, while SARIMAX_ALL performed best at the 12-month horizon. 
At the 24-month horizon, the SVR_TARGET_FS model, a machine learning model with feature selection, surpassed the other models, affirming the potency of feature selection for longer-term forecasting. Across all four experiments, the results demonstrated the strengths and weaknesses of each model at different forecasting horizons, the potential advantages of machine learning and deep learning techniques over traditional SARIMA/SARIMAX models, and the possible gains from employing feature selection. Taken together, these experiments provided nuanced insights into the interplay between the traditional models (SARIMA and SARIMAX) and the more modern ML and DL techniques. While the former maintained strong performance at medium-term horizons, in particular when supplemented with a complete feature set, the latter, especially when utilising feature selection, appeared more effective for both short- and long-term forecasting. Thus, the decision between ML/DL and traditional methods hinges on the forecasting horizon, underlining the importance of a targeted approach in time series prediction. Comparing both experiments, it was clear that the inclusion or exclusion of the COVID period data significantly influenced the predictive power of the models. 
In the shorter-term forecasts, the exclusion of the COVID period seemed to enhance the performance of models such as the CNNs, possibly due to the reduction of unprecedented volatility in the training data. In contrast, the SARIMAX model, which was the most effective short- and mid-term forecasting model when the COVID data were included, saw its dominance reduced when the COVID data were excluded. This indicated that the SARIMAX model might be particularly effective at accounting for abrupt exogenous shocks such as the COVID pandemic. At the 24-month horizon, the SARIMA_HIST model remained the superior performer with or without the COVID data, indicating its robustness in long-term forecasting regardless of drastic economic changes. These comparisons highlight the importance of considering both the stability of the economic environment and the characteristics of the training data when selecting and interpreting forecasting models. They also demonstrate how different models may respond differently to periods of economic volatility, further emphasising the need for careful model selection based on the specific context and forecasting horizon. This research investigated the efficacy of various traditional and machine learning models in forecasting key economic indicators, namely the monthly unemployment rate and the monthly number of short-term visitors to Australia. It also explored the role of Google Trends data in enhancing the forecasting performance of these models. Overall, the results indicated that both machine learning (ML) and deep learning (DL) models offer considerable advantages over traditional SARIMA and SARIMAX models in forecasting these indicators, particularly at the shorter-term forecasting horizons. For instance, the SVR model demonstrated superior performance over SARIMA and SARIMAX in predicting the unemployment rate across all forecasting horizons in Experiments 1 and 2. 
Similarly, the CNN model was more effective than its traditional counterparts in predicting short-term visitor numbers in Experiments 3 and 4, especially at the short- to mid-term forecasting horizons. These findings align with the growing recognition of ML and DL techniques as valuable tools in economic forecasting, capable of handling complex data structures and identifying intricate patterns in the data. However, the results also underscored the robustness of traditional models such as SARIMA and SARIMAX in long-term forecasting, reminding us of their enduring relevance in certain forecasting contexts. Importantly, the inclusion of Google Trends data proved to enhance the forecasting performance of several models. Models incorporating Google Trends data, such as SARIMAX and the CNN with feature selection, consistently outperformed their counterparts that relied solely on historical data, particularly at the short- to mid-term forecasting horizons. These findings affirmed the potential of Google Trends data as a valuable supplement to traditional economic data, particularly in an era where digital information plays an increasingly central role in economic activities. This study, however, was not without its limitations. The forecasting performance of the models might be sensitive to the inclusion or exclusion of extreme events, such as the COVID-19 pandemic period data. The volatility introduced by such events can impact the predictive capability of different models in various ways, making it difficult to ascertain the most effective model across all possible contexts. Furthermore, while the study considered a broad range of models and data types, there are still other potentially useful models and data sources that remain unexplored. 
For instance, other types of ML and DL models, such as recurrent neural networks (RNNs) and transformers, might offer different insights or outperform the models investigated in this study. Future research should aim to address these limitations and explore these uncharted territories. More comprehensive investigations could consider a broader range of extreme events and their impacts on different models or investigate other types of ML and DL models and their efficacy in forecasting economic indicators. Moreover, future studies could explore other types of auxiliary data, such as social media data or other online data, to gauge their potential in enhancing economic forecasts. In conclusion, this research underscored the potential of ML and DL techniques in economic forecasting and highlighted the value of integrating Google Trends data into these models. However, it also stressed the importance of model selection based on the specific forecasting context and the need for the continuous exploration of novel models and data sources to enhance our forecasting capabilities."}
+{"text": "Interconnected hierarchically porous structures are of key importance for potential applications as substrates for drug delivery, cell culture, and bioscaffolds, ensuring cell adhesion and sufficient diffusion of metabolites and nutrients. Here, encapsulation of a vitamin C-loaded gel-like double emulsion using a hydrophobic emulsifier and soy particles was performed to develop a bioactive bioink for 3D printing of highly porous scaffolds with enhanced cell biocompatibility. The produced double emulsions exhibited mechanical strength within the range of the elastic moduli of soft tissues, possessing a thixotropic feature and a recoverable matrix. The outstanding flow behavior and viscoelasticity broaden the potential of the gel-like double emulsion to engineer 3D scaffolds, in which the 3D constructs showed a high level of porosity and excellent shape fidelity with antiwearing and self-lubricating properties. 
Investigation of cell viability and proliferation using fibroblasts (NIH-3T3) within the vitamin C-loaded gel-like bioinks revealed that the printed 3D scaffolds offered excellent biocompatibility and cell adhesion. Compared to scaffolds without encapsulated vitamin C, 3D scaffolds containing vitamin C showed higher cell viability after 1 week of cell proliferation. This work represents a systematic investigation of hierarchical self-assembly in double emulsions and offers insights into the mechanisms that control microstructure within supramolecular structures, which could be instructive for the design of advanced functional tissues. Of great interest in science and technology is the implementation of such porous hierarchies in artificial materials, from the molecular level to macroscopic dimensions, with the highest possible precision. For example, a micropore structure facilitates cell migration and proliferation, intracellular signaling, and cell adhesion [6]. To develop innovative methods for designing functional hierarchically structured porous materials, knowledge of the relationships between natural porous constructions and their functionalities is crucial. A wide variety of additive manufacturing techniques have been utilized to fabricate hierarchically macroporous structures with well-defined porosity [9], providing an extensive selection of biomaterial formulations that offer outstanding bioactivity, biodegradability, biocompatibility, and drug delivery, and endow suitable tensile strength. The fabrication of next-generation engineering scaffolds with improved multifunctionality needs the rational design of structured materials, supported by an understanding of basic structure–function relations. To fabricate a well-defined hierarchically macroporous structure, bottom-up methods related to self-assembly routes can also be employed [12]. 3D printing is widely utilized to produce anisotropic and microfibrous structures comparable to classical muscular tissue through an exclusive bottom-up 
layer-by-layer method [15]. The printed microfibrous tubular structures have been employed in numerous vital human tissues, including blood vessels, nerve ducts, and muscle fibers. They can efficiently encapsulate microfibrous structures because of their effective nutrient diffusion and bionic geometry [18]. Numerous kinds of microfibrous structures can be produced through 3D printing, including flat microfibers [18], porous microfibers [19], coaxial/parallel laminar composite microfibers [21], and microfibers with microstructure patterns [22]. The rheological parameters strongly influence the development of functionalized microfibrous structures with regard to the actual extrudability [21], printability [22], printing accuracy [23], and shape fidelity [24] of 3D-printed structures. Alongside this, accurate stability and formation of the printed objects are also essential to engineering a self-supported 3D architecture [26]. A good understanding of the flow behavior [27], viscoelasticity [28], and thixotropic features [29] of printing inks is therefore necessary to develop a high-quality 3D object. Additive manufacturing, frequently known as 3D printing, is an innovative concept with valuable potential to manufacture hierarchical mesostructures using specific control of printing inks, which can open up a new avenue to create efficient shape-changing objects [32]. A lack of printability and shape fidelity generally indicates inadequate mechanical strength/toughness after printing, together with weak biocompatibility and bioactivity. Therefore, 3D-printed microfibrous structures should offer biocompatibility in addition to biodegradability, in order to mimic and support cell growth and to degrade in step with the development of new cells. There is normally a positive correlation between biocompatibility and mechanical strength or toughness, which upholds mechanical support during healing and inhibits a stress-shielding effect. 
Principally, comparatively poor biocompatibility relates to low mechanical strength and toughness [33], which can result in critical limitations in tissue engineering. For instance, the biocompatibility of microfibrous structures is reduced after the incorporation of synthetic polymers into a high-toughness hydrogel [34]. Hence, this evidence demonstrates a serious issue in manufacturing fibrous hydrogels with a porous structure, improved toughness, high mechanical properties, and biocompatibility. Indeed, several extrinsic functionalizations of the scaffold structure have been proposed to help improve bioperformance [35], including encapsulation [36], biopolymeric additives [37], and dopants/coatings [38]. This supports their appropriateness for tissue engineering, along with other biomedical applications. Such tough microfibrous structures offer great biocompatibility, improving the differentiation and proliferation of human mesenchymal stem cells [39]. Conventionally, there has been a correlation between an ideal printable ink and the 3D printing quality of microfibrous structures with a porous structure [40]: the ink should uphold the printed architecture and be capable of sticking to the previously deposited layers. Concerning the progress in the application of Pickering systems, gel-like emulsions produced from Pickering emulsions are attracting growing attention in 3D printing and bioprinting applications [42]. As semisolid colloidal dispersions, Pickering emulsion gels are stabilized by an adsorbed layer of solid particles and combine the properties of both emulsions and gels [43]. They offer crucial consistency in 3D-printed structures, contributing to porous architectures [45]. In recent years, there has been attention to exploiting emulsion gels as printable inks in the 3D printing process. For instance, Yu et al. [46] investigated the 3D printing performance and freeze–thaw stability of a soy-based emulsion gel ink as affected by varying percentages of guar and xanthan gums. 
Li et al. [47] prepared an emulsion gel by self-assembly of gelatin and Pickering emulsions based on gallic acid-modified chitosan nanoparticles. Shahbazi et al. [48] produced emulsion gels by oil replacement with diverse biosurfactants. The obtained emulsion gels exhibited good 3D printing performance with shear-thinning, thixotropic, and viscoelastic properties, demonstrating the potential to create a hierarchically porous structure using emulsion gels. Soy protein possesses optimal functional and physicochemical features for producing emulsion gels [49], which can be effectively printed to develop a well-defined 3D porous structure [51]. However, more advanced materials should be utilized to strengthen printing inks for producing microfibrous and anisotropic structures [37]. Thus, the production of innovative bioinks with greater cytocompatibility and printing performance is required. Gel-like double emulsions offer a superlative candidate for housing cells, as their functional properties mimic the important fundamentals of the native extracellular matrix (ECM) [55], owing to the fact that an extremely swollen network can be obtained possessing excellent mechanical strength [56] and a self-lubricating property [58] corresponding to those of soft tissues. Additionally, the composition of gel-like double emulsions has been reported to be effortlessly tuned, with several polymeric emulsifiers conferring biological multifunctionality. This offers an effective environment for proliferation and cell adhesion [61]. Emulsion templating methods offer a relatively good environment for cells to proliferate, yet the poor mechanical properties and surface activity of common emulsifiers restrict their applications. 
Accordingly, the tailored functionalization of emulsion-based inks for bioprinting still remains a challenge, preventing performance improvements in printing applications [54]. To overcome this limitation, the encapsulation of micronutrients and bioactive materials has attracted attention, which can help to reduce vitamin C degradation. High-intensity ultrasound is a high-power (>1 W cm−2), low-frequency (20–100 kHz) ultrasound process, also known as “power ultrasound” or “high-intensity ultrasound” (HIU), used for the encapsulation of bioactive compounds. HIU emulsification also offers a fast and simple yet efficient procedure by which a double emulsion can be developed with low amounts of surface-active materials. Concerning the bioactive properties, the application of high temperatures and extrusion force during 3D printing (or other processing conditions, or even storage) can lead to thermal and/or other environmentally related degradation of bioactive compounds, thus decreasing their functional properties. Vitamin C is particularly susceptible to such degradation because of its poor thermal stability. Herein, we hypothesized that the utilization of a double emulsion gel-like structure in an extrusion-based 3D printing method enables the production of macroporous 3D structures with outstanding cell biocompatibility and improved printing quality because of the degree of control over the pore diameters in the artificial materials. Accordingly, a double emulsion of water-in-oil-in-water (W1/O/W2) containing vitamin C was prepared with a hydrophobic emulsifier and soy protein particles. After rheological and mechanical characterizations, the precursor double emulsion-based inks were printed via an extrusion-based 3D printing system to fabricate a hierarchically porous gel-like structure. Furthermore, we prepared bioactive 3D-printed scaffolds and comprehensively characterized their antiwearing, self-lubricating, and mechanical properties. Finally, a bioactive double emulsion-based bioink prepared with NIH 3T3 cells was used to prepare 3D-printed scaffolds, and the cell response and survival of the cells in different biological environments were assessed. 2.1. The soy protein isolate (SPI) was obtained from Archer Daniels Midland Company. Vitamin C was purchased from Pharmanostra, Hong Kong, China. Polyglycerol polyricinoleate (Grindsted PGPR 90) was provided by Danisco Canada Inc., Scarborough, Ontario, Canada. Sunflower oil was purchased from the local market. All other reagents were of analytical grade. Sunflower oil emulsions were prepared as the first step in the formation of double emulsions. Vitamin C was added to the internal aqueous phase at 75 mg/mL (w/v), which is based on the daily recommended intake of vitamin C [62]. This concentration was also selected to simplify the spectrophotometric detection of vitamin C [63]. The oil phase (O) consisted of sunflower oil containing 5% (w/w) PGPR 90 as a hydrophobic emulsifier. The aqueous and oil phases were both heated to 45 °C. The water and oil phases were then cooled to room temperature, and vitamin C was added to the water phase, where the solution was stirred vigorously in the dark until completely dissolved. The W1/O primary emulsion was prepared by adding the inner aqueous phase (W1) (0.2 mass fraction) to the oil phase (0.8 mass fraction). The mixture was pre-emulsified through a rotor-stator device at 12,000 rpm for 2 min. 
An additional homogenization was tested to break down the clusters produced at a high homogenization shearing rate [50]. The mixture (70 mL each) was poured into a glass double-walled beaker fitted with a cooling installation. Then, a high-intensity emulsification device fitted with a 13 mm diameter probe was used for the ultrasonic processing of the W1/O emulsions. The W1/O emulsions were subjected to ultrasound treatments for 0, 2, 4, 6, and 8 min (with pulse mode durations of 2 s on and 4 s off), respectively. The untreated samples were used as the control and stored at 4 °C. During the ultrasound treatment, the probe was immersed in the emulsions to a depth of 25 mm, and ice-cold water was circulated around the glass double-walled beaker. The sample temperature was maintained below 8 °C. The ultrasound intensity was 30.09 ± 1.24 W cm−2, as measured following a protocol from previous work [64]. The W1/O primary emulsions were designated PU-2, PU-4, PU-6, and PU-8, denoting the treated emulsions with sonication times of 2, 4, 6, and 8 min, respectively. The control emulsion (PU-0) was the W1/O primary emulsion without sonication treatment. Normally, the emulsion type formed can be determined via two tests, namely, the dilution test and the electrical conductivity test. In the dilution test, if the emulsion diluted with water (the continuous phase) stays stable, it is an O/W emulsion, but if it is destabilized, it is a W/O emulsion. The electrical conductivity of the emulsion samples was determined through a four-point probe technique with a Keithley Source Meter at ambient conditions. 2.3. The W1/O primary emulsion (0.30 mass fraction) was added to the external aqueous phase (W2) (0.70 mass fraction) stabilized by soy protein particles. The mixture was emulsified using a rotor-stator device at 12,000 rpm for 2 min. The double emulsions, W1/O/W2, were stored at 4 °C for further analysis and testing. 
The codes DE-PU0, DE-PU2, DE-PU4, DE-PU6, and DE-PU8 were used for the double emulsions containing the PU-0, PU-2, PU-4, PU-6, and PU-8 primary emulsions, respectively. Double emulsions were produced in SPI dispersion preheated to 45 °C. 2.4.1. The inks were diluted to a droplet level of about 0.005 wt % with deionized water at the pH of the emulsions (pH = 6.8). The dispersion was stirred gently at room temperature to ensure the emulsions were homogeneous. The droplet sizes and particle size distribution (PSD) of the inks were measured with a laser diffraction device over 14 days. The device measured the size based on the scattering of a monochromatic beam of laser light (λ = 632.8 nm). The droplet size was specified as the surface-weighted mean d3,2 = Σ(n_i d_i^3)/Σ(n_i d_i^2), where n_i is the number of droplets with diameter d_i [65]. The electric potential of the printable inks was also obtained through a Zetasizer Nano-ZS90 at a fixed detector angle of 90°. To minimize multiple scattering effects, the emulsions were diluted to a final concentration of 0.005 wt % with deionized water before analysis. After loading the samples into the chamber of the Zetasizer, they were equilibrated for 5 min before zeta potential data were obtained over 40 continuous readings. 2.4.2. The emulsions were stained to mark the protein and/or modified MMC and the oil droplets, respectively. The concentration of both the Nile Blue A and Nile Red solutions was 0.01% (w/v). The excitation wavelengths of the fluorescent dyes in the system were 488 nm (Nile Red) and 633 nm (Nile Blue A). The ink microstructures were imaged at ambient temperature directly after staining. All images were obtained at 40× magnification and processed using Olympus Fluoview software [65]. CLSM images of the emulsions were taken with a Nikon Eclipse Ti inverted microscope. 
A portion (5 mL) of the inks was stained with the appropriate amount of Nile Blue A in deionized water or with a blend of Nile Blue A and Nile Red in 1,2-propanediol. An AR 2000ex rheometer with parallel-plate geometry was used to monitor the rheological properties of the double emulsions; it was coupled with a P35TiL parallel-plate probe at a gap of 1 mm. To evaluate the steady rheological properties, the shear stress (\u03c4) was measured as a function of increasing shear rate (\u03b3\u0307) from 0.1 to 1000 s\u20131. To determine the linear viscoelastic region (LVR), a stress sweep (10\u20131\u2013102 Pa) was performed at a constant frequency of 10 rad s\u20131 to detect the corresponding elastic (G\u2032) and loss (G\u2033) moduli. Moreover, the impact of shear rate (0.1\u2013103 s\u20131) on the apparent viscosity of the double emulsions was evaluated.66 Finally, a five-interval thixotropy test (5-ITT) was used to gather thixotropic data for the double emulsions; the 5-ITT recorded the viscosity profiles of the samples under alternating high and low shear rates for 500 and 510 s, respectively. To analyze the nonlinear response of the samples, torque\u2013deformation waveform data at different strains and frequencies (1 and 10 rad/s) were collected using a HAAKE MARS60 rheometer with the native rheometer control software (RheoWin Job Manager), and the waveform data were processed with a Chebyshev polynomial-based stress decomposition method using the MITLaos program (Version 2.2 beta). The raw strain\u2013stress data were collected at a sampling rate of 512 s\u20131.
The S/N is the ratio of the amplitude of the highest peak (the first harmonic) to the standard deviation of the noise. The construction of the Lissajous plots obtained from large amplitude oscillatory shear (LAOS) analysis and the Chebyshev coefficients followed our previous study. 2.5 The machine architecture is very simple: the extrusion head moves in the XZ plane while the platform translates along the Y-axis. For the 3D printing process, a snowflake, an octopus, and a cylinder were initially modeled and converted to STL files. A nozzle with an inner diameter of 1 mm was employed to extrude the emulsion gels onto a silicon platform using a direct-ink-write 3D printing process. After comprehensive consideration, the emulsions with improved flow behavior were designated for the printing process, with a layer height of 0.5 mm, a shell of 2 mm, and a nozzle movement speed during printing of 20 mm s\u20131.50 The main printing process was carried out at ambient temperature. 2.6 The ink printability is associated with the ability to produce square-shaped internal pores in a printed object; a perfect square pore geometry yields a value of 1 for the printability index (Pr). Scanning electron microscopy was used to examine the morphological structure and integrity of the 3D-printed architectures. Before imaging, the samples were coated with a thin layer of gold at 20 mA for 2 min. An accelerating voltage of 5 kV was applied to prevent the samples from being damaged, with a magnification of 20.00 kX. 2.8 Mechanical testing of the tensile strength of the dumbbell-shaped 3D structures was performed at 100 mm min\u20131 on an Instron 3366 electronic universal testing machine. The elastic modulus (E) of the 3D-printed samples was determined as the average slope over 10\u201330% strain of the stress\u2013strain curve. The fracture energy (\u0393) and toughening mechanism of the 3D-printed objects were evaluated as follows.
(1) Each loading\u2013unloading cycle was applied to the 3D-printed constructs under a tensile strain lower than their corresponding yielding strains. (2) Successive and progressive stretches were applied, where each specimen was stretched to a different strain in the first loading, relaxed to zero force, and then subjected to a second loading. The ratios Esecond/Efirst and \u0393second/\u0393first were determined and used to evaluate the effect of the various stretches on the fracture process and toughening mechanism of the 3D structures. For the recovery experiments, the notched samples were tested by a loading\u2013unloading cycle at a fixed strain (\u03b5 = 400%). The deformed and relaxed notched samples were then sealed in a polyethylene bag and stored in a water bath at 90 \u00b0C. Finally, the specimens were taken out at different time intervals and cooled to room temperature for tensile testing again.50 2.9 Oscillatory tribology measurements were conducted on a commercial shear rheometer equipped with a custom-made measuring head holding three probing pins. Three steel spheres (d = 5 mm, surface roughness Sq < 0.2 \u03bcm) were used as the counter material for testing friction on the samples. The samples were fixed onto single-use measuring plates using double-sided adhesive tape and then mounted onto the bottom plate of the rheometer. The friction torque was recorded over a deflection angle range of 0\u00b0 \u2264 \u03c6 \u2264 16\u00b0 at a sliding velocity of v = 1 mm/s. All measurements were performed in torque-controlled mode at a normal force of FN = 0.2 N, in triplicate and without any lubricant. Each test lasted 0.5 h and was repeated three times to evaluate the average wear volume and friction coefficient. A 95% confidence interval was used to compare the significance of the results obtained.
Statistical analysis was performed using SPSS software, version 19.0. 3.1 The development of a high-quality 3D-printed hierarchical porous structure is directly associated with the engineering of a printable emulsion with shear-thinning properties, viscoelastic features, and thixotropic behavior, requiring a deep understanding of the materials\u2019 printability and extrudability. Besides, the sizes of the primary and secondary emulsion droplets are important features influencing the formation of a highly stable printable ink.50 In particular, the primary droplets (the inner aqueous phase containing vitamin C in the oil droplets stabilized by PGPR) should be small enough to be encapsulated inside the secondary droplets (oil droplets stabilized by soy protein particles), which themselves must be small enough to avoid creaming. As the integration of a bioactive gel-like double emulsion into 3D printing to construct a printed hierarchical porous architecture has not yet been explored, a comprehensive characterization of the size of the W1/O and secondary W1/O/W2 double emulsions, as well as their interfacial framework and flow behavior, was carried out as a function of HIU processing time. 3.1.1 When oil is the continuous phase, introduced water will not mix with the W/O emulsion, whereas incorporated oil will dilute the emulsion with full dissolution or a blurred edge.48 As can be seen, the oil added to the emulsion showed an obvious edge, verifying a W/O-type emulsion. Additionally, we performed an electrical conductivity test to further verify the type of emulsion system. The key idea behind the electrical conductivity test is that water is a good conductor of electricity but oil is not. Hence, if the emulsion sample conducts electricity, it is an O/W emulsion, but if it does not, it is a W/O emulsion.
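The conductivity-based classification just described is a simple threshold decision; the sketch below encodes it. The 10 \u03bcS/cm cutoff is an assumed illustrative value (the paper does not state a threshold), chosen to sit well above the near-zero conductivities typical of oil-continuous systems.

```python
def emulsion_type_from_conductivity(sigma_uS_cm, threshold_uS_cm=10.0):
    """Classify an emulsion from its electrical conductivity.

    W/O emulsions (oil continuous phase) are essentially
    non-conducting; O/W emulsions conduct. The default cutoff
    is an assumption for illustration, not from the paper.
    """
    return "W/O" if sigma_uS_cm < threshold_uS_cm else "O/W"
```

With this rule, the whole 0.06\u20131.02 \u03bcS/cm range reported below for the primary emulsions classifies as W/O.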
The conductivity values of all primary emulsions were low, ranging from 0.06 to 1.02 \u03bcS/cm. These results confirm that the system belongs to a W/O-type emulsion; the formation of the W/O emulsion was thus proved through a simple dilution measurement and the electrical conductivity of the emulsion system.37 Controlling the particle size is an effective way to meet the functional requirements of printable inks for 3D printing purposes. It has been stated that a reduction in the particle size of ink dispersions improves the ink functionality in terms of printability and shape fidelity or the construction of a porous structure.40 The effect of ultrasound conditions on the droplet size of the primary aqueous droplets of the W1/O emulsion (containing 75 mg/mL (w/v) vitamin C in the internal aqueous phase) was assessed using light scattering at a fixed PGPR level but different HIU times. This PGPR level was used as it was the minimum amount required to create stable multiple emulsions.65 To develop a stable double emulsion with improved encapsulation efficiency, a small droplet size and resistance against coalescence are necessary.65 The PSD and mean droplet size of the W1/O emulsions stabilized by PGPR are presented in Figure 1a,b; the untreated emulsion had a surface-weighted mean (d3,2) value of around 30 \u03bcm. Compared to PU-0, PU-2 showed a lower PSD with a tendency toward a bimodal distribution (Figure 1a), and its d3,2 value slightly decreased to around 26 \u03bcm, which may be attributed to the ultrasound effect.61 In contrast, applying HIU treatment for longer times produced W1/O emulsions with a monomodal distribution and shifted the PSD peak to smaller sizes (Figure 1a); the d3,2 of PU-4, PU-6, and PU-8 decreased (p < 0.05), resulting in the lowest PDI (Figure 1b). The droplets of the W1/O emulsion processed by HIU were also detected by CLSM (Section S1).
Generally, there are two kinds of attractive droplet\u2013droplet interactions induced in polymeric or particulate surfactant-stabilized emulsions: depletion flocculation and bridging flocculation.48 Once the surfactant is unadsorbed or poorly adsorbed, depletion flocculation can be driven by the osmotic pressure gradient related to surfactant exclusion from a narrow region adjacent to the droplets, which drives the droplets toward each other.48 In contrast, adsorption of a low level of surfactant onto the droplet surface results in the droplets linking via bridges and consequently flocculating.48 The CLSM micrograph of the PU-2 emulsion also shows the development of large particles in the continuous phase with a rather uneven size distribution, with no evidence of local flocculation. This tendency could be due to reduced surface hydrophobicity and structural flexibility of PGPR after insufficient emulsification treatment, implying a decreased specific surface area upon short-period HIU processing. This change was not favorable for diffusion to, expansion at, and bordering of the superficial oil drops, which resulted in a layer of surfactant being shared between adjacent droplets, leading to phase separation.45 With increasing HIU treatment time, the droplets existed in unflocculated, separated shapes, and the droplet size gradually decreased. Once the treatment time reached 6 min (PU-6 sample), the emulsion droplets were the smallest, in accordance with the PSD and PDI results. Moreover, the CLSM images of PU-4 and PU-6 clearly showed the presence of darkened areas inside the green fluorescent water droplets. The electrostatic forces between the emulsion droplets are the main factor in the stability of emulsions against flocculation.
In this case, the zeta (\u03b6) potential is the main parameter for monitoring the physical stability of emulsion systems, as it provides an indication of the electrostatic repulsion between droplets as they approach each other. 3.2 3.2.1 Freshly produced primary W1/O emulsions were utilized in the preparation of the double emulsion-based inks (W1/O/W2) for the 3D printing process. For the following measurements, the double emulsions containing primary W1/O emulsions stabilized by soy protein particles were emulsified through a simple shearing-force treatment with no sonication process. To verify the development of secondary oil droplets of sufficient size to encapsulate the primary aqueous droplets, the PSD, d3,2, PDI, and \u03b6-potential were also evaluated for the prepared W1/O/W2 emulsions. DE-PU0 showed a multimodal PSD (Figure 1e) with a d3,2 of about 70 \u03bcm (Figure 1f), as did the W1/O/W2 emulsion containing the 2 min-sonicated W1/O sample (DE-PU2), although with a significantly lower d3,2 (p < 0.05). Because of their comparatively larger droplet diameters, the DE-PU0 and DE-PU2 double emulsions were very unstable to gravitational separation. This produced an optically opaque (white) layer of droplets that was clearly noticeable on top of the emulsions after a few hours of storage (data not shown). In contrast, a reduction in the fraction of larger droplets and particles was detected for the double emulsions containing the PU-4 and PU-6 W1/O emulsions, and the d3,2 of the DE-PU4 and DE-PU6 emulsions was also stable for 2 weeks (Figure 1g).65 Thus, the physical stability of the DE-PU4 and DE-PU6 emulsions is likely related to gradual particle adsorption at the O/W interface.65 The \u03b6-potential of the double emulsions was compared with that of the primary W1/O emulsions. As expected,50 there was a greater decrease in the magnitude of the \u03b6-potential for all double emulsions compared to the primary W1/O emulsion droplets.
This may be due to the anionic character of the soy proteins.65 The DE-PU0 emulsion contained untreated primary emulsion droplets (PU-0) and showed a negative \u03b6-potential (anionic feature) of about \u221212 \u00b1 0.2 mV. This suggests that the soy particles could not effectively adsorb onto the surfaces of the droplets, as the \u03b6-potential of its primary W1/O emulsion (PU-0) was \u221210 \u00b1 0.2 mV. CLSM images exhibited the expected structures for W1/O/W2 emulsions, with small water droplets trapped inside larger oil droplets that were dispersed in water. Compared to DE-PU0 and DE-PU2, the sizes of the O/W droplets in the DE-PU4 and DE-PU6 emulsions were smaller. The droplets of the DE-PU4 and DE-PU6 emulsions were also homogeneously distributed throughout the continuous phase, and their internal structures were maintained (a CLSM image of DE-PU8 was not provided). Microscopic images indicated that most of the aqueous-phase droplets in the DE-PU4 and DE-PU6 emulsions were submicron in diameter and uniformly dispersed in the oil phase. According to the PSD, the large droplets comprise over 90% of the total W1/O volume, with the smaller submicron droplets representing less than 10%. Based on the microscopic images, the larger droplets appear most likely to be aggregates of the submicron emulsified droplets, formed during the emulsification process by collisions that occur simultaneously with size reduction in the presence of strong shearing forces. It was concluded that soy protein particles could produce smaller droplets in W1/O/W2 emulsions with superior physical stability against coalescence because of irreversible particle adsorption at the O/W interface.66 3.2.3 The flow behavior of the W1/O/W2 emulsions was assessed through flow, oscillatory, and thixotropic experiments (data not shown). The shear-thinning property of the double emulsions may be due to disruption and deformation of the flocs as the shear rate increases.
48 Moreover, the existence of soy protein particles promoted bridging flocculation of the oil droplets. The oscillatory amplitude sweep experiment (Figure 3c) shows that the elastic modulus, G\u2032 (\u03c4), exceeded the viscous modulus, G\u2033 (\u03c4), at lower amplitudes. This clearly indicates an elastic gel property for all double emulsions, in line with the preceding flow property results (Figure 3a,b). The DE-PU4 and DE-PU6 emulsions withstood high applied stress and were thus less susceptible to destruction, as can be observed from their high elastic modulus (G\u2032 (\u03c4) > 103 Pa) and longer LVR. This highlights that these double emulsions had linear viscoelastic solid-like behavior (predominantly elastic) with high stiffness under the stress sweep (improved gel\u2013sol transformation). The amplitude sweep data also provide evidence that the DE-PU6 and DE-PU4 emulsions can reasonably increase the resistance of the system to deformation. This phenomenon may be due to the soy protein particles promoting bridging flocculation in the system, which theoretically happens when a single particle attaches to the surfaces of more than one droplet. Typically, bridging flocculation (commonly at intermediate concentrations) involves a strong attractive interaction, which might be responsible for the stiffness of the DE-PU6 and DE-PU4 emulsions. At a low shear rate (0.1 s\u20131) the viscosity was only slightly reduced, while at a high shear rate of 80 s\u20131 the shear sensitivity became evident. However, the results for the double emulsions revealed a high degree of recovery, even after five cycles alternating between 80 and 0.1 s\u20131. The double emulsions are therefore likely suitable for processing with a 3D extrusion printing system, in which a re-forming network with a reversible structure is extremely appreciated.50 Note that the DE-PU6 emulsion showed a higher viscosity value compared with the other double emulsions.
The difference in thixotropic properties could be attributed to changes in the elastic and viscous components of the viscoelastic response resulting from particle size change and the development of a flocculated system. The nonlinear stress response can be detected by the LAOS experiment. LAOS offers a visual picture of differences in the complex emulsion microstructure that cannot be evaluated through a classical rheological experiment, and can provide viscous and elastic Lissajous\u2013Bowditch plots. The viscoelastic moduli are independent of deformation rate throughout the LVR, and the Lissajous curve is elliptical owing to the sinusoidal oscillatory stress response. Beyond this region, the elastic and viscous moduli depend mainly on the applied strain in the nonlinear region, where the presence of higher harmonics in the stress response leads to a twisted, nonsinusoidal shear stress waveform. The intracycle stress results are normalized with respect to the maximum stress of the oscillation cycle. As can be seen, the deformation strain and the emulsion type strongly affect the shape of the Lissajous plots, where all of the double emulsions presented a perfectly elliptical shape at a strain of 1.1% (the curve of DE-PU2 is not shown). This shows a mechanically stable viscoelastic double emulsion within the LVR, in accordance with the previous oscillatory amplitude sweep measurements (Figure 3c). The distortion of the plots and their alteration to a parallelogram-like shape (at strains of 61\u2013200%) show an ultimate change from elastic- to viscous-dominated properties, highlighting increased viscous dissipation upon intracycle deformation as well as a highly nonlinear mechanical response.65 In contrast, the DE-PU0, DE-PU2, and DE-PU8 emulsions already show yielding at a strain of 6.1%.
As aforementioned, the DE-PU4 and DE-PU6 emulsions presented a more reduced droplet size containing droplet-rich domains (Figure 1e,f). These samples showed less distortion from their initial elliptical geometry and generally a smaller enclosed region of the loops with increasing strain compared with the rest of the double emulsions. This means that the microstructures of the DE-PU0, DE-PU2 (data not shown), and DE-PU8 emulsions were less elastic, with lower durability, compared to the DE-PU4 and DE-PU6 samples. Therefore, they are more likely to fracture upon large deformations (such as processing during 3D printing), which results in a more pronounced degree of nonlinear response. Besides, a constant upturn of the decomposed elastic stress was detected in the DE-PU4 and DE-PU6 emulsions within the strain range between 1.1 and 6.1%, signifying that these samples retain more of their elastic character and even show a slight degree of intracycle strain rigidifying.65 With increasing strain from 1.1 to 61%, the DE-PU4 and DE-PU6 emulsion-based inks showed only minor distortion from their initial shape, where the change of the surrounding zone of the loops was not as apparent. For the DE-PU0, DE-PU2 (data not shown), and DE-PU8 samples, the nonlinear viscous contribution appeared at a strain of 6.1%, as demonstrated by the rhomboidal form of the Lissajous plots and a slope change of the decomposed stress plots. Shear recovery experiments were performed to evaluate the rheology time dependence of the inks after printing. To correlate these measurements with the actual printing shear rate, the maximum shear rate (MSR) was evaluated. The shear rate in a 3D printer nozzle is defined for Newtonian behavior; for the power-law model (\u03b7 = k\u03b3\u0307n\u20131), the Rabinowitsch correction of the shear rate is necessary.
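The nozzle shear-rate estimate just mentioned can be sketched with the textbook relations: for a Newtonian fluid the wall shear rate in a cylindrical nozzle is 4Q/(\u03c0R3), and for a power-law fluid the Rabinowitsch\u2013Mooney correction multiplies this by (3n + 1)/(4n). The function below is a generic illustration of those formulas, not the authors' exact calculation.

```python
import math

def wall_shear_rate(q_mm3_s, radius_mm, n=1.0):
    """Maximum (wall) shear rate in a cylindrical nozzle, 1/s.

    Newtonian: gamma_dot_w = 4 Q / (pi R^3).
    Power-law fluid (eta = k * gamma_dot**(n - 1)):
    multiply by the Rabinowitsch-Mooney factor (3n + 1) / (4n).
    """
    newtonian = 4.0 * q_mm3_s / (math.pi * radius_mm ** 3)
    return ((3.0 * n + 1.0) / (4.0 * n)) * newtonian
```

For the 1 mm nozzle and 20 mm/s print speed used here, taking Q = v\u00b7\u03c0R2 gives a Newtonian wall shear rate of 4v/R = 160 s\u20131, and a shear-thinning index of n = 0.5 (an assumed value) raises it to 200 s\u20131, consistent in magnitude with the 60\u2013423 s\u20131 MSR range reported below.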
Reducing the frequency and strain amplitude each by an order of magnitude places the LAOS conditions at sufficiently small strain rates and strains that the experiment is largely dominated by the elasticity leading to the yield stress. Note that this strain value is still above zero deformation strain, such that the dispersion is in the weakly nonlinear regime. The experimental data show predominantly elastic behavior, with a stress overshoot and viscous flow evident as the maximum shear rates are reached. Not surprisingly, progressing to lower frequencies but higher strain amplitudes leads to better model agreement, as the stress response is dominated by the viscous behavior of a relatively unstructured material. Depending on the calculation, the MSR of the printing system was found to be between 60 and 423 s\u20131, taking into account the dependence of the rheology on the emulsion type. 3.3 3.3.1 All double emulsions could be successfully extruded during printing, although the DE-PU0 ink displayed a sagging structure, printing layer fusion, and phase separation.34 The printed pattern precision of an axial pore can be assessed from the printability index: a square geometry (high precision) is obtained if Pr = 1, an irregular shape if Pr > 1, and a round shape if Pr < 1. Pr was found to be 1.13 \u00b1 0.11 for DE-PU0 (not provided) and DE-PU2, whereas Pr was 0.91 \u00b1 0.05 and 0.94 \u00b1 0.04 for DE-PU4 and DE-PU6, respectively, showing better printing performance. In addition, the pattern geometry of the latter printed objects was only somewhat rounded, as their Pr values were close to 1. It is concluded that the shape fidelity of DE-PU4 and DE-PU6 was upheld even when more layers were added to the structures. 3.3.3 The elastic modulus (E), fracture energy (\u0393), and toughening mechanism of the 3D-printed objects were measured.
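The Pr formula itself is not reproduced in this excerpt; a widely used definition in extrusion-printing work is Pr = L2/(16A), with L the pore perimeter and A the pore area, which equals 1 for a perfect square. The sketch below assumes that definition and encodes the square/irregular/round classification described above; the tolerance is an illustrative choice.

```python
def printability_index(perimeter, area):
    """Assumed common definition: Pr = L^2 / (16 * A).
    For a square of side a: L = 4a, A = a^2, so Pr = 1."""
    return perimeter ** 2 / (16.0 * area)

def pore_shape(pr, tol=0.05):
    """Classify pore geometry per the thresholds in the text;
    the +/- tol band around 1 is an illustrative assumption."""
    if abs(pr - 1.0) <= tol:
        return "square"
    return "irregular" if pr > 1.0 else "round"
```

Under this definition a circle gives Pr = \u03c0/4 \u2248 0.785 ("round"), while the 1.13 measured for DE-PU2 would classify as "irregular" and the 0.91\u20130.94 of DE-PU4/DE-PU6 sit near the square ideal.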
The mechanical data showed that the lowest elastic modulus, E, was 32 and 32 kPa for DE-PU0 and DE-PU2, respectively, whereas its value was maximal for DE-PU4 and DE-PU6, at 32 and 32 kPa, respectively. The decrease in Esecond/Efirst or \u0393second/\u0393first in the second loading\u2013unloading phase compared with their values in the first loading\u2013unloading phase (Efirst or \u0393first) was considered. As shown, Esecond/Efirst and \u0393second/\u0393first decreased with increasing strain in the first loading\u2013unloading cycle, indicating that the elastically printed matrices were broken as the extension level increased. According to the previous data on the greatest E values of DE-PU4 and DE-PU6, this result could be attributed to the higher viscoelasticity, greater viscosity, and superior thixotropic properties of the relevant inks. On the other hand, the E parameter and energy dissipation (U) were found to recover to around 80 and 65%, respectively, presenting a remarkably recoverable structure. The disrupting-strength results agree well with the 5-ITT (Section S8). Because they mimic the imperative multifunctionalities of the ECM, the gel-like double emulsions are a promising material for cell culture. To evaluate the feasibility of this type of scaffold in diverse biological environments, three different cell lines were utilized (column (vi)).69 Further, we evaluated the effect of vitamin C on cell viability through indirect tests. It was obvious that the 3D-printed scaffolds containing vitamin C offered more viable SH-SY5Y, Saos-2, and NIH/3T3 cells compared with 3D-printed scaffolds without encapsulated vitamin C.
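The elastic modulus reported here was defined earlier as the average slope of the stress\u2013strain curve over the 10\u201330% strain window. A minimal sketch of that estimator, using a plain least-squares line restricted to the window (the windowing and fit details are illustrative, not the authors' exact routine):

```python
def elastic_modulus(strain, stress, lo=0.10, hi=0.30):
    """Average slope of the stress-strain curve over the
    lo-hi strain window, via a least-squares line fit."""
    pts = [(x, y) for x, y in zip(strain, stress) if lo <= x <= hi]
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    # slope of the least-squares line y = a + b*x
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)
```

Restricting the fit to a fixed strain window makes moduli comparable across samples even when the curves diverge at large strain.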
The one-layer printed grids showed well-defined printed architectures with good printing resolution (column (i)).65 In this case, the live/dead test of NIH/3T3 cells encapsulated in the one-layer printed grids also showed a small fraction of cell density, especially for DE-PU2 and DE-PU8 (column (iii)).67 It has been reported that oxidative stress and the subsequent DNA damage can be avoided through the activity of vitamin C.68 For example, Liao et al.69 demonstrated that vitamin C efficiently quenched singlet oxygen (1O2), subsequently decreasing the oxidative damage resulting from chlorin e6 (Ce6)-mediated photodynamic therapy on NIH/3T3 cells. Our results also verified the capability of the bioactive 3D-printed scaffolds containing vitamin C as multifunctional, biocompatible 3D structures, along with their potential to support cell growth while preventing oxidative damage to NIH/3T3 cells in complex printed architectures.70 NIH/3T3 cells were then 3D printed at a high cell-seeding density (107 cells mL\u20131) to measure cell viability during printing. The printing quality and tribology measurements of the double emulsions, as influenced by HIU time, were evaluated. Phase separation, bimodal distributions, and the decrease in nonlinear properties under large amplitude oscillatory shear stress of the double emulsions were largely reduced upon the application of power ultrasonication. The outstanding flow behavior broadens the potential of double emulsions in the development of 3D-printed porous structures, in which the 3D-printed double emulsion-based inks showed high shape fidelity and integrity. Further, a high level of porosity with a uniform structure in terms of the orientation and shape of the spaces was observed in the 3D-printed objects. The printed scaffolds with encapsulated vitamin C induced high cell viability within the printed grid after 1 week of cell proliferation.
This emphasizes the positive impact of vitamin C on the proliferation of NIH/3T3 cells. These results indicate that the vitamin C-loaded gel-like double emulsions enhanced the cellular affinity, cell biocompatibility, and dimensional stability of the 3D-printed scaffolds under physiological conditions, giving them great potential for use in tissue engineering applications. In summary, vitamin C was encapsulated within an inner water phase of W"} +{"text": "To assess the impact of past Pneumocystis jirovecii pneumonia (PJP) on the pulmonary diffusion capacity in people with HIV (PWH) with a history of advanced immunodeficiency. Prospective cross-sectional study. Adult PWH with past PJP >1\u200ayear ago were included as the study group. The control group consisted of PWH with a nadir CD4+ lymphocyte count <200\u200acells/mm3, matched by age, sex, smoking status and time since HIV diagnosis. All PWH completed a pulmonary function test (PFT) consisting of pre-bronchodilation spirometry, body plethysmography and single-breath carbon monoxide transfer factor (TLCO) measurement. TLCO, diffusion impairment (defined as a TLCO Z-score <\u22121.645), total lung capacity (TLC) and forced expiratory volume in one second/forced vital capacity (FEV1/FVC) Z-scores were assessed. Multivariable regression analyses were conducted with Z-scores and odds of diffusion impairment as outcomes. PFTs of 102 participants were analyzed, 51 of whom had past PJP with a median of 10 years since PJP. Mean TLCO Z-score and diffusion impairment rate did not differ significantly between groups. Past PJP was not independently associated with TLCO Z-score, diffusion impairment, nor TLC or FEV1/FVC Z-scores, whereas current (vs. never) smoking was associated with more diffusion impairment and lower TLCO Z-scores. In our study, past PJP was not associated with long-term diffusion impairment.
Our findings suggest that smoking plays a more important role in persistent pulmonary function impairment, whereas PJP-related changes seem to be reversible. Pneumocystis jirovecii pneumonia (PJP) is one of the most common opportunistic infections in people with HIV (PWH) and advanced immunodeficiency. Six of all participants had a mild COVID-19 infection more than 6\u200amonths before PFT; none of them required specific treatment or hospital admission. Both groups were comparable regarding age. Mean TLCO Z-scores did not differ significantly between groups (vs. \u22120.92 (1.04), P\u200a=\u200a0.790). Multivariable linear regression showed that only current (vs. never) smoking was associated with lower TLCO Z-scores, while no association was observed with past PJP. When diffusion impairment was defined dichotomously as TLCO Z-score <\u22121.645, rates were similar between groups (vs. 12/51 (24.53%), P\u200a=\u200a0.650). Rates of mild and moderate diffusion impairment were also similar between groups, and no severe diffusion impairment was observed. Current (vs. never) smokers had higher odds of diffusion impairment, whereas similar odds were found for past PJP vs. no past PJP. See Table S1, Supplemental Digital Content 1, for all PFT Z-scores, % predicted and exact rates of diffusion impairment severity by group. The association between time since PJP, steroid use during PJP and TLCO Z-score was evaluated in the PJP+ group. Only current smoking was independently associated with lower TLCO Z-scores, and no association was found with time since PJP nor steroid use during PJP. FEV1/FVC Z-scores were similar for PJP+ and PJP\u2212 (\u22120.31 (1.08) vs. \u22120.28 (1.10), P\u200a=\u200a0.894). No independent association for past PJP was found, whereas current smoking was associated with lower FEV1/FVC Z-scores. Past PJP was not associated with obstructive impairment (4/51 (7.84%) vs. 6/51 (12.00%), P\u200a=\u200a0.484). See Table S2, Supplemental Digital Content 2, for linear regression results. Similar TLC Z-scores were found for both groups (\u22120.09 (1.04) vs.
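The study's dichotomous definition of diffusion impairment (TLCO Z-score < \u22121.645) extends naturally to a severity grading. A minimal sketch is below; the impairment cutoff is the study's own, while the mild/moderate/severe bands (\u22122.5 and \u22124.0) follow commonly used ERS/ATS-style severity cutoffs and are assumptions here, since the excerpt does not state them.

```python
def diffusion_impairment(z):
    """Grade TLCO impairment from its Z-score.

    Impairment is defined in the study as Z < -1.645; the
    -2.5 and -4.0 severity bounds are assumed ERS/ATS-style
    cutoffs, not taken from this paper.
    """
    if z >= -1.645:
        return "none"
    if z > -2.5:
        return "mild"
    if z > -4.0:
        return "moderate"
    return "severe"
```

Under this grading, the study's observation of only mild and moderate impairment corresponds to all impaired Z-scores lying between \u22121.645 and \u22124.0.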
\u20130.01 (1.13), P\u200a=\u200a0.705). Current smoking was independently associated with higher TLC Z-scores, while no association was found for past PJP. Restrictive impairment was not associated with past PJP (4/51 (7.84%) vs. 2/51 (3.92%), P\u200a=\u200a0.400). Given the low numbers of obstructive and restrictive impairment, multivariable logistic regression was not performed for these outcomes. Sensitivity analyses were conducted using 98/102 PWH with an undetectable viral load at the time of PFT (excluding 3/51 PJP+ and 1/51 PJP\u2212) and 96 without previous COVID-19 (excluding 6/51 PJP+). All results were similar to those of the main analyses. In this cross-sectional study on the potential long-term pulmonary sequelae of PJP in PWH, we found no effect of past PJP on the diffusion capacity, evaluated either as a continuous TLCO Z-score or dichotomously as a Z-score <\u22121.645. Current (vs. never) smoking was found to be independently associated with lower TLCO Z-scores and diffusion impairment. Notably, more than 25% of PWH in our study had diffusion impairment, as defined by the latest ERS/ATS guidelines. Previous PJP does not seem to put PWH at greater risk of long-term diffusion impairment. Although their diffusion capacity is substantially more often diminished compared with the general population, this seems to be driven by other factors, such as HIV infection and smoking. We hypothesize that PJP-related damage either recovers in the long term or that its contribution is negligible in the presence of persistent pulmonary impairment from smoking and HIV infection. From a pathophysiological perspective, the diffusion impairment is the result of different processes in the acute and postacute phases of PJP. It is assumed that a high influx of inflammatory cells is responsible for hypoxemia in the acute stage of PJP. Our study has several strengths.
Next to the aforementioned long-term PJP-related outcomes, the PFT measurement was performed systematically in accordance with the latest ERS/ATS quality standards, and a matched group of PWH with advanced immunodeficiency without past PJP served as the control; multivariable adjustment was also used to minimize confounding bias. Certain limitations also apply to our study. We could not account for previous bacterial pneumonias or underlying undiagnosed pulmonary vascular and interstitial lung disease, all of which can result in diffusion impairment, but given the relatively homogeneous study population, we do not expect these factors to differ between groups. In conclusion, our study did not show an association between past PJP and persistent diffusion impairment in PWH. Our findings suggest that PJP-related pulmonary damage recovers in the long term or that its contribution, in the presence of pulmonary impairment from smoking or HIV infection, is marginal. Author contributions: I.B., B.W. and T.M. designed the study. B.W. wrote the study protocol. P.O. was responsible for the site work, including recruitment and data collection. All authors had access to the data. P.O. performed the analysis, interpreted the results and drafted the manuscript. All authors contributed to the interpretation of the data, critically reviewed the manuscript and approved the final manuscript. Funding: The study was funded by Gilead Sciences, which had no role in trial design, data collection, analysis or manuscript preparation. There are no conflicts of interest."} +{"text": "Thereafter, these two groups were subdivided into male and female groups. The primary clinical outcomes were major adverse cardiac and cerebrovascular events (MACCE), defined as all-cause death, recurrent myocardial infarction, repeat coronary revascularization, and stroke. The secondary clinical outcome was stent thrombosis.
We compared the effects of sex differences in delayed hospitalization on major clinical outcomes in patients with non-ST-segment elevation myocardial infarction (NSTEMI) after new-generation drug-eluting stent (DES) implantation. A total of 4593 patients were classified into groups with (symptom-to-door time [SDT] \u2265 24 h) and without (SDT < 24 h) delayed hospitalization. After multivariable- and propensity score-adjusted analyses, in-hospital mortalities were similar between the male and female groups in both the SDT < 24 h and SDT \u2265 24 h groups. However, during a 3-year follow-up period, in the SDT < 24 h group, all-cause death and cardiac death (CD) rates were significantly higher in the female group than in the male group. This may be related to the lower all-cause death and CD rates in the SDT < 24 h group than in the SDT \u2265 24 h group among male patients. Other outcomes were similar between the male and female groups and between the SDT < 24 h and SDT \u2265 24 h groups. In this prospective cohort study, female patients showed higher 3-year mortality, especially in the SDT < 24 h group, compared to male patients. Acute myocardial infarction (AMI) occurs due to thrombus formation resulting from the rupture or erosion of vulnerable atherosclerotic plaques. Patient data were obtained from the Korea Acute Myocardial Infarction Registry-National Institutes of Health (KAMIR-NIH; http://www.kamir.or.kr (accessed on 1 November 2011)). At the time of initial enrollment, only patients aged 18 and over were included. The exclusion criteria of this study were as follows: patients who did not undergo PCI; those who underwent plain old balloon angioplasty; unsuccessful PCI; coronary artery bypass graft; BMS or first-generation (1G) DES implantation; STEMI; and those who were lost to follow-up. Patients were divided into SDT < 24 h and SDT \u2265 24 h groups, and these two groups were subdivided into male (group A [n = 2492] and group C [n = 849]) and female (group B [n = 825] and group D [n = 427]) subgroups.
After conventional CAG via a transfemoral or transradial approach, PCI was performed. Based on current guidelines, NSTEMI was defined as the absence of persistent ST-segment elevation with increased cardiac biomarker levels in an appropriate clinical context. For continuous variables, intergroup differences were evaluated using the unpaired t-test, and data were expressed as mean \u00b1 standard deviation or median (interquartile range). For categorical variables, intergroup differences were analyzed using the chi-square or Fisher\u2019s exact test, and data were expressed as counts and percentages. In both the groups with and without delayed hospitalization, univariate analyses were performed for all variables, with a p value < 0.05 considered significant. Subsequently, a multicollinearity test was performed. Moreover, multivariable- and propensity score (PS)-adjusted analyses were carried out. A p value < 0.05 was considered statistically significant. We used SPSS software version 20 to perform the statistical analyses. In-hospital all-cause death rates were not significantly different between the male and female groups in both the SDT < 24 h (p = 0.913) and SDT \u2265 24 h groups. Similarly, in-hospital CD rates were not significantly different between the male and female groups in both the SDT < 24 h and SDT \u2265 24 h groups. These results were confirmed by PS-adjusted analyses. During a 3-year follow-up period, in the SDT < 24 h group, multivariable-adjusted analysis revealed that MACCE, non-CD, recurrent MI, any repeat revascularization, stroke, and ST rates were not significantly different between the male and female groups. However, all-cause death and CD rates were significantly higher in the female group than in the male group. These results were confirmed by the PS-adjusted analysis. In the SDT \u2265 24 h group, after multivariable-adjusted and PS-adjusted analyses, MACCE, all-cause death, CD, NCD, recurrent MI, any repeat revascularization, stroke, and ST rates were not significantly different between the male and female groups.
In the total study population, after the multivariable-adjusted and PS-adjusted analyses, all-cause death and CD rates were significantly higher in the female group than in the male group. In-hospital all-cause death rates were not significantly different between the SDT < 24 h and SDT \u2265 24 h groups in both the male (p = 0.327) and female groups. Moreover, in-hospital CD rates were not significantly different between the SDT < 24 h and SDT \u2265 24 h groups in both the male and female groups. These results were confirmed by PS-adjusted analyses. During a 3-year follow-up period in the male group, the multivariable-adjusted analysis revealed that all-cause death and CD rates were significantly higher in the SDT \u2265 24 h group than in the SDT < 24 h group. However, MACCE, NCD, recurrent MI, any repeat revascularization, stroke, and ST rates were not significantly different between the SDT < 24 h and SDT \u2265 24 h groups. These results were confirmed by the PS-adjusted analysis. In the female group, after multivariable-adjusted and PS-adjusted analyses, MACCE, all-cause death, CD, NCD, recurrent MI, any repeat revascularization, stroke, and ST rates were not significantly different between the SDT < 24 h and SDT \u2265 24 h groups. In the total study population, after multivariable-adjusted and PS-adjusted analyses, all-cause death and CD rates were significantly higher in the SDT \u2265 24 h group than in the SDT < 24 h group. Old age (p = 0.019 and p = 0.012, respectively), reduced LVEF, cardiogenic shock, CPR on admission, atypical chest pain, EMS use, and high GRACE risk scores were common independent predictors of MACCE in both the SDT < 24 h and SDT \u2265 24 h groups. Old age (p < 0.001 and p < 0.001, respectively), reduced LVEF, CPR on admission, atypical chest pain, and high GRACE risk scores were common independent predictors of all-cause death in both the SDT < 24 h and SDT \u2265 24 h groups. Subgroup analyses, evaluated by p-for-interaction, demonstrated comparable MACCE and all-cause death rates between the male and female groups.
In the SDT < 24 h group, however, the female group had a higher all-cause death rate than the male group among patients with young age and hypertension (p = 0.026). The main findings of this prospective observational study, after multivariable- and PS-adjusted analyses, were as follows: (1) in-hospital mortalities were not significantly different between the male and female groups in both the SDT < 24 h and SDT \u2265 24 h groups; (2) however, during a 3-year follow-up period in the SDT < 24 h group, all-cause death and CD rates were significantly higher in the female group than in the male group; (3) furthermore, in the male group, all-cause death and CD rates were significantly lower in the SDT < 24 h group than in the SDT \u2265 24 h group; (4) MACCE, non-CD, recurrent MI, any repeat revascularization, stroke, and ST rates were similar between the male and female groups and between the SDT < 24 h and SDT \u2265 24 h groups; and (5) old age, reduced LVEF, CPR on admission, atypical chest pain, and high GRACE risk scores were common independent predictors of MACCE and all-cause death in both the SDT < 24 h and SDT \u2265 24 h groups. In a previous report, in-hospital mortality was higher in females (p < 0.001) than in males among 7,026,432 AMI hospitalizations between 2004 and 2015 in the National Inpatient Sample. In our study, in the SDT < 24 h group and in the total study population, all-cause death and CD rates were significantly higher in females than in males. In the SDT \u2265 24 h group, these mortalities were also numerically higher in the female group than in the male group, without reaching statistical significance. The frequencies of DM, high GRACE risk scores, and the left anterior descending artery as the treated vessel were also significantly higher in the female group than in the male group. DBT was not an independent predictor of MACCE in both the SDT < 24 h and SDT \u2265 24 h (p = 0.176) groups.
Additionally, DBT was not an independent predictor of all-cause death in either the SDT < 24 h (p = 0.817) or SDT \u2265 24 h (p = 0.203) group. This result is consistent with previous findings. In the male group, the 3-year all-cause death (p = 0.010) and CD rates were higher in the SDT \u2265 24 h group than in the SDT < 24 h group. Similarly, in the total study population, the 3-year all-cause death and CD rates were higher in the SDT \u2265 24 h group than in the SDT < 24 h group. Prehospital delay is the total amount of time taken by patients to present to the emergency department following the onset of acute symptoms. In a recent report, patients with delayed hospitalization showed worse 3-year primary and secondary clinical outcomes than those without delayed hospitalization. In our study, in-hospital mortality, including all-cause death and CD, was not significantly different between the male and female groups in both the SDT < 24 h and SDT \u2265 24 h groups. As far as we know, no specific large-scale study exists, and we could not provide comparative results between our study and other studies. In addition, although the KAMIR-NIH data included 20 tertiary, high-volume university hospitals, the population size was insufficient to draw definitive conclusions. Hence, we believe that our results may be the first to compare the long-term clinical outcomes between the SDT < 24 h and SDT \u2265 24 h groups in male and female patients after the successful implantation of new-generation DES and could provide valuable information to cardiologists. This study had some limitations. Delayed hospitalization can be divided into three phases: patient decision, time to first medical contact, and transportation. In male patients, all-cause death and CD rates were lower (p = 0.022 and p = 0.012, respectively) in the SDT < 24 h group than in the SDT \u2265 24 h group.
Hence, female patients showed higher 3-year mortality than male patients, especially in the SDT < 24 h group and in the total study population. However, further large-scale studies are required to confirm our results. In this nonrandomized, multicenter, prospective cohort study, in-hospital mortalities were similar between the male and female groups. In our study, females with NSTEMI were older at presentation, had more comorbidities, and presented later and with more atypical symptoms, and these factors were independent predictors of mortality. This may be related to the lower all-cause death and CD rates in the SDT < 24 h group than in the SDT \u2265 24 h group among male patients."} +{"text": "Bovine ephemeral fever (BEF) is a viral disease of cattle that is transmitted by blood-feeding insects. In Israel, farmers routinely report data on every BEF case to the Farm Herd Management Program (NOA), and they are registered in the Israel Cattle Breeders Association herd book. In this study, we used the statistical capability of national data stored in the Israeli herd book to evaluate the economic effects of BEF outbreaks. Our results show substantial economic losses from the reduction in milk production and culling of valuable cows. Due to climatic change, the risk of bovine ephemeral fever virus (BEFV) emergence and spread in Europe is real. Since the European cattle population has never been exposed to BEFV, the economic losses to dairy and beef production in this continent during its first BEF outbreak may be considerable. Additionally, it could also cause financial damage due to restrictions on animal trade and transportation, like the current EHDV-8 outbreak in the Mediterranean basin. These results, exhibiting, for the first time to our knowledge, the impact of BEF outbreaks at a population level, could enable us to conduct accurate risk assessments in future cases of BEFV emergence.
While the dispersal of arboviral diseases such as bovine ephemeral fever (BEF) into naive areas is often the result of globalization and animal movement, the endemization and local outbreaks of these diseases are mainly influenced by environmental changes. Climate change affects the activity, distribution, dynamics, and life cycles of these vectors (arthropods) and the replication of viruses within their vectors, and it weakens animals\u2019 immune systems. Although BEF does not currently occur in the Americas and Europe (other than in the western regions of Turkey), the risk of BEFV emergence, spread, and endemization in Europe is real. Over the past two decades, arboviruses such as the bluetongue virus (BTV) and Schmallenberg virus (SBV) have emerged in Europe without warning and caused significant losses to the dairy and meat industries. Since the European cattle population has never been exposed to BEFV, the economic losses to dairy and beef production in this continent, should BEF emerge, would probably be considerable due to the reduction in milk production, loss of valuable cows, and abortion. Moreover, arboviruses can also cause substantial financial damage due to restrictions on animal trade and transportation, like the current EHDV-8 outbreak in the Mediterranean basin. In this study, we used national data stored in the Israeli herd book to examine the economic aspects of BEF outbreaks in affected dairy cattle farms countrywide. Our results demonstrate that BEF outbreaks can have immediate and delayed effects, causing severe economic losses due to culling and a reduction in milk production that affects dairy farm income for months after clinical diagnosis.
To our knowledge, this is the first extensive study on the impact of a BEF outbreak at a population level, enabling accurate risk assessments in future cases of BEFV emergence and re-emergence. Bovine ephemeral fever virus (BEFV) is an arthropod-borne virus (arbovirus) transmitted by blood-feeding insects (mosquitoes and Culicoides biting midges). BEFV is a rhabdovirus classified as the type species of the genus Ephemerovirus. Similar to other members of this family, BEFV is a negative-sense single-strand (ss) RNA virus with a bullet-shaped morphology. The disease caused by BEFV is bovine ephemeral fever (BEF). BEF manifests as anorexia, depression, ocular and nasal discharge, salivation, muscle stiffness, lameness, ruminal stasis, sternal recumbency, and other inflammatory responses. While the dispersal of arboviral diseases such as BEF into naive areas is often the result of globalization and animal movement, the endemization and local outbreaks of these diseases are mainly influenced by environmental changes. In Israel, farmers routinely report data on every BEF case to the Farm Herd Management Program (NOA), and they are registered in the Israeli herd book (Israel Cattle Breeders Association). In this study, we used the statistical capability of national data stored in the Israeli herd book to examine the effects of the 2021 outbreak on severely affected dairy farms nationwide. We tried to evaluate these economic losses in terms of the reduction in milk production, loss of valuable cows (culling rates), and abortion. This study used retrospective data from the Israeli herd book (Israel Cattle Breeders Association). Farmers routinely report data on every BEF event to the Farm Herd Management Program (NOA).
Retrospective data of reported BEF events were analyzed to present the accumulation and distribution of the number of new clinically diagnosed cases per day during the 2021 outbreak. The data set examined in this section included thirty dairy farms with a high proportion of infected animals. A dairy farm with a high proportion of infected animals was defined as a farm with at least fifty cows reported with clinical signs of BEF by the farm veterinarian in 2021, with cows at the farm confirmed positive for BEFV by qPCR. The parameters chosen to evaluate the economic effect were milk production loss, culling rates, and abortion rates. i. Milk production loss was analyzed in cows with a daily milk yield recorded from 30 days before BEF diagnosis until 30 days after diagnosis. Cows culled from the herd up to 30 days post BEF diagnosis were removed from the milk production analysis. Data were analyzed using SAS. The GLM model was used to analyze milk production over time. The model includes the following effects: herd, the month of calving (January to December), the month of diagnosis (July to December), days from diagnosis, lactation number, the interaction between days from diagnosis and lactation number, DIM, DIM2, and the square root of DIM within lactation. ii. To accurately evaluate culling rates due to BEF infection, only cows culled within ten days of their diagnosis date were included. The distribution of days from the diagnosis date until culling was determined according to the farmers\u2019 reports to the Farm Herd Management Program (NOA). iii. Abortion rates were evaluated for cows that were pregnant during the BEF outbreak. They were analyzed using the GLIMMIX procedure in SAS.
The model includes the following effects: herd, month/year (date) of insemination, lactation number, and BEF status (diagnosed or non-diagnosed). According to the Israeli dairy board, during the BEF outbreak (July\u2013November 2021) the price of 1 kg of milk was USD 0.7, and the cost of an average cow was approximately USD 2300. Our economic evaluations were based on these figures. During this outbreak, severely infected herds were located in the following five geographic locations: the coastal plain, the Sharon plain, the Negev desert, the upper Jordan valley, and the lower Jordan valley (near the Dead Sea) (https://ims.gov.il/en/data_gov (accessed on 1 October 2023)). The onset of the outbreak was at the end of July 2021. The number of cases remained relatively low until the beginning of September, when a rapid elevation was observed, reaching a peak at the beginning of October with more than 300 infected cows per day nationwide. From October onward, the outbreak declined, reaching zero new cases at the end of December. The percentage of affected cows (morbidity) varied from 10% to 90.7%, with an average of 38.5% per herd. The culling rates within ten days of BEF diagnosis ranged from 0 to 15.9%, with an average of 4.8% per herd. Abortion rates did not differ between cows in which BEF was not confirmed and cows in which BEF was confirmed (p-value = 0.3744). In this study, we evaluated the economic effects of BEF outbreaks in terms of the reduction in milk production, loss of valuable cows (culling), and abortion. In order to conduct this analysis, we used the national data stored in the Israeli herd book to examine the effects of the 2021 outbreak on severely affected dairy farms countrywide. To our knowledge, this is the first extensive study on the impact of a BEF outbreak at a population level. Spatially, severe cases were distributed in five geographic locations as follows: the coastal plain, the Sharon plain, the Negev desert, the upper Jordan valley, and the lower Jordan valley (near the Dead Sea) (https://my.icar.org/stats/list (accessed on 1 October 2023)).
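As a rough illustration of the kind of fixed-effects model described in the Methods (SAS PROC GLM), the sketch below fits an equivalent ordinary least squares model in Python on synthetic data. The herd/month/lactation structure mirrors the listed effects, but the days-from-diagnosis term is simplified to a pre/post indicator, and every number is invented for demonstration; this is not the study's data or code.

```python
# Sketch of the milk-yield fixed-effects model, translated from SAS PROC GLM
# into statsmodels OLS with categorical effects (synthetic, illustrative data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 3000
df = pd.DataFrame({
    "herd": rng.integers(1, 31, n),              # 30 hypothetical farms
    "calving_month": rng.integers(1, 13, n),     # January-December
    "diag_month": rng.integers(7, 13, n),        # July-December
    "days_from_diag": rng.integers(-30, 31, n),  # -30..+30 around diagnosis
    "lactation": rng.integers(1, 4, n),
    "dim": rng.integers(5, 306, n),              # days in milk
})
df["post"] = (df["days_from_diag"] > 0).astype(int)
# Synthetic daily yield: a built-in 8 kg post-diagnosis drop plus a
# lactation-curve shape in DIM, with Gaussian noise.
df["milk_kg"] = (
    35
    - 8 * df["post"]
    + 2 * np.sqrt(df["dim"]) - 0.05 * df["dim"]
    + rng.normal(0, 3, n)
)

# Fixed effects mirror the Methods: herd, calving month, diagnosis month,
# pre/post-diagnosis x lactation, DIM, DIM squared, and sqrt(DIM).
model = smf.ols(
    "milk_kg ~ C(herd) + C(calving_month) + C(diag_month) "
    "+ post * C(lactation) + dim + I(dim**2) + np.sqrt(dim)",
    data=df,
).fit()
post_drop = model.params["post"]  # estimated drop (true value is -8 kg)
print(round(post_drop, 1))
```

Multiplying such a per-cow daily drop by the milk price and the number of affected cow-days gives the herd-level loss estimates reported below.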
It was shown that milk production usually drops by at least 50% in cows in which BEF has been diagnosed and that the highest-producing animals are generally the most severely affected. Milk yield should return to approximately 90% of previous levels after about three weeks, but cows affected late in lactation often do not return to production. Although the percentage of the drop in milk production in our study is lower than previous reports in high-producing cows, this reduction lasts longer, and the recovery of the cow is slower. As in previous reports, a sick cow does not truly recover from the infection, as its milk production does not return to its original baseline, which causes not only immediate economic damage but also delayed damage, affecting dairy farm income for months. Accordingly, during the first forty days from infection, the economic losses of the 2021 outbreak due to the reduction in milk production alone are estimated to be around USD 123,000 in a small herd of 300 cows and approximately USD 410,000 in a herd of 1000 cows. Deaths from ephemeral fever are uncommon and rarely involve more than 1\u20132% of the herd. According to the literature, abortion occurs in approximately 5% of BEF-infected cows, especially those in the second trimester of pregnancy. Insects (Culicoides biting midges) were collected for two consecutive nights during the outbreak (21 November). At the same time, whole blood was collected from six random cows on each farm. Insect pools and whole blood were tested for the presence of other arboviruses. BTV8 was detected by PCR in both insect pools and whole blood. To reduce the risk of infection, farmers are advised to control Culicoides biting midges, mosquitoes, and other pests in animal habitats and to modify dairy farm infrastructures. Arthropod-borne viruses (arboviruses) transmitted by blood-feeding insects such as mosquitoes and Culicoides biting midges are highly affected by climate change. They are known to cause enormous economic damage as they affect herds\u2019 morbidity, mortality, fertility, and abortion rates. In some cases, even the threat of infection can severely limit animal trade and transportation. This study demonstrates the substantial economic impact of BEF outbreaks on different parameters, such as a reduction in milk production and an elevation in culling rates. Furthermore, such an outbreak can have immediate and delayed effects, affecting the dairy farm\u2019s income for months afterwards. Globalization and climate change are increasing the risk of a BEF outbreak in Europe in the near future. Thus, the evaluations in this study enable the accurate risk assessment of BEFV emergence and emphasize the need for developing new vaccines and other effective strategies to fight arboviruses and their blood-sucking vectors." \ No newline at end of file