Mexicans responded in droves as word spread that notorious cartel kingpin Joaquin "El Chapo" Guzman had busted out of a maximum-security prison in a brazen escape. Details about the mile-long tunnel Guzman apparently used for his getaway have dominated the headlines. But corruption, analysts told CNNMexico, was ultimately the key that unlocked the drug lord's cell.

"We are angry because there is a problem of impunity related directly to corruption, and the authorities have not taken the necessary measures," said Maria Elena Morera, an activist who heads an organization pushing for better security policies. "This exposes us to the world."

Criticism of widespread government corruption in Mexico is nothing new. But a series of scandals in the past year already had top Mexican officials in the hot seat. And Guzman's escape, experts say, shines an even harsher spotlight on a problem that historically has stretched from police on the streets to the highest halls of power.

How bad is it?

Just how corrupt is Mexico, and how does it compare to other countries? To get an idea, there's an annual ranking that's a good place to start. In a map published by Transparency International, Mexico juts out in bright red -- a stark contrast to its neighbors to the north. It's a visual representation of Transparency International's annual corruption perception index, which ranks countries based on what people in the private sector say about their governments. In the words of Alejandro Salas, the organization's regional director for the Americas, Mexico has a "horrible position" in the ranking. The country came in 103rd place out of 175 nations -- tied with Bolivia, Moldova and Niger -- in last year's survey, with a score of 35 on a scale of 0 (highly corrupt) to 100 (very clean). The least corrupt country, Denmark, scored a 92. The United States came in 17th place with a score of 74. "In Mexico corruption continues to be a huge problem," Salas said.
It's hard to tell if the situation is getting worse, he said, but it's definitely not getting better. The country's criminal justice system consistently gets low marks, he said. Of the more than 1,000 Mexicans who responded to a 2013 survey from Transparency International, 90% said police were corrupt or extremely corrupt, and 80% felt the same way about the country's judiciary. More than 60% of people said someone in their household had paid a bribe to police in the past year. And more than half said someone in their household had forked out a bribe in court.

The numbers will likely be worse next time the survey goes out, Salas said. Guzman's escape, he said, sends a powerful message. "It sends you this bad signal, that actually, the authorities are not in control. ... The signal it sends to you is that the democratic institutions of the country are not working," he said. "So who is going to believe now in the Mexican justice system, the Mexican prison system, or in the political authorities that are responsible for this?"

Recent scandals fuel concerns

The drug lord's prison break isn't the only thing that's eroded some Mexicans' faith in their government. Two high-profile scandals from the past year remain fresh in the minds of many. In September, 43 students were kidnapped and killed in southern Mexico, an operation authorities say was orchestrated by a local mayor who didn't want a protest to disrupt one of his events. Investigators said the students were abducted by police on the mayor's orders, then turned over to a gang that's believed to have killed them and burned their bodies before throwing some remains in a river. The case sparked national protests as outraged citizens said they were fed up with the government and how it was handling the crisis.
In November, an investigative report from Mexican news website Aristegui Noticias alleged that Mexico's President and his wife had been living in a lavish $7 million mansion owned by a contractor that's won lucrative government projects. In response, the government said first lady Angélica Rivera had been making payments on the house with money she'd made from her acting career. It wasn't long before Rivera announced she was selling the house, but the controversy over the matter is still simmering.

And now, Guzman's escape. "This act cannot be downplayed. The most wanted criminal of the last generation got out of the prison that is presumably the most secure in the country," Mexican security analyst Alejandro Hope told CNN en Español. "This is a severe blow to the government and it is a severe blow to society." And for Mexico's President, Hope said, the price could be steep. "The escape is going to cost the President. Pressure is going to mount on him personally, and on his government, to make changes," he said.

Words vs. actions

Things looked more promising at the beginning of Peña Nieto's presidency. His campaign stump speeches and platforms listed stamping out corruption as a top goal. At the time, he was trying to win more support from skeptics who feared putting his political party back in charge. The Institutional Revolutionary Party (PRI) ruled Mexico for decades, its grip on power so strong that there was a widely known term -- the dedazo -- to describe how leaders would hand-pick their successors no matter what happened at the polls. Peña Nieto succeeded in calming concerns about his party, winning the election as he vowed to usher in a new era in Mexican politics. Earlier in his presidency, he won praise for taking on some of the country's most established institutions, like the state-run oil company Pemex and the national teachers' union.
Amid the outcry over the slain students last year, he once again proposed reforms, including a constitutional change that would give the state control over local police as part of efforts to fight corruption. But now, Salas said, many people are beginning to seriously question whether the President ever had the political will to press forward. "Of course, one thing is to propose laws and discuss institutions. ... But you need to continue showing in tangible ways that you are actually committed to it," he said. "We keep having these scandals that make one doubt the whole discourse."

In drug lord's escape, how far did corruption go?

Even though authorities are still investigating the details behind Guzman's escape, there's little doubt that he had help inside and outside the prison to pull off the daring plan. "I think the question really is how far up did the corruption go?" said journalist Ioan Grillo, author of "El Narco: Inside Mexico's Criminal Insurgency." "Was it simply a couple of guards who were bribed, or did it go higher up the chain?"

Mexican Interior Minister Miguel Angel Osorio Chong says some workers inside the prison must have played a role, and the prison's director has been fired. As reporters grilled him over Guzman's escape, Osorio Chong said he understood the frustration aimed at the government, but he argued that fighting corruption has long been a top priority of the administration. And he noted that the reason surveillance cameras didn't record Guzman's escape was that they had two "blind spots" due to human rights requirements authorities had to follow. The claim drew a swift rebuke from Amnesty International. In a Twitter post addressed to the interior minister, the human rights organization's Mexico office fired back.
"Human rights are not a factor in the escape of criminals," it said, "but rather the endemic corruption of the security system." Even before Guzman's escape, corruption among high-ranking officials could have played a role in how the case was handled, influencing Mexican authorities' decision to block Guzman's extradition to the United States, CNN law enforcement analyst Tom Fuentes said. "At a certain level, the Mexican government is very afraid of him, because of the extensiveness of the corruption in that country that supports him and the other cartels," Fuentes said. "They're afraid if he came to the U.S. and was looking at Supermax and life without parole, he might just give it up and do a lot of damage to the Mexican government."

Now, Mexico's top officials seem to be offering excuses and describing the escape as an isolated incident rather than owning up to rampant corruption in their ranks, Francisco Rivas of the National Citizen Observatory for Security, Justice and Legality said in a post on the organization's website. "I was left with a terrible doubt," he said, "about whether the cartel is more powerful than the Mexican government."
An efficient on-demand routing approach with directional flooding for wireless mesh networks

Current on-demand ad hoc routing protocols are not well suited to wireless mesh networks (WMNs), because flooding-based route discovery is both redundant and expensive in terms of control-message overhead. In this paper, we propose an efficient on-demand routing approach with directional flooding (DF), suitable for WMNs with limited mobility. During route discovery toward a gateway, the DF scheme reduces the number of route request (RREQ) packets broadcast by using a restricted directional flooding technique. Simulation results show that ad hoc on-demand distance vector (AODV) routing with DF (AODV-DF) significantly reduces the RREQ routing overhead and improves overall performance compared with the original AODV.
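As a rough illustration of the restriction the abstract describes, the forwarding decision in directional flooding can be sketched as a sector test: a node rebroadcasts an RREQ only toward neighbors lying roughly in the direction of the gateway. This is a minimal sketch under stated assumptions, not the paper's actual algorithm: the function name, the coordinate inputs, and the 45° sector half-angle are all illustrative.

```python
import math

def within_flooding_sector(node_xy, neighbor_xy, gateway_xy, half_angle_deg=45.0):
    """Decide whether to rebroadcast an RREQ toward a given neighbor.

    Restricted directional flooding prunes broadcasts in the 'wrong'
    direction: the RREQ is forwarded only to neighbors inside a sector
    aimed at the gateway. Coordinates and the 45-degree half-angle are
    illustrative assumptions, not values from the paper.
    """
    # Bearing from this node toward the gateway
    to_gw = math.atan2(gateway_xy[1] - node_xy[1], gateway_xy[0] - node_xy[0])
    # Bearing from this node toward the candidate neighbor
    to_nb = math.atan2(neighbor_xy[1] - node_xy[1], neighbor_xy[0] - node_xy[0])
    # Smallest angular difference between the two bearings, in [0, pi]
    diff = abs((to_nb - to_gw + math.pi) % (2 * math.pi) - math.pi)
    return diff <= math.radians(half_angle_deg)

# A neighbor roughly toward the gateway is kept; one behind the node is pruned.
print(within_flooding_sector((0, 0), (1, 0.3), (10, 0)))   # True
print(within_flooding_sector((0, 0), (-1, 0), (10, 0)))    # False
```

A node applying this test to each neighbor forwards far fewer RREQ copies than blind flooding, which is the source of the overhead reduction the abstract reports.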
OvASP1, the Onchocerca volvulus homologue of the activation-associated secreted protein family, is immunostimulatory and can induce protective antilarval immunity

Vaccination of mice with a recombinant protein, OvASP1, the Onchocerca volvulus homologue of the activation-associated secreted gene family, stimulated very high titres of both IgG1 and IgG2a without adjuvant. rOvASP1 was also immunoreactive with IgG isotypes from both O. volvulus-infected (INF) and putatively immune (PI) humans, with higher IgG4 in the former group. The protein also stimulated IFN secretion by PBMC from INF and PI individuals, and IL-5 only in INF. Using a mouse diffusion chamber model, vaccination with rOvASP1 resulted in partial but significant protection against challenge with infective third-stage larvae (L3), but only when formulated with Freund's complete adjuvant (FCA) or alum. Protection was Th1-dependent (highly elevated IgG2a) with FCA and contingent on a strongly Th2-skewed (IgG1) response with alum. IgE responses to rOvASP1, with or without adjuvant, were weak or absent. When immunization using rOvASP1 in adjuvant failed to induce adequate Th1 (FCA) or Th2 (alum) responses, protective efficacy was compromised. The recombinant protein appears to stimulate a mixed Th1/Th2 response, but the outcome in terms of protective immunity is the result of a subtle interplay of its intrinsic and adjuvant-augmented properties. OvASP1 is potentially secreted, based on its localization in the secretory granules of L3.
Tibetan musician Techung plays the danmyan, a traditional Tibetan musical instrument, at his hotel in Taipei on Dec. 9. Determined to use music to raise awareness of Tibet’s struggle to regain independence and to introduce his culture, Tibetan musician Techung has been touring the world and made his first public performance in Taiwan at the Tibet Freedom Concert in Taipei last Wednesday. “I actually never wanted to learn music — I was put into a music school when I was little,” Techung said in an interview with the Taipei Times in Taipei last week. Techung was born on the border between Tibet and India in 1961, when his parents escaped from China-controlled Tibet. As Techung reached school age, his parents decided to send him to the Tibetan Institute of Performing Arts (TIPA) in Dharamsala, India, the seat of the Tibetan government-in-exile. The institute was created in 1959 with the goal of preserving the traditional Tibetan performing arts. Although Techung did not choose to study music, he said that music has had a profound impact on his life. One of the songs he performed at yesterday’s concert describes how people hang wind horse flags — or Tibetan prayer flags — atop a mountain to express gratitude for blessings from gods for their safety during a journey. “It’s a very good representation of the Tibetan culture, because respecting nature is an important part of the Tibetan culture,” he said. Traditional Tibetan lyrics are usually either about religious beliefs or about respecting the environment. “We express our love for nature, our gratitude towards the gods for gifting us with the beautiful environment, reminders to protect the environment and warnings about punishment from the gods if you damage it,” Techung said, adding that along with the seemingly “harder” topics in Tibetan music, there are also many folk songs praising romantic love.
“For those of us born in exile and living in exile, I also wrote a lot of songs about my experiences in exile, and my feelings for Tibet,” he said. Using mostly traditional Tibetan instruments, Techung won the best modern and traditional music award at a Tibetan Music Awards ceremony in Dharamsala in 2003, and a best Asian folk album title in the US. After training at the TIPA and touring with the institute for 17 years, Techung moved to the US when he was 30 to pursue studies in theater and has been living there ever since. “After performing in concerts in the West, I decided in recent years that it’s about time for me to come back to Asia,” he said. Before coming to Taiwan, he also performed in Japan earlier this year. Having traveled with the Students for a Free Tibet executive director and deputy director to several universities around the country since his arrival last week, Techung said he has a very good impression of Taiwan. “The people here are very friendly, and I was excited about the interest that university students in Taiwan have taken in the Tibet issue,” he said. However, he also wanted to warn the Taiwanese about developing a relationship with China.
Birth Weight, Season of Birth and Postnatal Growth Do Not Predict Levels of Systemic Inflammation in Gambian Adults

Objectives: Studies testing whether systemic inflammation might lie on the causal pathway between aberrant fetal and postnatal growth patterns and later cardiovascular disease have been inconclusive, possibly due to the use of single markers of unknown predictive value. We used repeated measures of a comprehensive set of inflammatory markers to investigate the relationship between early life measures and systemic inflammation in an African population. Methods: Individuals born in three rural villages in The Gambia, and for whom early life measurements were recorded, were traced (n = 320). Fasting levels of eight inflammatory markers (C-reactive protein, serum amyloid A, orosomucoid, fibrinogen, α1-antichymotrypsin, sialic acid, interleukin-6 and neopterin) were measured, and potential confounding factors recorded. The association between early life measurements and systemic inflammation was assessed using regression analysis. Results: Levels of most markers were unrelated to early growth patterns. In analyses adjusted for age and sex, more rapid growth between birth and 3 months of age was associated with higher levels of fibrinogen, orosomucoid, and sialic acid. These relationships persisted after further adjustment for body mass index, but after full adjustment only the association with fibrinogen remained. Conclusions: This study provides little evidence that size at birth or growth in early infancy determines levels of inflammatory markers in young Gambian adults. Am. J. Hum. Biol. 25:457–464, 2013. © 2013 Wiley Periodicals, Inc.

There is evidence that size at birth and rate of development during early life, in particular low birth weight and accelerated early postnatal growth (Adair 2007), predict increased risk of future cardiovascular disease (CVD) and its associated risk factors.
Despite the large body of evidence in this field, it is still not understood whether size at birth, the rate of postnatal growth, or a combination of small size at birth followed by accelerated growth is most important for determining disease risk. Furthermore, the biological mechanism(s) underpinning the associations between early life environment and later disease risk also remain unclear. Research in the last decade has also shown that raised levels of systemic inflammatory markers predict later morbidity and mortality from CVD (Danesh et al., 2000; Danesh et al., 2004). Previous studies investigating the relationship between early life environment and systemic inflammation have primarily focused on the relationship between birth weight and C-reactive protein (CRP) (Gillum, 2003) or fibrinogen. These studies, predominantly in white Caucasian populations and typically defining systemic inflammatory status using a single measure of one inflammatory marker, have reported conflicting results. The current study widens the evidence base in this field by investigating whether early life variables, including early postnatal growth, predict levels of a wide range of inflammatory markers in Gambian adults. This is the first study to test this hypothesis in an indigenous African population. Furthermore, this study collected duplicate measures of inflammatory markers to assess which marker(s) most reliably characterise chronic systemic inflammation.

Study population

The study population was all consenting individuals aged 18–30 years born in three rural villages (Keneba, Kantong Kunda, or Manduar) in West Kiang, The Gambia (West Africa) between 1976 and 1987 and for whom birth weight was recorded as part of the United Kingdom Medical Research Council (MRC) Keneba Antenatal Scheme. Details of this scheme and the research setting are available elsewhere.
Participants attended a study clinic at one of two MRC stations depending on whether they lived in rural West Kiang (MRC Keneba) or in the urban centres near the coast (MRC Fajara). Logistical limitations meant that only potential participants traced to within a 90-minute drive of either of the two stations were recruited. Ethical approval for this study was obtained from the London School of Hygiene and Tropical Medicine Ethical Committee and the joint Gambian Government/MRC Unit The Gambia Ethics Committee. All study participants gave informed written consent before participating.

Early life measurements

Birth weight (kg) was recorded by the resident paediatrician or midwife to the nearest 10 g and within 72 h of birth. Gestational age was assessed using the score of Dubowitz et al. Low birth weight (LBW) was defined as <2,500 g. Postnatal weight was measured regularly, at postnatal clinics, to the nearest 10 g using standard equipment; the exact timing of measurements varied by child, but the date of measurement was recorded for all weights. Growth velocity from birth to 3 months was calculated as the difference between the population-derived sex-specific birth weight standard deviation (SD) score and the population-derived sex-specific weight-at-3-months SD score, where weight at 3 months was defined as the weight nearest to 3 months that was recorded between 2.0 and 4.0 months of age. The weight nearest 12 months that was recorded between 9.6 and 14.4 months of age was used for weight at 1 year. Hungry (wet) season of birth was defined as a birth month from July to December inclusive, and harvest (dry) season from January to June inclusive.

Systemic inflammatory markers

Blood samples were collected after an overnight fast and then centrifuged for 20 min at 3,000 rpm and 4 °C within 1 h of collection and immediately frozen at −80 °C. Serum samples were allowed to clot at room temperature, and then centrifuged and processed as described for plasma.
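The growth-velocity calculation above can be sketched as: convert birth weight and the ~3-month weight into sex-specific, population-derived SD (z) scores, then take their difference. This is a minimal sketch; the reference means and SDs below are invented placeholders, not the Keneba cohort values.

```python
def sd_score(value, mean, sd):
    """Population-derived standard-deviation (z) score."""
    return (value - mean) / sd

def growth_velocity_sd(birth_wt, wt_3mo, ref):
    """Change in sex-specific SD score from birth to ~3 months.

    `ref` holds population-derived means/SDs for the child's sex; the
    numbers used below are illustrative, not the study's actual values.
    """
    z_birth = sd_score(birth_wt, ref["birth_mean"], ref["birth_sd"])
    z_3mo = sd_score(wt_3mo, ref["w3_mean"], ref["w3_sd"])
    return z_3mo - z_birth

# Hypothetical reference values for girls (kg)
ref_f = {"birth_mean": 3.0, "birth_sd": 0.45, "w3_mean": 5.6, "w3_sd": 0.7}
# A girl born below the population mean who is above it by 3 months has a
# positive change in SD score, i.e. faster-than-average early growth.
print(growth_velocity_sd(2.8, 6.3, ref_f))
```

A positive value corresponds to the "more rapid growth between birth and 3 months" exposure examined in the regressions.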
Analyses of CRP, serum amyloid A, α1-antichymotrypsin (ACT), orosomucoid, interleukin-6 (IL-6), sialic acid and neopterin were performed at MRC Human Nutrition Research, Cambridge, UK. Fibrinogen was measured at Addenbrooke's Hospital Clinical Laboratory, Cambridge, UK. CRP was measured using a high-sensitivity particle-enhanced turbidimetric immunoassay (Dade Behring, Milton Keynes, UK) on a Dimension ARX Analyzer (Dade Behring, Milton Keynes, UK). The assay has a lower detection level of 1.1 mg/l. Plasma serum amyloid A was measured in duplicate using the enzyme-linked immunosorbent assay (ELISA) principle (Anogen, Mississauga, Canada). Fibrinogen was measured using the Clauss assay. Plasma orosomucoid was measured using an immunoturbidimetric method (Sentinel, Milan, Italy) adapted for use on the Dimension ARX Analyzer (Dade Behring, Milton Keynes, UK). ACT was measured using an immunochemical assay (Dako, Glostrup, Denmark) adapted for use on the Hitachi 912 Analyzer (Roche, Welwyn Garden City, UK). IL-6 was measured in duplicate using a high-sensitivity ELISA principle (Diaclone, Besançon, France). Sialic acid was measured by a colorimetric enzyme assay (Roche, Welwyn Garden City, UK) adapted for use on the Hitachi 912 Analyzer (Roche, Welwyn Garden City, UK). Serum neopterin was measured in duplicate using a competitive enzyme immunoassay principle (BRAHMS Aktiengesellschaft, Berlin, Germany). Interassay coefficients of variation were <9.6% for all analyses.

Potential confounding factors

Anthropometry. Weight was measured to the nearest 100 g using an electronic portable scale (Chasmors, UK) and height to the nearest mm using a portable stadiometer (CMS Weighing Equipment Ltd, London, UK). BMI (kg/m²) was calculated and categorized using standard cutoffs. Waist and hip circumference were measured to the nearest 0.1 cm. Central obesity was defined as a waist-to-hip ratio ≥0.90 (men) or ≥0.80 (women).
Whole body composition was measured using dual energy X-ray absorptiometry (DXA) on a Lunar DPX1 (Lunar Corporation, Madison, WI).

Infectious disease markers. Participants were only enrolled if considered 'healthy' at the time of recruitment, based on a screening questionnaire collecting data on recent clinic visits, current medication use, appetite and recent weight loss. Axillary temperature was also recorded. A thick film was prepared from whole blood to look for the presence of malaria parasites. The remaining whole blood was used to measure white blood cell (WBC), lymphocyte, granulocyte and monocyte counts (10⁹/l) and haemoglobin (g/dl) using a Medonic CA 530 Oden 16 Parameter System Haemoglobinometer (Medonic, Stockholm, Sweden).

Chronic disease markers. Fasting glucose, total cholesterol, high density lipoprotein (HDL)-cholesterol, triglyceride and leptin levels were measured at MRC Human Nutrition Research, Cambridge, UK. Plasma glucose concentration was measured using an adaptation of the hexokinase-glucose-6-phosphate dehydrogenase method (Dade Behring, Milton Keynes, UK). Impaired fasting glucose (IFG) was defined by a fasting plasma glucose ≥6.1 and ≤6.9 mmol/l, and type 2 diabetes by a level ≥7.0 mmol/l. Plasma lipids were measured using enzymatic methods on a Dade Behring Dimension (Dade Behring, Milton Keynes, UK). Low density lipoprotein (LDL)-cholesterol was derived using the Friedewald equation. Leptin was measured by ELISA (R&D Systems, Abingdon, UK). Insulin was measured at Addenbrooke's Hospital Clinical Laboratory, Cambridge, UK using a time-resolved fluoroimmunoassay (AutoDELFIA, PerkinElmer Life & Analytical Sciences, Wallac Oy, Turku, Finland). Blood pressure was measured using a fully automatic digital blood pressure monitor (Omron 7051T, Omron Healthcare, IL). Hypertension in adults was defined by a systolic blood pressure (SBP) ≥140 mm Hg and/or a diastolic blood pressure (DBP) ≥90 mm Hg.

Lifestyle measures.
Questionnaire data confirmed whether participants were still in full-time education and their smoking status (current smoker, ex-smoker or never smoked). Smoking status was analyzed as ever or never smoked due to the small number of current smokers in the study population. Additional data were collected, by a female fieldworker, on whether women used oral or injectable hormonal contraceptives.

Data collection protocol

Markers of systemic inflammation and infectious disease status were assessed at two time points. In a random sub-sample of 15 women these levels were further assessed at Day 28 to investigate the reliability of a single measurement to characterise habitual systemic inflammatory status. All other measurements were collected at baseline only. Baseline data were collected between 23 February and 1 June, 2-week data between 9 March and 15 June, and 4-week data between 24 March and 30 March, 2006.

STATISTICAL METHODS

A total of 209 (65.3%) participants had a CRP and 266 (83.1%) an IL-6 measurement below the minimum assay detection level (<1.1 mg/l and <0.8 pg/ml, respectively). CRP and IL-6 were therefore analyzed as binary variables (CRP <1.1 vs. ≥1.1 mg/l; IL-6 <0.8 vs. ≥0.8 pg/ml). Orosomucoid, ACT, sialic acid and neopterin were log_e-transformed to normality. A log_e transformation of serum amyloid A failed to produce a normal distribution, and it was necessary to add 100 to each serum amyloid A value and then take the log_e transformation to obtain a normal distribution. The effects of potential confounding factors (listed in Table 1) on levels of inflammatory markers were assessed using logistic regression analysis for CRP and IL-6 and linear regression analysis for all remaining inflammatory markers. Population-derived SD scores for continuous measures of chronic and infectious disease were used when examining their association with inflammatory markers.
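The marker pre-processing just described, binarising CRP and IL-6 at their detection limits, log_e-transforming the skewed markers, and shifting serum amyloid A by 100 before logging, can be sketched as below. The detection limits come from the text; the function name and input values are illustrative.

```python
import math

def preprocess_marker(name, value):
    """Prepare one inflammatory-marker measurement for regression.

    CRP and IL-6 are mostly below the assay detection limit, so they are
    coded as binary (below vs. at/above the limit); orosomucoid, ACT,
    sialic acid and neopterin are log_e-transformed; serum amyloid A is
    shifted by +100 before logging to approach normality. Limits follow
    the text; the values fed in below are invented examples.
    """
    if name == "crp":          # mg/l, detection limit 1.1
        return int(value >= 1.1)
    if name == "il6":          # pg/ml, detection limit 0.8
        return int(value >= 0.8)
    if name == "saa":          # serum amyloid A: log_e(value + 100)
        return math.log(value + 100)
    if name in {"orosomucoid", "act", "sialic_acid", "neopterin"}:
        return math.log(value)
    return value               # fibrinogen etc. analysed untransformed

print(preprocess_marker("crp", 0.9))   # below the limit -> 0
print(preprocess_marker("saa", 0.0))   # log_e(100)
```

The binary markers then go into logistic regressions and the continuous ones into linear regressions, as the text describes.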
This enabled direct comparison between the effects of different measures on levels of inflammatory markers. SD scores were generated from untransformed data for those variables which were normally distributed and from transformed (log_e) data for those variables which were transformed to produce a normal distribution. All SD scores were sex-specific. SD scores could not, however, be generated for anthropometric, body composition and leptin measurements, as a number of these measures could not be transformed to produce the normal distribution necessary to generate SD scores.

[Table 1 footnotes: DXA = dual energy X-ray absorptiometry. (a) Underweight = body mass index (BMI) <18.5 kg/m²; normal = 18.5–24.9 kg/m²; overweight = 25.0–29.9 kg/m²; obese = ≥30.0 kg/m². (b) DXA measurements were not available for 36 study participants, all of whom were located in the urban coastal areas and were unable to travel to Keneba (where the DXA machine was located) for measurements. (c) Central obesity defined as waist circumference ≥90.0 cm (males) or ≥80.0 cm (females). (d) Hypertension was defined as a systolic blood pressure ≥140 mm Hg and/or a diastolic blood pressure ≥90 mm Hg. (e) Impaired fasting glucose was defined as a fasting glucose ≥6.1 and ≤6.9 mmol/l. (f) Type 2 diabetes was defined as a fasting glucose ≥7.0 mmol/l.]

The effects of anthropometric, body composition and leptin measurements on levels of inflammatory markers were examined in males and females separately because of the strong sex differences in these measures observed within the study population. The associations between birth weight, weight at 1 year and postnatal growth and levels of inflammatory markers were analyzed using logistic regression for CRP and IL-6 and linear regression for all remaining inflammatory markers, according to the analytical approach recommended by Lucas et al. This approach uses four separate models. The 'early model' relates early size (e.g. birth weight) to later outcome (e.g. adult fibrinogen level).
The 'later model' relates later size (e.g. BMI) to later outcome. The 'combined model' is the early model adjusted for later size. The 'interaction model' is the combined model including an early size × later size interaction term (e.g. birth weight × BMI). Associations between low birth weight and season of birth and levels of inflammatory markers were investigated using logistic regression for CRP and IL-6 and linear regression for all remaining inflammatory markers. All analyses were undertaken twice: first, adjusting for age and sex only, and second, adjusting for all those potential confounding factors observed to predict levels of each inflammatory marker.

Study participants had up to three measures of each inflammatory marker recorded at baseline (Day 0), 14 days and 28 days. Of the 320 participants for whom levels of inflammatory markers were available at Day 0, 303 had repeat measures at Day 14 and a further 15 at Day 28. Intraclass correlation coefficients were used to generate reliability estimates for continuous inflammatory variables using all available data from Day 0, Day 14 and Day 28. In order to get the most accurate measure of systemic inflammation (i.e. a measure that was not 'falsely' elevated by underlying infection), the lowest level of each inflammatory marker was used in all analyses. As orosomucoid, ACT, sialic acid, neopterin and serum amyloid A were log_e-transformed to normality, the β-coefficients generated from their linear regression models are presented as a percentage unit increase; data from untransformed variables (fibrinogen) are presented as absolute changes.

Figure 1 describes the selection of study participants and lists reasons for non-inclusion. A total of 781 individuals met the study criteria, of whom 148 were excluded prior to tracing. Fieldworkers traced the remaining 633 eligible individuals, of whom 181 were subsequently found to be unavailable for study.
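The four Lucas-style models can be sketched as ordinary least-squares fits with progressively richer predictor sets. This is a minimal sketch on synthetic data, not the study's analysis: the data-generating numbers are invented, and a plain least-squares helper stands in for the full regression machinery (which also handled confounders and logistic outcomes).

```python
import numpy as np

def ols_beta(cols, y):
    """Least-squares coefficients for y on the given columns (intercept first)."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Synthetic cohort: fibrinogen is driven by adult BMI only, so the
# early-size effect should shrink toward zero once BMI is adjusted for.
rng = np.random.default_rng(0)
n = 500
birth_wt = rng.normal(3.0, 0.45, n)                 # early size (kg)
bmi = rng.normal(21.0, 2.5, n) + 0.5 * birth_wt     # later size, correlated with early size
fibrinogen = 1.5 + 0.08 * bmi + rng.normal(0, 0.05, n)

b_early = ols_beta([birth_wt], fibrinogen)                    # 'early' model
b_later = ols_beta([bmi], fibrinogen)                         # 'later' model
b_comb = ols_beta([birth_wt, bmi], fibrinogen)                # 'combined' model
b_int = ols_beta([birth_wt, bmi, birth_wt * bmi], fibrinogen) # 'interaction' model

print(b_early[1], b_comb[1])  # early-size coefficient before vs. after adjustment
```

Comparing the early-size coefficient across the early and combined models is what distinguishes a genuine early-life effect from one carried entirely by later body size.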
Fieldworkers were able to contact 86.7% (n = 392) of the 452 individuals traced, of whom 72 (18.4%) declined to participate. The majority of those who declined were male (n = 46; 63.9%). The remaining 320 individuals represent 70.8% of those traced and available for study (n = 452).

Selection of study participants

Compared with nonparticipants, study participants were younger (mean (95% confidence interval) age 22.2 (21.8, 22.5) vs 23.0 (22.7, 23.3) years; P = 0.0001) and a slightly higher percentage were male (51.9 vs 45.3%; P = 0.07). Mean birth weight, gestational age, change in SD score from birth to 3 months and weight at 1 year, and the percentage born during the hungry season or with low birth weight, were not different between participants and nonparticipants. Table 1 describes the characteristics of the study population by gender. Forty-one participants (12.8%) were born low birth weight, with a higher prevalence in female compared with male participants (17.5 vs. 8.4%; P = 0.02). Gestational age ranged from 32.0 to 41.6 weeks, with 30 participants (11.2%) born premature (<37 weeks gestation). The majority of participants had a BMI within the normal range, although the percentage overweight, or centrally obese, was considerably higher in females compared to males (P < 0.001 for both). As expected in young Gambian adults, there was a low prevalence of hypertension (<3%), IFG (<1%) and type 2 diabetes (<1%). Less than 1% of the study population had asymptomatic malaria. There was a clear sex difference in tobacco use, with no females, compared with 30% of males, reporting ever using tobacco regularly. Few women (3.3%) used hormonal contraceptives. Table 2 presents the full summary statistics and reliability estimates for each inflammatory marker. There was considerable variation in the reliability estimates between inflammatory markers; the highest estimate (0.718) was observed for sialic acid.
Association between early life exposures and systemic inflammation

Regression analysis adjusted for age and sex. In regression analyses, each adjusted for age and sex, there was no evidence that birth weight, low birth weight, weight at 1 year or season of birth predicted levels of systemic inflammatory markers. There was evidence that a higher change in SD score from birth to 3 months was associated with higher levels of fibrinogen, orosomucoid and sialic acid, but no association was observed with the remaining five markers. For each one-unit increase in change in SD score, fibrinogen levels increased by 0.11 g/l (95% CI 0.03–0.18 g/l; P = 0.004), orosomucoid levels increased by 3% (95% CI 0.4–5.4%; P = 0.02) and sialic acid levels by 2% (95% CI 1–4%; P = 0.003).

Regression analysis fully adjusted for potential confounding factors. For each inflammatory marker, multiple regression analyses were used to adjust for measures of location, chronic disease, infectious disease and lifestyle (as listed in Table 1) that predicted levels of that inflammatory marker. To facilitate comparisons with other studies, and because data were available for all study participants, BMI was used as the primary measure of adult adiposity. After full adjustment there was no association between birth weight, low birth weight, weight at 1 year or season of birth and levels of inflammatory markers. After full adjustment, early postnatal growth still predicted higher levels of adult fibrinogen (0.10 (95% CI 0.03–0.16) g/l; P = 0.007) and showed a weak positive association with sialic acid (1.00 (95% CI −0.05 to 2.92)%; P = 0.06), but any association with orosomucoid levels was removed.

[Table 2 footnotes: IQR = inter-quartile range; SD = standard deviation. (a) Logged data used to report geometric mean (inter-quartile range). (b) Range of levels in those individuals categorised as having a C-reactive protein level ≥1.1 mg/l and an interleukin-6 level ≥0.8 pg/ml.]
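The percentage effects reported above follow from the log_e transformation: a regression coefficient β on a log_e-scale outcome corresponds to a 100 × (e^β − 1) percent change per one-unit change in the predictor. A small sketch; the β value used is back-derived for illustration, not a number reported in the paper.

```python
import math

def beta_to_percent(beta):
    """Percent change in a log_e-transformed outcome per one-unit
    increase in the predictor: 100 * (e^beta - 1)."""
    return 100.0 * (math.exp(beta) - 1.0)

# A hypothetical beta of ~0.0296 on log_e(orosomucoid) corresponds to
# roughly the 3% increase per unit change in SD score reported above.
print(round(beta_to_percent(0.0296), 1))
```

For small β the percent change is close to 100 × β, which is why small log-scale coefficients read almost directly as percentages.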
Table 3 reports the relationship between early postnatal growth and fibrinogen according to the analytical approach recommended by Lucas and colleagues. Supporting Information Tables S1-A-D and S2-A-D present the regression analyses for four selected markers (CRP, orosomucoid, sialic acid and IL-6) by birth weight and early postnatal growth. There was no evidence that adjusting for DXA measures of adiposity altered the association between postnatal growth and levels of inflammatory markers (data not shown). The only exception was that adjustment for DXA total or percent fat, rather than BMI, forced the association between postnatal growth and levels of sialic acid toward the null. DISCUSSION The current study aimed to test the hypothesis that systemic inflammation in young adulthood is predicted by early life exposures. Using a comprehensive set of markers in a cohort of 320 young Gambian adults, we have found little evidence for an effect of a number of early life parameters on later inflammatory status. This is the first published study to investigate this association in an indigenous African population. The inclusion of a range of early life measures, in particular early postnatal growth, and a large and varied number of markers to assess systemic inflammation considerably widens the evidence base in this field. To date, the only published data supporting an association between prenatal exposures and systemic inflammation come from studies looking at the relationship between birth weight and either CRP or fibrinogen. Previous studies in infants and young children have reported no association between birth weight and levels of CRP (e.g., Gillum, 2003), although one study was hampered by the failure to use a high-sensitivity assay.
Studies in adults, including data from 5,849 Finnish men and women aged 31 years (northern Finland 1966 Birth Cohort) and 1,603 middle-aged Scottish adults, have reported that, after adjustment for confounding factors, lower birth weight was associated with higher levels of CRP. A separate analysis of the northern Finland Birth Cohort also reported that lower birth weight predicted higher adult total leukocyte count. Likewise, data from the Philippines show that birth weight was negatively associated with CRP in adulthood (McDade et al., 2012). A number of previous studies have investigated the association between birth weight and fibrinogen (as a measure of CVD risk). The findings from previously published studies in adults are inconsistent but, in line with this study, the majority have reported no association. Few studies have investigated the association between postnatal growth and systemic inflammation. Data from 3,827 adults in the 1982 Pelotas (Brazil) birth cohort suggest that rapid weight gain across the life course predicts higher CRP at age 23 years. In the northern Finland birth cohort, participants with highest-tertile body mass index (BMI) at 31 years and lowest-tertile birth weight had the highest average CRP levels. It has been suggested that the association between low birth weight and increased CVD risk factors in later life may not be a result of low birth weight per se but of the subsequent rapid postnatal growth (Kuh and Ben-Shlomo, 1997). The potential mechanisms explaining any association between rapid postnatal growth and adult systemic inflammation are not understood and cannot be tested by the current study design. A series of systematic reviews have reported that rapid catch-up growth is a risk factor for later obesity (Monteiro and Victora, 2005; Ong and Loos, 2006).
One potential mechanism, therefore, is that later adiposity determined by catch-up growth explains any association between rapid postnatal growth and levels of inflammatory markers. In the current study, adjustment for BMI did not alter the association between postnatal growth and fibrinogen, suggesting that in this study population catch-up growth, independent of adult adiposity, was important. A second potential mechanism is that rapid catch-up growth and raised levels of fibrinogen and sialic acid arise from a common genotype. However, there is currently no evidence in support of this hypothesis, and it cannot be tested from the data collected in the current study. A number of published reports have tested the repeatability of CRP within individuals across longitudinal collections. However, only one previous study has collected repeat measures of a range of inflammatory markers to assess which most reliably characterizes systemic inflammation. Browning et al. compared three repeat measures of a range of cytokines and acute phase response markers (including CRP) in 15 overweight white Caucasian UK women over a 6-month period and concluded that IL-6 and sialic acid were best at characterising systemic inflammation with a single measure. In the current study CRP and IL-6 were analyzed as binary variables and the repeatability of both measures could not be assessed. However, of the six remaining inflammatory markers studied, sialic acid had the highest repeatability coefficient.
TABLE 3. Linear regression associations between postnatal growth (change in standard deviation (SD) score from birth to three months) and adult levels of fibrinogen.
The observation, in two different populations, that a single measure of sialic acid provides a reliable measure of habitual systemic inflammation gives further weight to the conclusion of Browning et al. that future studies should consider using sialic acid as a measure of systemic inflammation. Sialic acid is not an acute phase protein but a terminal sugar carried on a number of acute phase glycoproteins. It has been estimated that these glycoproteins account for approximately 70% of the total sialic acid concentration. Browning et al. argue that sialic acid may therefore provide an "integrated measure of the inflammatory response" which is "less prone to the day-to-day variability of individual markers"; there is no reason why this explanation should not be pertinent to the current (and other) study populations. The primary strength of the current study is the collection of data on eight inflammatory markers, allowing a more detailed investigation into the association between early life parameters and systemic inflammation than previous studies. Furthermore, the collection of repeat samples of inflammatory markers enabled the reliability of a single measure to be assessed for the first time in an indigenous African population. The very low incidence of risk factors for CVD in this population limited the possibility that systemic inflammation was elevated due to clinical or subclinical atherosclerosis. One of the difficulties of measuring systemic inflammation in an African, compared with a UK, population is that levels of markers may be elevated as a result of infectious disease. Levels of infectious disease in the study population were limited by carrying out data collection in the dry season (November to May). The prevalence of malaria parasitaemia and the incidence of elevated levels of inflammatory markers indicated that the population under investigation was healthy at the time of study (for example, only 2% of individuals had CRP levels > 6 mg/dl).
The majority of published data linking inflammation to later CVD comes from industrialised settings. The possibility exists, therefore, that within rural African or other indigenous populations inflammation may not always be a risk factor for CVD. In the Tsimane tribe from Bolivia, for example, markers of infection and inflammation were much higher than among comparable US adults. Such data suggest that inflammation in such settings may be offset by other factors, such as an active lifestyle and favorable body mass, and hence not result in morbidity or mortality from chronic degenerative diseases. A critical weakness of the study is that the population measured represents only 41% of those who met the initial study criteria (n = 781). Comparisons between participants and nonparticipants, however, suggest no differences between the two groups in the exposure variables available for analysis. In addition, it is important to highlight that this study investigated comparisons within subjects, and it is unlikely that in nonparticipants any associations between early life factors and levels of inflammatory markers would be in the opposite direction to those observed in participants. It has been suggested that the association between birth weight and other determinants of cardiovascular disease (e.g. blood pressure) amplifies with age; the relatively young age of the study population (19-30 years) may have prevented any association between early life events and levels of inflammatory markers being observed. Furthermore, the study participants were relatively lean, and therefore underlying associations may exist but remain hidden. A technical limitation of the study design was that, despite using high-sensitivity kits, the CRP and IL-6 assays were not sufficiently sensitive in the lower ranges.
In total, 65 and 83% of participants had levels of CRP and IL-6 below the assay ranges, respectively; CRP and IL-6 were subsequently analyzed as binary variables, which limited the power to observe any association with early life factors. No evidence base was available with which to decide the cut-offs for generating the binary variables. It is possible that the cut-offs used in this study (above and below the lowest assay cut-off) also limited observations on the effect of early life factors on levels of CRP and IL-6. CONCLUSIONS The main hypothesis of this study was that early life programming of systemic inflammation is a mechanism explaining the association between poor early life growth and increased risk of adult CVD; this study provided little evidence to support this hypothesis among young, lean Gambians.
A 31-year-old man was shot and seriously wounded at an Austin neighborhood gas station Sunday morning on the West Side. About 5 a.m., he got into an argument with another male at the station in the 5100 block of West Chicago, according to police News Affairs Officer Jose Estrada. The other person got into a vehicle and opened fire as he drove away, striking the 31-year-old in the left shoulder, Estrada said. Acquaintances dropped the man off at West Suburban Medical Center, where he was listed in serious condition.
The Role of Different Types of Information Systems in Business Organizations - A Review For the last twenty years, different kinds of information systems have been developed for different purposes, depending on the needs of the business. In today's business world, there is a variety of information systems, such as transaction processing systems (TPS), office automation systems (OAS), management information systems (MIS), decision support systems (DSS), executive information systems (EIS) and expert systems (ES). Each plays a different role in the organizational hierarchy and in management operations. This study attempts to explain the role of each type of information system in business organizations.
The cast of “Friday Night Lights.” (DirecTV) When “Friday Night Lights” debuted in the fall of 2006, I was just out of college and in possession of a television hooked up to cable for the first time in my life. I was just learning to watch television at all, mostly by getting swept up into reruns of the “Law & Order” franchise and “NCIS.” And so I missed the debut of Jason Katims’s sensitive, insightful drama about a small Texas town obsessed with its high school football team, starring Kyle Chandler as Eric Taylor, the man charged with bringing the squad to victory, and Connie Britton as Tami Taylor, Eric’s brilliant, insightful wife. This summer I decided to remedy this hole in my education. And after finishing all of the seasons yesterday, I am actually glad I came late to “Friday Night Lights” and got to appreciate the show after watching dramas such as “The Sopranos” and “Breaking Bad.” Especially in the context of television’s obsession with difficult middle-aged men and brilliant serial killers, “Friday Night Lights” feels like a miraculous aberration. It is astonishing how tenderhearted, how emotional and how fragile “Friday Night Lights” allows its boys and men to be. It helps that the boys, with the exception of talented, drunken Tim Riggins (Taylor Kitsch), actually look like boys. They have the bodies of people who have not yet finished growing, and the tangled tongues of people who are still trying to figure out what they feel, much less how to express it. And rather than acting like adults who happen to still be going to class, the boys act like teenagers in ways both foolish and remarkable, advancing toward adulthood in fits and starts and occasionally getting derailed by exalted dreams of what it means to be grown-up. That conflict between impulse and the long view weaves through the story of Smash Williams (Gaius Charles).
After he loses a football scholarship during his senior year, Smash goes back and forth about whether he wants to train for another chance to continue playing. When he tells Coach Taylor that he intends to take a management-track job at Alamo Freeze, it is with a sad and lovely acceptance of a more modest vision of providing for his family. That he does get a second opportunity to play serious college football is an occasion for surprise and joy — what Smash once took for granted, he now sees for the remarkable thing that it is. Similarly, former star quarterback Jason Street (Scott Porter) clings to a series of diminishing dreams after the injury that takes away his ability to walk in the “Friday Night Lights” pilot. At first he longs to find some way to be an elite athlete by trying out for a national rugby team for wheelchair users. When that proves a disappointment, Street places his faith in an experimental treatment for his spine. As a boy, Street could not imagine what his life might be like if he was not a star. As a man, he gains a broader perspective of what a good life might look like. For many of the boys, their relationship with Coach Taylor is the first time an adult man has been around to expect things from them and to keep the promises he makes to them in return. Matt Saracen (Zach Gilford), who is promoted to quarterback after Street’s injury, has essentially been abandoned by his parents: That Coach Taylor places his trust in Matt is both exciting and a tremendous burden, given that Saracen cares for his grandmother by himself. Whereas Matt is able to meet the expectations Coach Taylor has for him at home and on the field, a similarly fatherless Tim Riggins struggles after he leaves his boots on the football field where he has just lost a state championship. Despite their differences, Coach Taylor’s love for both young men is not conditional. 
At the end of “Friday Night Lights,” Matt is joining the Taylor family on an official basis, having become engaged to Coach Taylor’s daughter, Julie (Aimee Teegarden), while Riggins falls less formally under the coach’s protection. Coach Taylor has helped secure Riggins’s parole and given the young man a standing offer of help should Riggins choose to ask for it. J.D. McCoy (Jeremy Sumpter), a young quarterback who replaces Saracen in the third season of the show, finds Coach Taylor to be a kinder man than his own bullying father, Joe (D.W. Moffett). When Coach Taylor calls Child Protective Services after Joe attacks J.D., the choice becomes too much for the young man, who sides with the father who hurt him rather than the father figure who is trying to protect him. Vince Howard (Michael B. Jordan), whom Coach Taylor takes on as his quarterback when he restarts the football program at the reopened East Dillon High School, makes a different choice. He briefly reunites with his estranged father, Ornette (Cress Williams). Later, Vince sees how Ornette, newly released from prison, threatens the fragile balance that Vince and his mother have built together, and he returns to Coach Taylor’s tutelage. And most touchingly, Coach Taylor’s boys are still young enough to confess to their fears and insecurities and ask for help in matters big and small, rather than keeping their own, still-developing counsel. It was always inevitable that a boy as good as Landry Clarke (Jesse Plemons) would be unable to keep the secret that he had killed the man who attacked his first love. Later, Landry anxiously asks Tami Taylor whether there is some flaw in him that drives away women. The scene in which Matt Saracen finally confesses his worst fear to Coach Taylor, the terror that there is something in him that makes everyone leave him behind, is among the most touching of the series.
Even Luke Cafferty (Matt Lauria) and Tim Riggins — the former rendered inexpressive by a surfeit of parenting, the latter by his parentless upbringing — manage to give the girls they love some understanding of how deep their feelings run. The tenderness of “Friday Night Lights” is not limited to boys on the cusp of manhood. Instead, it extends to the adults in the series, in beautiful and surprising ways. I am particularly in awe of Brad Leland’s performance as chief football booster Buddy Garrity. Buddy could have been a textbook Difficult Man, a toxic middle-aged king swaggering through his patch of Texas, or a paper-thin lyric from a Bruce Springsteen song, existing mostly to improve Coach Eric Taylor by comparison. Leland’s face collapses when Buddy’s wife, Pam (Merrilee McCommas), refuses for a final time to reconcile with him after he damages their marriage through a careless affair. He tucks his chin down to his chest when his younger children vent their anger at him during what was supposed to be a pleasant family camping trip that has gone badly awry. Buddy gets drunk, and he gets anxious, and he can be an awful pest. The emotional qualities that make Buddy an entitled nag and a bad husband also turn out to be what make him capable of growth. It is a shame that “Friday Night Lights” dropped the subplot that involved Buddy parenting Santiago (Benny Ciaramello), a promising football prospect — it was lovely to watch Buddy’s feelings for the boy blossom from an initial interest in how Santiago might help the Dillon Panthers into full parental affection. Buddy’s growing sensitivity is what leads him to renounce his beloved alma mater and the boosters who are taking it in a new direction in the fourth season of “Friday Night Lights.” Buddy loves his state championship ring and the prospect of other boys from Dillon wearing similar baubles, but Coach Taylor gets to him. Buddy finds that how Dillon wins has come to matter to him. 
When he makes a painful switch in his loyalties to help resurrect the East Dillon Lions after the town’s second high school reopens, Buddy’s bluff camaraderie with the black stars of Lions teams past takes him further than Coach Taylor’s awkward efforts to sell himself to the men who are skeptical that a white coach from a white school can rebuild a black program. “Friday Night Lights” allows men these tremendous vulnerabilities not because the show believes that men are weak, but because it knows they are strong. Boys in Dillon, Tex., drink too much, they pretend that they are thinking about buying motorcycles, they distance themselves from the girls they love. Men cheat, push their children too hard and take too long to do right by the women they love so much. But unlike the anti-heroes who dominate so many cable dramas or the action heroes who shoot their ways through big screen after big screen, the good boys and men of “Friday Night Lights” get over these behaviors. Their need to prove their masculinity is the thing of a moment, or at worst, something they conquer after considerable trouble. Talented young quarterback J.D. McCoy and his abusive, obsessive father, Joe, are less villains than tragic figures. They never find the same security that means Matt Saracen can marry the girl who caught him kicking cardboard boxes in an alley, that gave Jason Street the courage to chase his family all the way to New York, that Tim Riggins thought he left on the football field and finds again, or that lets Coach Taylor put his wife’s career before his own.
FOR SALE IS A ONE OWNER 2007 TOYOTA FJ CRUISER 6MT 4X4. ODOMETER READS 178K MILES. OPTIONED WITH THE FOLLOWING FROM FACTORY. 6 SPEED MANUAL TRANSMISSION IS VERY RARE AND ALMOST IMPOSSIBLE TO FIND.
In 1027 lateral radiograms of the ankle in a Caucasian population, 161 plantar and/or dorsal calcaneal spurs (15.7%) were diagnosed. Plantar spurs were more common than dorsal spurs (11.2 and 9.3%, respectively). The prevalence of both types of spur increases considerably with rising age. Dorsal spurs appear slightly earlier than plantar spurs. Spur frequencies are similar in left and right feet. Plantar spurs were significantly (p < 0.0001) more common in women than in men overall, while dorsal spurs were more frequent in men than in women up to the age of 70. The previously reported higher frequencies of plantar and dorsal calcaneal spurs in women than in men are probably a result of a disproportionately higher number of older women in the groups studied. In forensic medicine, calcaneal spurs provide evidence for the identity and age of unknown corpses and, to a certain extent, their profession, physical activities and constitution during life.
Bone mineral content in idiopathic calcium nephrolithiasis. The calcium content of the central one third of the skeleton was measured using neutron activation analysis in 109 patients with idiopathic calcium nephrolithiasis. The bone mineral content (calcium bone index or CaBI, corrected for body size) was significantly decreased by 5.2% in 20- to 60-year-old patients with calcium nephrolithiasis (p less than 0.01). Under age 50 the decrease was more marked in 64 males (7.1%; p less than 0.02) than in 21 females (4.1%; p = NS). There was a significant negative correlation of CaBI with fasting urine calcium/creatinine ratio (r = 0.39; p less than 0.01), but no correlation with age or indices of parathyroid function. The decrease in bone mineral content did not appear to be progressive. The decrease in CaBI indicates negative calcium balance, either in the past or at present, in patients with calcium nephrolithiasis and does not favour increased intestinal absorption as a primary cause. The lack of correlation of CaBI with parameters of parathyroid function does not support a primary renal loss of calcium. The results suggest that increased bone turnover may be an important component of disordered calcium metabolism in patients with calcium nephrolithiasis.
Guidance to bone morbidity in children and adolescents undergoing allogeneic hematopoietic stem cell transplantation Introduction Allogeneic hematopoietic stem cell transplantation (HSCT) is the standard of care in children with very high-risk acute leukemia. Through advances in donor selection and supportive care strategies, cure rates in patients with high-risk acute lymphoblastic leukemia (ALL) are approaching 70% in large multi-institutional trials. However, this success comes at the cost of complications and sequelae from chemotherapy and HSCT with negative impact on quality of life (QoL). These complications are increasingly being recognized and have become the focus of research in childhood leukemia survivors. Whilst little is known about complications specifically attributable to allogeneic HSCT in children with high-risk leukemia compared to chemotherapy alone, their overall greater number and severity are uncontroversial. Notably, side-effects vary between conditioning regimens, e.g., depending on the use of total body irradiation and the drugs administered. One of the most prevalent and debilitating complications of ALL therapy and HSCT is bone morbidity, including osteoporosis (OP) and osteonecrosis (ON). Reported incidences range between 20-60% for reduced bone mass accrual, including OP, and 4-40% for ON, respectively. However, these estimates are mostly based on retrospective studies using dual energy X-ray absorptiometry (DXA) for bone mineral density (BMD) assessment, and include several HSCT approaches and heterogeneous underlying diseases. In the setting of leukemia, clinically relevant fractures are associated with low BMD. A prospective surveillance study in children (STOPP) confirmed a vertebral fracture prevalence of 16% already at diagnosis of ALL. The proportion of children with fractures at any skeletal site over the 6-year observation period was 36%, with 71% of all incident fractures occurring in the first 2 years of chemotherapy.
Other studies reported a two- to six-fold increase in fracture rates during chemotherapy compared with healthy controls. Due to the lack of vertebral fracture assessment (VFA) and reliance on DXA-based studies alone, it is difficult to determine the real extent of bone morbidity in older studies. Of note, studies exploring bone health in children and adolescents prior to and following allogeneic HSCT for high-risk ALL are very sparse. In the STOPP study, only 4.8% of 186 ALL patients underwent HSCT. Across all ALL patients, predictors of incident fractures were cumulative corticosteroid dose and vertebral fractures at diagnosis. Hence, it remains unclear whether allogeneic HSCT adds additional risk to bone health compared to standard ALL treatment. Notwithstanding, a number of studies reporting on quantitative computed tomography (QCT) measures in long-term survivors of allogeneic HSCT in childhood demonstrated significant deficits, including growth, spine and tibia trabecular volumetric BMD, cortical dimensions, and muscle cross-sectional area at a median of 5 years after HSCT. Timely recognition of bone disease is crucial for initiation of treatment and for prevention of fractures, pain, loss of mobility and deformity and, thus, for reducing long-term morbidity and adverse consequences on QoL. Therefore, assessment of bone health is indicated at diagnosis of leukemia and regularly after allogeneic HSCT. Here, we present guidance on the most important bone morbidities, reduced bone mass accrual/OP and ON, in children and adolescents undergoing allogeneic HSCT, together with recommendations for clinical practice. For patients affected by sickle cell disease, specific guidelines should be considered, as, compared to other HSCT patients, further mechanisms add to their bone disease.
Methods In order to improve the outcome of allogeneic HSCT in children and adolescents, the International BFM Stem Cell Transplantation (I-BFM SCT) Committee and the Pediatric Disease Working Party (PDWP) of the European Society for Blood and Marrow Transplantation (EBMT) address and discuss various topics associated with allogeneic HSCT in working groups, aiming to provide guidance for care. As bone morbidity is a complex topic requiring particular consideration, a pediatric bone specialist and member of the European Society for Pediatric Endocrinology (ESPE) working group on bone and growth plate was involved in this process so that the topic could be approached by an interdisciplinary team. To search for evidence in the field of acute leukemia/HSCT and low BMD/OP/ON, a PubMed-based literature search was conducted using the MeSH terms children/adolescents, acute leukemia (ALL, AML, leukemia), HSCT, and low BMD, reduced bone mass accrual, osteoporosis, vertebral fractures, and osteonecrosis, respectively. The titles and abstracts of identified articles were checked against the cohort and conditions reported (only those studies primarily reporting on children and adolescents, leukemia and allogeneic HSCT were kept). Preference was given to articles written in English. One author (MK) prepared an evidence-based summary of the literature relating to bone mass deficits and osteonecrosis and circulated it among all authors. The best available evidence was used to develop recommendations. Recommendations and evidence are graded as follows: Level of evidence (LoE) I: evidence from at least one randomized trial; Level II: evidence from cohort studies, case-control studies or time series; Level III: opinions of respected authorities, based on clinical experience, descriptive studies or reports of expert committees. Where no evidence is available, we describe our own practice. The authors presented the revised summaries to the group for discussion at three consecutive rounds.
All authors approved the recommendations of this guidance. This guidance includes the cumulative evidence up to the end of 2018. As OP and ON are two completely different conditions with regard to underlying pathophysiology, risk factors, diagnostic steps, and treatment, we subsequently summarize our guidance in two paragraphs. The paragraphs are consistently structured as a brief overview of definitions, symptoms and diagnostics, and a summary of published evidence including incidence and risk factors (supplemented by an overview of studies), followed by our suggestions for clinical practice (including a diagnostic workflow). In addition, references on treatment recommendations are given whenever available. Low bone mass accrual and osteoporosis Definition: According to the International Society for Clinical Densitometry (ISCD), low BMD is defined as a bone mineral content or areal BMD Z-score that is less than or equal to -2.0, adjusted for age, gender, and body size, as appropriate. The diagnosis of OP requires at least one vertebral compression fracture or a combination of low BMD and a clinically significant fracture history. The latter is defined as at least two long-bone fractures before the age of 10 years, or three or more long-bone fractures before the age of 19 years, in the absence of high-energy trauma. Symptoms: Vertebral fractures often remain asymptomatic and, thus, will be missed and OP not diagnosed unless imaging is performed. However, back pain is a well-known sign of vertebral fractures. Diagnostics: Bone mass is measured using a dual-energy X-ray absorptiometry (DXA) scan of the lumbar spine (L1-L4) and/or whole body and expressed relative to age- and body-size-matched norms (Z-score). Low bone mass is defined as a BMD Z-score at or below -2.0. For children under the age of 5 years, DXA reference values are lacking, as children have to lie still during measurement.
Since BMD is underestimated in children with short stature, and since chronically ill children are frequently short, adjustments for height and bone volume are necessary. Typical adjustments are the calculation of lumbar spine bone mineral apparent density (BMAD, in g/cm3) or BMD adjustment for height Z-score at the lumbar spine, and removing the head from the total body scan (total body less head BMD). Suspected (extremity) fractures should be confirmed using conventional x-ray. Particular attention is needed for vertebral compression fractures. These are usually not recognized clinically at the time of their occurrence. However, their detection confirms the presence of OP and poses a substantial risk for subsequent fractures independent of BMD. Noteworthy, a BMD Z-score > -2.0 does not preclude the possibility of skeletal fragility and increased fracture risk. Thus, screening for vertebral fractures using vertebral fracture assessment (VFA) by DXA, lateral spine x-rays, or MRI at regular intervals is necessary. Summary of published evidence: It was long believed that adolescents who fail to appropriately accrue bone mass and/or lose part of it, as after HSCT, are at risk for life-long osteopenia, early-onset OP, and fractures. However, this 'peak bone mass' concept has been heavily disputed. The STOPP prospective trial demonstrated that vertebral fractures are most frequent and severe in the first 2 years of ALL therapy. In survivors of childhood HSCT, who are at potentially greater risk for inadequate bone accrual and metabolism, having had more osteotoxic therapy, the incidence of clinically asymptomatic and symptomatic fractures still needs to be studied. An overview of studies reporting on BMD deficits and fractures in children, adolescents and young adults following HSCT is given in Table 1.
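The size adjustments described above can be sketched numerically. The following is a minimal illustration of the commonly used Carter bone mineral apparent density (BMAD = BMC / projected area^1.5) for the lumbar spine, plus a Z-score against reference norms; the scan values and the reference mean/SD below are made-up placeholders, not published norms.

```python
# Illustrative sketch only: Carter-style BMAD size adjustment for lumbar
# spine DXA and a Z-score against hypothetical reference values.

def bmad(bmc_g, projected_area_cm2):
    """Lumbar spine BMAD (g/cm^3) = BMC / projected area^1.5."""
    return bmc_g / projected_area_cm2 ** 1.5

def z_score(value, ref_mean, ref_sd):
    """Standard deviations from the reference mean."""
    return (value - ref_mean) / ref_sd

# Hypothetical scan: BMC of 35 g over an L1-L4 projected area of 45 cm^2
patient_bmad = bmad(35.0, 45.0)
# Placeholder reference norms (NOT real pediatric reference data)
z = z_score(patient_bmad, ref_mean=0.13, ref_sd=0.015)
# A Z-score at or below -2.0 would meet the ISCD definition of low bone mass
print(f"BMAD = {patient_bmad:.4f} g/cm^3, Z-score = {z:.1f}")
```

In practice the Z-score would be computed against age-, sex- and size-appropriate published reference curves, not a single mean/SD pair as here.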
Suggestions for clinical practice (prevention): To date, there is no evidence showing a benefit of any intervention for the prevention of fractures or bone mass deficits in ALL, or in the context of HSCT. The first biochemical sign of osteomalacia is an increasing parathyroid hormone (PTH) level, indicating low dietary calcium intake, low vitamin D status, or malabsorption. Osteoporosis, in contrast to osteomalacia and rickets, cannot be prevented by giving vitamin D. We therefore only provide general recommendations:
- Measure calcium, phosphorus, alkaline phosphatase (ALP), PTH, and 25-hydroxy vitamin D (25(OH)D) on a regular basis (e.g. every six months during the first year, afterwards yearly; adapted in patients with chronic GvHD (cGvHD)). (LoE 2)
- Adequate calcium and vitamin D intake are important for preventing osteomalacia and rickets but will not prevent or treat osteoporosis. The minimum intakes known to prevent rickets are 500 mg/day of calcium and 10 µg (400 IU)/day of vitamin D; higher vitamin D intakes (12.5-25 µg or 500-1,000 IU) have been recommended for children and adolescents at risk of vitamin D deficiency due to factors and conditions that reduce synthesis or intake (e.g. restricted exposure to sun, high latitude during the winter/spring season, and low dietary calcium intake).
Suggestions for clinical practice (treatment): In principle, assessment of the treatment indication and OP treatment should be performed in consultation with a pediatric endocrinologist or metabolic bone specialist.
- Diagnosis and treatment of OP in children and adolescents should follow the ISCD guidance on pediatric OP. (LoE 2) Therein, BP treatment is reserved for older patients with overt bone fragility and low potential for BMD restitution and vertebral body reshaping.
- In case of significant functional impairment limiting QoL, age becomes less important and treatment may be initiated.
(LoE 2)
- However, the ISCD guidance only provides recommendations for children with standard ALL. As complications and poor outcomes are probably more likely in children and adolescents with ALL undergoing HSCT, BP therapy may be used in younger patients with serious complications, bone pain, and therefore less potential for recovery, as long as the ISCD criteria for OP are fulfilled. Hence, attention is needed to secondary prevention in children with less potential to recover spontaneously from low BMD and/or fractures, and therefore an increased risk of disease progression and disability. In children and adolescents, the potential to recover from bone fragility depends on the severity of bone morbidity, the remaining growth potential, and the persistence of risk factors. Consequently, children with limited or no potential for recovery, including older children with restricted linear growth potential, qualify for bone-targeted therapy. Furthermore, younger children with potential for spontaneous recovery may warrant BP treatment if OP significantly impacts their QoL through pain and functional limitation. The treatment of leukemia- and HSCT-related osteoporotic fractures should follow these general principles of bone-targeted treatment of OP in children. In the future, alternative agents may become further treatment options. For example, the receptor activator of nuclear factor kappa-B ligand (RANKL) inhibitor denosumab operates by inhibiting bone resorption and, to a lesser degree, bone formation, and is commonly used in postmenopausal women. Osteonecrosis (ON) Definition: ON, also known as avascular necrosis, is defined as the death of a bone segment due to an imbalance between the actual and required blood flow, which can arise for various reasons. Symptoms: The clinical picture of ON is multifaceted and usually depends on ON stage and location. Most commonly, ON occurs in the midshaft of long bones, where it remains asymptomatic and harmless.
However, ON affecting the major joints is frequently associated with pain. At first, the pain is mostly stress-induced, caused by pressure on the affected bone, typically in the lower limbs. Subsequently, it becomes more constant and also occurs at rest. With further disease progression, including joint collapse, the joint surface loses its smooth shape and severe pain interferes with daily life. Other symptoms include restrictions in activities of daily living, such as climbing stairs and putting on shoes, as well as gait abnormalities; joint swelling, restricted mobility, stiffening, and the adoption of a relieving posture in particular are generally symptoms of far-progressed joint disease. The time between first symptoms and collapse of the bone may vary from several months to more than a year. Diagnostics: Magnetic resonance imaging (MRI) is the only appropriate imaging modality to show osteonecrotic lesions and allow their grading. Standard X-ray images may look normal in early stages and show clear changes only in advanced stages. Summary of published evidence: Risk factors for the development of ON include older age at HSCT, steroid treatment, cGvHD, and ON prior to HSCT. Other factors such as gender, obesity, total body irradiation, and other immunosuppressants have only inconsistently been reported to increase the incidence of ON. (11, 13) In addition, children already presenting with grade 1 ON at MRI screening within 6-8 months of ALL therapy are at increased risk of developing symptomatic ON grade 2 to 4. In an MRI-based single-center study, the prevalence of ON in children following HSCT was reported to be approximately 30%. In contrast, the cumulative incidence of symptomatic ON following HSCT in children and adolescents is reported to be 4-9%.
Most ON are diagnosed within two years following HSCT, with hips and knees being most frequently affected. Previous studies in children and adolescents with ALL exploring pharmacological interventions for ON, including BPs and prostacyclin analogs, lack sufficient-quality evidence, as previously reviewed; studies in children with ON after allogeneic HSCT are completely missing. New therapies targeting pathways in bone metabolism, such as anti-sclerostin antibody, may deserve prospective clinical trials in children after allogeneic HSCT. In general, surgical management is based on patient factors and lesion characteristics. In late-stage ON with joint infarction, surgical interventions comprise arthroplasty and surface replacement. In precollapse lesions, joint-preserving procedures including core decompression (CD) may be attempted. In non-cancer-related ON, data indicate that CD combined with cellular therapies (autologous or allogeneic bone marrow cells, mesenchymal stem cells, human bone morphogenetic protein), vascularized bone grafts, avascular grafts, combinations of the aforementioned, or rotational osteotomies is beneficial. Therapeutic approaches in children and adolescents with ALL have been previously reviewed. (82, 90) Summarizing remarks and outlook Children and adolescents undergoing allogeneic HSCT are at increased risk of OP and ON. Table abbreviations: TIRM, turbo inversion recovery magnitude; T1, T1-weighted MRI scans. *Some preliminary data suggest that interventions including core decompression plus mesenchymal stem cells may provide improved outcome if patients are treated at an early/precollapse stage. These data still need to be confirmed in children and adolescents with acute lymphoblastic leukemia.
For four years, Susan Hurlburt wondered what happened to her son. Neil Harris Jr. was last seen at the Inwood LIRR station wearing a hoodie under a thick Carhartt jacket on Dec. 12, 2014. She hadn’t heard from him since. Hurlburt, who grew up in Inwood but was living in upstate New York, looked all over the hamlet where her son had lived. She distributed missing person flyers and posted about him every week on Facebook, hoping someone would recognize him. She was optimistic Harris would someday resurface, but worried for his safety. He had a history of depression and other mental health issues, she said. And when she moved from upstate New York to North Carolina two years ago, she kept her Nassau County number in case Harris ever called and regularly checked in with the AWARE Foundation, a national nonprofit that helps locate missing people. Hurlburt learned just months ago that her son died last year, but has found solace in knowing that an entire community had in fact cared for him, as she had hoped. It was a Manhattan journalist who gave Hurlburt the gift of closure. Jessica Brockington, 55, was digging through a missing persons database for a story when she stumbled on a flyer for Harris, who was 30 when he disappeared. The man looked heavier, his beard shorter, but he had the same dark eyes as the homeless man she’d met while walking her two small dogs through Riverside Park in Manhattan. He went by "Stephen" and sat on the same bench near West 74th Street every day, through the muggy summer heat and in 20-degree weather, Brockington said. Though he was withdrawn, the man had become a fixture in the park, so neighbors noticed when he suddenly stopped showing up at his perch. Gisela Wielki, who lives near the park, often offered him food or kept him company on the bench. "He left an impression on all of us," Wielki said. "Just in his silence, he became such a member of the community and entered all of our lives." 
Harris was found dead in Manhattan in March 2017, Hurlburt said. Residents placed a plaque on the bench in his memory, and Wielki helped organize a memorial service at the nearby Christian Community Church where she's a pastor. The New York City Office of Chief Medical Examiner determined he died of an intestinal hemorrhage caused by a chronic ulcer. But because he wasn’t identified, Harris was buried in the city’s potter’s field on Hart Island in the Bronx, Hurlburt said. About a year later, Brockington found the missing persons notice and began to connect the dots. “I just knew it in my bones that was him,” Brockington said. Brockington went to Nassau County police for help. The department had put out alerts for Harris and thought briefly it had found him in Pennsylvania, but it didn’t have any other leads, Nassau County police Det. Raymond Olsen said. Brockington eventually spoke to Hurlburt and told her she may have found her son. She helped Hurlburt locate a photo of Harris in a federal database that lists the missing and unidentified. Hurlburt took one look and knew it was her son. Harris was thinner than when she had last seen him, but she had seen him at 300 pounds and 155 pounds, with dreadlocks and a shaved head. “That was my son, and I knew it beyond a shadow of a doubt,” she said. Hurlburt had Harris’ medical records sent to the city medical examiner’s office, and officials were able to identify him through an old arm injury, she said. Hurlburt is considering leaving Harris on Hart Island because that is where his father is also buried, she said. He never knew his father — who died in 1993 according to Hart Island records — and always wanted to meet him, she said. Hurlburt remembers her son had a lighter side at times, with a soft spot for the homeless. Once, without her knowing, he bought a tent and pitched it in their backyard for a young woman he found sleeping at the Inwood train station.
Another time, he invited a man he found sleeping in a van to the house for dinner. Hurlburt takes comfort knowing there were people who cared for him, too. Area residents have planned a memorial service for Harris this Sunday at the Christian Community Church on the Upper West Side, which Hurlburt and her daughter will attend. Hurlburt will get a chance to thank Brockington in person.
"It will cause serious harm to his young children by depriving them of a loving father and role model and will strip R.V. of the opportunity to heal through continued sustained treatment and the support of his close family." His opinion, first reported in the New York Law Journal, is the latest salvo in a war over whether penalties for possessing child pornography have gotten too harsh. The existing guidelines, Weinstein wrote, do not "adequately balance the need to protect the public, and juveniles in particular, against the need to avoid excessive punishment." The defendant, who agreed to speak to NBC News on the condition his name was not used, said he was surprised and relieved that Weinstein was so lenient after his guilty plea. "I prayed to God and took my chances," the 53-year-old father of five said. "I feel very remorseful. It's something that will never happen again." But child-abuse victims' advocates said they were appalled by Weinstein's reasoning. "I think Judge Weinstein's opinion minimizes the harm that is done to victims of these crimes from the mere act of viewing their images. It's a gross violation of privacy and an invasion of privacy that traumatizes them throughout their lives," said Paul Cassel, a former federal judge who is now a law professor at the University of Utah. In 2013, investigators remotely connected to the man's computer and downloaded four photos and videos showing men engaged in sexual acts with girls, including a 3-year-old and a 5-year-old, and they seized more porn on thumb drives with a search warrant, court papers said. The man also had "sexual" chats with underage girls online, but there was no evidence he sought physical contact with minors. When he pleaded guilty, the defendant said he understood the charge carried up to 10 years behind bars. Based on the specifics of his case, the federal guidelines called for a sentence of 6.5 to 8 years in prison. 
But Weinstein thought that was too much time for an offender who did not make, swap or sell child porn or try to abuse children. He said the five days the man served before making bail, plus seven years of court supervision and a fine, were punishment enough. The judge noted that the man was undergoing sex offender treatment and was deemed unlikely to relapse, and that a psychiatrist testified he was not a danger to his own or other children. He also noted that the Internet has made child pornography accessible to a much wider group of Americans who might not otherwise have been exposed to it. The man — who lost his $75,000-a-year job as a restaurant manager after his arrest — told NBC News he stumbled on child pornography while consuming legal, adult pornography online. "I just got caught up in it," he said. "It's not like I woke up and said, 'Listen, let me look at this stuff.' It kept popping up every time I was downloading." Weinstein is among a group of federal judges who have argued that sentencing ranges for possessing child pornography — which were doubled by Congress in 2003 — are too severe. The federal bench handed down sentences below the guidelines 45 percent of the time, the Associated Press reported in 2012. Those who favor tougher sentences point out that while many consumers of child pornography may never lay a hand on a child, some do. And all, they say, play a role in a system that promotes the abuse of children. "The viewing has a market-creation effect," Cassel said. "It ends up leading inexorably to the rape of children." Jennifer Freeman, an attorney who represents child-porn victims in efforts to obtain restitution, called Weinstein's opinion "a diatribe" and said he was using the particulars of one case to indict the entire sentencing structure.
"He's basically saying it's not worth too much punishment," she said, adding that she did not want to comment on whether the man Weinstein sentenced deserved more time than five days. That man said that he had done something wrong and was ashamed of it but that locking him up would not have served any purpose and would have "put my family living out on the street." "It should be illegal," he said of child pornography. "No child should be put through that process." But he added, "I would never physically do anything. I never had even a thought of it." Weinstein did not respond to a request for comment. Federal prosecutors said they had no comment.
Jimmy Fallon paid tribute to his late mother Gloria as he returned to The Tonight Show on Monday night for the first time since her death. Taylor Swift made a surprise appearance on the episode, performing her new track New Year’s Day. “She was not scheduled to do our show today. But we wanted something special for this first show back, so we asked her on a complete whim, since she had been in town doing SNL,” The Tonight Show writer and producer Mike DiCenzo wrote on Twitter. During his monologue, Fallon shared a story about how his mother would squeeze his hand three times to tell him that she loved him. Oddly enough, the memory bore a striking resemblance to the lyrics of Swift’s song. “Suddenly she sings the line, ‘Squeeze my hand 3 times in the back of the taxi.’ I nearly gasped. Tears. I think everyone in the audience started sobbing,” DiCenzo wrote. After performing the song, Swift gave a choked-up Fallon a tight hug.
Effect of calcium and stanozolol on calcitonin secretion in patients with femoral neck fracture. The role of calcitonin in the aetiology of postmenopausal osteoporosis remains uncertain. Oestrogen, an established therapy for postmenopausal osteoporosis, has been shown to enhance calcitonin secretion. In order to assess whether two other osteoporotic drug treatments, oral calcium and stanozolol (an anabolic steroid), may also affect calcitonin secretion, 20 elderly women with femoral neck fracture were randomly selected to receive either 880 mg calcium or 5 mg stanozolol daily for 12 weeks. Basal calcitonin and serum calcium were not altered significantly by either treatment. The calcitonin response to a 10 min infusion of calcium was enhanced following treatment with oral calcium but not stanozolol. This suggests one possible mechanism of action whereby calcium may exert its antiresorptive effect on bone and supports the use of oral calcium in the treatment of postmenopausal osteoporosis.
Batman ninja stars. Jawbone tomahawks. Oversized liquor bottles filled with dead sea horses. The U.S. Transportation Security Administration stops a wide array of strange, exotic and sometimes deadly items from coming onto airplanes. Now we can add 3D printed guns to the list. The TSA says a plastic revolver assembled with a 3D printer was among the 68 firearms the agency confiscated from carry-ons around the country during the week ending Aug. 5. TSA agents discovered the weapon in a passenger’s luggage during screening at the Reno-Tahoe International Airport in Nevada. The gun was a replica, but was loaded with five live .22-calibre bullets, the agency said. The fact that it was inoperable didn’t matter, the TSA said. Fake guns are treated just like real ones — permitted in checked bags, but banned in carry-ons. The gun might be the first 3D printed firearm confiscated in the U.S., TSA spokeswoman Lorie Dankers said. The passenger, whose name hasn’t been released, voluntarily left the gun and ammunition at the airport and boarded his flight. He wasn’t arrested or issued a citation, but could still face a civil fine of as much as $7,500, Dankers said. Guns and other weapons manufactured with 3D printers pose a major security risk in air travel. For one, they’re typically made of plastic and resin, allowing them to easily slip through airport metal detectors. They can also be broken down into their component parts, enabling carriers to store them in different places and reassemble them later. The godfather of 3D guns was designed by Cody Wilson, a 25-year-old law student at the University of Texas, who uploaded the blueprints for it on his website, Defense Distributed.
Wilson printed the gun, known as the Liberator, on an $8,000 printer and field tested it in May 2013. The State Department quickly ordered Wilson to remove the blueprints, and he complied, but not before they’d been downloaded some 100,000 times. The same month, reporters from the Daily Mail printed their own version of the Liberator and snuck it through security onto a crowded Eurostar train without setting off any alarms. “Two reporters passed completely unchallenged through strict airport-style security to carry the gun on to a London to Paris service in the weekend rush-hour, alongside hundreds of unsuspecting travellers,” the Daily Mail reported, saying the journalists had exposed a “massive international security risk.” The TSA has been criticized for security lapses as well. The agency came under fire last year when an internal investigation revealed that teams of inspectors were able to smuggle a range of banned items — including mock explosives and weapons — through airport security in nearly 70 tests conducted around the country. The acting TSA director resigned in response. Still, in 2015, the TSA confiscated a record 2,653 firearms in carry-on bags. By contrast, the agency found just 660 in passenger luggage in 2005. And earlier this year, TSA set another record by confiscating 73 guns in a week. The TSA’s Instagram account, where the agency posted pictures of the seized 3D revolver, offers an amusing — and mildly frightening — look at just how many prohibited items screeners recover on a daily basis. With files from The Associated Press.
Plus meridian incision for secondary implantation. We studied 25 consecutive secondary implantations with a minimum of 4 months follow-up to learn the effect on astigmatism of passing or not passing the incision through the most plus corneal meridian. Eighteen of the 19 cases having a most plus meridian incision had postoperative astigmatism of 1 diopter or less, and their average astigmatism was reduced by surgery. All six of the cases with the incision not passed through the most plus meridian had postoperative astigmatism greater than 1 diopter, and their average astigmatism was doubled by surgery. The astigmatic difference between the two patient groups was highly significant. A most plus meridian incision is recommended.
We are in Arthur C. Clarke land here with this incredible app from Quest Visual. "Any sufficiently advanced technology is indistinguishable from magic." This is a sort of magic. To get all technical, it is an OCR translation app that uses augmented reality. For the rest of us, go with magic. Using your iPhone’s camera, the app scans images of text, converts them into text, translates it from English to Spanish (or vice versa) and then injects the translated text back into the image. This is the future.
Analysis of the Magnetic Coupling Force in an Energy Harvester with a Rotational Frequency Up-Conversion Structure This paper proposes an analysis of the magnetic coupling force in an energy harvester with a rotational frequency up-conversion structure. The harvester consists of a piezoelectric cantilever with a tip magnet and a rotatable disk with a magnet fixed on its edge as the driving magnet; its operating principle for frequency up-conversion is introduced in detail. Since the magnetization direction of the driving magnet is along the radial direction of the disk, and is therefore time-varying during the rotation of the disk, traditional methods are not suitable for calculating the magnetic coupling force of the proposed energy harvester. Therefore, a novel theoretical model is established. Both simulation and experimental validation show that the proposed model achieves excellent accuracy and is in good agreement with the practical situation.
Light at the end of the tunnel?: The Great Indian Pharmacoeconomics story It is estimated that 20 million people in India fall below the poverty line each year because of indebtedness due to healthcare needs. This is indeed an alarming figure. Just to put things into perspective, the population of the financial capital of our country, Mumbai, is around 12 million, and that of the national capital, Delhi, is 10 million. Now, imagine a population the size of an entire city going bankrupt. Only 11% of the Indian population have health insurance coverage. Thus, everyone outside this 11%, and those who don't fall in the upper middle and upper socio-economic classes, is at risk of going bankrupt, God forbid, if they were to encounter a health crisis. Total health care costs can be divided into two major components: the cost of medicines and other costs. We would like to highlight the cost of medicines as a separate chunk because the average Indian household may spend about 50% of its total health expenditure on medicines alone. According to the National Sample Survey (NSS) for the year 1999-2000, in rural India the share of drugs in total out-of-pocket (OOP) expenditure was estimated to be nearly 83%, while in urban India it was 77%. The other costs include doctor's fees, medical/surgical procedure costs, laboratory test costs, in-hospital costs and so on. And there are the intangible costs that are difficult to estimate but nevertheless assume significant proportions. It is important to note that most health insurance schemes pay the in-hospital costs only. Inflation is touching a new high and the rupee is touching a new low. The cost of living is steadily increasing whereas the incomes of individuals are increasing at much lower rates. In India, the mighty "gold" and the acrid "onions" have been in the news for being among the priciest commodities, putting them out of the common man's reach. But drug prices are outstripping all commodities.
An examination of the price trends of 152 drugs in India reveals that antibiotics, anti-tuberculosis and anti-malarial drugs, drugs for cardiac disorders, etc. registered price increments of 1 to 15% per annum during 1976-2000 (National Commission on Macroeconomics and Health, Ministry of Health and Family Welfare, Government of India, 2005). The reason is that only one-tenth of the drug market is price controlled, as against nearly 90% during the late 1970s (Government of India, 2005). If this was not enough, both communicable and lifestyle diseases are increasing at alarming proportions. It is estimated that by 2015 the number of HIV/AIDS cases would be three times that in 2005, entailing possibly a corresponding increase in the existing prevalence level of tuberculosis of about 85,00,000 cases. Cardiovascular diseases and diabetes will more than double. Cancers will rise by 25%. In the coming 5 years there will be an enormous increase in various health disorders, thereby increasing healthcare costs geometrically (National Commission on Macroeconomics and Health, Ministry of Health and Family Welfare, Government of India, 2005). At such times, the decision of the National Pharmaceutical Pricing Authority (NPPA) to regulate prices of 348-odd essential medicines by means of the Drugs Prices Control Order 2013 is a welcome respite (Department of Pharmaceuticals, 2013). But how far this step will go in mitigating the double blow of increasing disease and skyrocketing prices, only time will tell. Many loopholes have been cited in the DPCO (First Post India, 2013; Rajagopal, 2013; Venkiteswaran, 2013). Some of them are as follows: 1. Only drugs on the National List of Essential Medicines (NLEM) will be included. The NLEM has itself been criticized for improper selection of drugs (Manikandan and Gitanjali, 2013). 2. Manufacturers might change dosages and formulations to avoid the DPCO. 3. "Market based pricing" has been used to determine the ceiling price.
Past experience has not been good: in 2008 the government initiated a similar program, the "Jan Aushadhi" program, which hit many roadblocks and did not accomplish what it had purported to. The private sector accounts for more than 80% of total health care spending in India. Unless there is a decline in the combined central and state government deficit, which stands at roughly 8.5% as of 2012-13, the opportunity for significantly higher public health spending will also be limited (Government of India, 2013). The government of India started the Rashtriya Swasthya Bima Yojana (RSBY), literally "National Health Insurance Programme", for individuals below the poverty line in 2008. It provides annual hospitalization cover of up to Rs. 30,000 or $485 for a family of five members through health insurance companies (Ministry of Labour and Employment, 2013). But again, it doesn't cover outpatient costs. Also, it has been said that the coverage under the scheme is less than desirable and, as a result, many poor still remain uninsured (Dhoot, 2011, 2013). The exact impact of the scheme still remains to be evaluated. Another piece of irony is that since many villages do not even have hospitals, some of these hospitalization schemes can't be used at all. Pharmacoeconomics research can take us toward the final aim of making drugs affordable to all. Though pharmacoeconomics is in its infancy in India, it is rapidly evolving and good quality studies are being conducted across the nation. The first proposed pharmacoeconomics guidelines draft was recently prepared by experts in the field. This notwithstanding, overall, for the common man, the situation doesn't look very promising and a lot needs to be done to make medicines affordable to all.
Just to put things into perspective, the population of the financial capital of our country, Mumbai, is around 12 million and that of the national capital, Delhi, is 10 million. Now, imagine a population, the size of an entire city, going bankrupt. Only 11% of the Indian population have health insurance coverage. Thus, everyone outside this 11% and those who don't fall in the upper middle and upper socio-economic classes, is at a risk of going bankrupt, God forbid, if they were to encounter a health crisis. Total health care costs can be divided into two major components: cost of medicines and other costs. We would like to highlight the cost of medicines as a separate chunk because, the average Indian household may spend about 50% of their total health expenditures on medicines alone. According to the National Sample Survey (NSS) for the year 1999-2000, in rural India, the share of drugs in the total Out of Pocket expenditure (OPP) was estimated to be nearly 83%, while in urban India, it was 77%. The other costs include doctor's fees, medical/surgical procedure costs, laboratory test costs, in-hospital costs and so on. And there are the intangible costs that are difficult to estimate but nevertheless assume significant proportions. It is important to note that most of the health insurance schemes pay the in-hospital costs only. Inflation is touching a new high and the rupee is touching a new low. The cost of living is steadily increasing whereas the incomes of individuals are increasing at much lower rates. In India, the mighty "gold" and the acrid "onions" have been in the news for being one of the priciest commodities putting them out of the common man's reach. But drug prices are outstripping all commodities. An examination of the price trends of 152 drugs in India, reveals that antibiotics, anti-tuberculosis and anti-malarial drugs, and drugs for cardiac disorders, etc. 
registered price increments from 1 to 15% per annum during 1976-2000 (National Commission on Macroeconomics and Health Ministry of Health and Family Welfare Government of India, 2005). The reason being that only one-tenth of drug market is price controlled as against nearly 90% during the late 1970s (Government of India, 2005). If this was not enough, both communicable and lifestyle diseases are increasing at alarming proportions. It is estimated that by 2015 the number HIV/AIDS cases would be three times more than that in 2005, entailing possibly a corresponding increase in the existing prevalence level of tuberculosis of about 85,00,000 cases. Cardiovascular diseases and diabetes will more than double. Cancers will rise by 25%. In the coming 5 years there will be an enormous increase in various health disorders thereby increasing the healthcare costs geometrically (National Commission on Macroeconomics and Health Ministry of Health and Family Welfare Government of India, 2005). At such times, the decision of the National Pharmaceutical Pricing Authority (NPPA), to regulate prices of 348-odd essential medicines by means of the Drugs Prices Control Order 2013, is a welcome respite (Department of Pharmaceuticals, 2013). But how long will this step go in mitigating the double blow of increasing disease and skyrocketing prices, only time will tell. Many loopholes have been cited in the DPCO (First Post -India, 2013;Rajagopal, 2013;Venkiteswaran, 2013). Some of them are as follows: 1. Only drugs on the National List of Essential Medicines (NLEM) will be included. The NLEM has itself been criticized for improper selection of drugs (Manikandan and Gitanjali, 2013). Manufacturers might change dosages and formulations to avoid the DPCO. 3. "Market based pricing" has been used to determine the ceiling price. 
Past experience has not been good: in 2008 the government initiated a similar program, the "Jan Aushadhi" program, which hit many roadblocks and failed to achieve what it had purported to. The private sector accounts for more than 80% of total health care spending in India. Unless there is a decline in the combined central and state government deficit, which stands at roughly 8.5% as of 2012-13, the opportunity for significantly higher public health spending will be limited (Government of India, 2013). The government of India started the Rashtriya Swasthya Bima Yojana (RSBY), literally "National Health Insurance Programme," for individuals below the poverty line in 2008. It provides annual hospitalization cover of up to Rs. 30,000 (about $485) for a family of five through health insurance companies (Ministry of Labour and Employment, 2013). But again, it does not cover outpatient costs. Also, it has been said that the coverage under the scheme is less than desirable and, as a result, many poor still remain uninsured (Dhoot, 2011, 2013). The exact impact of the scheme still remains to be evaluated. Another irony is that since many villages do not even have hospitals, some of these hospitalization schemes cannot be used at all. Pharmacoeconomics research can take us toward the final aim of making drugs affordable to all. Though pharmacoeconomics is in its infancy in India, it is rapidly evolving, and good-quality studies are being conducted across the nation. The first proposed draft of pharmacoeconomic guidelines was recently prepared by experts in the field. This notwithstanding, overall, for the common man the situation does not look very promising, and a lot needs to be done to make medicines affordable to all.
RIYADH: King Salman’s visit to Malaysia, the first country on the Saudi delegation’s month-long tour of Asia, comes after more than 55 years of strong diplomatic relations between the two countries. Saudi Arabia and Malaysia opened embassies in Kuala Lumpur and Jeddah, respectively, in mid-1961. The relationship can be described as deep-rooted, with state visits dating back many decades. The first was made by the late King Faisal in 1970; the late King Abdullah visited in January 2006. Malaysia’s current Prime Minister Najib Razak has also visited the Kingdom. As an extension of military cooperation between the two countries, Malaysian troops have participated in the Saudi-led coalition to restore legitimacy in Yemen, as well as in the “North Thunder” joint military exercises. Malaysia is also a member of the Saudi-led Islamic military alliance against terrorism, and joined in condemning the Houthis’ targeting of Makkah with a missile. Saudi Arabia and Malaysia have also signed several agreements and memoranda of understanding in the educational and tourism fields. Saudi exports to Malaysia are dominated by crude oil and related products, while TV and video devices are among the most prominent Malaysian exports to the Kingdom.
Microbe forensics: Oxygen and hydrogen stable isotope ratios in Bacillus subtilis cells and spores Bacillus subtilis, a Gram-positive, endospore-forming soil bacterium, was grown in media made with water of varying oxygen (δ¹⁸O) and hydrogen (δD) stable isotope ratios. Logarithmically growing cells and spores were each harvested from the cultures and their δ¹⁸O and δD values determined. Oxygen and hydrogen stable isotope ratios of organic matter were linearly related to those of the media water. We used the relationships determined in these experiments to calculate the effective whole-cell fractionation factors between water and organic matter for B. subtilis. We then predicted the δ¹⁸O and δD values of spores produced in nutritionally identical media and local water sources for five different locations around the United States. Each of the measured δ¹⁸O and δD values of the spores matched the predicted values within a 95% confidence interval, indicating that stable isotope ratio analyses may be a powerful tool for tracing the geographic point of origin of microbial products.
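The calibrate-then-predict workflow described in the abstract amounts to a simple linear fit. The sketch below illustrates it with invented δ¹⁸O values (per mil), not the paper's data; the fitted slope and intercept play the role of the effective whole-cell fractionation relationship:

```python
import numpy as np

# Hypothetical calibration data: delta-18O of culture media water (per mil)
# and the measured delta-18O of spore organic matter grown in that water.
water_d18O = np.array([-15.0, -10.0, -5.0, 0.0, 5.0])
spore_d18O = np.array([12.1, 15.3, 18.2, 21.4, 24.5])

# Fit the linear relationship delta_spore = m * delta_water + c,
# which encodes the effective water-to-organic-matter fractionation.
m, c = np.polyfit(water_d18O, spore_d18O, 1)

def predict_spore_d18O(local_water_d18O):
    """Predict spore delta-18O from the delta-18O of a local water source."""
    return m * local_water_d18O + c

# Forensic use: predict the spore value for a new location's water and
# compare the prediction with a measured sample.
predicted = predict_spore_d18O(-8.0)
```

A measured spore value falling within the prediction's confidence interval is what ties a sample to a candidate water source.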
Contribution to the thermodynamics of protein folding from the reduction in water-accessible nonpolar surface area. Protein folding and the transfer of hydrocarbons from a dilute aqueous solution to the pure liquid phase are thermodynamically similar in that both processes remove nonpolar surface from water and both are accompanied by anomalously large negative heat capacity changes. On the basis of a limited set of published surface areas, we previously proposed that heat capacity changes (ΔC°p) for the transfer of hydrocarbons from water to the pure liquid phase and for the folding of globular proteins exhibit the same proportionality to the reduction in water-accessible nonpolar surface area (ΔAnp). The consequence of this proposal is that the experimental ΔC°p for protein folding can be used to obtain estimates of ΔAnp and of the contribution to the stability of the folded state from removal of nonpolar surface from water. In this paper, a rigorous molecular surface area algorithm is applied to obtain self-consistent values of the water-accessible nonpolar surface areas of the native and completely denatured states of the entire set of globular proteins for which both crystal structures and ΔC°p of folding have been determined, and for the set of liquid and liquefiable hydrocarbons for which ΔC°p of transfer are known. Both processes (hydrocarbon transfer and protein folding) exhibit the same direct proportionality between ΔC°p and ΔAnp. We conclude that the large negative heat capacity changes observed in protein folding and other self-assembly processes involving proteins provide a quantitative measure of the reduction in the water-accessible nonpolar surface area and of the contribution of the hydrophobic effect to the stability of the native state and to protein assembly.
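The practical consequence the abstract describes is that the proportionality ΔC°p = k·ΔAnp can be inverted to estimate buried nonpolar surface area from a measured heat capacity change. The constant below is a placeholder for illustration only, not the paper's fitted value:

```python
# Invert the central relation dCp = k * dAnp to estimate the nonpolar
# surface area removed from water on folding. k is a HYPOTHETICAL
# proportionality constant, not the value fitted in the paper.
K_CAL_PER_MOL_K_PER_A2 = 0.30

def nonpolar_area_change(dCp_folding):
    """Estimate dAnp (in A^2) from an experimental dCp of folding
    (in cal mol^-1 K^-1); negative means area is removed from water."""
    return dCp_folding / K_CAL_PER_MOL_K_PER_A2

# Example: a protein with dCp = -1500 cal/(mol K) on folding.
dAnp = nonpolar_area_change(-1500.0)  # ~ -5000 A^2 buried, under this k
```

The sign convention mirrors the text: folding carries a large negative ΔC°p, so the estimated ΔAnp is negative, i.e. nonpolar surface is removed from water.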
The Effects of Job Demands, Job Resources and Intrinsic Motivation on Emotional Exhaustion and Turnover Intentions: A Study in the Turkish Hotel Industry ABSTRACT This study develops and tests a model which investigates the simultaneous effects of job demands, job resources, and a personal resource (intrinsic motivation) on emotional exhaustion and turnover intentions. Frontline hotel employees in Ankara, Turkey serve as the study setting. Among others, results show that job demands (role conflict and role ambiguity) trigger frontline employees' emotional exhaustion and turnover intentions. Job resources (supervisory support, training, empowerment, and rewards) and intrinsic motivation reduce emotional exhaustion. Implications of the findings are discussed and directions for future research are offered.
POLICE dealing with a major gas leak at an address in Gorleston have confirmed it has now been made safe. A 200m cordon has now been stood down and residents have been allowed back to their properties. Norfolk Fire Service remain on scene.
Intracorneal hydrogel lenses and corneal aberrations. PURPOSE To investigate the optical performance of the cornea based on corneal aberrometry following intracorneal hydrogel lens implantation. METHODS In a retrospective, nonconsecutive, observational study, the anterior corneal surface aberration profiles of four hyperopic eyes previously implanted with an intracorneal hydrogel lens were studied by videokeratographic elevation maps before and 6 months after surgery. RESULTS Intracorneal hydrogel lenses reduced the optical performance in all four eyes, increasing spherical aberrations by mean factors of 1.87 and 1.95, coma aberrations by mean factors of 2.98 and 3.01, and total higher-order aberrations by mean factors of 2.6 and 2.17 at 3.0-mm and 6.5-mm pupils, respectively (P<.005). CONCLUSIONS Intracorneal hydrogel lenses decreased the optical performance of the cornea by significantly increasing spherical, coma, and total higher-order aberrations.
SEOUL, South Korea (AP) — North Korea abruptly withdrew its staff from a liaison office with South Korea on Friday, a development that is likely to put a damper on ties between the countries and further complicate global diplomacy on North Korea’s nuclear program. South Korean President Moon Jae-in’s office said presidential national security adviser Chung Eui-yong convened an emergency meeting of the National Security Council to discuss the North Korean withdrawal. Moon says inter-Korean reconciliation is crucial for achieving progress in nuclear negotiations, but the breakdown of last month’s summit between U.S. President Donald Trump and North Korean leader Kim Jong Un has created a difficult environment to push engagement with the North. North Korean state media have recently demanded that South Korea distance itself from the U.S. and resume joint economic projects that have been held back by the U.S.-led sanctions against the North. Last Friday, North Korean Vice Foreign Minister Choe Son Hui said her country has no intention of compromising or continuing the nuclear talks unless the United States takes steps commensurate with those the North has taken, such as its moratorium on missile launches and weapons tests, and changes its “political calculation.” She said Kim would soon decide whether to continue the talks and the moratorium. While the liaison office was one of the main agreements reached in three summits between Moon and Kim last year, Chun said it’s too early to say whether North Korea is reneging on the deals.
Ayr led three minutes from the interval when Jamie Adams scooped the ball home from close range. Brechin levelled when on-loan Alloa striker Isaac Layne sent an 18-yard shot past Greg Fleming after a corner had been diverted into his path. McCall said: “We may still be at the top but I would rather have still been there with a win behind us. “We had a great chance not long after we scored but it was missed and then their equaliser came from a corner that was never a corner.” Dunfermline were held to a goalless draw at East End Park by Peterhead. The Blue Toon had the best opening of the first half when Leighton McIntosh broke free on the left, but his shot came back off a post. In the second half Dunfermline pushed the hardest for a winner but Peterhead goalkeeper Graeme Smith ensured that the scoresheet would stay blank when he pushed away a late netbound Ryan Wallace free-kick. Airdrie were romping to victory at Forfar with three goals in the first 29 minutes, but the Loons rallied and the Diamonds eventually edged a 3-2 victory. Alan Lithgow touched home a David Cox cross to open the scoring after 14 minutes and Cox added the second midway through the half. When Mark Fitzpatrick rose to head home a corner, the game looked cut and dried. However, on-loan Hibs youngster Scott Martin netted from close range just before the break. Martin hit his second from 20 yards just seven minutes into the second half, while Danny Denholm and Martyn Fotheringham spurned the best chances of grabbing a point. Stenhousemuir’s second win in a row has pushed them into the play-off race, the Warriors enjoying a 2-1 success at Stranraer. Mark Gilhaney converted a cross from Jason Scotland to open the scoring on 18 minutes. Scotland then grabbed his third goal in two games to increase the advantage, before Ryan Thomson’s injury-time consolation. 
The Albion Rovers and Cowdenbeath game was called off as Cliftonhill was waterlogged, with the sides set to try again tomorrow night.
Conservation of the greater glider (Petauroides volans) in remnant native vegetation within exotic plantation forest A field-validated metapopulation model of patch occupancy was used to examine the persistence of greater gliders (Petauroides volans) in patches of remnant native eucalypt forest in southeastern Australia. The model was based on a system of eucalypt patches embedded within a plantation forest of exotic radiata pine (Pinus radiata) at Buccleuch State Forest, New South Wales. The probability of local extinction in occupied patches was a function of patch size. The probability of colonization of empty patches was a function of the size and proximity of occupied patches. The results of the simulations suggested that suitable habitat should occupy approximately 10% (or more) of the total landscape to ensure the persistence of greater gliders. Rates of patch occupancy were maximized when suitable habitat was clustered into larger patches. Patches <3 ha in area were predicted to be of limited value as habitat for greater gliders. The predictions of the model are consistent with the observed persistence of greater gliders in the eucalypt patch system. The system of unlogged eucalypt patches within the pine plantation is a useful model of a network of small to medium-sized conservation reserves. Principles of reserve design can have value even in somewhat degraded and highly modified landscapes. The results demonstrate that reserved patches of native forest embedded within intensively logged forest can have significant conservation value, with important implications for the design and establishment of new softwood plantations.
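The model structure the abstract describes — extinction probability falling with patch area, colonization probability rising with the area and proximity of occupied patches — can be sketched as a minimal stochastic patch-occupancy simulation. All patch coordinates, areas, and parameter values below are illustrative placeholders, not the study's fitted values:

```python
import math
import random

# Minimal stochastic patch-occupancy sketch: extinction risk falls with
# patch area; colonization of empty patches rises with the area and
# proximity of occupied patches. All numbers are illustrative.
random.seed(1)

patches = [  # (area_ha, x_km, y_km) -- hypothetical patch network
    (1.0, 0.0, 0.0), (4.0, 1.0, 0.5), (12.0, 3.0, 1.0), (2.0, 4.0, 4.0),
]
occupied = [True, True, True, False]

def p_extinction(area_ha):
    # Larger patches -> lower annual extinction probability.
    return min(1.0, 0.5 / area_ha)

def p_colonization(i):
    # Sum distance-weighted contributions from occupied patches.
    s = 0.0
    for j, (area, x, y) in enumerate(patches):
        if occupied[j] and j != i:
            d = math.hypot(x - patches[i][1], y - patches[i][2])
            s += area * math.exp(-d)
    return 1.0 - math.exp(-0.1 * s)

for year in range(50):  # synchronous yearly update
    new = list(occupied)
    for i, (area, _, _) in enumerate(patches):
        if occupied[i]:
            new[i] = random.random() >= p_extinction(area)
        else:
            new[i] = random.random() < p_colonization(i)
    occupied = new

print(sum(occupied), "of", len(patches), "patches occupied after 50 years")
```

Running many replicates of such a model over candidate landscapes is how thresholds like "roughly 10% suitable habitat" and "patches under 3 ha are of limited value" can be explored.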
Case Report: Right Bundle Branch Block and QTc Prolongation in a Patient with COVID-19 Treated with Hydroxychloroquine Abstract. Novel coronavirus disease (COVID-19) is a highly contagious disease caused by severe acute respiratory syndrome coronavirus-2 that has resulted in the current global pandemic. Currently, there is no available treatment proven to be effective against COVID-19, but multiple medications, including hydroxychloroquine (HCQ), are used off label. We report the case of a 60-year-old woman without any cardiac history who developed right bundle branch block and critically prolonged corrected electrocardiographic QT interval (QTc 631 ms) after treatment for 3 days with HCQ, which resolved on discontinuation of the medication. This case highlights a significant and potentially life-threatening complication of HCQ use. INTRODUCTION Novel coronavirus disease (COVID-19) is a highly contagious disease caused by severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) that was identified in December 2019 in China and is now a global pandemic. 1 Currently, there is no proven effective treatment, and medications proposed to inhibit the virus life cycle such as hydroxychloroquine (HCQ), chloroquine, lopinavir/ritonavir, and remdesivir are used off label. These medications are widely used despite the lack of evidence for their efficacy and safety, and are often used in combination. CASE REPORT A 60-year-old woman was admitted to the National Isolation Centre in Brunei after her nasopharyngeal and throat swabs tested positive (reverse transcriptase-PCR) for severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2). She was among a group of infected travelers and was linked to a confirmed COVID-19 case through contact tracing. She had just returned from Indonesia 4 days before and developed symptoms (fever, dry cough, weakness, and dyspepsia) on returning. These symptoms had already improved when she was called for testing. 
Her comorbid conditions included hypertension, hyperlipidemia, and being overweight (31.1 kg/m²), but she had no known heart disease. Admission chest radiograph (CXR) was normal, and laboratory investigations revealed mildly elevated C-reactive protein, without lymphopenia (Table 1). She was empirically started on intravenous amoxicillin-clavulanic acid (1.2 g three times daily) and oseltamivir (75 mg twice daily). A repeat CXR on the second day of hospitalization showed bilateral lower zone opacities. As a result, she was transferred to the intensive care unit for close monitoring and was started on lopinavir 400 mg/ritonavir 100 mg (twice daily). As her condition did not improve, HCQ (400 mg stat dose followed by 200 mg twice daily) was initiated on the fourth day of hospitalization. An electrocardiograph (ECG) on hospital day 4 (before initiation of HCQ) was normal, with a corrected QT interval (QTc) of 397 ms. Repeat ECGs the following day remained normal (QTc 414 ms). The patient's condition deteriorated, requiring intubation and ventilatory support on the fifth day of admission. Blood and urine cultures were negative. Sputum culture isolated Pseudomonas aeruginosa and Serratia marcescens, both sensitive to meropenem. Amoxicillin-clavulanic acid was discontinued, and meropenem (1,000 mg three times daily) was initiated. She was also started on inotropic support. The timeline of events and medications prescribed are shown in Figure 1 and laboratory investigations in Table 1. On the fifth day, serum troponin I was noted to be mildly elevated. Transthoracic echocardiogram (TTE) showed normal ejection fraction and no regional wall motion abnormalities. Myocarditis secondary to SARS-CoV-2 was considered. Serial monitoring twice daily showed fluctuation of troponin I. On the seventh day of hospitalization, repeat ECG before the morning dose of HCQ showed a new right bundle branch block (RBBB) and critically prolonged QTc (631 ms) (Figure 2). 
Hydroxychloroquine was discontinued after a cumulative dose of 1,400 mg. Blood investigations on that day showed normal serum Mg²⁺ and K⁺ but slightly low corrected Ca²⁺ (Table 1). This was corrected with calcium replacement. A repeat ECG performed 24 hours after the last dose of HCQ showed normalization of the QTc (433 ms). On the tenth day of hospitalization, a repeat TTE showed normal cardiac function. She was eventually weaned off inotropes and was extubated on the 14th day of hospitalization. Investigations on the 19th day showed improvement of laboratory parameters apart from elevated D-dimer, and she was otherwise well and had no leg swellings. She was started on low molecular weight heparin. After three consecutive negative RT-PCR results for SARS-CoV-2, she was transferred on the 23rd day of hospitalization to a tertiary hospital, where a computed tomography pulmonary angiogram showed scattered ground-glass opacities consistent with COVID-19 and a small pulmonary embolism on the right. She was started on dabigatran 150 mg twice daily (planned 3 months of treatment) and remained well on follow-up. DISCUSSION Corrected QT interval prolongation is dangerous and can be associated with torsade de pointes, a life-threatening arrhythmia. Our patient developed RBBB and critically prolonged QTc (QTc > 500 ms) after 3 days of HCQ at a cumulative dose of 1,400 mg. A systematic review of chronic use of chloroquine and HCQ in rheumatic conditions reported cardiac side effects to be common. Among patients who were treated with HCQ who experienced cardiac toxicity (n = 50, median duration of use 8 years, and cumulative dose of 1,235 g), the study reported bundle branch block (26%), atrioventricular block (24%), and first- or second-degree heart block (4%). 
6 Other cardiac adverse effects of HCQ included ventricular hypertrophy (32%), ventricular hypokinesia (16%), heart failure (ejection fraction < 40% in 52.9%), and valvular dysfunction (8%), especially with high cumulative doses. 6 Other adverse effects of HCQ include gastrointestinal, ophthalmic, neurological, musculoskeletal, psychiatric, metabolic, and dermatological abnormalities. 7 Patients with COVID-19 who require hospitalization are at risk for complications including electrolyte derangements, which are risk factors for QTc prolongation. 8 Our patient had several risk factors for conduction abnormalities: administration of HCQ, lopinavir/ritonavir, and inotropes, and hypocalcemia. Lopinavir/ritonavir is also associated with the prolongation of QTc, but the ECG after starting this medication was normal. Among the electrolytes associated with the prolongation of QTc, only calcium was slightly low. Inotropes were started 2 days (noradrenaline) and 1 day (dopamine) before the detection of conduction abnormalities. Unfortunately, we did not obtain an ECG on the third day of HCQ therapy. We considered the possibility of myocarditis. The elevated troponin coincided with the period leading up to conduction abnormalities and peaked several days later, before decreasing. A repeat TTE was normal. We did not perform further investigations for myocarditis, as our patient remained stable. It is possible the other factors discussed contributed to the development of conduction abnormalities, but the resolution of ECG changes after discontinuation of HCQ suggested a causal relationship between HCQ and these abnormalities. Given the lack of proven therapies for COVID-19, the continued use of HCQ is likely. HCQ and chloroquine have also been used as prophylaxis for COVID-19. 7,9 With such widespread use, complications can be expected. Combination therapy of chloroquine and azithromycin, both medications associated with QTc prolongation, has recently been advocated. 
4 Our case highlights that significant and lifethreatening conduction abnormalities can occur with the use of HCQ. Therefore, clinicians should exercise caution and assess cardiac risk if considering HCQ treatment for COVID-19. This is an open-access article distributed under the terms of the Creative Commons Attribution (CC-BY) License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
LATERAL PERIODONTAL CYST: AN UNUSUAL CASE REPORT / CISTO PERIODONTAL LATERAL: RELATO DE CASO INCOMUM (Braz. J. of Develop., Curitiba, v. 6, n. 11, p. 86470-86477, nov. 2020, ISSN 2525-8761) Introduction: Lateral periodontal cysts are odontogenic cysts with very unusual development. According to the literature, they account for less than 0.4% of cases of odontogenic cysts. Presentation of Case: The present report describes a 34-year-old patient referred to the maxillofacial surgery and traumatology department of Montenegro Hospital due to swelling of the face with asymptomatic evolution for approximately 1 year. Based on clinical and tomographic examinations, the diagnostic hypothesis was odontogenic cyst, and the surgical plan involved complete enucleation of the cystic lesion. Complete removal was performed, and the material removed was sent for histopathological analysis. The examination revealed an irregular cystic cavity covered by epithelial tissue with few cuboidal layers that showed clear cells in the basal layer in some areas and the formation of nodular epithelial structures that protruded into the cavity. Discussion: The histopathological characteristics described in the literature are consistent with the histopathological description of the enucleated cyst in this case, confirming the diagnosis of lateral periodontal cyst. Conclusion: The patient is currently under follow-up, and evaluations have been normal. 
INTRODUCTION Lateral periodontal cysts are a rare type of developmental odontogenic cyst that typically occurs lateral to the tooth root surface. They have distinct histopathological, clinical and radiographic characteristics and are defined as nonkeratinized and noninflammatory developmental cysts. These cysts likely originate from dental lamina remains, but their source is still under debate 1,2. They are most frequently found in the region of the premolars in the mandible and represent approximately 0.4% of all odontogenic cysts 3. These cysts have no predilection for sex or race, and their peak prevalence occurs in the sixth decade of life 5. Because of the asymptomatic characteristics of lateral periodontal cysts, lesions are often discovered during routine radiographic examination and appear as a well-defined solitary radiolucency adjacent to a tooth root. The lesion has a circular or oval shape and is surrounded by a radiopaque border 1,2. The vitality of the pulp of the adjacent tooth is not affected by the cyst 6. Lateral periodontal cysts have an average size of 1±0.6 cm 1. Typical histological features include a cystic cavity coated with nonkeratinized squamous epithelium that sometimes penetrates the fibrous tissue, forming invaginated plaques 5,6. The treatment of choice for lateral periodontal cysts is surgical removal and subsequent histological evaluation to confirm the diagnosis 3. 
Recent studies have used diode lasers for the surgical treatment and removal of the cyst, with good results 4. The objective of the study was to report a case of lateral periodontal cyst due to the rarity of the disease and its unusual characteristics. PRESENTATION OF CASE The surgery occurred under general anaesthesia, and the patient received prophylactic antibiotic coverage with 1 g intravenous cephalothin immediately before the surgical procedure. A transoral buccal incision was made, and the buccal cortical bone was removed to gain access to the lesion. The thin cyst capsule was easily identified and removed from the adjacent bone. Total enucleation of the lesion was performed with a surgical curette (Figure 2), after which haemostasis and surgical wound suture were performed. The cyst (Figure 3) was immersed in 10% formaldehyde and sent for histopathological analysis. The patient remained in the hospital for 24 hours and was then discharged. After 10 days, the patient returned to the outpatient clinic for suture removal and began clinical follow-up. After two years of postsurgical control, bone repair with reorganization of anatomical structures was observed in a new CT scan (Figure 5). The patient will be followed up every six months for five years. DISCUSSION In this study, a case of lateral periodontal cyst was reported, and its clinical, tomographic and histological presentation was investigated. Because lateral periodontal cysts are asymptomatic, they are usually found in routine radiographic examinations. They are located near vital teeth in middle-aged patients; radiographically, the lesion shows well-defined radiolucency surrounded by a radiopaque halo 3, as was observed in the present case. Although the literature indicates that these cysts have an average size of 1 cm 1,3, in this case, the cyst was 3 x 4.5 cm, indicating that it was an unusual type of lateral periodontal cyst. 
Based on these findings, a provisional diagnosis of lateral periodontal cyst was established, and treatment was planned accordingly. After the enucleation of the cyst, it is always prudent to confirm the diagnosis through histopathological analysis, which should reveal a cyst that is surrounded by thin walls of non-keratinized stratified squamous epithelium that is usually without inflammation and is supported by connective tissue. Invagination of the squamous epithelium in the fibrous tissue can be observed 5. Surgical treatment is considered successful when the following criteria are met: reestablishment of stomatognathic system functions followed by total reduction of the disease focus, in this case the lateral periodontal cyst; healing of bone and mucosal tissues; and nonrecurrence of the disease during follow-up 4. Thus, it can be stated that the diagnosis, treatment plan and postoperative follow-up of the unusual case of a lateral periodontal cyst described in this report were successful.
Sen. Wagner’s guest column highlights the need to “clean up Harrisburg” but illustrates the distortions and extremism that have become more common in Harrisburg since the senator took office. For example, the senator characterizes the House Republican budget that Gov. Wolf vetoed last June as “balanced and responsible.” Pennsylvania’s Independent Fiscal Office and Standard and Poor’s, however, note that state revenues will increasingly fall short of expenditures – a gap that last June’s vetoed budget and the one just blue-lined by Gov. Wolf would have increased. Translation: those budgets weren’t balanced or responsible. The senator also drags out that staple of divisive Pennsylvania politics, the Philadelphia card, saying “Gov. Wolf cares more about Philadelphia than the rest of Pennsylvania.” That claim is invented. If it’s meant to imply that most of the benefits of the governor’s proposals go to Philadelphia, it’s also inaccurate. On education funding and property tax relief, Gov. Wolf’s proposals would benefit school districts across the state, including York and many lower-income rural areas. Those proposals reflect the governor’s belief in educational opportunity and strong communities for all children and families. Before the holidays, Sen. Wagner could have helped achieve a genuinely balanced and responsible budget by getting behind a Senate bill with more money for schools that passed by a big bipartisan majority, 43-7. Instead, he voted no and then reportedly threw his weight behind derailing that budget in the House. That doesn’t clean up Harrisburg; it makes it more extreme and dysfunctional. The first page of the Jan. 10 Viewpoints section of the Sunday News was devoted to York County's Top Political Figures for 2016. Of the 16 persons who were identified by their photo, three were women and one was a person of color. Of those 40 persons who were identified by their name only, 16 were women and two were persons of color, as best as I could determine. 
This same lack of significant diversity is also present in our county's police departments, our county's judges and district judges and our county's school district administrations. What does this say to our increasingly diversified population of residents of York County when we are half-way through the second decade of the 21st century? As a grandparent, I am always concerned about my grandchildren. Well, my youngest grandchild was born Dec. 21 and I cannot say anything negative about York Hospital maternity and labor and delivery and NICU while my Mason was there. Thank you all to the nurses and volunteers who nurtured my grandson and other little ones. Thanks for all the care. Thank you so much for the excellent article, "This Mayor's on a Mission," in Sunday's newspaper. So uplifting to hear what he is doing to make his part of the world a better place. I appreciated reading about Mayor John Fetterman's life's journey from York to Braddock. Reading about the people whose lives he has touched and the role model he has been was a real inspiration to so many of us. Thank you.
twitter.com/FleeceFriends www.instagram.com/fleece_frien… Want to bring home a plushie for yourself? Check out my etsy shop! I'm always open for commissions. Yay, more winter ponies! I'm really into different variations of ponies lately, and with the weather being the way it is I decided to go for a winter themed Rainbow Dash! It was a lot of fun figuring out her accessories! I hope you like her! Dashie is entirely hand sewn out of the softest light blue fleece. She's stuffed with polyfil stuffing and stands just about 10" tall. Her eyes and cutie mark are printed onto a cotton fusible fabric and then ironed in place. She features hand stitched detail on her cute little wings. Her boots, hat, and scarf are all hand sewn out of purple and red fleece, and feature ironed on accents! Let me know what other variations you'd like to see! I got a twitter! Please follow me because I have no idea what I'm doing and could use more friends lol
INVESTMENT POTENTIAL AND EFFICIENCY OF INVESTMENTS IN THE ECONOMY OF THE KURSK REGION The article discusses the features of investing in the economy of the Kursk region and the results of investments over the five years from 2013 to 2017. To assess the effectiveness of possible investments, average revenue and profit indicators were mapped by district of the region, and the districts with the maximum efficiency of investment were identified. The income and expenditure of the region's population and its savings rate were investigated, and the amount of money available to the population was calculated. Suggestions are made on how to increase the participation of individuals' capital in the Russian stock market, first of all through more convenient applications that simplify investing. The efficiency of investments in the economy of the Kursk region was determined with the help of indicators such as the volume of investment in fixed capital and the change in GRP (gross regional product); inflation was taken into account in the calculations. Assumptions are made about the reasons for the slowdown in economic growth and the weak economic impact of the GDP (gross domestic product) multiplier. The article also considers the region's population as one of the important sources of investment in the regional economy: as financial literacy increases, the population will begin to participate in investment projects.
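As a rough illustration of the efficiency calculation the abstract describes (change in real GRP per unit of fixed-capital investment, with inflation taken into account), here is a minimal sketch; all figures, function names, and the deflation approach are hypothetical illustrations, not values or methods taken from the article:

```python
# Hypothetical sketch: investment efficiency as real GRP change per unit of
# fixed-capital investment, deflating the end-of-period GRP by inflation.

def real_value(nominal: float, inflation_rates: list[float]) -> float:
    """Deflate a nominal value by a sequence of annual inflation rates."""
    deflator = 1.0
    for rate in inflation_rates:
        deflator *= 1.0 + rate
    return nominal / deflator

def investment_efficiency(grp_start: float, grp_end_nominal: float,
                          investment: float,
                          inflation_rates: list[float]) -> float:
    """Change in real GRP per unit of fixed-capital investment."""
    grp_end_real = real_value(grp_end_nominal, inflation_rates)
    return (grp_end_real - grp_start) / investment

# Illustrative numbers only: GRP grows from 300 to 400 bn RUB (nominal) over
# two years with 5% annual inflation, on 50 bn RUB of investment.
efficiency = investment_efficiency(300.0, 400.0, 50.0, [0.05, 0.05])
```

A district-by-district comparison, as the article describes, would amount to running this ratio over each district's figures and ranking the results.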
Warning: This article contains major spoilers from tonight's season 2 finale of "Outlander," "Dragonfly in Amber" For most of the second season of "Outlander," the narrative followed a relatively linear trajectory: 18th-century power couple Claire (Caitriona Balfe) and Jamie Fraser (Sam Heughan) fought against tremendous odds – ineffectual royals, vengeful French aristocrats – to change the future and thwart the Scottish-culture-destroying Battle of Culloden. However, from the moment tonight's finale, titled "Dragonfly in Amber" after the novel upon which season 2 was based, opened on a black-and-white clip from the British TV show "The Avengers," we knew that we were no longer in the 1700s, and that something truly drastic had gone down between the impassioned lovers. "Outlander" executive producer and showrunner Ronald D. Moore called up Speakeasy recently to discuss the significant creative decisions that were made for the supersized finale (which ran at a nearly feature-length 90 minutes tonight on Starz). While Moore and his team have always remained faithful to Diana Gabaldon's books, he explained the logic behind certain choices like the enhanced reappearance of Lotte Verbeek's character Gillian Edgars/Geillis Duncan, as well as the distinctive structure of the season 2 finale. Moore, who is currently in pre-production on season 3 of "Outlander," also revealed a juicy nugget about how the original plan to use a vintage clip from the "Star Trek" TV series – instead of the Diana Rigg-starring "The Avengers" – in the opening scenes was foiled due to accuracy reasons. A real bummer for a guy who owes a good portion of his career to the venerated sci-fi franchise. What was your reasoning behind not showing the 1960s scenes until the finale – even though they bookended the novel “Dragonfly in Amber”? I thought opening the season with the 1960s was too big of a jump for the TV show. 
I liked it conceptually; I liked the idea of jumping ahead in the story and telling the audience everything that happened in Paris and Scotland was ultimately going to come to naught, and Claire was going to return to the 20th century. But what I just said is enough. To also go 20 years into that story, and Brianna as an adult, and Frank is dead, and [Claire's] a surgeon. I was just like, it’s too much, to go from them sailing away to that as the next cut. It’s a big enough shock to the audience that she returns to the 20th century – let’s just start there and catch up to the Sixties in the end. When we talked about the season 1 finale last year, you discussed how you wanted to open the episode up with the aftermath of Jamie and Black Jack’s sexual assault, so that the audience was forced to deal with the harsh truth of what happened. It seemed to me the season 2 finale took the same approach: We opened with Claire in 1968, so we know she and Jamie couldn’t change the future. Was it your intention to force the audience to deal with the reality head-on? In some sense. I just wanted to yank the audience out of the place where they were, which was on the cusp of the Culloden battle, and remind them, “Oh, wow, that’s right.” It hearkens all the way back to the first episode, and Claire in the 20th century: "Holy cow! She not only went back, but she stayed there for 20 years. Well, wait a minute, how does this all fit together?” I liked the idea of taking the audience off-stride as they went into the finale. Why 90 minutes? Did you hear about the 69-minute “Game of Thrones” finale and think, “Yeah, I can do better than that”? [Laughs] No, I didn’t even know about that. It was actually initiated, I believe, by Starz. Early on, in the planning of season 2, they said, “If you guys need more time for the finale, let us know and we’ll be happy to dedicate more resources – it could be a two-parter, it could be a supersize episode, or 90 minutes, let us know. 
So once we got to breaking down the finale itself, we just had that knowledge, that, whatever’s the best size for this is what we go for. We felt that 90 minutes was the best way to tell that story. Whose idea was it to have Roger Wakefield (Richard Rankin) and the younger mourners watch “The Avengers” in the beginning? That was a great period touch. That evolved over the course of the script. That it would be fun to come out of the main title into a clip of a TV show, which would tell the audience [that we were in the 20th century]. Matt first suggested a “Star Trek” clip, and I got really excited. Immediately my Trekkie mind was, “Okay, it has to be one with Scotty in it – which one did Scotty wear a kilt?” And then I stopped myself and went, “Wait a minute, was ‘Star Trek’ on the air in the U.K. in 1968?” I checked, and no, it was not. So then it became, what’s another TV show that would be in the U.K. that a U.S. audience could still kind of identify? And Marina Campbell, who’s a writer’s assistant – and Scottish – it was her idea to do “The Avengers.” She found the clip, and that’s the introduction of Emma Peel in that episode. This is more of a question for [costume designer] Terry Dresbach, but was Claire’s 1968 look an homage to Diana Rigg’s Emma Peel? It’s a good question for Terry – I don’t know. I do know that with the selection of the clip, Terry was already pretty far down the line in terms of costumes; she already made them before we did the “Avengers” clip. So if she did it as an homage to Emma Peel, it was just serendipity. In the book, Gillian/Geillis doesn’t establish a relationship with Roger and Brianna. So what was the thinking behind working that element into the story for the show? Was it so you could show the absolute freakiness of Gillian meeting her seven-times great-grandson (Roger)? Yeah, partially. We just wanted to work Geillis into the story more. In the book, she really doesn’t make an appearance – it’s almost like a cameo. 
Claire literally never sees her until the moment up at Craigh na Dun. So we just thought, if we’re going to bring Lotte back and do the character, you just wanted to enjoy it a little bit more. So then we started talking about how we could see Geillis more, and we realized, “Well, she could meet Brianna and Roger, and that wouldn’t affect anything, and that would be kind of cool, with the audience going, ‘Oh my God! You don’t know who that is!’” We thought that would be fun to play. Who came up with Claire and Jamie’s dance toward the stones? I felt that was one of the more indelible images of the finale. I think that was Matt and Toni. They came up with the idea that he would have to back her toward the stones without her realizing what he was doing. Because Jamie’s focus would have to be on getting her through those stones, whereas Claire would be resisting up until the last moment. Did you have any favorite episodes or scenes from this season? From the finale, I really liked the scene of Claire at the grave at Culloden Moor. It’s an amazing moment, it’s very emotional. It’s a bravura performance from Cait. I also like “Faith,” the episode where they lost the child and it had the star chamber. There’s just so much in that episode, it’s one of the standouts of the season. Were you more of a fan of the Paris episodes or the Scotland ones? I don’t know that I preferred one or the other. I really liked the challenge of doing the Paris episodes, because they were very difficult creatively, in terms of adapting the story line to the production. The Scotland episodes felt more like a homecoming; they felt like, “Oh, we’re back to doing ‘Outlander’ again,” so there was a certain warm-and-fuzzy feeling once we got into the Scotland section. How far into production are you on Season 3? We’re in pre-production. We’re writing the scripts and stories and the team is on the ground in Scotland doing prep, scouting locations, designing sets, building costumes and so on. 
But we won’t start shooting again until, like, early September/late August. Will you still be operating primarily out of Scotland, or will the, well, “Voyager” element of next season move you elsewhere? Yeah, Scotland will still be our base of operations, and then we’re looking at other locations to do things like the sea voyage and where Jamaica’s going to be, and that kind of stuff.
I first saw George H. W. Bush at a Fourth of July picnic in Lake Jackson, Texas, a slender young politician in his mid-40s running for Congress. He won the election and later served as U.S. Ambassador to the United Nations and Vice President under Ronald Reagan before being elected President of the United States. His son, George W., and I are the same age. I have always been impressed with George H.W. as a man of integrity, honesty, character, courage and faith. He embodied the qualities that Tom Brokaw described as “the greatest generation.” Not least in the legacy of President Bush was his devotion to his wife, Barbara, and the support they shared in the death of their daughter, Robin. I saw the same qualities in my father, a blue-collar worker with Bell Telephone who was devoted to my mother, raised three sons and served as a deacon in his church. He died at 53 of cancer. It is good for our nation that we will spend this next week remembering a leader with the qualities of George H.W. Bush. A world awash with lies, accusations, falsehoods, greed, self-serving, prejudice, fear and faithlessness needs to be reminded of the higher standards that can sustain us. President Bush later spearheaded the formation of the Points of Light Foundation that encourages volunteers to engage in solutions for their communities. According to their website, Points of Light has a global network of over 200 affiliates in 35 countries working with thousands of non-profits to mobilize volunteers worldwide. Most of us will never be rich or famous. All of us, regardless of our occupation or income, can make the world better. Whether we are garbage collectors, janitors, cashiers, factory workers, salesmen, technicians, nurses, maids or executives, we will all leave a legacy. Most of us will have children and grandchildren. We all have classmates, friends and co-workers. Every life counts. Every life makes a difference. Jesus said, “You are the light of the world. 
A city set on a hill cannot be hid. Neither do men light a lamp and put it under a basket, but on a lampstand and it gives light to all who are in the room. So let your light shine before men that they might see your good works and glorify your Father which is in Heaven.” Matthew 5:14-16. In a dark and desperate world, when increasingly it seems people practice dishonesty, deceit and immorality in the shadows of secrecy, perhaps we can heed the legacy of our former President and live in such a way that we turn the world from darkness to light. As we prepare to celebrate Jesus’ birth let us be reminded that “In Him was life, and the life was the Light of men. The Light shines in the darkness, and the darkness did not comprehend it.” John 1:4-5.
Reduced spatial extent of extreme storms at higher temperatures Extreme precipitation intensity is expected to increase in proportion to the water-holding capacity of the atmosphere. However, increases beyond this expectation have been observed, implying that changes in storm dynamics may be occurring alongside changes in moisture availability. Such changes imply shifts in the spatial organization of storms, and we test this by analyzing present-day sensitivities between storm spatial organization and near-surface atmospheric temperature. We show that both the total precipitation depth and the peak precipitation intensity increase with temperature, while the storm's spatial extent decreases. This suggests that storm cells intensify at warmer temperatures, with a greater total amount of moisture in the storm, as well as a redistribution of moisture toward the storm center. The results have significant implications for the severity of flooding, as precipitation may become both more intense and spatially concentrated in a warming climate.
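The expected proportionality to the atmosphere's water-holding capacity is the Clausius-Clapeyron relation, often approximated as a ~7% increase in saturation vapour pressure per kelvin of near-surface warming. A minimal sketch of that baseline scaling (the 7%/K rate and the example values are standard textbook approximations and illustrative assumptions, not results from this paper):

```python
import math

# Clausius-Clapeyron scaling: saturation vapour pressure, and hence the
# baseline expectation for extreme precipitation intensity, rises by
# roughly 7% per kelvin of near-surface warming (approximate rate).
CC_RATE = 0.07  # fractional increase per kelvin (assumption)

def cc_scaled_intensity(base_intensity: float, delta_t_kelvin: float) -> float:
    """Expected intensity after delta_t_kelvin of warming under CC scaling."""
    return base_intensity * math.exp(CC_RATE * delta_t_kelvin)

# Illustrative example: a 20 mm/h event under 2 K of warming.
scaled = cc_scaled_intensity(20.0, 2.0)
```

Observed increases above this curve are what the abstract refers to as "increases beyond this expectation", attributed to changes in storm dynamics rather than moisture availability alone.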
Right now, Senate Republicans are trying to reach a consensus on a health care bill and bring it to a vote before the July 4 break. It is a bill that was crafted outside of the public eye, one that is attempting to repeal the Affordable Care Act and end Medicaid as we know it. But while U.S. leaders are bent on pushing legislation to reduce access to health care, the rest of the world is focused on increasing people’s ability to get the care they need, including in developing countries. During its 70th World Health Assembly in May, the World Health Organization (WHO) made a historic leap on the path toward equity and diversity in leadership by electing Dr. Tedros Adhanom Ghebreyesus as the Director General, its highest office. He is the first African to ever hold this position. Ghebreyesus saved lives in his home country of Ethiopia and is now poised to save even more on a global scale. An experienced health leader with a history of positive results, Dr. Ghebreyesus reduced deaths from multiple infectious and chronic diseases that primarily impact women and children and did so within a conservative and restrictive government. Yet we can’t help but contrast Dr. Ghebreyesus’ leadership and commitment to implementing programs and policies that support the health of children and families with Secretary Price’s own impact in the United States. Ghebreyesus’ leadership has led to a clear positive trajectory for health outcomes in Ethiopia. As Minister of Health of Ethiopia, Ghebreyesus developed multiple programs that reduced maternal and child mortality including the training of a 38,000 person strong cadre of health extension workers that are creating a community driven system with women at its core. Under Dr. 
Ghebreyesus’ leadership from 2005-2012, the country's health infrastructure was expanded, creating 3,500 health centers and 16,000 health posts to reach the remotest parts of the country, establishing an ambulance system, and increasing access to laboratories. Key health outcomes improved dramatically. Malaria mortality dropped by 75 percent, while TB mortality decreased by 64 percent. Ethiopia lost the dubious distinction of having the highest incidence of HIV infections in Africa; the country now has the lowest incidence rate among East African countries. As he ascends to the global stage, Ghebreyesus continues to display strong vision and leadership. Here in the United States, however, Secretary Price appears to be the polar opposite, dedicated to advancing policies that would reduce access to health care, undermine women’s health and allow religious beliefs to override science and public health. He has been leading the Administration’s charge to repeal the Affordable Care Act and to cut Medicaid. He is publicly promoting the American Health Care Act, the Republican House bill that would result in 23 million people losing health insurance, defund Planned Parenthood, and roll back critical consumer protections such as the ban on insurance companies’ discrimination against people with pre-existing conditions and the guarantee of essential benefits like maternity care, prescription drugs, and mental health services. All this to provide an enormous redistribution of wealth to very rich individuals and corporations. This administration’s prioritization of profits over progress makes it clear that the United States health leadership and influence is moving backward, not forward. Some might say that comparing Director General Ghebreyesus and Secretary Price is comparing apples to oranges, since Ethiopia’s and the U.S.’s health challenges are widely different — not to mention what Dr. Ghebreyesus will have to grapple with on a global scale. 
Here in the United States, we may not have to deal with cholera epidemics, but recent outbreaks of diseases like measles show that the United States too struggles with basic public health disease prevention, not to mention increasing maternal mortality rates over the last 15 years (CDC Foundation), which were noted in Save the Children’s State of the World’s Mothers report as the worst record in the developed world. Secretary Price said, “Global health security begins at home.” We agree completely. Secretary Price would do well to look to Dr. Ghebreyesus’ leadership in addressing pressing health disparities with limited means. We need our health leader to harness the United States’ tremendous resources to improve health outcomes for everyone in our nation. We must pursue global health security through sustained investments in global health and by backing research-based, proven solutions — domestically and internationally. Krista Scott is the Senior Director of Child Care Health Policy at Child Care Aware of America; Uzma Alam PhD, MPH, is a global health practitioner, and Sinsi Hernández-Cancio is an expert in health care policy, health equity, and Latino issues. All are participants in the Allies for Reaching Community Health Equity Public Voices Fellowship of The OpEd Project. The views expressed by contributors are their own and not the views of The Hill.
Some Cardinals' Seals of the Thirteenth Century Cardinals' seals as a source of evidence for artistic style in Rome and central Italy have been rather neglected. Yet the preserved examples enable one to chart with some accuracy shifts in style, particularly in that most vulnerable of forms, goldsmiths' work, in a way which the sporadic survival of other objects does not. The scope of this paper is limited. Certainly not all, or nearly all the cardinals' seals known to the writer or even all those published are discussed. Comment is restricted to a number of significant examples and even there confined largely to style and design, putting to one side the equally important role that seals can play as evidence for administrative practice.1 Palaeographical problems are not discussed as they lie beyond the author's competence.2 More attention will be given to seals of the later thirteenth century where the evidence for work in precious metals, abundant in the documentary sources, is virtually non-existent as regards preserved material. Seals here provide a measure of direct information although, as will become clear, many qualifications must be placed on its interpretation. Problems of the countries of origin of the seal designs will be examined, typology and continuity as well as style. Selectivity is essential, for the sheer amount of the material is too large, and adequate illustration of wax impressions is often very difficult to obtain.3 Yet the richness of the material is evident: the major papal goldsmiths of the day are documented as designers of seals as well as of jewels and elaborate goldsmiths' work now lost. This restricted approach inevitably (and unfortunately) means that the content of the seals has been neglected. In this field a great deal of work remains to be done. 
The design of cardinals' seals was closely bound to their role as titulars, and this inhibited developments which are discernible elsewhere, in Italian as in northern European seal design: the use of the rebus, the introduction of personal coats of arms, a close relationship between legend and matrix design. To a notable extent cardinals' seals adhered to patterns which, once established, became persistent traditions. Iconographical problems, viewed as a whole, have been eschewed deliberately in order to concentrate attention on the stylistic importance of the seal designs. The general relation of their content to that of other Italian seals, lay or clerical, remains to be traced. Similarly, family influences in the design of individual cardinals' seals, particularly where there was an accepted design tradition, require much further investigation. This paper is intended to serve as an introduction for subsequent work in this rich and relatively unexplored field. Unlike objects in precious metals which could be melted down, stolen or
One-Step Preparation of Monodisperse Vinyl Hybrid Nanosilica Hydrosol A successful one-step method for the preparation of vinyl hybrid nanosilica (V-SiO2) hydrosols was studied. The V-SiO2 hydrosols were prepared via a water-based sol-gel reaction with vinyl trimethoxy silane (VTMS) as the precursor under alkaline conditions in the presence of a composite surfactant. The formation mechanism of the V-SiO2 nanoparticles is explained. The size of the V-SiO2 nanoparticles could be adjusted by controlling the concentrations of the composite surfactant and the catalyst. Fourier transform infrared spectroscopy (FT-IR), transmission electron microscopy (TEM), and a Zetasizer Nano ZS particle size analyzer were used to characterize the V-SiO2 nanoparticles. The obtained V-SiO2 hydrosols are convenient for the preparation of polymer/SiO2 nanocomposites.
Intermodal use of (e-)scooters with trains in the Provence-Alpes-Côte d'Azur region: towards extended train station areas? Featuring rapid adoption rates in recent years, personal standing scooters, as a micromobility mode, represent a missing complement to the first and last mile of public transport. This paper examines intermodal trips involving private (e-)scooters and trains with the objective to investigate the influence of this intermodal combination on station catchment areas. The methodology is based on the analysis of existing scientific research and empirical evidence. The case study focuses on access data from 12 railway stations collected by SNCF Réseau in the Provence-Alpes-Côte d'Azur region in September and October 2020. Main findings of this secondary analysis, based on 53 passengers using this personal device, suggest an over-representation of male and younger users, with very frequent intermodal practices mainly dedicated to work or study, and a feeder distance between those of walking and cycling. There appear to be similarities between bike-and-ride and scoot-and-ride but also clear distinctions that characterize this emerging mode, among which the fact that the scooter is almost always used during both access and egress stages. This article advocates that station areas should be better considered by redesigning the surrounding public spaces to better balance the space of each mode, in favour of alternatives to the car and sustainable cities.

Introduction

Standing scooters are a dramatically growing new trend in urban mobility (, p. 1). These are small, light, electric, single-user and cost-effective alternative transportation options for short distances (McKenzie 2019b;, p. 1). Private e-scooters are increasingly spreading in France with a one-third increase in sales in 2020 (FP2M and SML 2021) and in Europe with an increase in use from 20 to 60% during the first lockdown (). In addition, preliminary data from the OMNIL (2021, p. 
35) mobility observatory suggest that light transportation modes have considerably increased in September and October 2020 in the Île-de-France region. Beyond its rapid adoption rate, this personal device interacts with other modes because it is relatively light and foldable (, p. 1), providing high flexibility for the user (, p. 2), an essential value that explains a large part of the appeal of the automobile (Héran 2015, p. 209). The evolution of urban mobility plays a key role in reducing the costs generated by a car-oriented society, consisting mainly of GHG emissions, urban sprawl, different forms of pollution (air, noise), socio-spatial fragmentation, budget costs, traffic accidents and congestion (Héran 2001, p. 12). Although modal shift to public transport is one of the most energy-efficient ways to contribute to sustainable mobility, it has the disadvantage of being inflexible by following a fixed route (EEA 2020, p. 10). Intermodality, i.e. using multiple means of travel to achieve a given trip (Polzin 2017), reduces the rigidity of public transport (Wiel 1998, p. 17) and plays a key role in improving the efficiency of the urban transport system (Oostendorp and Gebhardt 2018, p. 82) by extending the stations' service area (Amar 2016, p. 16). In this context, Amar (2016, p. 222) describes the evolution towards a more connected and efficient mobility through the revaluation of proximity modes, articulated to other scales and forms of mobility. New mobility solutions are having a lasting impact on the twenty-first century mobility system (Cervero 2019, p. 137). Emerging feeder modes reinforce the attractiveness of public transport (, p. 116) and increase their service area, allowing more destinations to be reached within the same time budget (EEA 2020, p. 10). New forms of mobility solutions, such as the combination of micromobility and public transport, are expected to become relevant in the future (Oostendorp and Gebhardt 2018, p. 
77), especially in suburban areas where travel distances for accessing transit stops tend to be longer (, p. 84). By providing on-demand mobility, e-scooters are touted as a solution to the missing "first and last-mile" (FLM) transit connection (, p. 9). Therefore, micromobility could be a valuable complement to heavy-duty public transport networks by making them more efficient towards a post-carbon city (Schultz and Grisot 2019, p. 3). Thus, e-scooters, incorporated as an intermodal option (, p. 1), have the potential to meet the challenges of public transport gaps (Gauquelin 2021). This emerging mode should be considered as a potential complement to the quality of the public transport system, capable of promoting more virtuous and longer-distance trips, competing with the car in urban and peri-urban areas. Micromobility options have a genuine capacity to be relevant as a segment of intermodal trips, increasing the catchment area of public transport and competing with car travel (CPB 2020, p. 63). However, the changes produced by the arrival of e-scooters in the city remain nascent, unclear, and difficult to predict (, p. 1). There is a lack of empirical studies on the integration of micro-vehicles and public transport, more specifically on e-scooter-rail intermodal transportation (, p. 2). Gaps are identified in quantitative support in this respect (Schlueter Langdon and Oehrlein 2021, p. 7), and in trip purpose and demographics (, p. 9). This can be explained by the fact that these vehicles are new, methodology is not standardized and data are only scarcely available (, p. 17). This paper is based on the analysis of a survey conducted by the French rail network manager SNCF Réseau. The sample extracted from the questionnaire is composed of passengers who indicated using another mode to get to or from a train station. 
The method applied is based on geostatistical processing of the data collected by comparing standing scooters with the other modes mentioned, and by projecting spatial outputs. The aim of this paper is therefore to provide an overview of the literature published in relation to micromobility, especially standing scooters, and investigate a local survey on feeder modes to train stations. This paper identifies both a gap in the literature about this modal mix and a geographical disparity in the literature in favour of the US. To the authors' knowledge, no scientific article has examined in detail the socio-demographic profile and mobility practices of such intermodal travellers. This paper also presents evidence about this emerging trip chain, in the European context, highlighting some particular features such as the combined use of scooter in access and egress, and inequalities in usage in terms of gender and age distribution. The analysis of this survey thus contributes to a better understanding of the challenges involved in the use of micromobility in the context of first-and last-mile access to public transport. This work also makes recommendations to urban planners to integrate these riders' needs: a win-win objective being to capture the growth potential of this intermodal system, and attract a wider and more diversified public to the railways. The remainder of the paper is organized as follows. In Sect. 2, we review existing literature based mainly on shared e-scooters, to highlight research gaps on combined scooters. Further to a description of SNCF Réseau's investigation protocol in Sect. 3, the present article reports the results on train transfers by scooter in the Provence-Alpes-Côte d'Azur (PACA) region in France, with a focus on one peri-urban station in Sect. 4. Results are discussed in Sect. 5, while Sect. 6 concludes and provides also an outlook for further research. 
Related work

One of the appealing aspects of micromobility solutions is their role in enhancing connectivity to public transport (, p. 4). The use of micromobility, including bikes and personal mobility devices (PMD) such as human (kick) or electric-powered (e-)scooters, can significantly enlarge the service area available within transit isochrones (Kostrzewska and Macikowski 2017, p. 4). As a new type of vehicle with particular technical features, conveying a futuristic and sustainable representation of the city (Boffi 2019, p. 1), e-scooters enable unique mobility practices by allowing riders to rapidly transform into pedestrians (, p. 9). Kostrzewska and Macikowski (2017, p. 7) point out that scoot-and-ride 1 differs from bike-and-ride in the ease of boarding the linked mode on public transport: e-scooters appear to be an interesting "Hybrid, distinct transport mode" option (, p. 1) when carried on dense or poorly equipped transit, even during peak hours. The scooter is easily manoeuvrable, folded, and easy to carry, granting enhanced mobility and speed, especially when combined with urban transport (, p. 1), although this depends on the type of public vehicle and the context. The aim of this article is then to focus on the intermodal use of scooters, to capture the main characteristics of this practice, which seems to be growing and which differs in some points from cycling combined with public transport.

Search parameters considered in the review of literature

Through a literature review based on previously conducted research on the uses and users of scooters, we will present the main results that emerge from the analysis of 104 scientific articles and reports collected on an international scale. Our intention is to cover all the publicly available data on this new object of study, including both academic literature and industry-driven research. Very few scientific articles and reports examine exclusively the characteristics and impact of intermodality involving a standing scooter. 
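The service-area enlargement within transit isochrones mentioned above can be illustrated with a simple geometric model: for a fixed access-time budget, the reachable radius scales with access speed, so an idealized circular catchment area grows with the square of the speed ratio. A minimal sketch (the walking and scooter speeds and the time budget are assumptions for illustration, not values from the survey):

```python
import math

def catchment_area_km2(speed_kmh: float, time_budget_min: float) -> float:
    """Idealized circular isochrone: area reachable within the time budget."""
    radius_km = speed_kmh * time_budget_min / 60.0
    return math.pi * radius_km ** 2

# Assumed speeds for illustration: walking ~5 km/h, e-scooter ~12 km/h,
# with a 10-minute station-access budget.
walk = catchment_area_km2(5.0, 10.0)
scoot = catchment_area_km2(12.0, 10.0)
ratio = scoot / walk  # area grows with the square of the speed ratio
```

Under these assumptions the scooter catchment is (12/5)² ≈ 5.8 times the walking catchment; real catchments are of course shaped by the street network and detours rather than perfect circles.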
Therefore, it was decided to devote this literature review to all white and grey literature on scooters and their combination with other modes of transport. These papers have been systematically filtered to identify trends in the research, as shown in Fig. 1. The literature review is restricted to Europe and America, as few articles seem to exist on this subject in other continents, at least in English. It appears that studies about scooters are dominated by electric scooters and have mainly focused on shared e-scooters soon after their emergence in 2017, especially during 2019 and 2020, while the rate of studies about e-scooters as Personal Electric Vehicles (PEV) seems to be increasing from 2021 onwards. This is in line with the growth in the number of publications expected by Abduljabbar, Liyanage, and Dia (2021, p. 2), who conducted a bibliometric analysis focused on micromobility, from English-language journal articles between 2000 and 2020. Studies on the intermodal approach to e-scooters tend to focus on both personal and shared scooters, accounting for one third of the literature reviewed (Table 1). By contrast, this topic is more common in the European literature (36%) than in the American one (17%). Although the question of the integration of standing scooters and transit is regularly included in scientific research and reports, it is only partially discussed and restricted to a few results, which are outlined below. To better understand the uses of scooters, we will first analyse the characteristics of all trips on scooters, and then we will examine the statistics gathered on intermodal trips.

Socio-demographic profile of scooter users

This part deals with the main aspects defining the user's profile, including age and gender, which will be explored in Sect. 4.

Age

Field surveys based on data analysis on scooters concur on the relative youth of scooter users. Generally, the median age of scooter users ranges from 25 to 34 (Fig. 2). 
Accordingly, 6t-Bureau de recherche (2019a, p. 14) observes that 45% of shared e-scooter riders are under 25 in Paris (France), while 33% are under 24 in New Zealand (Fitt and Curl 2019, p. 4). In Calgary (Canada), Sedor and Carswell report that 70% are under 30, while de Bortoli and Christoforou (2020, appendices) identify 46% as being under the 30-year mark in Paris. The same applies to personal scooters, which are more popular with the younger population, with 41% of users between 15 and 25, a ratio similar to that for public bikeshare in France (Richer 2021). A graduate thesis in Lyon stands out from the overall results by counting half of the users as being below 21 (Pestour 2019). Other authors suggest that the median age is closer to 25-34, as illustrated by Laa and Leth (2020, p. 3), who count 46% of both dockless and private scooter users in Vienna (Austria) in this age category. In the US, especially in Arlington, Baltimore, Minneapolis, Portland, San Francisco and Santa Monica, more than half (50-73%) of shared e-scooter users are under 40 (NACTO 2020, p. 16). Moreover, the 6t-Bureau de recherche (2019b, p. 65) research office determined the average age of users of the scooter-sharing system by frequency, revealing that the most casual users are likely to be older: regular users are 35 on average, occasional users 37, whereas users having tried only once are 40. Even though the under-30 age group seems to be the most frequent among scooter users, Degele et al. (2018, p. 4) note another significant peak of customers between 45 and 50 who tend to cover longer distances, suggesting user clusters partly divided into Y and X+ generations. The effects of crises also appear to make more young people shift to free-floating e-scooters, as the Apur (2020, p. 68) report shows with the public transport strikes in the Île-de-France region. Gender A second factor significantly influencing modal choice along with age is gender.
Scooters face a gender gap, with male users accounting for two-thirds to three-quarters of the reported modal share (Fig. 3). In Oslo (Norway), the typical user of a dockless e-scooter program is a man (, p. 16). In Austin, 62% of shared e-scooter users are men (, p. 16), while the NACTO (2020, p. 22) association of North American cities and transit agencies lists 66-80% male users in Austin, Baltimore, Minneapolis, San Francisco and Santa Monica. The same figure (67%) is also found among Lime users in Paris, Lyon and Marseille, reaching 70% for visitors (6t-Bureau de recherche 2019b, p. 50). This masculine population for shared scooters is equally identified (68%) in the French capital, as much by Apur (2020, p. 46) as by de Bortoli and Christoforou (2020, appendices). Considering travel frequency by shared e-scooter, it turns out that men are more represented among Lime regular users (76%) than among casual users (68%) and one-time users (58%) (6t-Bureau de recherche 2019b, p. 65), even if this is less noticeable for Dott regular users (78%), with a 3% gap relative to the overall share (6t-Bureau de recherche 2019a, p. 25). Gender inequalities seem to be strengthened when personal devices are involved: over 60% of scooter users in France are men (Richer 2021); Laa and Leth (2020, p. 5) reported 75% of male cycle path users on private scooters in Vienna; and Sedor and Carswell, who surveyed both personal and shared scooters, obtained a 68% male share in Calgary. However, the Île-de-France public transport strike drew a more feminized population among new micromobility users, including shared e-scooters (Apur 2020, p. 68). Surprisingly, despite this inequality in access to scooters, women have a more positive image of this micromobility (72%) than men (67%) in the United States (Clewlow 2018, p. 15). These indications call for deepening research on the gendered use and representation of this new transport mode.
Mobility patterns of scooter trips This section addresses the basic characteristics of scooter trips, i.e. frequency, purpose and distance. Frequency Shared e-scooters are mostly used casually, while private scooters seem to be used more frequently (Fig. 4). In Paris, between 25 and 50% of riders use e-scooter services at least once a week (6t-Bureau de recherche 2019a, b, 2020; Pestour 2019; Apur 2020). Similar rates are detected in Vienna, with 32% of all respondents using the e-scooter sharing system at least once a week (Laa and Leth 2020, p. 3). Data from the City of Chicago (2020, p. 28) indicate that most e-scooter users were infrequent or occasional users. Conversely, weekly-or-more usage rates reach 60% in Baltimore (U.S.) (BCDOT 2019, p. 27) and 70% in Oslo (Norway) in summer, although this drops to 42% heading into the autumn (, p. 23). Several factors can influence the travel frequency of shared e-scooters. The longer people have been subscribing, the more frequently they appear to use e-scooters, 62% of regular users having begun to access Dott at least one month prior to the survey period (6t-Bureau de recherche 2019a, p. 47). Younger respondents between 17 and 39 report higher scooter and bikeshare ridership: several times a week or every day for the 17-24 (49%) and the 25-39 (47%) age groups, compared to 25% for the 40-54 and 31% for the 55-64 age groups (BCDOT 2019, p. 27). Similarly, people who identify as Asian (57%), Black/African American (55%), or Hispanic/Latinx (55%) report being more frequent riders than those who identify as White (30%) (BCDOT 2019, p. 27). Unlike other modes, the use frequency of personal and shared e-scooters does not seem to be associated with the residential context, with little difference between the city centre, the inner and the outer suburbs (6t-Bureau de recherche 2020, p. 21).
Given the French legislation restricting the top speed of a scooter to 25 km/h, travel frequency is seen to have an effect on the perceived speed: frequent users report a driving speed above 20 km/h, compared with 15-20 km/h for occasional users (Pestour 2019, p. 39). The 6t-Bureau de recherche (2019b, p. 123) survey suggests that 72% of users are very much in favour of riding a rented e-scooter more often as long as the price is reduced. In terms of use intensity, only one scientific publication has focused on personal e-scooters, finding that 94% use them at least once a week (Laa and Leth 2020, p. 3). According to 6t-Bureau de recherche (2020, p. 22), 18% of e-scooter owners said they never use one, while frequency could also depend on the trip purpose: apart from commuting, scooters are used for shopping, leisure, culture, etc. at least once a week in 24% of situations. Purpose Commuting and leisure are the two main trip purposes for e-scooter services, whereas no extensive research on reasons for riding personal scooters seems to have been conducted (Fig. 5). E-scooter services are used for different purposes (McKenzie 2019a, p. 19). Destinations related to work or study are the most cited for shared e-scooters: 16-18% (6t-Bureau de recherche 2019a, b, pp. 34, 68) and 48% (39% and 9%) (Apur 2020, p. 58) in Paris; 39% (9% and 30%) (, p. 24). A data-based analysis of the distribution of bicycles and scooters in Zaragoza (Spain) shows a regular occurrence during peak hours towards the centre and the University in the morning, and in the opposite direction in the evening (López-Escolano and Campos 2019, p. 11). The same phenomenon is present with students in St. George (U.S.) (The Spectrum 2019). When grouped with shopping, restaurant, and errands, the statistics appear more consistent, mostly between 30 and 50% in Europe and in the United States (twice 29%; 30%; 40%; 50%; 55%; and 85%).
Moreover, a clear peak in usage is visible at the weekend and in the later hours of the day in Hamburg (Germany) and Louisville (U.S.), indicating recreational and tourist usage (Civity Management Consultants 2019; , p. 12). Thus, data from Indianapolis (U.S.) suggest that e-scooters are used less as a first/last-mile commute option and more as a mode for running short-distance midday errands, travelling around a campus, and leisure (, p. 48). Concerning the last trip made by shared e-scooter, the main reasons appear to be commuting, followed by errands: commuting trips have been made by shared e-scooter by 58% of users, most of the time for 19%, often for 15% and occasionally for 24% of them (6t-Bureau de recherche 2019b, p. 78). In Baltimore, respondents were asked to rank the three most common purposes of dockless bike or scooter trips: entertainment or socializing (50%), work or education (49%), shopping or errands (38%), business trips (37%) and connections to transit (22%) are the most highly cited (BCDOT 2019, p. 28). Intensive users explain that they mostly ride shared scooters to and from work and for business trips (6t-Bureau de recherche 2019b, p. 77). Few studies provide evidence on the purposes of private scooter use. Richer analysed some forty Cerema-certified mobility enquiries between 2015 and 2019 and determined that 24-30% of trips by personal scooter in France are for work or education, which is higher than the average for all trips (19%) and similar to private bikes and public bikeshare. In urban areas of over 200,000 inhabitants in France, 3% of surveyed commuters state that they are interested in riding a personal standing scooter (6t-Bureau de recherche 2020, pp. 22-23). According to ITDP (2019, p. 19), a non-profit organization, e-scooters are popular among commuters since they remove the need to change clothes or shower on arriving at work.
We also note that a rather stable share of individuals (10%) ride to or from a public transport station, indicating that personal and shared e-scooters are used for intermodal trips (, p. 24). Distance The scatter plot (Fig. 6) shows that private and shared e-scooters are best suited to short distances (, p. 3). E-scooter trips average between 1.5 and 3 km, lying at an intermediate position between the estimated ranges of walking and cycling: between 0.5 and 1 km for the pedestrian mode (, p. 2) and about 3 km for the non-motorised bicycle (van Oort 2020). On short-distance trips (1-4 km), dockless e-scooters would provide a new alternative to the private automobile in car-park-constrained environments (Smith and Schwieterman 2018, p. 9). In France, the median distance of private standing scooter trips is 1 km (1.6 km on average), as opposed to 0.6 km for walking and 1.4 km for private and shared bikes (Richer 2021). Data show that the distances travelled by e-scooter services are weather-dependent: trips between August and September averaged 1.6 km, while trips in January and February were closer to 1.1 km (BCDOT 2019, p. 14). Distance also seems to be influenced by the location inside the urban area: trips averaged 1.9 km in both the Baltimore and Portland centre areas, compared to averages of 2.9 and 2.6 km respectively near the city limits (BCDOT 2019; PBOT 2018, p. 11). From the 85th percentile onwards, there is a lower propensity to make long journeys by this new device (less than 2.4 km) than by public bikes (2.7 km) and especially by personal bicycle (3.6 km), which demonstrates a narrower range of distances for private scooters (Richer 2021). The same is observed for e-scooter services, with the 75th percentile below 2.2 km (, p. 45), although pricing may limit the distances travelled on longer journeys.
Regarding the distance covered by intermodal shared e-scooter trips in Paris, Lyon and Marseille, 6t-Bureau de recherche (2019b, p. 79) estimates that these daily trips are shorter than the average time-distance of all daily routes: these rides lasted 16.5 min, or an approximate distance of 4 km. Scooters combined with transit Various studies have identified the characteristics of scoot-and-ride practices. This part takes a closer look at the role of intermodality in the last scooter trips. Figure 7 shows a trend of around one third of scooter trips being used for access or egress trip chains. There is also a clear distinction in the type of combination depending on the area. In France, between a fifth and a quarter of standing scooter trips are combined with public transport, notably the metro and train. A multiple-choice questionnaire shows that 43% of standing scooter riders in Oslo associate this vehicle with the metro, 40% with the bus, 22% with the train, and 17% with the tram (, p. 25). For unmotorized scooters in Stuttgart, 25% of interviewees connect it with the tram and the bus, and 17% with the train (, p. 4). In Paris, 74% of users of shared e-scooters, bicycles, and mopeds have already linked one of these modes to the metro, 33% to the bus, 29% to the suburban trains (RER) and 20% to the tramway (Apur 2020, p. 60). Beyond the last ride, a large proportion of users sometimes or often use scooters to reach a public transport station. In the United States, one third of users usually connect a free-floating e-scooter with public transport (NACTO 2019, p. 9). This share is twice as high in Paris, where 70% of Lime users have made at least one intermodal trip in the previous month (Lime 2019, p. 5) and 60% of Voi users sometimes connect e-scooters with public transport (, p. 12).
Each month in the period from March to November 2021, between 43 and 50% of privately owned standing scooter and solowheel users in France switched between modes during the same trip (Mobiprox 2021). Methods The literature review on the use of scooters, for all forms of journeys, revealed several patterns depending on the type of scooter and the geographical area. With regard to personal scooters, the vast majority of users are young men who use them at least once a week, particularly to go to work or school. The distance covered using this micromobility device is estimated to be around 1.5 km, although this is influenced by trip purpose and by urban surroundings. These findings provide a baseline for this evidence-based research, which attempts to better understand the type of use and profile of public transport passengers using a private scooter in France. Objective This paper aims to achieve the following objectives through a secondary analysis of survey data owned by SNCF Réseau, the French national public rail network manager: 1. Identify and quantify transit passengers' basic travel characteristics and mode shares for rail station access and egress trips; 2. Understand the comparative advantages of intermodal practices related to the standing scooter, including what links it to and differentiates it from the bicycle and the car; 3. Map train station accessibility by scooter so that micromobility-friendly development recommendations can be made. Survey protocol This questionnaire was carried out as part of the studies prior to the public utility enquiry for the transport project "Ligne Nouvelle Provence Côte d'Azur" (LNPCA). This project aims to create three metropolitan express networks in the conurbations of Aix-Marseille, Toulon and the Côte d'Azur, to improve rail links between the three metropolises (SNCF Réseau n.d.). The face-to-face questionnaires were conducted by interviewing individuals going to the station, in order to select train passengers.
The survey was conducted on a Tuesday or a Thursday over a period from late September to early October 2020, excluding school holidays. These dates are part of a context linked to the COVID-19 health crisis and correspond to the period of free movement after the end of the first generalized lockdown on 11 May 2020, and shortly before the implementation of the second national lockdown on 29 October 2020. This station-based survey collected information on the age and gender of passengers, travel frequency and purpose, the municipality (or even the street) of trip origin, the destination station, transfer modes, as well as the effects produced by the pandemic. Description of the study area The twelve locations studied are similar in that none of them benefits from or operates a free-floating scooter service, so it can be assumed that all the scooters observed in the survey are owned by the user or borrowed for a long period of time. Similarly, the type of scooter identified varies according to the stations studied, although the trend is for electric scooters to predominate (Table 2). It is also relevant to note that the stations are not all integrated into the same urban context and do not all have the same size, capacity or quality of rail service. In addition, it can be assumed that the railway station areas are relatively poorly designed for cyclists and pedestrians, even for the busiest stations, although regional trains have facilities to accommodate bicycles and other micromobility devices for free. The data analysis considers these twelve sites before focusing on a single case study to illustrate our arguments. In view of this, the Mouans-Sartoux station attracted our attention for several aspects that ensure a comprehensive diagnosis: the station is located in the commune of Mouans-Sartoux in the Alpes-Maritimes department, with 10,000 inhabitants in 2018 and a population density of 733 per km², compared to 979 in Bandol, which has 8,000 inhabitants (Insee 2021).
It is located about 10 km from Grasse and Cannes. This train station, part of the Grasse-Cannes-La Bocca railway axis, registered 120,000 passengers in 2016, rising to 160,000 users in 2019 (SNCF 2018). As part of the reopening of the railway line in 2004, four stops were put into service, including the one at Mouans-Sartoux. The PACA Region and the Pôle Azur Provence Agglomeration Community have restored the original station with a waiting room, ticket offices and information facilities (Nice Premium 2011). The surroundings of the station are supplied with dedicated routes, car park facilities and one public bicycle storage, giving a total of 34 available spaces including the 10 "boxcyclettes" (OpenStreetMap 2021). The terminal is also bordered by a bus line to Grasse, around 290 free car parking spaces, dedicated carpooling spots and a park-and-ride facility to be opened in 2021 (Olivier 2021; OpenStreetMap 2021). Micromobility's modal share for access is relatively in line with the general average of the survey, standing at 2.80% for the standing vehicle and 4.50% for the bicycle (all types). The station has the benefit of capturing only surveyed passengers with electric scooters, allowing for a more in-depth insight into this emerging mode (Table 2). It is relevant to note that Mouans-Sartoux was evaluated by 90 of its inhabitants in the context of the French Cycling Barometer in 2019. The territory recorded a positive score of 3.69/6, against an average of 2.75/6 for cities with fewer than 20,000 inhabitants and 2.57/6 for the 16 ranked cities of the department, thanks in particular to the comfort, safety, urban design and the public efforts made by the municipality. Between the last two editions of 2017 and 2019, the cycling situation improved for 70% of respondents (FUB 2019). Sample description The study interviewed a total of 2537 passengers.
To the question "How did you get from your place of departure to this station?", 53 of them replied that they had accessed the station using "another mode", such as "scooter, skateboard, hoverboard, etc.". It is this sub-category of 53 passengers that is analysed in this article. It should be noted that observations made in parallel with the administration of the survey show that almost all the passengers interviewed who stated that they had accessed a station "with another mode (scooter, skateboard, hoverboard, etc.)" had indeed used a scooter. However, the standing scooter riders investigated were not categorized by type of scooter, i.e. whether it was a mechanical or an electric scooter: the field observations distinguishing electric from push scooters cannot be matched with the survey results for each individual. Fig. 9 Modal share and number of respondents by access mode. Source: authors, 2022. In the following, we consider that these travellers constitute the sub-sample of travellers accessing a station by scooter. Thus, it can be considered that 2.09% of those surveyed travelling to one of the railway stations used a standing scooter, which is higher than the shares observed in the previous literature review. Geographical analysis method A spatial analysis of the data collected in the case study supplements the statistical results, confronting scooter trips and other access modes with the urban environment. The departure and destination stations were requested in the questionnaire, together with the street and municipality of origin. However, the precise location of the destination of the trip was not collected during the survey. To map the origin and destination flows, we geocoded the stages of each respondent's journey according to the mode used to access the train station. The geocoding procedure follows a multistep approach (Fig. 10).
First, to standardize the names of the stations and obtain their geolocation, we extracted the SNCF database of stations with their coordinates. We merged the survey database with the SNCF Open Data using Python code which applies a textual filter on the headings of the origin and destination stations. Missing names in the survey results were checked manually by querying the SNCF API to obtain more conclusive results and to correct some names. The spatial data are taken from OpenStreetMap (OSM) 2 running Photon 3 to carry out reproducible and open-access work. We queried the Photon tool to generate unique addresses, which were georeferenced and allocated to each row. In more detail, the first step was to improve the quality of the station and street mailing addresses previously reworked by the owner of the database, with Python code (using a "SequenceMatcher"). This program resulted in the harmonization of station names to get better matches, and the implementation of a textual similarity index between the registered stations and the Open Data list 4 (with a limit set at 0.9, on a scale from 0 to 1.0). The second part of the process assigned latitude and longitude coordinates to the locations, while minimizing approximations in the titles. To match the OSM database, the Python code queried Photon, which provided exact or similar addresses with geographical coordinates. We then obtained unique addresses identified and matched with the survey data (with a textual similarity above or equal to 0.9) to combine names and spatial coordinates. This approach made it possible to calculate the straight-line and travel distances (taking the shortest route on roads) for both access 5 and train trips. 6 The determination of the geographical coordinates of the places of origin (address number, street and municipality), the departure and arrival stations and the municipalities of destination enables GIS-based analysis on QGIS.
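As a minimal sketch of the textual-similarity step described above: only the use of Python's SequenceMatcher and the 0.9 similarity limit come from the methodology; the station spellings and the best_match helper are hypothetical illustrations, not the code actually used.

```python
from difflib import SequenceMatcher

THRESHOLD = 0.9  # similarity limit used in the geocoding procedure

def best_match(name, reference_names):
    """Return the reference station name most similar to `name`,
    or None when no candidate reaches the 0.9 threshold."""
    scored = [
        (SequenceMatcher(None, name.lower(), ref.lower()).ratio(), ref)
        for ref in reference_names
    ]
    ratio, ref = max(scored)
    return ref if ratio >= THRESHOLD else None

# Hypothetical survey spellings checked against a reference list
reference = ["Mouans-Sartoux", "Grasse", "Cannes-la-Bocca"]
print(best_match("Mouans Sartoux", reference))  # → Mouans-Sartoux
print(best_match("Antibes", reference))         # → None
```

In this setup, near-duplicate spellings (hyphen vs. space) pass the 0.9 cut-off, while genuinely different names are rejected for manual checking, mirroring the harmonization step described above.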
Travel time, which was not recorded in the questionnaire, was also estimated in order to compare the efficiency of this modal combination with the private car. It is important to note that this predicted time only takes into account the movement itself, excluding the additional time required for preparation, congestion, parking, or waiting before reaching the station. This bias affects all modal chains, albeit non-uniformly, which still allows for a reasonable comparison of trip duration across mode combinations. In sum, the following statistical analysis is based on the 2537 responses to the questionnaire, while the spatial analysis of the data is based on 1095 location-based journeys. Results In this section, we focus on the main findings of the station survey regarding the profile and practices of scooter users in combination with the train, i.e. 2.09% of passengers surveyed. It should be pointed out that this share varies significantly depending on the twelve stations studied, and it would seem that the higher the modal share of bike-and-ride, the higher the share of scoot-and-ride (Table 2). Socio-demographic characteristics of scoot-and-ride travellers Individuals were interviewed about their age and gender, which could then be categorized by transfer mode. Other things being equal, there is an over-representation of the 18-34 age group among passengers accessing a station by scooter (Fig. 11). Compared to all modes, the majority age category of 18-24 is estimated at 29%, as against 34% for intermodal scooter riders. These disparities are more significant for the next category of 25-34, representing 16% of all passengers as opposed to 28% of scooter riders. Conversely, the 12-17 and over-55 age groups are less important in the distribution of intermodal scooter use by age: the youngest age group accounts for only 9% compared to 14%, and the oldest for 2% compared to 14% on average.
A less pronounced over-representation of scoot-and-ride users also stands out: the 45-54 age group, at 15% versus 13%. We can also see that the age-related characteristics of combined scooter users differ significantly from those of passengers surveyed accessing a station by bicycle. Relative to the age classes targeted by the scooter, the modal share of intermodal cycling is more concentrated among the over-35s, and particularly among adults aged 35-44 (21%), 45-54 (16%) and the over-55s (15%). The socio-demographic features of intermodal trips by scooter are reflected in significant gender inequalities. Although the sample achieved a balance of 51% men and 49% women for all access modes, 83% of the scooter users were men and only 17% were women (Fig. 12). These gender disparities are almost unchanged for cycling (79% men), while women are over-represented among car drivers (62%). Walking (54% men), the bus (54% men) and car passengers (56% women) are the least discriminating feeder modes for accessing one of the surveyed stations. Mobility behaviour when combining train and scooter Passengers were also surveyed on their mobility behaviour, i.e. travel frequency and purpose, as well as the type of transit access and egress mode along the intermodal trip. The personal scooter is a combined mode used much more frequently than all other modes, revealing a specific use of this micromobility, as shown in Fig. 13. 79% of those surveyed use a scooter as a feeder at least five days a week, whereas only half (56%) do so on average across modes. Intermodal daily trips by scooter differ from those by bicycle (67%) and car as a driver (65%), and more clearly from walking (57%), bus (56%), and car as a passenger (42%). In aggregate, almost all (94%) of the respondents use their scooters at least once a week to reach a station, compared to three quarters (77%) for all modes. The shares decrease to 88% for cycling, 86% for driving, 78% for walking and bus, and 63% for travelling as a car passenger.
The frequency of train use has remained broadly the same since the start of the COVID-19 health crisis for 63% of passengers and 58% of scooter users. However, 31% of all passengers and 28% of standing e-scooter users had not used the train before the pandemic, while 6% (overall) and 4% (scooter travellers) were using the train for the first time. It should be noted that 9% of scooter passengers say that they have increased their train use since the health crisis. As regards trip purpose, 81% of the rail passengers accessing a station by scooter were commuting, including 55% to work and 26% to study. This figure should be seen in the context of all modes, with train use being distributed between work (37%), study (31%) and leisure (20%). The commuting share by scooter is similar to the purposes observed for cycling (80%) and car driving (82%), but with a higher proportion of students among scooter users (Fig. 14). In contrast, the share of trips made by scooter and train for leisure purposes decreases to 8%, as for car drivers (8%) and cyclists (10%). To get an overview of intermodal trips, the survey sought to measure the proportion of passengers using the same mode for transit access and egress. The results clearly show that a very large majority (85%) of rail passengers use a scooter both before and after their train ride, as opposed to only 34% of all respondents. Figure 15 illustrates that several transfer modes are in fact rarely used at both ends of the rail trip, such as car passengers (2%), car drivers (11%) and bus users (17%). The values for cycling (59%) and walking (57%) tend to be closer to those for the standing scooter. Distances covered to reach stations The feeder trip distances were estimated for 46 of the 53 respondents accessing one of the twelve stations by scooter, according to the location of their residence. The average distance is 2.4 km, as shown in Fig. 16a.
The results show that for every 100 intermodal scooter users, 75 trips are longer than 1.4 km, 50 longer than 2 km, and 25 longer than 3 km. Based on travel times predicted with the OpenRouteService route planner (cycling profile), access trips take approximately 10.6 min. This figure is consistent with the widely used value of 10 min for the walkable radius around stations (Bertolini and Spit 1998). These distances travelled by scooter are higher than the average feeder trip, regardless of mode, which is 1.70 km (0.87 km when considering the median). The scooter-and-train segment of the intermodal chain covers an average of 40.5 km, as shown in Fig. 16b. The median access-and-train route is 35 km. Thus, the scooter seems to cover only 6% of the trip distance (without considering the egress segment from the station), whereas these first kilometres correspond to a quarter of the time spent on these trips. The study aims to compare the benefits of standing scooters combined with the train, as opposed to an entirely car-oriented trip, so as to determine the potential for modal shift from the private car. The data analysis examines the attractiveness of the scooter-and-train combination by measuring travel time against a trip that would have been made by car (Fig. 17). The estimated times by scooter and train to the destination station appear to be 9.8 min longer on average. This means that intermodal trips last 45.1 min on average, versus 35.3 min for a planned trip by car, disregarding parking and urban congestion times for the latter. In the 46 scoot-and-ride trips surveyed, the average duration of the trips is 22% longer than in a perfect scenario by car. However, it appears that this negative balance is prevalent among scoot-and-ride trips exceeding 20 km (Fig. 18). Among the nine trips characterized by a positive balance, no potential time savings could be identified beyond the 33 km threshold.
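The quartile reading and the time-balance comparison above can be reproduced with a short sketch. The distance values below are hypothetical placeholders (the survey reports quartiles of 1.4, 2 and 3 km); only the 45.1 and 35.3 min averages come from the survey, and the time_balance helper is our own naming.

```python
from statistics import quantiles

# Hypothetical access distances (km) standing in for the 46 surveyed trips
access_km = [0.9, 1.3, 1.6, 2.0, 2.2, 2.9, 3.4, 4.1]
q1, median, q3 = quantiles(access_km, n=4)  # quartiles as read off Fig. 16a
assert q1 < median < q3

def time_balance(scoot_train_min, car_min):
    """Positive result: the scooter-and-train chain beats the car."""
    return car_min - scoot_train_min

# Average intermodal trip (45.1 min) vs. unconstrained car trip (35.3 min)
print(round(time_balance(45.1, 35.3), 1))  # → -9.8
```

A negative balance reproduces the average deficit of 9.8 min reported above; a positive balance identifies the trips (10 of the 23 under 33 km) where the scooter-and-TER mix competes with the car.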
On the contrary, the most significant differences in favour of the car (i.e. between −40 and −50 min) are mainly observed above the median distance (41.7, 44.6, 61.4, 65 and 166 km). Within the scope of the urban area accessible by regional train (TER), 23 of the 46 intermodal trips surveyed do not exceed 33 km, and 10 of them compete with the car even in an unconstrained environment. Taking into account these 23 journeys, i.e. half of the sample, the average time balance is in favour of the e-scooter and TER mix, saving 3.22% of time compared with the private car. It should be noted that this graph of comparative time savings between the two modal choices cannot be generalised, as the results of the regression are strongly dependent on the sample.

Focus on the Mouans-Sartoux station

Once the characteristics of the intermodal trips and the users are established, it is meaningful to analyse one station in more detail in order to better understand the effects of these mobility practices on the radius of a station area. To this end, we consider Mouans-Sartoux station, located in a small, dense town of 10,000 inhabitants. By plotting the origin locations of passengers who used a scooter or a bicycle to reach the train station, hypothetical routes between the two places have been mapped out based on the shortest route, excluding restricted traffic areas (Fig. 19: map of the first segments of the intermodal chain by e-scooter or bicycle to Mouans-Sartoux station; source: authors, 2022). The median distance covered by e-scooter is 1.5 km, against 3.6 km for the bicycle. Between these micromobility modes lies the private car, with a passenger range of 3.2 km. Lastly, walking reaches a 700 m median around the station. The spatial analysis of Mouans-Sartoux station investigates the urban environment around passengers who chose a scooter or a traditional, folding or electric bike to catch their train. By measuring the density of inhabitants within each 200 × 200 m grid cell, we infer that scooter users are more likely to start from a relatively dense place in relation to the average population distribution of the commune, whereas cyclists are more likely to come from less dense areas (Fig. 20).

Discussion

The main results obtained from the analysis of this survey, collected in twelve stations in the Provence-Alpes-Côte d'Azur region, show that the use of scooters in combination with the train is not marginal, despite its novelty in the modal range of options. Its modal share matches that of motorbikes and is almost equal to that of bicycles. This indicator, which amounts to 2.09%, is much higher than the 0.8% and 1.25% recorded by Gioria (2016, p. 14) and Enov (2021, p. 18), revealing the rise of this transfer mode, particularly with regard to the e-scooter. Following the COVID-19 health crisis, the growing modal share of the standing scooter in association with the train resonates with the significant increase in e-scooter sales recorded in France (FP2M and SML 2021). More broadly, this could be explained by the resilience of walking and cycling (Héran 2020). The upward trend in these statistics could be explained by the development of electric scooters, as shown by the high proportion (approximately 90%) of this motorized device in this survey. The importance of the electric motor in the combined scooter distinguishes it from the bicycle, which remains predominantly traditional, as the above-mentioned report demonstrates with a ratio of 0.20 for the intermodal e-bike compared to 0.58 for the e-scooter (Enov 2021, p. 17). It can be deduced that the prevalence of this type of scooter at least partly supports the trips' range of more than 2 km.
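The density comparison described above can be sketched as a simple grid-binning step; the coordinates and cell populations below are hypothetical, not the census or survey data.

```python
# Sketch of the 200 m x 200 m density comparison around a station.
# All coordinates and population figures are illustrative placeholders.
CELL = 200.0  # grid resolution in metres

def cell_of(x, y):
    """Map a metric coordinate to its 200 m x 200 m grid cell."""
    return (int(x // CELL), int(y // CELL))

# population per grid cell (hypothetical)
density = {(0, 0): 5600, (0, 1): 1600, (1, 0): 12400, (1, 1): 300}

# origin points of access trips, tagged by mode (hypothetical)
origins = [("scooter", 150, 60), ("scooter", 310, 40),
           ("bike", 90, 230), ("bike", 130, 260)]

def mean_origin_density(mode):
    """Average cell population density over the origins of one mode."""
    cells = [cell_of(x, y) for m, x, y in origins if m == mode]
    return sum(density[c] for c in cells) / len(cells)

print(mean_origin_density("scooter"))  # scooter origins sit in denser cells
print(mean_origin_density("bike"))
```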
This can be associated with the 3 km station radius by car defined in the French territorial engineering literature (Hasiak and Bodard 2018, p. 3). Regardless of the mode, this distance exceeds the average distance travelled to reach one of the surveyed stations, even when active modes are omitted. Similarly, the scooter was seen to serve large areas of over 40 km as a result of its combination with the train. These results echo studies by Rabaud and Richer (2019, p. 7), who evaluate the effective distance by scooter at roughly 23 km in France, and by Edel, Wassmer, and Kern (2021, p. 8), who note that 44% of such trips cover more than 20 km in Hannover (Germany). The greater distance covered by bike to reach the rail station is found in several studies, such as BiTiBi (2017, pp. 20, 28), which estimates the access trip at 4 km for a total of 35 km in Belgium and Liverpool. A quarter of the time spent on the intermodal trip (excluding egress) depends on the first-mile stage by scooter, supporting the portion identified by Krygsman, Dijst, and Arentze (2004, p. 268). The average access time by scooter is about 10 min, while Enov (2021, p. 20) measured 15 min for all modes considered in three major railway stations in the country. Although this intermodal practice seems to overcome both the short range of the scooter and the fixed-route nature of the train, the time required for these trips does not appear competitive with monomodal use of the private car in regional city fringes (considering the absence of parking time and urban congestion). This differs from the model of Schlueter Langdon and Oehrlein (2021, p. 9), which illustrates that the integration of micromobility with public transport is faster than the car. A possible explanation is that the methodological approach in this paper is not sufficiently complex, and that the case study is located in a sparsely populated area beyond the city limits.
The variations based on urban forms have been taken into account by Ensor, Maxwell, and Bruce (2021, p. 69), who predict a 1% reduction in car use in New Zealand's fringe areas, compared to 3% in the inner areas of major cities, in a scenario of substantial availability of shared and owned micromobility. These calculations need to be qualified in terms of time and congestion conditions, as indicated by the research of McKenzie (2019b, p. 9), which shows that ride-hailing is faster than scooter services for most of the day, apart from weekday commuting hours. In contrast to the car, scooter origin points were mostly observed in relatively dense areas in Mouans-Sartoux, despite the absence of cycle routes. This result suggests that the use of scooters in dense neighbourhoods may be explained by parking and traffic constraints that encourage the use of alternatives to access the station. This hypothesis is in line with the findings of Smith and Schwieterman (2018, p. 9) that most shared e-scooters are time-competitive with the car between 0.8 and 3.2 km in Chicago's parking-constrained environments. Another significant aspect of this "Hybrid Urban Mobility tool" (Kostrzewska and Macikowski 2017, p. 7) is the way this mode is used: the scooter carried on board the train is used on a daily basis and for work or study purposes. These characteristics reflect the convenience provided by the scooter in combination with rail transport. The proportion of regular scooter users on the train largely exceeds the average share of daily train passengers. This outcome matches the report on French metropolitan railway stations that found twice as many frequent passengers using micromobility (Enov 2021, p. 18). The high frequency of scooter use, like cycling and driving, appears to be related to commuting. Conversely, it is interesting that few scooter users take the train for leisure or social purposes, in sharp contrast with shared e-scooters.
This can be seen in the context of the 6t-Bureau de recherche (2020, p. 40) report stating that 45% of respondents would not use a scooter for shopping purposes for fear of theft, versus only 15% who are reluctant to use this mode for commuting. A last distinctive feature of the scooter appears among the properties examined: unlike cyclists, nearly all scooter passengers use this mode for both access and egress. This individual strategy corresponds to the fifth category of micro-vehicle and public transport integration outlined by Oeschger, Carroll, and Caulfield (2020, p. 3). This indicator shows that the scooter has an advantage when combined with the train, as it can be carried on board more easily than a bicycle, especially during peak periods. The standing scooter also offers the advantage of reaching a wider range of destinations once the traveller has left the train. This advantage is not found in shared scooter use, where 44% of riders switch to a different mode for their return trip (6t-Bureau de recherche 2019b, p. 80). Having discussed trip patterns, this paper addresses the socio-demographic characteristics of passengers using scooters. The main user profile is a young man between 18 and 34. These outcomes show a complementarity with the age distribution of cycling passengers, who are over-represented from 35 years onwards. A question emerges with regard to the youthfulness of scooter users: is this an age-related effect or rather a generational effect, suggesting a gradual increase in the ageing proportion of standing scooter users on the train? The results also suggest a more pronounced gender difference associated with the private scooter, as evidenced by the online survey of Laa and Leth (2020, p. 3). This investigation reveals a travel pattern which tends to exclude part of the public. While these findings partly jeopardize sustainable mobility goals, the scooter may be able to turn the tide.
Following Héran, mention should be made of cities such as Copenhagen or Strasbourg (France), which achieved gender parity in cycling by means of a safer environment (Schepman 2014). One of the first barriers to micromobility for women is risk aversion, especially due to the volume and speed of traffic (p. 57). Still, scooter users are the ones who feel most unsafe when riding, rating themselves at 6.3/10, according to a French study undertaken by Smart Mobility Lab (2020, p. 25). Broach (2016, p. 118) shows that women were around 38% less likely to cycle and that the availability of low-traffic routes may be particularly important to them. In addition, Clewlow (2018, p. 14) argues that the e-scooter could be a catalyst for attracting more women, since the electric assistance makes it possible to bridge the distances to which they seem more sensitive, while the standing position is more suitable for some types of clothing. The scooter as an intermodal device can be more gender-inclusive, as demonstrated by the second generational peak of users identified in the study, reminiscent of the customer clusters observed by Degele et al. (2018, p. 4) for dockless e-scooters in Germany. These two issues represent a challenge for the scooter insofar as the population is ageing and the evidence shows that women behave more virtuously towards the environment than men, except for mobility (Pech and Witkowski 2021, p. 26). To enable scoot-and-ride to contribute fully to the sustainable mobility system, several recommendations intended for planners and transport operators are highlighted in this study. To ensure a significant modal shift from car drivers to micromobility, it seems necessary both to promote the use of these individual light modes and to curb the development of the car (p. 530) within several-kilometre "bubbles" (Canepa 2007, p. 31).
We did not observe scoot-and-ride competing directly with bike-and-ride (which shows a similar evolution at each station) or with walking over greater distances. These two small modes have the potential to substitute for park-and-ride practices, with which they share common features: frequency, purpose and even passenger age. Replacing "kiss and ride" by a micromobility ride is especially attractive because it cuts the distance travelled by car twice over, for the passenger and for the driver, since the driver often has to make a round trip or extend his journey by detouring to the station (Litman 2021, p. 50). While the first proposal is the development and maintenance of cycle paths adapted to all types of micromobility (bike, scooter, skateboard, etc.), such a measure would not be effective on its own if the supply of services dedicated to the car, notably parking, remains abundant. The value of determining the average range of the scooter combined with the train is then to define the catchment area in which residents could adopt this mobility solution. For land-use planning stakeholders, this geographic area has become a key territorial challenge for enhancing rail service use (Hasiak and Bodard 2018, p. 1). Urban design should be considered one of the solutions for promoting lower-carbon access to stations, as the modal shift from car to rail is significantly influenced by the quality of the connection to the station (Stránský 2019, p. 38). In addition to protected cycle paths separated from car traffic, volume and speed moderation and a reduction in car parking (Richer 2021), the rail operator could work on station, platform and train design to ease access for scooters and bikes, since 23% of scoot-and-riders or bike-and-riders express a desire to transfer without barriers (Enov 2021, p. 35).
An alternative to reducing the capacity of car parks in station areas is to manage access by allowing free parking only for commuters who have no other access to the station than by car, to prevent them from giving up the train. Focusing on the case study of Mouans-Sartoux, it was seen that the municipality is making efforts to develop cycling in the railway station area, but also favours car use nearby. The most striking initiative is the opening of the Château de Mouans-Sartoux car park in 2021, providing 245 new free spaces to ensure train and car intermodality (Ecomnews n.d.; Olivier 2021). As is widely admitted, urban decision-makers and transit agencies should invest in cycle-and-ride rather than park-and-ride facilities, for reasons of space, cost-effectiveness and the environment (Pucher and Buehler 2009, p. 79). Chan and Farber (2020, p. 2175) suggest that car parking availability, together with measures encouraging active transportation, may generate conflicts and deter passengers from using micromobility for access. Given the park-and-ride facility opened in August 2021, one recommendation for the Mouans-Sartoux station area would be to transform the other spaces dedicated to car parking into public spaces carrying protected cycle routes that favour a greater diversity of micromobility users. These planning recommendations would increase the competitiveness of micromobility integrated with efficient train lines with respect to car travel, within extended catchment areas in less dense areas, as CPB (2020, p. 63) points out.

Conclusion

In this study, the secondary analysis of a survey conducted in twelve French train stations, between the first and second lockdown measures due to the COVID-19 health crisis, examines in more detail the usage patterns and socio-demographic characteristics of the scooter integrated in an intermodal trip. A sample of 53 passengers accessing one of the surveyed stations by scooter was analysed.
The results suggest that the use of this light vehicle enlarges the service area within transit isochrones, covering a 2 km distance (11 min) at the interface between walking and cycling. This means that the coverage of the scooter is 5.5 times as wide as that of walking. Regarding the riders, young adult men who frequently take their scooter on board the train, for both access and egress, to study or work, are overrepresented: 83% of surveyed scoot-and-ride passengers are male and 62% are between 18 and 34 years old. 79% of scooter respondents access the train station at least five times a week, 81% were going to work or study when questioned, and a third of these commuters are students. During the same trip, the scooter is used as both an access and an egress mode by 85% of riders. It has been seen that the combination of the two modes takes place in dense areas and could be time-competitive with the car. The main findings of this paper revisit the conventional scale of station areas, based on a physical and mental boundary of 0.8 km, by inviting planners to integrate first- and last-mile connections to public transport. Indeed, the scope of the Transit-Oriented Development (TOD) urban model in most countries corresponds to a "pedestrian pocket" around a transit stop (Calthorpe 1993) with a radius between 0.4 and 0.8 km by foot or bike. Referring to the founding principles of TOD, the synergy created by associating the use of scooters or bicycles with transit extends beyond the walkable "primary area": riders reach the "secondary area" (Calthorpe 1993, p. 87) of TOD, which covers up to 1.6 km and connects low-density housing (p. 112). Thus, this emerging mode represents an opportunity to strengthen the TOD model by supporting public transit and creating a pedestrian- and cycling-friendly urban environment that reduces vehicular traffic congestion. By "bursting" the TOD radii (Canepa 2007, p.
34) and thereby increasing potential transit ridership, this intermodal perspective amounts to redesigning the TOD concept by integrating cycling modes, through the "Bicycle-based TOD" (B-TOD). This implies the need for car restrictions and alternative incentives, as recommended in the case study, to encourage modal shift to rail and less energy-consuming intermodal practices. To achieve this, it is essential for urban decision-makers to further redesign public spaces around stations by reducing the space occupied by car parking and traffic, and by developing cycle routes. Looking at the potential virtues of this synergy, local and regional authorities could benefit from its success in achieving sustainable and socially fair development goals. Extending railway station areas by promoting micromobility could reflect the emerging concept of the "15-minute city" (Duany and Steuteville 2021), where the combination of walking, cycling, scootering and public transport (Sadik-Khan 2021) would ensure that most commonly accessed services and activities can be reached within a 15-min walk or cycle ride (Moreno and Hjelm 2021). The contributions of this work can be extended to gain a deeper understanding of the intermodality associating the standing scooter with rail. First, the current survey lacked the precise location of trip destinations, precluding a detailed comparative analysis of the first and last mile by scooter. Additionally, future empirical studies should consider several indicators not addressed here in relation to the scooter and public transport combination: distinguishing mechanical from electric scooters, users' incomes and socio-professional categories, routes used to reach destinations, modes substituted, reasons for mode choice, propensity to adopt this micro-vehicle, and perception and acceptance.
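The catchment-area extension discussed above can be made concrete with a back-of-the-envelope ratio; using the 0.8 km pedestrian-pocket radius as a baseline yields a slightly larger factor than the paper's 5.5× coverage figure, which presumably rests on a different walking radius.

```python
import math

# Catchment-area comparison: a 2 km scooter access radius versus the
# 0.8 km walkable TOD radius (both radii quoted in the text; the area
# ratio itself is our illustration, assuming circular catchments).
walk_r, scoot_r = 0.8, 2.0                       # km
ratio_area = (scoot_r / walk_r) ** 2             # area scales as r^2
scoot_area = math.pi * scoot_r ** 2
print(f"scooter catchment covers {ratio_area:.2f}x the walkable area "
      f"({scoot_area:.1f} km^2)")
```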
Future work examining the combination of scooters with urban public transport may also provide interesting results for comparison with the findings of this paper. Several questions arise and take the form of future challenges. With a modal share of 2.09% for station access by scooter, taking this folding mode on board trains does not currently seem to be an issue. However, to what extent can scoot-and-ride practices develop before their presence on trains disturbs passengers on board? Alongside this, at what point does the design of transit vehicle types become a limiting element in the development of scoot-and-ride? To anticipate this challenge, regional trains should integrate facilities for folded standing scooters, in the same way as for bicycles. Railway stations should also provide secure parking spaces for private micromobility, especially for first- or last-mile riders, or as an alternative should the scoot-and-ride trend develop strongly.
Tracking technology: lessons learned in two health care sites. The aim of this study is to describe the process of staff and patient adoption of, and compliance with, a real-time locating system (RTLS) across two health care settings, and to present lessons learned. While previous work has examined the technological feasibility of tracking staff and patients in a health care setting in real time, these studies have not described the critical adoption issues that must be overcome for deployment. The ability to track and monitor individual staff and patients presents new opportunities for improving workflow and patient health and for reducing health care costs. An RTLS was introduced in both a long-term care facility and a polytrauma transitional rehabilitation program (PTRP) in a Veterans Hospital to track staff and patient locations, and five lessons learned are presented from our experiences and responses to emergent technological, work-related and social barriers to adoption. We conclude that successful tracking in a health care environment requires time and careful consideration of existing work, policies and stakeholder needs, which directly impact the efficacy of the technology.
Effect of interfiber bonding on the rupture of electrospun fibrous mats Electrospun fibrous mats have a wide range of applications, and characterizing their mechanical behavior is an important task. In addition to the mechanical properties of the individual fibers, other factors can alter the overall mechanical behavior of the mat. In this study, we use computational and experimental methods to investigate the effect of interfiber bonding on the failure and rupture of typical fibrous mats. A non-linear finite element model of a mat is simulated with randomly distributed fibers at different porosities. The percentage of bonding between intersecting fibers is controlled by an auxiliary code. The results reveal that interfiber bonding increases the stiffness of the mat, and the toughness of the mat increases as well. Interestingly, a large percentage of interfiber bonding at a predefined porosity does not increase the elastic modulus of the mat, nor does it have considerable effects on the failure behavior. Moreover, the effect of interfiber bonding increases with a mat's porosity. The findings of this study could help tune the mechanical properties of fibrous mats used for different applications.
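The abstract describes an auxiliary code that controls the percentage of bonding between intersecting fibers. The sketch below illustrates only that bond-selection step on a toy random network (the actual model is finite-element); all function names and values here are our own illustration.

```python
import random

# Toy random fibre network: select a controlled fraction of the
# fibre-fibre crossings to act as bonds. Purely illustrative.
random.seed(0)

def make_fibers(n):
    """Random straight fibres in a unit square: ((x1, y1), (x2, y2))."""
    return [((random.random(), random.random()),
             (random.random(), random.random())) for _ in range(n)]

def segments_intersect(a, b):
    """True if two open segments properly cross each other."""
    (p1, p2), (q1, q2) = a, b
    def cross(o, u, v):
        return (u[0]-o[0])*(v[1]-o[1]) - (u[1]-o[1])*(v[0]-o[0])
    return (cross(p1, p2, q1) * cross(p1, p2, q2) < 0 and
            cross(q1, q2, p1) * cross(q1, q2, p2) < 0)

def bonded_pairs(fibers, bond_fraction):
    """Bond a given fraction of the intersecting fibre pairs."""
    crossings = [(i, j) for i in range(len(fibers))
                 for j in range(i + 1, len(fibers))
                 if segments_intersect(fibers[i], fibers[j])]
    k = round(bond_fraction * len(crossings))
    return random.sample(crossings, k)

fibers = make_fibers(30)
bonds = bonded_pairs(fibers, 0.5)   # 50% interfiber bonding
print(len(bonds), "bonded crossings")
```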
Observation of a Free-Shercliff-Layer Instability in Cylindrical Geometry We report on observations of a free-Shercliff-layer instability in a Taylor-Couette experiment using a liquid metal over a wide range of Reynolds numbers, $Re\sim 10^3-10^6$. The free Shercliff layer is formed by imposing a sufficiently strong axial magnetic field across a pair of differentially rotating axial endcap rings. This layer is destabilized by a hydrodynamic Kelvin-Helmholtz-type instability, characterized by velocity fluctuations in the $r-\theta$ plane. The instability appears with an Elsasser number above unity, and saturates with an azimuthal mode number $m$ which increases with the Elsasser number. Measurements of the structure agree well with 2D global linear mode analyses and 3D global nonlinear simulations. These observations have implications for a range of rotating MHD systems in which similar shear layers may be produced.
The destabilization of rotating sheared flows by an applied magnetic field in magnetohydrodynamics (MHD) is a topic with astrophysical and geophysical implications, and has been the subject of a number of experimental and theoretical efforts. Such destabilization can be caused by the magnetorotational instability (MRI), in which a magnetic field of sufficient amplitude can destabilize Rayleigh-stable rotating sheared flows. In this Letter, we report the observation of an instability which, like the MRI, appears in a sheared rotating fluid when a magnetic field is applied. But rather than playing a role in the dynamics of the instability, as in the case of the MRI, the magnetic field here acts to establish free shear layers which extend from axial boundaries and which are subject to a hydrodynamic instability. Hartmann and Shercliff laid the groundwork in understanding the effect of magnetic fields on shear layers in conducting fluids. Hartmann studied boundary layers normal to an external applied field, and Shercliff extended his analysis to include boundary layers parallel to the applied field. Free Shercliff layers can be established in rotating MHD systems when the line-tying force of an axial magnetic field extends a discontinuity in angular velocity at an axial boundary into the bulk of the fluid. These shear layers are similar to the Stewartson layers that extend from discontinuous shearing boundaries in rapidly rotating systems, but for the free Shercliff layer discussed here, it is the magnetic field tension rather than the Coriolis force that leads to equalization of the angular velocity in the axial direction. Free Shercliff layers were first realized experimentally by Lehnert in a cylindrical apparatus with a free surface at the top and a rotating ring at the bottom axial boundary. 
Lehnert observed the formation of vortices at the location of the shear layers, though he attributed their formation to discontinuities in the free surface at the shear layer location rather than to the shear itself. These layers were then described analytically by Stewartson and Braginskii. The formation of free Shercliff layers in a cylindrical Taylor-Couette device has been predicted computationally, but these simulations were axisymmetric and thus incapable of evaluating the stability of these shear layers to nonaxisymmetric perturbations. Both free Shercliff layers and Stewartson layers can be present at the tangent cylinder of spherical Couette systems. The Kelvin-Helmholtz destabilization of these layers has been studied extensively through computation. Stewartson layers have been observed experimentally in spherical and cylindrical geometry and are found to be unstable to nonaxisymmetric modes, which is consistent with simulations. The Princeton MRI experiment is a Taylor-Couette apparatus consisting of two coaxial stainless steel cylinders as shown in Fig. 1. The gap between the cylinders is filled with a GaInSn eutectic alloy which is liquid at room temperature. Differential rotation of the cylinders sets up a sheared rotation profile in the fluid. If the cylinders were infinitely long, the fluid between the cylinders would assume an angular velocity at radius r matching the ideal Couette solution in steady state, Ω(r) = a + b/r². The constants a and b are found by matching the solution to the imposed rotation rates at the inner and outer cylinder boundaries. In conventional Taylor-Couette devices, the endcaps are typically corotated either with the inner or outer cylinder. This produces strong secondary circulation and angular momentum transport to the axial boundaries, resulting in a deviation from the ideal rotation profile.
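As a quick check on the ideal Couette profile quoted above, the two constants can be solved directly from the boundary conditions; the radii and rotation rates below are illustrative, not the experiment's parameters.

```python
# Worked example of the ideal Couette profile Ω(r) = a + b/r².
# Input values are illustrative placeholders.
def couette_coeffs(omega1, omega2, r1, r2):
    """Return a, b such that Ω(r1) = omega1 and Ω(r2) = omega2."""
    a = (omega2 * r2**2 - omega1 * r1**2) / (r2**2 - r1**2)
    b = r1**2 * r2**2 * (omega1 - omega2) / (r2**2 - r1**2)
    return a, b

a, b = couette_coeffs(omega1=40.0, omega2=5.0, r1=0.07, r2=0.20)
omega = lambda r: a + b / r**2
print(omega(0.07), omega(0.20))  # recovers the imposed boundary rates
```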
A novelty of this apparatus is the configuration of the axial endcaps, each of which is split into two differentially-rotatable acrylic rings, giving four independent rotation rates: those of the inner cylinder, outer cylinder, inner rings, and outer rings. In previous experiments using water as the working fluid, this configuration was very effective at reducing the influence of the axial boundaries, allowing the generation of quiescent flows in the bulk of the fluid with Reynolds numbers Re = Ω₁r₁(r₂ − r₁)/ν above 10⁶. The experimental parameters are shown in Table I. Each endcap is split into an inner ring (IR) and an outer ring (OR). Differential rotation of these rings produces a discontinuity in the angular velocity at the axial boundary. Overlaid on the right half of the figure is a plot of the shear (r/Ω)(∂Ω/∂r) from a nonlinear MHD simulation with differential rotation between the endcap rings and a strong axial magnetic field. The free Shercliff layers are the regions of strong negative shear extending from the interface between the rings. Fluid velocities are measured with an ultrasound Doppler velocimetry (UDV) system. Ultrasonic transducers are mounted on the outer cylinder at the midplane of the experiment. A transducer aimed radially and others aimed tangentially to the inner cylinder allow determination of the radial and azimuthal velocity components. Two tangential transducers aimed identically but separated azimuthally by 90° provide information about azimuthal mode structure. A set of six solenoidal coils applies an axial magnetic field to the rotating fluid. Fields below 800 Gauss can be applied indefinitely, while the application time for higher fields is limited by the resistive heating of the coils. An array of 72 magnetic pickup coils placed beyond the outer cylinder measures ∂B_r/∂t. Experiments were run using both Rayleigh-stable and -unstable flow states.
The Rayleigh-stable states had component rotation speeds in a set ratio for the inner cylinder, inner ring, outer ring, and outer cylinder, respectively. The ideal Couette solution for these inner and outer cylinder speeds satisfies Rayleigh's stability criterion that the specific angular momentum increase with radius: ∂(r²Ω)/∂r > 0. A single run of this experiment starts with an acceleration phase of two minutes, during which the sheared azimuthal flow develops. The axial magnetic field is then applied, initially resulting in the damping of hydrodynamic fluctuations. If the magnetic field is strong enough that the Elsasser number Λ = B²/(4πρη∆Ω) > 1, where ∆Ω is the difference between the inner- and outer-ring rotation rates, the instability grows as a large-scale coherent mode. It manifests itself as a fluctuation in the radial and azimuthal velocities, where significant perturbations of more than 10% of the inner cylinder speed are observed. An ultrasonic transducer inserted on a probe and aimed axially at an endcap did not measure axial velocity fluctuations when the instability was excited, suggesting that the flow due to the instability is mainly in the r–θ plane. Correlated magnetic fluctuations are observed at the highest rotation rates and applied fields. The instability develops on both the Rayleigh-stable and -unstable backgrounds, and typical mode rotation rates exceed the outer cylinder rotation rate Ω₂ by ∼0.1(Ω₁ − Ω₂). The instability was observed over a range of more than 3 orders of magnitude in rotation rate in the Rayleigh-unstable configuration, as shown in Fig. 2, with Re = 820 − 2.6 × 10⁶. The instability is present even with a magnetic Reynolds number Rm = Ω₁r₁(r₂ − r₁)/η ∼ 10⁻³, indicating an inductionless mechanism in which induced magnetic fields are dynamically unimportant.
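The dimensionless numbers above can be evaluated with a short script. The sketch below writes them in SI form (the Letter's Gaussian 4π becomes μ₀); the GaInSn properties are approximate literature values, and the radii, rotation rates and field are illustrative, not the experiment's parameters.

```python
import math

# Reynolds, magnetic Reynolds and Elsasser numbers, SI form.
# All material properties are approximate; geometry is illustrative.
mu0 = 4e-7 * math.pi          # vacuum permeability, H/m
rho = 6.4e3                   # GaInSn density, kg/m^3 (approx.)
nu = 3.0e-7                   # kinematic viscosity, m^2/s (approx.)
eta = 0.25                    # magnetic diffusivity, m^2/s (approx.)

r1, r2 = 0.07, 0.20           # cylinder radii, m (illustrative)
omega1 = 40.0                 # inner cylinder rotation, rad/s
domega_rings = 10.0           # inner/outer ring differential, rad/s
B = 0.3                       # applied axial field, T

Re = omega1 * r1 * (r2 - r1) / nu
Rm = omega1 * r1 * (r2 - r1) / eta
elsasser = B**2 / (mu0 * rho * eta * domega_rings)
print(f"Re={Re:.2e}  Rm={Rm:.2e}  Elsasser={elsasser:.2f}")
```

With these numbers the Elsasser criterion Λ > 1 is comfortably met, the regime in which the Letter reports instability onset.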
For Λ of order one, the primary azimuthal mode number at saturation is m = 1, with phase-locked higher-order mode numbers typically present at smaller amplitude. The measured mode structure is shown in Fig. 3. It is common for an m = 2 mode to grow before an m = 1 dominates at saturation. The necessity of shear at the axial boundary has been verified experimentally. Experiments were performed with the components rotating in the standard Rayleigh-stable configuration, but with a number of different inner ring speeds. The critical magnetic field for instability varied with the differential rotation between the endcap rings as expected. When the inner rings and outer rings corotated, the instability was not observed. The free shear layer has been measured experimentally at low Re and high Λ, where it penetrates to the midplane of the experiment as shown in Fig. 4. The width of the layer measured just before the onset of instability is consistent with the expected width scaling for a Shercliff layer, ∼1/√M, where the Hartmann number M = Bl/√(4πρνη) and l = r₂ − r₁ is a characteristic length. The onset of the instability is associated with a decrease in the mean shear in this layer. Nonlinear numerical MHD simulations have been performed with the HERACLES code, modified to include finite viscosity and resistivity. The simulations were performed in the experimental geometry with a 200 × 64 × 400 grid in r, θ, and z, with Re = 4000 and a range of Rm and M. These simulations show the formation of the free Shercliff layer extending from the discontinuity at the axial boundaries, as shown in Fig. 1. The axial length of the shear layer scales with √Λ, which seems to arise from a competition between magnetic forces, which act to extend the shear layer into the fluid, and poloidal circulation generated by the axial boundaries, which acts to disrupt the free shear layer.
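A rough sense of the layer-width scaling δ ∼ l/√M can be had numerically. The sketch below uses the SI form of the Hartmann number, M = Bl√(σ/ρν); the fluid properties are approximate GaInSn values and the field strengths are illustrative.

```python
import math

# Shercliff-layer width estimate delta ~ l / sqrt(M).
# Material properties approximate; field strengths illustrative.
sigma = 3.3e6                 # electrical conductivity, S/m (approx.)
rho = 6.4e3                   # density, kg/m^3 (approx.)
nu = 3.0e-7                   # kinematic viscosity, m^2/s (approx.)
l = 0.13                      # gap width r2 - r1, m (illustrative)

def layer_width(B):
    """Shercliff-layer width for an applied axial field B (tesla)."""
    M = B * l * math.sqrt(sigma / (rho * nu))   # Hartmann number
    return l / math.sqrt(M)

for B in (0.1, 0.3, 0.8):     # a stronger field gives a thinner layer
    print(f"B={B} T  delta = {layer_width(B)*1e3:.1f} mm")
```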
The simulations also produce an instability requiring Λ > 1 for onset, and suggest that a minimum penetration depth of the shear layer is required for development of the instability. Like the experimental observations, the unstable modes exhibit a spiral structure, and a cascade is observed from higher azimuthal mode numbers during the growth phase of the instability to a dominant m = 1 at saturation. A global linear stability analysis was performed to investigate unstable modes in the experimental geometry. The analysis found eigenvalues of the linearized nonideal MHD equations discretized across 2048 grid cells in the radial direction, assuming sinusoidal azimuthal dependence with a specified mode number and no axial dependence. Unstable hydrodynamic solutions were sought for realistic fluid parameters and for a zeroth-order background rotation profile consisting of a free shear layer represented by a hyperbolic tangent centered between the inner and outer cylinders. Angular velocity profiles with a sufficiently narrow shear layer were found to be hydrodynamically unstable to nonaxisymmetric Kelvin-Helmholtz modes with a structure similar to those observed experimentally. The most unstable mode number increases with decreasing shear layer width, similar to the experimental observations of the saturated states. The results presented here describe a minimum magnetic field required for onset of the instability. Simulations have shown that a sufficiently strong magnetic field will restabilize this instability, similar to simulation results in spherical geometry. Experimentally, the decreasing saturated amplitude with increasing field at small rotation rates, shown in Fig. 2, suggests that this critical field strength is being approached. But the limits on controllable slow rotation and on the availability of strong magnetic fields precluded verification of complete restabilization.
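The zeroth-order profile fed to the linear analysis can be sketched directly; narrower tanh layers carry a larger peak shear, consistent with the higher unstable mode numbers reported for thin layers. All parameter values below are illustrative, not those of the analysis.

```python
import math

# Hyperbolic-tangent free shear layer in angular velocity, centred
# between the cylinders, as used for the background profile.
r1, r2 = 0.07, 0.20           # cylinder radii, m (illustrative)
omega_in, omega_out = 40.0, 5.0
rc = 0.5 * (r1 + r2)          # layer centre

def omega(r, delta):
    """tanh shear layer of width delta."""
    return omega_out + 0.5 * (omega_in - omega_out) * (
        1 - math.tanh((r - rc) / delta))

def peak_shear(delta):
    """|dΩ/dr| at the layer centre: (Ω_in - Ω_out) / (2 delta)."""
    return (omega_in - omega_out) / (2 * delta)

# narrower layers are more strongly sheared, hence more unstable
print(peak_shear(0.02), peak_shear(0.005))
```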
This free-Shercliff-layer instability exhibits strong similarities to the expected behavior of the standard MRI in a Taylor-Couette device: in both cases a magnetic field acts to destabilize otherwise stable flow, and in both cases the associated angular momentum transport results in a large modification to the azimuthal velocity profile. But this instability is a hydrodynamic instability on a background state established by the magnetic field and is present with Rm ≪ 1. While there are inductionless relatives of the standard MRI, such as the so-called HMRI, which relies on azimuthal and axial applied magnetic fields, the unimportance of induction here is in stark contrast to the requirement of a finite minimum Rm for the standard MRI in an axial magnetic field. These results have particular relevance to other MHD experiments in which similar shear layers may be established. A spherical Couette MHD experiment produced a nonaxisymmetric instability with applied magnetic field that was claimed to be the MRI. However, subsequent simulations have attributed those observations to hydrodynamic instability of free shear layers, similar to the observations that we report. We expect that other cylindrical devices, such as the PROMISE 2 experiment, could produce this instability, though the critical value of M will likely change for experiments with different geometric aspect ratios. The free-Shercliff-layer instability is not expected to impact the study of the MRI in this device, since the magnetic fields required for the MRI are weaker than those required for the Shercliff-layer instability at MRI-relevant speeds.
Muscular Dystrophy: Centronucleation May Reflect a Compensatory Activation of Defective Myonuclei Muscular dystrophy has long been believed to be characterized by degeneration and abortive regeneration of muscle fibers (the muscle degeneration theory), but unfortunately its pathogenesis is still unclear and an effective treatment has yet to be developed. As a challenge to the theory, we have proposed an alternative muscle-defective-growth theory and a further bone muscle growth imbalance hypothesis supposing possible defects in bone-growth-dependent muscle growth based on our findings in hereditary dystrophic dy mice (C57BL/6J dy/dy). This review presents some new insights into the pathogenesis of the disease along with our hypothesis, focusing on the physiological meaning of centronucleation, one of the major pathological changes commonly observed in dystrophic muscles of man and experimental animals.
An automated nD model creation on BIM models

Abstract The construction technology (CONTEC) method was originally developed for automated construction planning and project management based on data in the form of a budget or bill of quantities. This article outlines a new approach to the automated creation of discrete nD building information modeling (BIM) models by using data from the BIM model and processing them with the existing CONTEC method through the CONTEC software. The article presents the discrete modeling approach on BIM models as one of the applicable approaches for nD modeling. It also defines the methodology of interlinking BIM model data and the CONTEC software through the classification of items. The interlink enables automation in the production of discrete nD BIM model data, such as the schedule (4D), including work distribution and resource planning, and the budget (5D), based on an integrated pricing system, but also nD data such as health and safety risk plans (6D) (H&S risk register), quality plans and quality assurance checklists (7D), including their monitoring, and environmental plans (8D). The methodology of the direct application of the selected classification system, as well as the means of data transfer and the conditions of data transferability, is described. The method was tested on the case study of an office building project, and the acquired data were compared to actual construction time and costs. The case study proves the applicability of the CONTEC method in the BIM model environment, enabling the creation of not only 4D and 5D models but also discrete nD models up to 8D in the perception of the construction management process. In comparison with existing BIM classification systems, further development of the method will enable fully automated discrete nD model creation in the BIM model environment.
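The interlink described above, matching BIM take-off items to classification codes and rolling them up into schedule and budget data, can be sketched in a few lines. The codes, activity names, productivities, and unit prices below are invented placeholders for illustration, not CONTEC classification data.

```python
# Hypothetical classification table: code -> activity, labour rate, unit price.
CLASSIFICATION = {
    "271": {"activity": "Masonry walls", "hours_per_unit": 1.2, "price_per_unit": 45.0},
    "311": {"activity": "Cast-in-place concrete", "hours_per_unit": 2.5, "price_per_unit": 120.0},
}

def roll_up(takeoff):
    """Aggregate (classification code, quantity) rows from a BIM quantity
    take-off into per-activity labour hours and cost - the raw material
    for discrete 4D (schedule) and 5D (budget) model data."""
    plan = {}
    for code, qty in takeoff:
        ref = CLASSIFICATION[code]
        row = plan.setdefault(ref["activity"], {"hours": 0.0, "cost": 0.0})
        row["hours"] += qty * ref["hours_per_unit"]
        row["cost"] += qty * ref["price_per_unit"]
    return plan

plan = roll_up([("271", 100.0), ("311", 40.0), ("271", 50.0)])
```

The same roll-up pattern extends to the higher dimensions: attaching risk registers (6D), quality checklists (7D), or environmental measures (8D) to the classified activities instead of hours and prices.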
The Long-Term Capital-Market Performance of the Forestry Sector: An Investor's Perspective

High risk-adjusted returns, low correlation with financial asset classes, and inflation hedging are investment characteristics that make forests a desirable investment opportunity. To examine returns on forestry investments (from 2011 to 2020), we focused solely on 48 forest companies (across the globe) that were listed on stock exchanges. Results indicate the economic justification of investing in publicly traded forestry companies. The positive five-year beta coefficients (β) range from 0.21 to 3.46, amounting to 1.15 on average. Taking the last 10-year comparison of the world's most common capital-market benchmarks, the highest return was achieved by the S&P 500 (13.8% on average), followed by forestry companies (9.1%), U.S. Treasury bonds (4.4%), and gold (3.0%). Forestry companies, along with their associated business activities (sawmilling, final products production, and paper production), show the best historical performance from an investor's point of view (total return of 13.2%).
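The beta coefficients reported above measure how strongly a stock co-moves with a market benchmark. A minimal sketch of the standard calculation, sample covariance of asset and market returns divided by market variance, using made-up return series rather than the study's data:

```python
def beta(asset, market):
    """CAPM beta: covariance(asset, market) / variance(market).
    Inputs are aligned, same-length period-return series."""
    n = len(asset)
    mean_a = sum(asset) / n
    mean_m = sum(market) / n
    cov = sum((a - mean_a) * (m - mean_m)
              for a, m in zip(asset, market)) / (n - 1)
    var = sum((m - mean_m) ** 2 for m in market) / (n - 1)
    return cov / var

market = [0.01, -0.02, 0.03, 0.005]      # hypothetical benchmark returns
leveraged = [2 * r for r in market]      # moves twice as much as the market
```

A beta of 1.15, the study's average, means the typical listed forestry company moved slightly more than one-for-one with the market over the sample period.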
Noteworthy Record of the Mediterranean Water Shrew (Neomys anomalus) from South-Western Iran (Mammalia: Soricomorpha) Three water shrew specimens were collected in the Sheshpir spring (2303 m a.s.l.), Fars province, south-western Iran. In penial morphology, which is species-specific in the genus Neomys Kaup, 1829, the specimens were indistinguishable from N. anomalus Cabrera, 1907. The Sheshpir spring is located approximately 900 km to the south-south-west of Gorgan, the only site in Iran where N. anomalus has been known to occur so far. The spring is a stony basin with bare banks and poorly developed aquatic vegetation and as such is an atypical habitat for water shrews. It is noteworthy that fish are absent in the spring.
The Fathers on the Biblical Word Only now are we acknowledging neglect in our attention to the Fathers' understanding of Scripture. In overcoming this neglect, we are indebted to contemporary developments in biblical studies, anthropology and linguistic theory. But the lacuna is due also in part to the Fathers' own uncertainty as to whether in the Scriptures they were encountering a living Word or a derivative text with a life of its own, or even a history book or encyclopedia. We are led to ask: does the Word still speak to us through the scriptural text?
Sum of public power

Historical context

The death of the federalist caudillo Facundo Quiroga caused great concern in the Argentine Confederation, and soon the legislature of Buenos Aires elected Rosas as governor. A law from August 3, 1821, allowed the legislature to grant those powers. Those powers were fully delegated to him, with the sole exceptions of keeping, defending and protecting the Roman Catholic Church, and keeping and defending the cause of the Confederation. The governor's term of office, previously three years, was extended to five years. The legislature reelected Rosas three times, allowing him three full five-year mandates; he was overthrown during the fourth. Rosas could use the sum of public power during any period he deemed convenient during his mandate. To confirm the legitimacy of his mandate, Rosas requested a vote to approve or reject him. Although there was no universal suffrage in Argentina at the time, Rosas requested that all the people in Buenos Aires be allowed to vote, regardless of wealth or social condition. This proposal was influenced by Jean-Jacques Rousseau's The Social Contract. The only ones who could not vote were women, slaves, children under 20 years old (unless emancipated) and foreigners without a stable residence in the country. The final result had 9720 votes for Rosas and only 8 against him.

Nature

Although Rosas received the sum of public power, he did not become an absolute monarch. He still had a limited term of office, and the legislature and other republican institutions were kept. It was not a tyranny either, as it lacked the usual traits of one. He did not take power by illegal means, such as a coup d'état, but by appointment of the legislature, and no law prevented the legislature from doing what it did. He did not become governor against the will of the population, as was confirmed by a popular vote. Nor did he rule on behalf of a social minority.
His appointment was in line with the ideas of Rousseau, who thought that "If, on the other hand, the peril is of such a kind that the paraphernalia of the laws are an obstacle to their preservation, the method is to nominate a supreme ruler, who shall silence all the laws and suspend for a moment the sovereign authority. In such a case, there is no doubt about the general will, and it is clear that the people's first intention is that the State shall not perish". This principle also influenced the concept of the state of emergency, included in the 1853 constitution and in most legal systems around the world.

Actual usage

Rosas did not fully use the powers vested in him. He did not close the legislature, which continued working during his rule. He was not interested in the tasks of the judicial power, and he did not use any judicial powers after the end of the trial over the death of Facundo Quiroga. Moreover, since the governor had been the highest court of appeal since the times of Spanish rule, the legislature passed a law in 1838 establishing the "Tribunal Supremo de Recursos Extraordinarios", so that the highest court of the judiciary remained outside the executive power. Rosas gave his consent to the new law immediately.

Controversy

The delegation of the sum of public power to Rosas was highly controversial. Domingo Faustino Sarmiento compared Rosas with other historical dictators in his work Facundo, where he wrote: Once he is master of absolute power, who will later ask him to give it back? Who will dare to dispute his title to domination? The Romans granted the dictatorship in rare cases and for short, fixed terms, and yet the use of the temporary dictatorship allowed the perpetual one that destroyed the Republic and brought all the savagery of the Empire.
When the term of government expires, Rosas announces his resolute determination to retire to private life: the deaths of his beloved wife and of his father have ulcerated his heart, and he needs to withdraw from the tumult of public affairs to mourn such bitter losses. The reader, hearing this language in the mouth of Rosas, should recall that he had not seen his father since his youth, and that his wife had endured such bitter days with him; it is something like the hypocritical protests of Tiberius to the Roman Senate. The Board of Buenos Aires pleads and begs him to continue making sacrifices for his country; Rosas lets himself be persuaded for only six more months; the six months pass, and he allows the farce of the election. And indeed, what need is there to elect a leader who has already entrenched power in his person? Who would ask it of him, trembling with the terror he has inspired in everyone?

On the contrary, José de San Martín gave his full support to the delegation, on the grounds that the situation in the country was so chaotic that order needed to be created. Men do not live on dreams but on facts. What do I care if it is repeated over and over that I live in a country of liberty if, on the contrary, I am oppressed? Liberty! Give it to a two-year-old child to enjoy, by way of amusement, with a box of razor blades, and tell me the results. Liberty! So that, if I devote myself to any kind of industry, a revolution comes and destroys the work of many years and the hope of leaving a loaf of bread to my children. Liberty! So that I am charged contributions to pay the enormous costs incurred because four ambitious men, on a whim and by way of speculation, make a revolution and go unpunished. Liberty! So that bad faith finds complete impunity, as proved by the generality of bankruptcies ...
Neither I nor the son of my mother will enjoy the benefits this freedom provides until a government is established that the demagogues call a tyrant, and that protects me against the goods that today's freedom offers me. Perhaps you will say that this letter is written in a soldier's ill humor. You will be right, but you will agree that at age 53 one cannot in good faith accept being taken for a ride ... Let this matter conclude, and let me end by saying that the man who establishes order in our country, whatever the means he employs for it, is the only one who will deserve the noble title of Liberator.

Constitutional status

Rosas's mandate ended after his defeat at the Battle of Caseros, and Urquiza called for the drafting of a national constitution, which was written the following year, 1853. Article 29 explicitly forbids a delegation of powers such as the one made to Rosas: Congress may not vest in the National Executive, nor the provincial legislatures in the governors of provinces, extraordinary powers, nor the sum of public power, nor grant submissions or supremacies whereby the life, the honor, or the wealth of the Argentine people will be at the mercy of governments or any individual. Acts of this nature shall be utterly void, and shall render those who formulate, consent to or sign them liable to be condemned as infamous traitors to the motherland. However, the 1835 delegation of the sum of public power to Rosas is not penalized under this article, as the Constitution was not in force at the time and contains no ex post facto provisions.
The Facilitative Effect of Context on Second-Order Social Reasoning

Ben Meijering 1 (b.meijering@rug.nl), Leendert van Maanen 1, Hedderik van Rijn 2, & Rineke Verbrugge 1
1 Department of Artificial Intelligence, University of Groningen
2 Department of Psychology, University of Groningen

Abstract This paper is about higher-order social reasoning such as "I think that you think that I think". Previous research has shown that such reasoning seriously deteriorates in complex social interactions. It has been suggested that reasoning can be facilitated greatly if an abstract logical problem is embedded in a context. This has not yet been tested for higher-order social reasoning. We presented participants with strategic games that demand higher-order social reasoning. The games were embedded in the context of a marble game. Participants performed really well, that is, almost at ceiling. We argue that context has a facilitative effect on higher-order social reasoning. Keywords: Theory of Mind; Social Cognition; Higher-order Social Reasoning; Strategic Game.

Social Reasoning

In many social situations we need to reason about one another. We do so to plan our actions and predict how our behavior might affect others. The ability to reason about another's knowledge, beliefs, desires and intentions is often referred to as Theory of Mind (Premack & Woodruff, 1978). It has been extensively investigated in children and seems to develop around the age of 4 years (Wimmer & Perner, 1983; but see Onishi & Baillargeon, 2005). Nevertheless, reasoning about others is very demanding, even for adults, which becomes apparent in more complex interactions. So far, empirical results have shown social reasoning to be far from optimal (Flobbe, Verbrugge, Hendriks, & Kramer, 2008; Hedden & Zhang, 2002). It has been suggested that (social) reasoning might be facilitated if it is embedded in a context (Wason & Shapiro, 1971).
In the current study, we investigate whether social reasoning really is difficult and whether embedding it in a context can facilitate it. When we ascribe a simple mental state to someone, we are applying first-order social reasoning. For example, imagine a social interaction between Ann, Bob and Carol. If Bob thinks "Ann knows that my birthday is tomorrow", he is applying first-order reasoning, which covers a great deal of social interaction. However, first-order reasoning is not sufficient to cover more complex social situations. The interaction between Ann, Bob and Carol can easily demand reasoning of one order higher: if Carol thinks "Bob knows that Ann knows that his birthday is tomorrow", she is making a second-order attribution. Bob's first-order attribution and Carol's second-order attribution are hierarchically structured: Bob applied first-order reasoning by attributing a mental state to Ann, and Carol applied second-order reasoning by attributing first-order reasoning to Bob. A third-order attribution involves the reader attributing second-order reasoning to Carol, and so forth. The depth of reasoning in humans is constrained by cognitive resources (Verbrugge, 2009; Hedden & Zhang, 2002). As the order of reasoning increases, the demands on cognitive processing increase as well. Cognitive resources and processing speed seem to increase with age (Fry & Hale, 1996), and that increase could allow for the representation of increasingly more complex mental states. Findings from developmental studies support that idea. Whereas first-order social reasoning is acquired at the age of around 4 years (Wimmer & Perner, 1983), second-order social reasoning seems to develop some years later, at the age of around 6 to 8 years (Perner & Wimmer, 1985). However, 6- to 8-year-olds do not understand all kinds of mental states, and even adults cannot readily apply second-order reasoning in all kinds of contexts (Hedden & Zhang, 2002).
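Strategic games of the kind used to elicit this reasoning can be modeled as sequential stop-or-continue games solved by backward induction. The sketch below is a generic illustration, not the authors' marble-game stimuli, and the payoff values are invented. Predicting the first move correctly requires second-order reasoning: Player 1 must anticipate how Player 2 anticipates Player 1's final choice.

```python
# (payoff_p1, payoff_p2) if a player stops at decision points 1..3,
# plus the payoff pair reached if everyone continues to the end.
STOP_PAYOFFS = [(2, 1), (1, 3), (4, 2)]
END_PAYOFF = (3, 4)

def solve(depth=0):
    """Payoff pair reached under backward induction from decision point
    `depth` (0-based). Players alternate, Player 1 moves first, and each
    player maximizes their own entry of the payoff pair (ties: stop)."""
    if depth == len(STOP_PAYOFFS):
        return END_PAYOFF
    player = depth % 2                         # 0 -> Player 1, 1 -> Player 2
    stop, cont = STOP_PAYOFFS[depth], solve(depth + 1)
    return stop if stop[player] >= cont[player] else cont

outcome = solve()   # P1's opening move depends on P2's model of P1's last move
```

Here Player 1 stops immediately: continuing would let Player 2 stop at the second decision point, which Player 2 prefers once Player 2 predicts that Player 1 would stop at the third.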
Paradigms to Test Social Reasoning

There are a few paradigms to test social cognition. Probably the most familiar paradigm is the False-Belief task (Wimmer & Perner, 1983), which has been adapted to test second-order social cognition (Perner & Wimmer, 1985). In a typical second-order False-Belief story, two characters, John and Mary, are independently informed about the transfer of an object, an ice-cream van, from one location to another. In the story, both John and Mary know where the van is, but John does not know that Mary also knows that the van has moved to a new location. Participants are told the story and asked where John thinks Mary will go for ice cream. To answer this question correctly, participants have to be able to represent the second-order false belief "John thinks that Mary thinks the van is still at the old location". In Perner and Wimmer's study, some children of 6 to 7 years of age were able to make such second-order attributions, but only under optimal conditions, when the inference of second-order beliefs was prompted. Apart from some concerns about the False-Belief task's aptness to test for the presence of a Theory of Mind (Bloom & German, 2000), Perner and Wimmer expressed concerns about the generality of their findings, as participants were presented with a "rather pedestrian problem of knowing where somebody has gone to look for something" (p. 469). They stressed that investigations into higher-order social reasoning will only achieve theoretical importance if a link with other domains can be established. Various other language comprehension paradigms have been used to test social cognition (e.g., Van Rij, Van Rijn, & Hendriks, to appear; Hollebrandse, Hobbs, De Villiers, & Roeper, 2008; Hendriks & Spenader, 2006). Hollebrandse et al. presented discourse with multiple, recursive
Kris Commons refused to single out Celtic team-mate Efe Ambrose for blame following the 2-2 draw at home to Fenerbahce in the Europa League. A poor headed back-pass from Ambrose set up the visitors' first goal right on half-time. "Mistakes are part and parcel of football," said Commons, who had shot Celtic into a 2-0 lead. "I think Efe played well. If you went through a full game without making a mistake, you'd be some player." Prior to scoring, Commons set up Leigh Griffiths for the opening goal at Celtic Park but a double from Fenerbahce substitute Fernandao ensured the points were shared. Celtic sit second in Group A on two points behind surprise leaders Molde, who they face next in Norway. "It was a good team performance and something we can certainly build on," added Commons. "When we put our minds to it, we're a good outfit but, at this level, you need to try and eliminate those silly mistakes. "We were leading and feeling pretty much in control up to 44 minutes but an unfortunate error cost us the goal. "With them scoring either side of half time it kind of took the wind out of our sails." Celtic were undone by a late set-piece in their 2-2 draw away to Ajax on match-day one and Fernandao's second goal came directly from a corner. However, Commons preferred to focus on a point gained against a team packed with international experience. "Fenerbahce are a good team and they will always get chances," said the midfielder. "A draw is probably a good reflection. They showed their quality in stages."
1. Technical Field This invention relates to people moving devices in general, and to floorplate support systems for people moving devices in particular. 2. Background Art Escalators, moving walkways, and other people moving devices efficiently move a large volume of pedestrian traffic from one point to another. At each end of the device, landing areas provide access to moving steps (or belts, or pallets) traveling at a constant rate of speed. The landing areas typically include floorplates and a combplate. The floorplates cover a structural frame which, in the landing, houses mechanical equipment for actuating the moving steps. The combplate is an intermediary surface between the stationary floor plates and the moving steps. The structural frame comprises a left and a right truss connected by structural members extending therebetween. By convention, the side of the escalator on the left of a person facing the escalator at the lower elevation is called the left hand side of the escalator, and the side to the person's right is called the right hand side. Each truss section has two end sections parallel to one another, connected by an inclined midsection. The end sections form the landings at each end of the midsection. It is known in the art that the floorplates may be positioned and supported off of the frame by a plurality of brackets and commercially available structural steel having an "L" shaped cross-section, also known as "angle iron". Sections of angle iron are cut and assembled into a floorplate frame which is then attached to the brackets. After the floorplate frame is attached to the brackets, the floorplates are placed within the frame and conventionally attached. A person of skill in the art will recognize that the "quietness" of a people moving device is perceived as an indicia of the quality of the machine. 
A problem with the aforementioned floorplate frame arrangement is that it permits vibrations, and therefore noise, to propagate from the device frame to the floorplates via the floorplate frame. A further disadvantage of a floorplate frame fabricated from structural steel is that the angle iron must typically be altered to permit the frame to be attached to the brackets that support the floorplate frame.
Players criticise Ubisoft for "deflecting blame." Ubisoft has indicated it will punish The Division players that have used an exploit in the game's first raid mission, Falcon Lost. Posting on the game's official forum, community manager Natchai Stappers said using the exploit was a violation of its Code of Conduct. "We are working on fixing the exploit," he said. "Obviously, it is against our Code of Conduct and the team is looking into what can be done in terms of punishment for those who have exploited." Falcon Lost was introduced in the game's April update, and is designed to dish out high-level completion rewards once a week. However, by using a riot shield players have been able to phase through a wall and repeatedly run the mission, thus granting them loot drops at a much faster rate than intended. The method can be seen in the video below. In response to Ubisoft's statement, some players have said they shouldn't be reprimanded for a mistake in the developer's design and coding. Some have noted the Code of Conduct isn't prominently available in the game. "As someone that has done the incursion both ways on hard, I will say that dealing out a punishment to players blindly is ********," said forum user Cipher_Sierra. "I have never read the terms, they're never referenced in the game, I'm never warned anywhere. "But now I'm reading that you consider your faulty code to be on US, as players? Am I to be punished for using a mask that regens constantly having no idea why for the first day? What about the reckless talent? Or running on the side of a mission area to avoid mobs?" On the game's subreddit, meanwhile, another player also said Ubisoft should shoulder the blame, instead of the players. 
"It absolutely sickens me that a Community Manager has stepped forward not to apologize for the complete mess of code that has been delivered as a finished product, but rather to deflect blame for any balancing issues that may arise due to their shoddy production onto their paying customers." Others have suggested Ubisoft simply implement a system to punish players for future transgressions, arguing that the exploit shifted the multiplayer balance and forced many to use it to remain competitive; punishing players for keeping up with the community, they say, is unreasonable. As of yet, Ubisoft hasn't revealed what the punishment will be or when it will be implemented. The April update to The Division resulted in numerous problems. Some players discovered their characters had gone missing and the game's Daily Challenges also vanished. These issues have since been resolved. The patch was the first major update to the game since launch in March, and added new content and features, all of which you can see in the official patch notes here. The Falcon Lost Incursion was included among these, and is the first in a series of upcoming raid-style missions. The next one is Conflicts and launches in May. Following this Ubisoft plans to release paid expansions further into the summer and beyond.
Compatibility of H9N2 avian influenza surface genes and 2009 pandemic H1N1 internal genes for transmission in the ferret model In 2009, a novel H1N1 influenza (pH1N1) virus caused the first influenza pandemic in 40 y. The virus was identified as a triple reassortant between avian, swine, and human influenza viruses, highlighting the importance of reassortment in the generation of viruses with pandemic potential. Previously, we showed that a reassortant virus composed of wild-type avian H9N2 surface genes in a seasonal human H3N2 backbone could gain efficient respiratory droplet transmission in the ferret model. Here we determine the ability of the H9N2 surface genes in the context of the internal genes of a pH1N1 virus to efficiently transmit via respiratory droplets in ferrets. We generated reassorted viruses carrying the HA gene alone or in combination with the NA gene of a prototypical H9N2 virus in the background of a pH1N1 virus. Four reassortant viruses were generated, with three of them showing efficient respiratory droplet transmission. Differences in replication efficiency were observed for these viruses; however, the results clearly indicate that H9N2 avian influenza viruses and pH1N1 viruses, both of which have occasionally infected pigs, have the potential to reassort and generate novel viruses with respiratory transmission potential in mammals.
The statistical mechanics of networks We study the family of network models derived by requiring the expected properties of a graph ensemble to match a given set of measurements of a real-world network, while maximizing the entropy of the ensemble. Models of this type play the same role in the study of networks as is played by the Boltzmann distribution in classical statistical mechanics; they offer the best prediction of network properties subject to the constraints imposed by a given set of observations. We give exact solutions of models within this class that incorporate arbitrary degree distributions and arbitrary but independent edge probabilities. We also discuss some more complex examples with correlated edges that can be solved approximately or exactly by adapting various familiar methods, including mean-field theory, perturbation theory, and saddle-point expansions. I. INTRODUCTION The last few years have seen the publication of a large volume of work in the physics literature on networks of various kinds, particularly computer and information networks like the Internet and world wide web, biological networks such as food webs and metabolic networks, and social networks. This work has been divided between empirical studies of the structure of particular networks and theoretical studies focused largely on the creation of mathematical and computational models. The construction of network models is the topic of this paper. Models of networks can help us to understand the important features of network structure and the interplay of structure with processes that take place on networks, such as the flow of traffic on the Internet or the spread of a disease over a social network. Most network models studied in the physics community are of a practical sort. Typically one wishes to create a network that displays some feature or features observed in empirical studies. 
The principal approach is to list possible mechanisms that might be responsible for creating those features and then make a model incorporating some or all of those mechanisms. One then either examines the networks produced by the model for rewarding similarity to the real-world systems they are supposed to mimic, or uses them as a substrate for further modeling, for example of traffic flow or disease spread. Classic examples of models of this kind are the small-world model and the many different preferential attachment models, which model network transitivity and power-law degree distributions respectively. However, there is another possible approach to the modeling of networks, which has been pursued comparatively little so far. An instructive analogy can be made here with theories of gases. There are (at least) two different general theories of the properties of gases. Kinetic theory explicitly models collections of individual atoms, their motions and collisions, and attempts to calculate overall properties of the resulting system from basic mechanical principles. Pressure, for instance, is calculated from the mean momentum transfered to the walls of a container by bombarding atoms. Kinetic theory is well motivated, easy to understand, and makes good sense to physicists and laymen alike. However, kinetic theory rapidly becomes complex and difficult to use if we attempt to make it realistic by the inclusion of accurate intermolecular potentials and similar features. In practice, kinetic theory models either make only rather rough and uncontrolled predictions, or they rely on large-scale computer simulation to achieve accuracy. If one wants a good calculational tool for studying the properties of gases, therefore, one does not use kinetic theory. Instead, one uses statistical mechanics.
Although certainly less intuitive, statistical mechanics is based on rigorous probabilistic arguments and gives accurate and reliable answers for an enormous range of problems, including many, such as problems concerning solids, for which kinetic theory is inapplicable. Equilibrium statistical mechanics provides a general framework for reasoning and a powerful calculational tool for very many problems in statistical physics. Here we argue that the current commonly used models of networks are akin to kinetic theory. They posit plausible mechanisms or dynamics, and produce results in qualitative agreement with reality, at least in some respects. They are easy to understand and give us good physical insight. However, like kinetic theory, they do not make quantitatively accurate predictions and provide no overall framework for modeling, each network model instead concentrating on explaining one or a few features of the system of interest. In this paper we discuss exponential random graphs, which are to networks as statistical mechanics is to the study of gases: a well-founded general theory with true predictive power. These advantages come at a price: exponential random graphs are both mathematically and conceptually sophisticated, and their understanding demands some effort of the reader. We believe this effort to be more than worthwhile, however. Theoretical techniques based on solid statistical foundations and capable of quantitative predictions have been of extraordinary value in the study of fluid, solid state, and other physical systems, and there is no reason to think they will be any less valuable for networks. We are by no means the first authors to study exponential random graphs, although our approach is different from that taken by others. Exponential random graphs were first proposed in the early 1980s by Holland and Leinhardt, building on statistical foundations laid by Besag.
Substantial further developments were made by Frank and Strauss, and continued to be made by others throughout the 1990s. In recent years a number of physicists, including ourselves, have made theoretical studies of specific cases. Today, exponential random graphs are in common use within the statistics and social network analysis communities as a practical tool for modeling networks and several standard computer tools are available for simulating and manipulating them, including Prepstar, ERGM, and Siena. In this paper we aim to do a number of things. First, we place exponential random graph models on a firm physical foundation, showing that they can be derived from first principles using maximum entropy arguments. In doing so, we argue that these models are not merely an ad hoc formulation studied primarily for their mathematical convenience, but a true and correct extension of the statistical mechanics of Boltzmann and Gibbs to the network world. Second, we take an almost entirely analytic approach in our work, by contrast with the numerical simulations that form the core of most previous studies. We show that the analytic techniques of equilibrium statistical mechanics are ideally suited to the study of these models and can shed much light on their structure and behavior. Throughout the paper we give numerous examples of specific models that are solvable either exactly or approximately, including several that have a long history in network analysis. Nonetheless, the particular examples studied in this paper form only a tiny fraction of the possibilities offered by this class of models. There are many intriguing avenues for future research on exponential random graphs that are open for exploration, and we highlight a number of these throughout the paper. II. 
EXPONENTIAL RANDOM GRAPHS The typical scenario addressed in the creation of a network model is this: one has measurements of a number of network properties for a real-world network or networks, such as number of vertices or edges, vertex degrees, clustering coefficients, correlation functions, and so forth, and one wishes to make a model network that has the same or similar values of these properties. For instance, one might find that a network has a degree sequence with a power-law distribution and wish to create a model network that shows the same power law. Or one might measure a high clustering coefficient in a network and wish to build a model network with similarly high clustering. Essentially all models considered in modern work, and indeed as far back as the 1950s and 1960s, have been ensemble models, meaning that a model is defined to be not a single network, but a probability distribution over many possible networks. We adopt this approach here as well. Our goal will be to choose a probability distribution such that networks that are a better fit to observed characteristics are accorded higher probability in the model. Consider a set G of graphs. One can use any set G, but in most of the work described in this paper G will be the set of all simple graphs without self-loops on n vertices. (A simple graph is a graph having at most a single edge between any pair of vertices. A self-loop is an edge that connects a vertex to itself.) Certainly there are many other possible choices and we consider some of the others briefly in Sections III D and III F. The graphs can also be either directed or undirected and we consider both in this paper, although most of our time will be spent on the undirected case. Suppose we have a collection of graph observables {x i }, i = 1... r, that we have measured in empirical observation of some real-world network or networks of interest to us. 
We will, for the sake of generality, assume that we have an estimate $\langle x_i \rangle$ of the expectation value of each observable. In practice it is often the case that we have only one measurement of an observable. For instance, we have only one Internet, and hence only one measurement of the clustering coefficient of the Internet. In that case, however, our best estimate of the expectation value of the clustering coefficient is simply equal to the one measurement that we have. Let $G \in \mathcal{G}$ be a graph in our set of graphs and let $P(G)$ be the probability of that graph within our ensemble. We would like to choose $P(G)$ so that the expectation value of each of our graph observables $\{x_i\}$ within that distribution is equal to its observed value, but this is a vastly underdetermined problem in most cases; the number of degrees of freedom in the definition of the probability distribution is huge compared to the number of constraints imposed by our observations. Problems of this type, however, are commonplace in statistical physics and we know well how to deal with them. The best choice of probability distribution, in a sense that we will make precise in a moment, is the one that maximizes the Gibbs entropy $S = -\sum_{G} P(G) \ln P(G)$ subject to the constraints $\sum_{G} P(G)\, x_i(G) = \langle x_i \rangle$ plus the normalization condition $\sum_{G} P(G) = 1$. Here $x_i(G)$ is the value of $x_i$ in graph $G$. Introducing Lagrange multipliers $\alpha$, $\{\theta_i\}$, we then find that the maximum entropy is achieved for the distribution satisfying $\frac{\partial}{\partial P(G)} \big[ S + \alpha ( 1 - \sum_{G} P(G) ) + \sum_i \theta_i ( \langle x_i \rangle - \sum_{G} P(G)\, x_i(G) ) \big] = 0$ for all graphs $G$. This gives $\ln P(G) + 1 + \alpha + \sum_i \theta_i x_i(G) = 0$, or equivalently $P(G) = e^{-H(G)}/Z$, where $H(G) = \sum_i \theta_i x_i(G)$ is the graph Hamiltonian and $Z = e^{\alpha + 1} = \sum_{G} e^{-H(G)}$ is the partition function. These equations define the exponential random graph model. The exponential random graph is the distribution over a specified set of graphs that maximizes the entropy subject to the known constraints. It is also the exact analogue for graphs of the Boltzmann distribution of a physical system over its microstates at finite temperature. Using the exponential random graph model involves performing averages over the probability distribution.
The expected value of any graph property $x$ within the model is simply $\langle x \rangle = \sum_{G} P(G)\, x(G) = \frac{1}{Z} \sum_{G} x(G)\, e^{-H(G)}$. The exponential random graph, like all such maximum entropy ensembles, gives the best prediction of an unknown quantity $x$, given a set of known quantities $\{\langle x_i \rangle\}$. In this precise sense, the exponential random graph is the best ensemble model we can construct for a network given a particular set of observations. In many cases we may not need to perform this sum explicitly; often we need only perform the partition function sum $Z = \sum_{G} e^{-H(G)}$, and the values of other sums can then be deduced by taking appropriate derivatives. Just as in conventional equilibrium statistical mechanics, however, performing even the partition function sum analytically may not be easy. Indeed in some cases it may not be possible at all, in which case one may have to turn to Monte Carlo simulation, to which the model lends itself admirably. As we show in this paper, however, there are a variety of tools one can employ to get exact or approximate analytic solutions in cases of interest, including mean-field theory, algebraic transformations, and diagrammatic perturbation theory. III. SIMPLE EXAMPLES Before delving into more complicated calculations, let us illustrate the use of exponential random graphs with some simple examples. A. Random graphs Consider first what is perhaps the simplest of exponential random graphs, at least for the case of fixed number of vertices $n$ considered here. Suppose we know only the expected number of edges $m$ that our network should have. In that case the Hamiltonian takes the simple form $H = \theta m$. We can think of the parameter $\theta$ as either a field coupling to the number of edges, or alternatively as an inverse temperature. Let us evaluate the partition function for this Hamiltonian for the case of an ensemble of simple undirected graphs on $n$ vertices without self-loops.
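Monte Carlo simulation of such a model is indeed straightforward in practice. The following is a minimal sketch, not a prescription from the text: the function name and the single-edge-toggle proposal are our own illustrative choices. It is a Metropolis sampler targeting $P(G) \propto e^{-H(G)}$ for an arbitrary Hamiltonian supplied as a callable.

```python
import math
import random

def metropolis_sample(n, hamiltonian, steps, seed=0):
    """Sample from P(G) proportional to exp(-H(G)) over simple undirected
    graphs on n vertices, using single-edge toggle Metropolis moves.
    `hamiltonian` maps a set of edges {(i, j) with i < j} to a real H."""
    rng = random.Random(seed)
    edges = set()                      # start from the empty graph
    H = hamiltonian(edges, n)
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        e = (min(i, j), max(i, j))
        proposal = set(edges)
        if e in proposal:
            proposal.remove(e)
        else:
            proposal.add(e)
        H_new = hamiltonian(proposal, n)
        # Accept with probability min(1, exp(-(H_new - H))).
        if H_new <= H or rng.random() < math.exp(-(H_new - H)):
            edges, H = proposal, H_new
    return edges

# With H = theta * m this targets the Bernoulli random graph with
# p = 1/(e^theta + 1): a large positive theta empties the graph,
# a large negative theta fills it (15 possible edges on 6 vertices).
H_edges = lambda theta: (lambda edges, n: theta * len(edges))
sparse = metropolis_sample(6, H_edges(10.0), 2000)
dense = metropolis_sample(6, H_edges(-10.0), 2000)
assert len(sparse) <= 3
assert len(dense) >= 12
```

Because the proposal (toggle a uniformly random pair) is symmetric, the simple Metropolis acceptance rule suffices; more complicated move sets would require the full Metropolis-Hastings ratio.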
We define the adjacency matrix $\sigma$ to be the symmetric $n \times n$ matrix with elements $\sigma_{ij} = 1$ if there is an edge between vertices $i$ and $j$, and $\sigma_{ij} = 0$ otherwise. Then the number of edges is $m = \sum_{i<j} \sigma_{ij}$, and the partition function is $Z = \sum_{G} e^{-\theta m} = \prod_{i<j} \sum_{\sigma_{ij}=0}^{1} e^{-\theta \sigma_{ij}} = \big( 1 + e^{-\theta} \big)^{\binom{n}{2}}$. It is convenient to define the free energy $F = -\ln Z$, which in this case is $F = -\binom{n}{2} \ln\big( 1 + e^{-\theta} \big)$. (Note that the free energy is extensive not in the number of vertices $n$, but in the number $\binom{n}{2}$ of pairs of vertices, since this is the number of degrees of freedom in the model.) Then, for instance, the expected number of edges in the model is $\langle m \rangle = \frac{\partial F}{\partial \theta} = \binom{n}{2} \frac{e^{-\theta}}{1 + e^{-\theta}}$. Conventionally we re-express the parameter $\theta$ in terms of $p = \frac{e^{-\theta}}{1 + e^{-\theta}} = \frac{1}{e^{\theta} + 1}$, so that $\langle m \rangle = \binom{n}{2} p$. The probability $P(G)$ of a graph in this ensemble can be written $P(G) = \frac{e^{-\theta m}}{Z} = p^{m} (1-p)^{\binom{n}{2} - m}$. In other words, $P(G)$ is simply the probability for a graph in which each of the $\binom{n}{2}$ possible edges appears with independent probability $p$. This model is known as the Bernoulli random graph, or often just the random graph, and was introduced, in a completely different fashion, by Solomonoff and Rapoport in 1951 and later famously studied by Erdős and Rényi. Today it is one of the best studied of graph models, although, as many authors have pointed out, it is not a good model of most real-world networks. One way in which its inadequacy shows, and one that has been emphasized heavily in networks research in the last few years, is its degree distribution. Since each edge in the model appears with independent probability $p$, the degree of a vertex, i.e., the number of edges attached to that vertex, follows a binomial distribution, or a Poisson distribution in the limit of large $n$. Most real-world networks, however, have degree distributions that are far from Poissonian, typically being highly right-skewed, with a small proportion of vertices having very high degree. Some of the most interesting networks, including the Internet and the world wide web, appear to have degree distributions that follow a power law. In the next section we discuss what happens when we incorporate observations like these into our models. B.
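As a concrete check on the algebra above, one can enumerate all $2^{\binom{n}{2}}$ graphs for a small $n$ and verify $Z$ and $\langle m \rangle$ directly. This brute-force sketch (the function name is ours) confirms the closed forms $Z = (1+e^{-\theta})^{\binom{n}{2}}$ and $\langle m \rangle = \binom{n}{2} p$ with $p = 1/(e^{\theta}+1)$:

```python
import math
from itertools import combinations, product

def brute_force_Z_and_mean_edges(n, theta):
    """Enumerate all 2^C(n,2) simple graphs on n vertices and compute
    Z = sum_G exp(-theta * m(G)) and the ensemble mean number of edges."""
    pairs = list(combinations(range(n), 2))
    Z = 0.0
    weighted_m = 0.0
    for assignment in product((0, 1), repeat=len(pairs)):
        m = sum(assignment)            # number of edges in this graph
        w = math.exp(-theta * m)       # Boltzmann weight e^{-H}, H = theta*m
        Z += w
        weighted_m += m * w
    return Z, weighted_m / Z

n, theta = 4, 0.7
Z, mean_m = brute_force_Z_and_mean_edges(n, theta)
npairs = math.comb(n, 2)
p = 1.0 / (math.exp(theta) + 1.0)
assert abs(Z - (1 + math.exp(-theta)) ** npairs) < 1e-9
assert abs(mean_m - npairs * p) < 1e-9
```

The enumeration is exponential in $\binom{n}{2}$, so it is useful only as a correctness check on tiny graphs.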
Generalized random graphs Suppose then that rather than just measuring the total number of edges in a network, we measure the degrees of all the vertices. Let us denote by $k_i$ the degree of vertex $i$. The complete set $\{k_i\}$ is called the degree sequence of the network. Note that we do not need to specify independently the number of edges $m$ in the network, since $m = \frac{1}{2} \sum_i k_i$ for an undirected graph. The exponential random graph model appropriate to this set of observations is the model having Hamiltonian $H = \sum_i \theta_i k_i$, where we now have one parameter $\theta_i$ for each vertex $i$. Noting that $k_i = \sum_j \sigma_{ij}$, this can also be written $H = \sum_{i<j} (\theta_i + \theta_j) \sigma_{ij}$. Then the partition function is $Z = \prod_{i<j} \big( 1 + e^{-(\theta_i + \theta_j)} \big)$ and the free energy is $F = -\sum_{i<j} \ln\big( 1 + e^{-(\theta_i + \theta_j)} \big)$. More generally we could specify a Hamiltonian with a separate parameter $\Theta_{ij}$ coupling to each edge. Then $H = \sum_{i<j} \Theta_{ij} \sigma_{ij}$ and $F = -\sum_{i<j} \ln\big( 1 + e^{-\Theta_{ij}} \big)$. This allows us for example to calculate the probability of occurrence $p_{ij}$ of an edge between vertices $i$ and $j$: $p_{ij} = \langle \sigma_{ij} \rangle = \frac{\partial F}{\partial \Theta_{ij}} = \frac{1}{e^{\Theta_{ij}} + 1}$. The degree-sequence model above is the special case in which $\Theta_{ij} = \theta_i + \theta_j$, and the normal (Bernoulli) random graph corresponds to the case in which the parameters $\Theta_{ij}$ are all equal. Sometimes it is convenient to specify not a degree sequence but a probability distribution over vertex degrees. This can be achieved by specifying an equivalent distribution over the parameters $\theta_i$. Let us define $\rho(\theta)\, d\theta$ to be the probability that the parameter $\theta$ for a vertex lies in the range $\theta$ to $\theta + d\theta$. Then, averaging over the disorder so introduced, the free energy becomes $F = -\binom{n}{2} \iint \rho(\theta) \rho(\theta') \ln\big( 1 + e^{-(\theta + \theta')} \big)\, d\theta\, d\theta'$. The part of this free energy due to a single vertex with field parameter $\theta$ is $f(\theta) = -(n-1) \int \rho(\theta') \ln\big( 1 + e^{-(\theta + \theta')} \big)\, d\theta'$, and the expected degree of vertex $i$ with field $\theta_i$ is the derivative of this with respect to $\theta$, evaluated at $\theta_i$: $\langle k_i \rangle = (n-1) \int \frac{\rho(\theta')\, d\theta'}{e^{\theta_i + \theta'} + 1}$. By a judicious choice of $\rho(\theta)$ we can then produce the desired degree distribution. (See also Sec. III E.) We studied this model in a previous paper, as a model for degree correlations in the Internet and other networks. We could alternatively specify a probability distribution $\rho(\Theta)$ for the parameters $\Theta_{ij}$ that couple to individual edges.
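The closed-form edge probability lends itself to a small numerical sketch (helper names are ours; this is a sanity check, not part of the paper's formalism). It computes $p_{ij} = 1/(e^{\theta_i+\theta_j}+1)$ and the resulting expected degrees, and verifies the handshaking relation $\sum_i \langle k_i \rangle = 2\langle m \rangle$:

```python
import math

def edge_probability(theta_i, theta_j):
    """Fermionic edge probability p_ij = 1/(e^{theta_i+theta_j} + 1)
    for the degree-sequence model H = sum_i theta_i k_i."""
    return 1.0 / (math.exp(theta_i + theta_j) + 1.0)

def expected_degrees(thetas):
    """Expected degree of each vertex: <k_i> = sum over j != i of p_ij."""
    n = len(thetas)
    return [sum(edge_probability(thetas[i], thetas[j])
                for j in range(n) if j != i)
            for i in range(n)]

# A vertex with a more negative theta acquires a higher expected degree.
thetas = [-1.0, 0.0, 0.5, 2.0]
degs = expected_degrees(thetas)
assert degs[0] > degs[1] > degs[2] > degs[3]

# Handshaking: the expected degrees sum to twice the expected edge count.
mean_m = sum(edge_probability(thetas[i], thetas[j])
             for i in range(4) for j in range(i + 1, 4))
assert abs(sum(degs) - 2 * mean_m) < 1e-12
```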
Or, taking the developments a step further, one could define joint distributions for the $\Theta_{ij}$ on different edges, thereby introducing correlations of quite general kinds between the edges in the model. There are enormous possibilities to be explored in this regard, but we pass over them for now, our interests in the present paper lying in other directions. One can calculate many other properties of our models. For example, for the degree-sequence model above, one can calculate the expectation value of any product of vertex degrees from an appropriate derivative of the partition function: $\langle k_i k_j \cdots \rangle = \frac{(-1)^{\ell}}{Z} \frac{\partial^{\ell} Z}{\partial \theta_i\, \partial \theta_j \cdots}$, where $\ell$ is the number of degrees in the product. Such derivatives are correlation functions of degrees within the model. Similarly, derivatives of the free energy give the connected correlation functions: $\langle k_i \rangle = \frac{\partial F}{\partial \theta_i}$, $\langle k_i k_j \rangle_c = -\frac{\partial^2 F}{\partial \theta_i\, \partial \theta_j}$, and so forth. For instance, the two-vertex connected correlation is $\langle k_i k_j \rangle_c = \langle k_i k_j \rangle - \langle k_i \rangle \langle k_j \rangle$. For the case of the Bernoulli random graph, which has all $\theta_i$ equal, this gives $\langle k_i k_j \rangle_c = p(1-p)$ for $i \neq j$, where we have made use of the edge probability derived above. Thus the degrees of vertices in the random graph are in general positively correlated. One can understand this as an effect of the one edge that potentially connects the two vertices $i$ and $j$. The presence or absence of this edge introduces a correlation between the two degrees. (For a sparse graph, in which $p = O(n^{-1})$, the correlation disappears in the limit of large graph size.) In order to measure some quantities within exponential random graph models, it may be necessary to introduce additional terms into the Hamiltonian. For instance, to find the expectation value of the clustering coefficient $C$, one would like to evaluate $\langle C \rangle = \frac{1}{Z} \sum_{G} C(G)\, e^{-H(G)}$, which we can do by introducing an extra term linear in the clustering coefficient in the Hamiltonian. To measure clustering in the degree-sequence model, for example, we could define $H = \sum_i \theta_i k_i + \gamma C$. Then $\langle C \rangle = \frac{\partial F}{\partial \gamma} \Big|_{\gamma = 0}$. Thus it is important, even in the simplest of cases, to be able to solve more general models, and much of the rest of the paper is devoted to the development of techniques to do this. C.
Directed graphs Before we look at more complicated Hamiltonians, let us look briefly at what happens if we change the graph set $\mathcal{G}$ over which our sums are performed. The first case we examine is that of directed graphs. We define $\mathcal{G}$ to be the set of all simple loopless directed graphs, which is parameterized by the asymmetric adjacency matrix with elements $\sigma_{ij} = 1$ if there is a directed edge from $j$ to $i$, and $\sigma_{ij} = 0$ otherwise. Thus, for instance, the Hamiltonian $H = \theta m$ gives rise to a partition function $Z = \big( 1 + e^{-\theta} \big)^{n(n-1)}$ and a corresponding free energy $F = -n(n-1) \ln\big( 1 + e^{-\theta} \big)$. The directed equivalent of the more general degree-sequence model, in which we can control the degree of each vertex, is a model that now has two separate parameters for each vertex, $\theta_i^{\mathrm{in}}$ and $\theta_i^{\mathrm{out}}$, that couple to the in- and out-degrees: $H = \sum_i \big( \theta_i^{\mathrm{in}} k_i^{\mathrm{in}} + \theta_i^{\mathrm{out}} k_i^{\mathrm{out}} \big)$. Then the partition function and free energy are $Z = \prod_{i \neq j} \big( 1 + e^{-(\theta_i^{\mathrm{out}} + \theta_j^{\mathrm{in}})} \big)$ and $F = -\sum_{i \neq j} \ln\big( 1 + e^{-(\theta_i^{\mathrm{out}} + \theta_j^{\mathrm{in}})} \big)$, where the products and sums run over ordered pairs of vertices. From these we can calculate the expected in- and out-degree of a vertex: $\langle k_i^{\mathrm{in}} \rangle = \sum_{j \neq i} \big( e^{\theta_j^{\mathrm{out}} + \theta_i^{\mathrm{in}}} + 1 \big)^{-1}$ and $\langle k_i^{\mathrm{out}} \rangle = \sum_{j \neq i} \big( e^{\theta_i^{\mathrm{out}} + \theta_j^{\mathrm{in}}} + 1 \big)^{-1}$. We note that $\sum_i \langle k_i^{\mathrm{in}} \rangle = \sum_i \langle k_i^{\mathrm{out}} \rangle$, as must be the case for all directed graphs, since every edge on such a graph must both start and end at exactly one vertex. We can also define a probability distribution $\rho(\theta^{\mathrm{in}}, \theta^{\mathrm{out}})$ for the fields on the vertices, and the developments above generalize in a natural fashion. We give a more complex example of a directed graph model in Section IV C 1, where we derive a solution to the reciprocity model of Holland and Leinhardt using perturbative methods. D. Fermionic and bosonic graphs It will by now have occurred to many readers that results like these bear a similarity to corresponding results from traditional statistical mechanics for systems of non-interacting fermions. We can look upon the edges in our networks as being like particles in a quantum gas and pairs of vertices as being like single-particle states. Simple graphs then correspond to the case in which each single-particle state can be occupied by at most one particle, so it should come as no surprise that the results look similar to a system obeying the Pauli exclusion principle. Not all networks need have only a single edge between any pair of vertices. Some can have multiple edges or multiedges.
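The in/out-degree bookkeeping above can be checked numerically. In this sketch (helper names ours), each ordered pair $(i, j)$ carries an edge with probability $1/(e^{\theta_i^{\mathrm{out}} + \theta_j^{\mathrm{in}}} + 1)$, and the expected in- and out-degree totals must agree, since every edge has exactly one start and one end:

```python
import math

def p_dir(theta_out_i, theta_in_j):
    """Probability of a directed edge i -> j in the model
    H = sum_i (theta_i_in * k_i_in + theta_i_out * k_i_out)."""
    return 1.0 / (math.exp(theta_out_i + theta_in_j) + 1.0)

def in_out_degrees(theta_in, theta_out):
    """Expected in- and out-degree of every vertex."""
    n = len(theta_in)
    kin = [sum(p_dir(theta_out[j], theta_in[i]) for j in range(n) if j != i)
           for i in range(n)]
    kout = [sum(p_dir(theta_out[i], theta_in[j]) for j in range(n) if j != i)
            for i in range(n)]
    return kin, kout

kin, kout = in_out_degrees([0.2, -0.3, 1.0], [0.5, 0.0, -0.7])
# Every edge both starts and ends somewhere: sum <k_in> = sum <k_out>.
assert abs(sum(kin) - sum(kout)) < 1e-12
```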
The world wide web is an example: there can be and frequently is more than one link from one page to another. The Internet, airline networks, metabolic networks, neural networks, citation networks, and collaboration networks are other examples of networks that can exhibit multiedges. There is no problem generalizing our exponential random graphs to this case and, as we might expect, it gives rise to a formalism that resembles the theory of bosons. Let us define our set of graphs $\mathcal{G}$ to be the set of all undirected graphs with any number of edges between any pair of vertices (but still no self-edges, although there is no reason in principle why these cannot be included as well). Taking for example the Hamiltonian $H = \theta m$, and generalizing the adjacency matrix so that $\sigma_{ij}$ is now equal to the number of edges between $i$ and $j$, we have $Z = \prod_{i<j} \sum_{\sigma_{ij}=0}^{\infty} e^{-\theta \sigma_{ij}} = \big( 1 - e^{-\theta} \big)^{-\binom{n}{2}}$ and $F = \binom{n}{2} \ln\big( 1 - e^{-\theta} \big)$. The equivalent of the probability $p_{ij}$ of an edge appearing in the fermionic case is now the expected number of edges $n_{ij}$ between vertices $i$ and $j$, which is given by $n_{ij} = \frac{1}{e^{\Theta_{ij}} - 1}$. Note that this quantity diverges if we allow $\Theta_{ij} \to 0$, a phenomenon related to Bose-Einstein condensation in ordinary Bose gases. For the special cases $\Theta_{ij} = \theta$ and $\Theta_{ij} = \theta_i + \theta_j$, we have $n_{ij} = \big( e^{\theta} - 1 \big)^{-1}$ and $n_{ij} = \big( e^{\theta_i + \theta_j} - 1 \big)^{-1}$, respectively. The connected correlation between the degrees of any two vertices in the latter case is $\langle k_i k_j \rangle_c = n_{ij} (1 + n_{ij})$ for $i \neq j$. Thus the degrees are again positively correlated and the correlation diverges as $\Theta_{ij} \to 0$. E. The sparse or classical limit In most real-world networks the number of edges $m$ is quite small. Typically $m$ is of the same order as $n$, rather than being of order $n^2$. Such graphs are said to be sparse. (One possible exception is food webs, which appear to be dense, having $m = O(n^2)$.) The probability $p_{ij}$ of an edge appearing between any particular vertex pair $(i, j)$ is of order $1/n$ in such networks. Thus, for example, in the fermionic case the relation $p_{ij} = 1/(e^{\Theta_{ij}} + 1)$ tells us that $e^{\Theta_{ij}}$ must be of order $n$ in a sparse graph.
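In the bosonic case the multiedge count $\sigma_{ij}$ has weight $e^{-\Theta \sigma}$ over $\sigma = 0, 1, 2, \ldots$, i.e. a geometric distribution, so its mean can be checked against the Bose-like occupation $1/(e^{\Theta}-1)$ by direct (truncated) summation. A quick numerical sanity check (function name ours):

```python
import math

def bosonic_mean_edges(Theta, cutoff=10000):
    """Mean multiedge count <sigma_ij> for P(s) proportional to
    e^{-Theta*s}, s = 0, 1, 2, ..., by truncated direct summation."""
    weights = [math.exp(-Theta * s) for s in range(cutoff)]
    Z = sum(weights)
    return sum(s * w for s, w in enumerate(weights)) / Z

for Theta in (0.5, 1.0, 3.0):
    closed = 1.0 / (math.exp(Theta) - 1.0)   # Bose-Einstein-like occupation
    assert abs(bosonic_mean_edges(Theta) - closed) < 1e-9
```

As $\Theta \to 0$ the geometric distribution flattens and the mean diverges, the analogue of the condensation remark in the text.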
The same is also true for the bosonic network of the previous section. This allows us to approximate many of our expressions by ignoring terms of order 1 by comparison with terms of order $e^{\Theta_{ij}}$. We refer to such approximations as the "sparse limit" or the "classical limit," the latter by analogy with the corresponding phenomenon in quantum gases at low density. In particular, the equivalent of the edge probability for either fermionic or bosonic graphs in the classical limit is $p_{ij} = e^{-\Theta_{ij}}$. For the case of the degree-sequence model, it is $p_{ij} = e^{-\theta_i} e^{-\theta_j}$, so that each edge appears with a probability that is a simple product of "fugacities" $e^{-\theta_i}$ defined on each vertex. The classical limit of this model has been studied previously by a number of other authors, although again developed and justified in a different way from our presentation here; generally the edge probability has been taken as an assumption, rather than a derived result. For a given distribution $\rho(\theta)$ of $\theta$, the expected degree of a vertex is $\langle k_i \rangle = (n-1)\, e^{-\theta_i} \int \rho(\theta')\, e^{-\theta'}\, d\theta'$, which is simply proportional to $e^{-\theta_i}$. So we can produce any desired degree distribution by choosing the corresponding distribution for $\theta$. F. Fixed edge counts Another possible choice of graph set $\mathcal{G}$ is the set of graphs with both a fixed number of vertices $n$ and a fixed number of edges $m$. Models of this kind have been examined occasionally in the literature and, if we once more adopt the view of the edges in a graph as particles, they can be considered to be the canonical ensemble of network models, where the variable edge-count models of previous sections are the grand canonical ensemble. As in conventional statistical mechanics, the grand ensemble is often simpler to work with than the canonical one, but progress can sometimes be made in the canonical case by performing the sum over all graphs regardless of edge count and introducing a Kronecker $\delta$-symbol into the partition function to impose the edge constraint: $Z = \sum_{G} \delta_{m, m(G)}\, e^{-H(G)}$, where $m$ is the desired number of edges.
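The quality of the classical approximation is easy to quantify: for the fermionic case, $p^{\mathrm{classical}}/p^{\mathrm{exact}} = 1 + e^{-\Theta}$, so at sparse densities $p = O(1/n)$ (i.e. $\Theta \approx \ln n$) the relative error is itself $O(1/n)$. A short check (function names ours):

```python
import math

def p_exact(Theta):
    """Exact fermionic edge probability."""
    return 1.0 / (math.exp(Theta) + 1.0)

def p_classical(Theta):
    """Sparse/classical-limit approximation."""
    return math.exp(-Theta)

# For a sparse graph, Theta ~ ln n, and the two agree to relative O(1/n).
for n in (100, 1000, 10000):
    Theta = math.log(n)
    rel_err = abs(p_classical(Theta) - p_exact(Theta)) / p_exact(Theta)
    assert rel_err < 2.0 / n
```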
For instance, the fixed edge-count version of the generalized random graph would be one in which $Z = \sum_{G} \delta_{m, m(G)}\, e^{-\sum_{i<j} \Theta_{ij} \sigma_{ij}} = \frac{1}{2\pi} \int_{-\pi}^{\pi} d\psi\; e^{\mathrm{i}\psi m} \sum_{G} e^{-\sum_{i<j} (\Theta_{ij} + \mathrm{i}\psi) \sigma_{ij}}$, where we have made use of the integral representation for the $\delta$-function, $\delta_{m, m'} = \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{\mathrm{i}\psi(m - m')}\, d\psi$. The sum over graphs is now in the form of the partition function for the grand canonical version of the model, but with $\Theta_{ij} \to \Theta_{ij} + \mathrm{i}\psi$, giving the field parameters an imaginary part. Thus $Z = \frac{1}{2\pi} \int_{-\pi}^{\pi} d\psi\; e^{\mathrm{i}\psi m} \prod_{i<j} \big( 1 + e^{-\Theta_{ij} - \mathrm{i}\psi} \big)$. In general the integral cannot be done in closed form, which is why fixed edge-count graphs (and canonical ensembles in general) are avoided. The integral can in principle be carried out term by term for any finite $n$, but doing so is tantamount to performing the sum over all graphs with $m$ edges explicitly, so there is little to be gained by the exercise. It is also possible to have a bosonic graph with a fixed number of edges: one would simply sum over the set of graphs that have $m$ edges with any number of them being permitted to fall between any given pair of vertices. We will not discuss further either fixed edge-counts or bosonic networks in this paper, concentrating instead on the grand canonical fermionic ones, which are more useful overall. However, essentially all of the results reported in the remainder of the paper can be generalized, with a little work, to these other cases if necessary. IV. MORE COMPLEX HAMILTONIANS Outside of the models described in the previous sections, and some minor variations on them, we know of few other exponential random graph models that are exactly solvable. (One exception is the reciprocity model of Holland and Leinhardt, for which we derive an exact solution in Sec. IV C 1.) To make further progress one must turn to approximate methods. There are (at least) three types of techniques that can yield approximate analytic solutions for exponential random graph models.
The first and simplest is mean-field theory, which works well in many cases because of the intrinsically high dimensionality of network models; usually these models have an effective dimensionality that increases with the number of vertices $n$, so that the thermodynamic limit of $n \to \infty$ also corresponds to the high-dimension limit in which mean-field theory becomes accurate. Nonetheless, there are many quantities, such as those depending on fluctuations, about which mean-field theory says nothing, and for these other methods are needed. In some cases one can use non-perturbative approaches based on the Hubbard-Stratonovich transform or similar integral transforms, which are very effective and accurate but suitable only for models with Hamiltonians of specific forms polynomial in the adjacency matrix. More generally, one can use perturbation theory, which may involve larger approximations (although they are usually well controlled), but is applicable to Hamiltonians of essentially any form. We discuss all of these approaches here. As an example of their application, we use one of the oldest and best-studied of exponential random graphs, the 2-star model. The Hamiltonian for the 2-star model is $H = \theta m - \gamma s$, where $m$ is the number of edges in the network and $s$ is the number of "2-stars." A 2-star is two edges connected to a common vertex. (The minus sign in front of the parameter $\gamma$ is introduced for later convenience.) The quantities $m$ and $s$ can be rewritten in terms of the degree sequence thus: $m = \frac{1}{2} \sum_i k_i$ and $s = \frac{1}{2} \sum_i k_i (k_i - 1)$. Substituting these expressions into the Hamiltonian, we can rewrite it as $H = -\frac{J}{n-1} \sum_i k_i^2 - B \sum_i k_i$, where $J = \frac{1}{2} \gamma (n-1)$ and $B = -\frac{1}{2}(\theta + \gamma)$. (The factor $(n-1)$ in the definition of $J$ is also introduced for convenience later on.)
Noticing once again that $k_i = \sum_j \sigma_{ij}$, where the variables $\sigma_{ij}$ are the elements of the adjacency matrix, we can also write $H = -\frac{J}{n-1} \sum_i \big( \sum_j \sigma_{ij} \big)^2 - B \sum_i \sum_j \sigma_{ij}$. We study the 2-star model in the fermionic case in which each vertex pair can be connected by at most a single edge, and within the grand canonical ensemble where the total number of edges is not fixed. Generalization to the other cases described above is of course possible, if not always easy. A. Mean-field theory The variables $\sigma_{ij}$ can be thought of as Ising spins residing on the edges of a fully connected graph, and hence the 2-star model can be thought of as an Ising model on the edge-dual graph of the fully connected graph. (The edge-dual $G^*$ of a graph $G$ is the graph in which each edge in $G$ is replaced by a vertex in $G^*$ and two vertices in $G^*$ are connected by an edge if the corresponding edges in $G$ share a vertex.) Using this equivalence, the mean-field theory of the 2-star model can be developed in exactly the same way as for the more familiar lattice-based Ising model. We begin by writing out all terms in the Hamiltonian that involve a particular spin $\sigma_{ij}$: $H(\sigma_{ij}) = -\big[ \frac{2J}{n-1} \sum_{k \neq i,j} (\sigma_{ik} + \sigma_{jk}) + 2B \big] \sigma_{ij}$, where we have explicitly taken account of all the ways in which $\sigma_{ij}$ can enter the first term in the Hamiltonian. (We have also dropped the term $2J\sigma_{ij}/(n-1)$ required to correctly count the terms diagonal in $\sigma_{ij}$, since it vanishes in the large $n$ limit.) Then, in classic mean-field fashion, we approximate the local field by its average: $\sum_{k \neq i,j} (\sigma_{ik} + \sigma_{jk}) \simeq 2(n-1)p$, where, as before, $p = \langle \sigma_{ij} \rangle$ is the mean probability of an edge between any pair of vertices, which is also called the connectance of the graph. Then $H(\sigma_{ij}) = -(4Jp + 2B)\sigma_{ij}$, and we can write a self-consistency condition for $p$ of the form $p = \langle \sigma_{ij} \rangle = \frac{e^{4Jp + 2B}}{1 + e^{4Jp + 2B}}$. Rearranging, this then gives us $p = \frac{1}{2}\big[ \tanh(2Jp + B) + 1 \big]$. For $J \leq 1$ this equation has only one solution, but for $J > 1$ there may either be one solution or, if $B$ is sufficiently close to $-J$, there may be three, of which the outer two are stable.
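The self-consistency condition is easy to explore numerically by fixed-point iteration (a sketch with our own function name; starting points and parameter values are illustrative). On the symmetric line $B = -J$ one finds a single solution $p = \frac{1}{2}$ for $J < 1$, but two stable symmetry-broken solutions, mirror images about $\frac{1}{2}$, for $J > 1$:

```python
import math

def solve_p(J, B, p0, iters=10000, tol=1e-12):
    """Iterate the mean-field condition p = (1/2)[tanh(2*J*p + B) + 1]
    for the 2-star model, starting from p0, until converged."""
    p = p0
    for _ in range(iters):
        p_new = 0.5 * (math.tanh(2 * J * p + B) + 1.0)
        if abs(p_new - p) < tol:
            return p_new
        p = p_new
    return p

# J < 1 on the symmetric line B = -J: both starting points reach p = 1/2.
assert abs(solve_p(0.5, -0.5, 0.01) - 0.5) < 1e-6
assert abs(solve_p(0.5, -0.5, 0.99) - 0.5) < 1e-6

# J > 1: two stable symmetry-broken solutions, one dense and one sparse.
low = solve_p(2.0, -2.0, 0.01)
high = solve_p(2.0, -2.0, 0.99)
assert low < 0.25 and high > 0.75
assert abs(low + high - 1.0) < 1e-6   # mirror images about p = 1/2
```

Note that for $J > 1$ the solution $p = \frac{1}{2}$ still satisfies the equation but is unstable, so fixed-point iteration never lands on it from generic starting points.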
Thus when $B$ is close to $-J$ we have a bifurcation at $J_c = 1$, a continuous phase transition to a symmetry-broken state with two phases, one of high density and one of low. We show in Fig. 1 a plot of the solution of the self-consistency condition, which displays clearly the characteristic hysteresis loop of the symmetry-broken state. Along the "symmetric line" $B = -J$ there is always a solution $p = \frac{1}{2}$ (although it may be unstable), and along this line we can think of $p - \frac{1}{2}$ as a standard order parameter for the model, which is zero in the high-symmetry phase and non-zero in the symmetry-broken phase. We can define a critical exponent $\beta$ in the usual fashion by $p - \frac{1}{2} \sim (J - J_c)^{\beta}$ as $J - J_c \to 0^{+}$, giving $\beta = \frac{1}{2}$, which is the usual Ising mean-field value and should come as no surprise, given the equivalence mentioned above between the 2-star model and the Ising model. One can define other critical exponents as well, which are also found to take Ising mean-field values. For instance, as we showed previously, the variance of the connectance, which plays the role of a susceptibility, goes as $\chi \sim |J - 1|^{-\gamma}$ in the vicinity of the phase transition, with $\gamma = 1$. B. Non-perturbative approaches We can go beyond the mean-field approximation of the previous section by making use of techniques borrowed from many-body theory. The developments of this section follow closely the lines of our previous paper on this topic, and rather than duplicate material needlessly, the reader is referred to that paper for details of the calculation. Here we merely summarize the important results. The evaluation of the partition function for the 2-star model involves a sum of terms of the form $e^{J k_i^2/(n-1)}$. The study of interacting quantum systems has taught us that such sums can be performed using the Hubbard-Stratonovich transform.
We start by noting the well-known result for the Gaussian integral, $\int_{-\infty}^{\infty} e^{-a\phi^2}\, d\phi = \sqrt{\pi/a}$. Making the substitutions $a \to (n-1)J$ and $\phi \to \phi_i - k_i/(n-1)$, and rearranging, this becomes $e^{J k_i^2/(n-1)} = \sqrt{\frac{(n-1)J}{\pi}} \int_{-\infty}^{\infty} e^{-(n-1)J \phi_i^2 + 2J k_i \phi_i}\, d\phi_i$. Then the partition function is $Z = \Big( \frac{(n-1)J}{\pi} \Big)^{n/2} \int \prod_i d\phi_i \sum_{G} e^{-(n-1)J \sum_i \phi_i^2 + \sum_i (2J\phi_i + B) k_i}$, where we have interchanged the order of sum and integral. The sum over graphs now has precisely the form of the partition function sum for the generalized random graph model, with $\theta_i = -(2J\phi_i + B)$, and from the solution of that model we can thus immediately write down the partition function $Z = \Big( \frac{(n-1)J}{\pi} \Big)^{n/2} \int \prod_i d\phi_i\; e^{\mathcal{H}(\{\phi_i\})}$, where the quantity $\mathcal{H}(\{\phi_i\}) = -(n-1)J \sum_i \phi_i^2 + \sum_{i<j} \ln\big( 1 + e^{2J(\phi_i + \phi_j) + 2B} \big)$ is called the effective Hamiltonian. Thus we have completed the partition function sum for the 2-star model, but at the expense of introducing the auxiliary fields $\{\phi_i\}$, which must be integrated out to complete the calculation. The integral cannot, as far as we are aware, be evaluated exactly in closed form but, as we showed previously, it can be evaluated approximately using a saddle-point expansion, with the result that the free energy of the 2-star model is given to leading order in the expansion by the value of the integrand at the saddle point $\phi_i = \phi_0$ for all $i$, where $\phi_0$ is the position of the saddle point, i.e., the maximum of the effective Hamiltonian on the real-$\phi$ line, satisfying $\phi_0 = \frac{1}{2}\big[ \tanh(2J\phi_0 + B) + 1 \big]$. Note that this saddle-point equation is identical to the mean-field equation for the connectance $p$ of the 2-star model. Thus, $\phi_0$ is the connectance of the model within the mean-field approximation and the saddle-point expansion, as is typically the case in such calculations, is an expansion about the mean-field solution. From the free energy we can derive a number of quantities of interest. We showed previously, for instance, that the variance of vertex degree in the model has a gradient discontinuity but no divergence at the phase transition. (This quantity is, by contrast, zero everywhere within mean-field theory.) C. Perturbation theory Exponential random graphs also lend themselves naturally to treatment using perturbation theory. Here we describe the simplest such theory, which is roughly equivalent to the high-temperature expansions of conventional thermal statistical mechanics. Expansions of this type have been examined previously by Burda et al.
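The completing-the-square identity behind the transform is standard textbook material, not specific to the present calculation, but is worth recording explicitly:

```latex
\int_{-\infty}^{\infty} e^{-a\phi^2 + b\phi}\, d\phi
  = e^{b^2/4a} \int_{-\infty}^{\infty}
      e^{-a\left(\phi - \frac{b}{2a}\right)^2} d\phi
  = \sqrt{\frac{\pi}{a}}\; e^{b^2/4a}, \qquad a > 0.
```

Setting $a = (n-1)J$ and $b = 2Jk_i$ gives $b^2/4a = Jk_i^2/(n-1)$, reproducing the transform used above: the term quadratic in $k_i$ is traded for a term linear in $k_i$, at the cost of an integral over the auxiliary field $\phi_i$.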
for Strauss's model of a transitive network. Here we develop the theory further for general exponential random graphs. The fundamental idea of perturbation theory for random graphs is the same as for other perturbative methods: we expand about a solvable model in powers of the coupling parameters in the Hamiltonian. We write the Hamiltonian for the full model in the form $H = H_0 + H_1$, where $H_0$ is the Hamiltonian for the solvable model and $H_1$ takes whatever form is necessary to give the correct expression for $H$. Then the partition function is $Z = \sum_{G} e^{-H_0 - H_1} = Z_0 \big\langle e^{-H_1} \big\rangle_0$, where $Z_0 = \sum_{G} e^{-H_0}$ is the partition function for the unperturbed Hamiltonian, and $\langle \ldots \rangle_0$ indicates an ensemble average in the unperturbed model. The only case that has been investigated in any detail is the one where we expand around a random graph, $H_0 = \theta m$, so that the averages are averages in the ensemble of the random graph. (It is possible for $\theta$ to be zero, so this choice for $H_0$ does not place any restriction on the form of the overall Hamiltonian. If $\theta = 0$ then the expansion is precisely equivalent to an ordinary high-temperature series.) However, for Hamiltonians $H$ that give significant probability to networks substantially different from random graphs, the perturbation theory cannot be expected to give accurate answers at low order. In theory there is no reason why one could not expand about some other solvable case, although no such calculations have been done as far as we are aware. One obvious possibility, which we do not pursue here, is to expand around one of the generalized random graph forms described in earlier sections. Typically, to make progress, we will expand the exponential in a power series of the form $\big\langle e^{-H_1} \big\rangle_0 = \sum_{k=0}^{\infty} \frac{(-1)^k}{k!} \big\langle H_1^k \big\rangle_0$. In practice, $H_1$ normally contains a coupling constant, such as the constant $J$ in the 2-star model above, and thus our expression for the perturbed partition function is an expansion in powers of the coupling. In this section, we apply the perturbation method to two example models.
First, we study a simple model proposed about a quarter of a century ago by Holland and Leinhardt, which is exactly solvable by this method. Then we illustrate the application of the method to the 2-star model and compare its performance against the approximate saddle-point expansion results of the previous section. Example 1: The reciprocity model Our first example of perturbation theory is a directed graph model. In the real world, many directed graphs display the phenomenon of reciprocity: a directed edge running from vertex A to vertex B predisposes the network to have an edge running from B to A as well. Put another way, the network has a higher fraction of vertex pairs that are joined in both directions than one would expect on the basis of chance ("mutual dyads" in the parlance of social network analysis). Behavior of this kind is seen, for example, in the world wide web, email networks, and neural and metabolic networks. Holland and Leinhardt have proposed an exponential random graph model of reciprocity, which we study here in a simplified version. As we now show, the perturbation expansion for this model can be written down to all orders and resummed to give an exact expression for the partition function. The Hamiltonian for the model is $H = \theta m - \gamma r$, where $m$ is the total number of (directed) edges in the graph, and $r$ is the number of vertex pairs with edges running between them in both directions. The unperturbed Hamiltonian $H_0 = \theta m$ is that of a directed random graph (Sec. III C), with partition function $Z_0 = \big( 1 + e^{-\theta} \big)^{n(n-1)}$ and each directed edge present with independent probability $p = (e^{\theta} + 1)^{-1}$. The perturbation $H_1$ can be written in terms of the adjacency matrix as $H_1 = -\gamma r = -\gamma \sum_{i<j} \sigma_{ij} \sigma_{ji}$. Then the perturbation series for the full Hamiltonian is $Z = Z_0 \sum_{k=0}^{\infty} \frac{\gamma^k}{k!} \big\langle r^k \big\rangle_0$, with $r = \sum_{i<j} \sigma_{ij} \sigma_{ji}$. Thus the partition function is written as an expansion in powers of $\gamma$ whose coefficients are correlation functions of elements of the adjacency matrix, calculated within the ordinary random graph.
If we can evaluate these correlation functions, at least up to some finite order, we can also evaluate the perturbed partition function. Since all edges are present or absent independently of one another in the random graph, the correlation functions factor:
$$\langle \sigma_{12}\sigma_{21}\sigma_{34}\sigma_{43} \rangle_0 = \langle \sigma_{12} \rangle_0 \langle \sigma_{21} \rangle_0 \langle \sigma_{34} \rangle_0 \langle \sigma_{43} \rangle_0 = p^4,$$
and so forth. The only exception is in cases where two or more of the elements $\sigma_{ij}$ being averaged are the same. In that case, noting that $\sigma_{ij}^n = \sigma_{ij}$ for any $n$, we have results like
$$\langle \sigma_{12}^2 \sigma_{21} \rangle_0 = \langle \sigma_{12}\sigma_{21} \rangle_0 = p^2.$$
To evaluate the terms of the series, therefore, we need to count the number of independent elements $\sigma_{ij}$ that appear in each term. This can be difficult for some models, but for the reciprocity model it is quite straightforward. The question we need to answer is this: if we choose $k$ pairs of vertices $(i,j)$, in how many ways, $a_{k,q}$, can the choices be made so that exactly $q$ distinct pairs appear, each of them at least once? Since each term with exactly $q$ distinct mutual pairs contributes $p^{2q}$, and the $q$ pairs can be chosen in $\binom{n(n-1)/2}{q}$ ways, the series becomes
$$\frac{Z}{Z_0} = \sum_{q} \binom{n(n-1)/2}{q} p^{2q} \, g_q(\rho),$$
where the function
$$g_q(z) = \sum_{k=1}^{\infty} a_{k,q} \frac{z^k}{k!}$$
is the exponential generating function for the $a_{k,q}$. Now the number of ways of choosing $k$ pairs such that all choices are made from a particular set of size $q$, but without the constraint that each pair in the set appear at least once, is just $q^k$. Thus
$$\sum_{m=1}^{q} \binom{q}{m} a_{k,m} = q^k .$$
Multiplying by $z^k/k!$ and summing over $k = 1 \ldots \infty$, this gives
$$\sum_{m=1}^{q} \binom{q}{m} g_m(z) = e^{qz} - 1,$$
which immediately implies that $g_q(z) = (e^z - 1)^q$, by induction on $q$ with the initial condition $g_1(z) = \sum_{k=1}^{\infty} a_{k,1} z^k/k! = e^z - 1$. Substituting this result into the series above then gives us our solution:
$$Z = Z_0 \bigl[ 1 + p^2 (e^{\rho} - 1) \bigr]^{n(n-1)/2},$$
or, making use of $F = -\ln Z$ in the normal fashion,
$$F = F_0 - \tfrac{1}{2} n(n-1) \ln\bigl[ 1 + p^2 (e^{\rho} - 1) \bigr].$$
From these expressions we can, for instance, obtain the mean number of edges $\langle m \rangle = \partial F/\partial\theta$ and the mean number $\langle r \rangle = -\partial F/\partial\rho$ of pairs of vertices connected by edges running both ways, the latter being
$$\langle r \rangle = \tfrac{1}{2} n(n-1) \, \frac{p^2 e^{\rho}}{1 + p^2 (e^{\rho} - 1)}.$$
A quantity of interest in directed networks is the reciprocity, the fraction $2\langle r \rangle / \langle m \rangle$ of edges that are reciprocated, which in real networks is typically found to be on the order of tens of percent. In Fig. 2, we show the reciprocity, along with the connectance of the network, as a function of $\rho$ for the case $p = 0.01$.
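The counting identity $\sum_{m=1}^{q}\binom{q}{m} a_{k,m} = q^k$ and the inclusion-exclusion formula for $a_{k,q}$ implied by $g_q(z) = (e^z - 1)^q$ are both easy to check by brute force. The following is an illustrative sketch, not code from the paper:

```python
from itertools import product
from math import comb

def a(k, m):
    """a_{k,m}: ordered choices of k dyads from a fixed set of m,
    with every one of the m dyads chosen at least once (brute force)."""
    return sum(1 for t in product(range(m), repeat=k) if len(set(t)) == m)

def a_formula(k, q):
    """The same count read off from g_q(z) = (e^z - 1)^q, whose k-th
    EGF coefficient is this inclusion-exclusion sum."""
    return sum((-1) ** (q - m) * comb(q, m) * m**k for m in range(q + 1))

for k in range(1, 7):
    for q in range(1, 6):
        # choosing k dyads freely from a set of q gives q^k possibilities
        assert sum(comb(q, m) * a(k, m) for m in range(1, q + 1)) == q**k
        assert a_formula(k, q) == a(k, q)
```

The brute-force count and the generating-function coefficient agree for every small $(k, q)$ tested, which is exactly the resummation step that makes the reciprocity model exactly solvable.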
There is no phase transition or other unexpected behavior in this model: the measured properties are smooth functions of the independent parameters. Notice that there is a substantial range of values of $\rho$ over which the connectance is low and the graph realistically sparse, but the reciprocity is still high, with values similar to those seen in real networks.

Example 2: The 2-star model

As our second example of the application of perturbation theory, we return to the 2-star model introduced at the beginning of Sec. IV. Unlike the case of the reciprocity model in the preceding section, perturbation theory does not lead to an exact solution of the 2-star model but, as we now show, we can get an approximate solution by studying the perturbation expansion to finite order, a different approximation from the saddle-point expansion of Sec. IV B. We divide the Hamiltonian $H = \theta m - Js$ into an unperturbed part $H_0 = \theta m$, which is the normal Bernoulli random graph, and a perturbation Hamiltonian $H_1 = -Js$. Then the partition function for the full model is given by
$$\frac{Z}{Z_0} = \bigl\langle e^{Js} \bigr\rangle_0 .$$
The number of 2-stars is
$$s = \sum_{i} \binom{k_i}{2} = \sum_{i} \sum_{j<l} \sigma_{ij}\sigma_{il},$$
and therefore
$$\frac{Z}{Z_0} = \sum_{k=0}^{\infty} \frac{J^k}{k!} \bigl\langle s^k \bigr\rangle_0 .$$
Our strategy is to evaluate the series up to some finite order in $J$ to get an approximate solution for $Z$, but there is a problem. Each term in the series corresponds to states of the graph that have the corresponding number of 2-stars: the first-order term, for instance, counts the number of graphs that have a 2-star in any position in the graph. This is not enough for our purposes, however. Realistic graphs will have not a finite number but a finite density of 2-stars in them, and the number of such graphs is counted by terms that appear at infinite order in the perturbation expansion in the limit $n \to \infty$. So, without going to infinite order as we did in the reciprocity model, we are never going to get meaningful results from our expansion. Similar problems appear in ordinary statistical mechanics, and the solution is well known.
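The quantity $s$ is cheap to compute from the degree sequence, and its unperturbed mean follows from linearity of expectation: each of the $n\binom{n-1}{2}$ potential 2-stars is present with probability $p^2$. The sketch below (illustrative code, small $n$ only) confirms this against exhaustive enumeration:

```python
import itertools
import math

def mean_two_stars_enum(n, p):
    """Exact <s>_0 in the Bernoulli random graph, by summing over all graphs."""
    pairs = list(itertools.combinations(range(n), 2))
    M = len(pairs)
    total = 0.0
    for bits in itertools.product([0, 1], repeat=M):
        deg = [0] * n
        for (i, j), b in zip(pairs, bits):
            deg[i] += b
            deg[j] += b
        s = sum(k * (k - 1) // 2 for k in deg)   # s = sum_i C(k_i, 2)
        total += p**sum(bits) * (1 - p)**(M - sum(bits)) * s
    return total

n, p = 5, 0.3
exact = n * math.comb(n - 1, 2) * p**2           # n*C(n-1,2)*p^2 ~ n^3 p^2 / 2
assert abs(mean_two_stars_enum(n, p) - exact) < 1e-9
```

For $n = 5$, $p = 0.3$ both routes give $\langle s\rangle_0 = 2.7$, matching the leading-order estimate $\frac{1}{2}n^3p^2$ up to the $O(n^2)$ corrections.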
Instead of expanding the partition function, we form an expansion for the free energy. We can write the free energy as $F = F_0 + F_1$, where $F_0 = -\ln Z_0$ is the free energy of the unperturbed network and $F_1 = -\ln(Z/Z_0)$. Now we expand $F_1$ as a power series in $J$ of the form
$$F_1 = \sum_{k=1}^{\infty} f_k J^k,$$
where we have made use of the fact that $F_1 = 0$ when $J = 0$. Substituting into $Z/Z_0 = e^{-F_1}$, we get
$$\bigl\langle e^{Js} \bigr\rangle_0 = \exp\Bigl( -\sum_{k=1}^{\infty} f_k J^k \Bigr),$$
and comparing terms order by order in $J$, we find
$$f_1 = -\langle s \rangle_0, \qquad f_2 = -\tfrac{1}{2}\bigl[ \langle s^2 \rangle_0 - \langle s \rangle_0^2 \bigr],$$
and so forth. Apart from sign and a factor of $1/k!$, these are the cumulants of $s$ within the ensemble defined by the unperturbed network. If we expand $s$ in terms of the adjacency matrix, then they are connected correlations of elements of the adjacency matrix, "connected" because individual elements of the adjacency matrix are uncorrelated, so that all terms in the cumulants vanish unless they involve sets of 2-stars that share one or more edges. (Note that sharing a vertex, as in the more familiar spin models of traditional statistical mechanics, is not a sufficient condition for being connected. The fundamental degrees of freedom in a network are the edges.) We will proceed then as follows. We calculate the free energy $F_1$ in terms of connected correlations up to some finite order in $J$, and from this we calculate the partition function $Z = Z_0 e^{-F_1}$. Even though $F_1$ is known only to finite order, our expression for $Z$ will include terms with all powers of the connected correlations in it, via the expansion of the exponential, and hence will include graphs with not only a finite number but a finite density of 2-stars. This idea, which will be routine for those familiar with conventional diagrammatic many-body theory, is entirely general and can be applied to any model, not just the 2-star model. In essence, the series given by $e^{-F_1}$ is a partial resummation to all orders of the partition function, including some but not all of the contributions to $Z$ from disconnected correlations of arbitrarily high order. Let us see how the calculation proceeds for the case of the 2-star model, to order $J^3$, as above.
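The statement that the $f_k$ are (up to sign and a factor $1/k!$) cumulants of $s$ can be checked numerically: the cumulant generating function is $K(J) = \ln\langle e^{Js}\rangle_0 = -F_1$, so its derivatives at $J = 0$ must reproduce the mean and variance of $s$. An illustrative sketch for a tiny graph, using finite differences:

```python
import itertools
import math

def s_distribution(n, p):
    """Exact probability distribution of the 2-star count s in G(n, p)."""
    pairs = list(itertools.combinations(range(n), 2))
    M = len(pairs)
    dist = {}
    for bits in itertools.product([0, 1], repeat=M):
        deg = [0] * n
        for (i, j), b in zip(pairs, bits):
            deg[i] += b
            deg[j] += b
        s = sum(k * (k - 1) // 2 for k in deg)
        dist[s] = dist.get(s, 0.0) + p**sum(bits) * (1 - p)**(M - sum(bits))
    return dist

def K(J, dist):
    """Cumulant generating function K(J) = ln <e^{J s}>_0 = -F_1."""
    return math.log(sum(w * math.exp(J * s) for s, w in dist.items()))

dist = s_distribution(4, 0.2)
mu = sum(w * s for s, w in dist.items())
var = sum(w * (s - mu) ** 2 for s, w in dist.items())

h = 1e-4   # central finite differences for K'(0) and K''(0)
k1 = (K(h, dist) - K(-h, dist)) / (2 * h)
k2 = (K(h, dist) - 2 * K(0.0, dist) + K(-h, dist)) / h**2
assert abs(k1 - mu) < 1e-6    # first cumulant  = <s>_0  = -f_1
assert abs(k2 - var) < 1e-4   # second cumulant = Var(s) = -2 f_2
```

With the convention $F_1 = \sum_k f_k J^k$ used here, the first two cumulants are $-f_1$ and $-2f_2$, consistent with the expressions above.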
The leading $O(J)$ term in $F_1$ is simple: $f_1 = -\langle s \rangle_0$. Since we are primarily interested in large networks, we can approximate $\langle s \rangle_0$ by its value to leading order in $n$, which is $\frac{1}{2} n^3 p^2$. The second term, at order $J^2$, is more complicated because there are several different ways in which two 2-stars may combine to share one or more edges. In order to keep track of these different contributions, we make use of a diagrammatic representation similar to that employed by Burda et al. for Strauss's transitivity model. Figure 3a shows the single diagram contributing to $f_1$, which gives the result above. Figure 3b shows the three diagrams that contribute to $f_2$. It is an assumption of our notation that each edge that appears in a diagram is distinct. Thus the third diagram in Fig. 3b, which represents the case in which the two 2-stars fall on top of one another, must be depicted separately, rather than being considered a special case of the first diagram. This turns out to be a good idea, since this term has a different functional form from the first diagram, and neither diagram is necessarily negligible by comparison with the other. In general the basic "Feynman rules" for interpreting the diagrams are:

1. each edge contributes a factor of $p$;
2. each vertex contributes a factor of $n$;
3. the numerical multiplier is the number of distinct ways in which the diagram can be decomposed into overlapping 2-stars such that each edge occurs at least once, divided by the symmetry factor for the diagram. (The symmetry factor is the number of distinct permutations of the vertices that leave the diagram unchanged.)

Then for the connected correlation functions one must subtract all other ways of composing lower-order diagrams to make the given diagram, as in the expression for $f_2$ above. To see how these rules work in practice, let us apply them to the first diagram in Fig. 3b. This diagram has four vertices and three edges, which gives a factor of $n^4 p^3$ by the first two rules.
The diagram can be decomposed into two 2-stars in 6 different ways, but the symmetry factor is also 6, so we end up with $n^4 p^3 \times 6/6 = n^4 p^3$. The contribution to the diagram from the term $-f_1^2$ in the expression for $f_2$ is $-n^4 p^4$, so the final value of the diagram is $n^4 (p^3 - p^4)$ to leading order in $n$. Proceeding in a similar fashion, the other diagrams of Fig. 3b contribute $n^4 (p^3 - p^4)$ and $\frac{1}{2} n^3 (p^2 - p^4)$, respectively. The diagrams for the $O(J^3)$ term are shown in Fig. 3c; they are more complicated, but routine to evaluate using the rules above. The final expressions for the $f_k$ follow by summing the diagrams at each order. Note that we have retained the leading-order terms in $n$ separately at each order in $p$, since we have no knowledge a priori about the relative magnitude of $n$ and $p$. In a sparse graph, we expect that $p$ will be of order $1/n$, in which case it may be possible to neglect some terms. Once we have the expansion of $F_1$, it is straightforward to calculate statistical averages from derivatives of the free energy in the normal fashion. For example, the expected number of 2-stars in the network is given by $\langle s \rangle = -\partial F/\partial J$, and the expected number of edges is
$$\langle m \rangle = \frac{\partial F}{\partial \theta} = p(p-1)\Bigl[ \frac{\partial F_0}{\partial p} + \frac{\partial F_1}{\partial p} \Bigr] = \tfrac{1}{2} n^2 p + n^3 (1-p) p^2 J \Bigl[ 1 + \tfrac{1}{2}\bigl(1 + 6np - 8np^2\bigr) J + \tfrac{1}{6}\bigl(1 + 21np + 58n^2 p^2 - 180 n^2 p^3 + 129 n^2 p^4\bigr) J^2 \Bigr].$$
In Fig. 4, we show the connectance $2\langle m \rangle / n^2$ and the density of 2-stars $2\langle s \rangle / n^3$ calculated from the saddle-point method of Sec. IV B and from the expressions above, at first, second, and third order. As the figure shows, the perturbation expansion agrees with the non-perturbative method at high and low values of the effective coupling $\hat{J} = \frac{1}{2}(n-1)J$, and markedly better for the third-order approximation than for the first- and second-order ones. However, in the region of the phase transition at $\hat{J}_c = 1$ the agreement is poor, as we would expect. In this region there will be large critical fluctuations and hence contributions to the free energy from large connected diagrams that are entirely missing from our series expansion.
Presumably by extending the perturbation series we can derive successively more accurate answers in the critical region. We also note that the perturbation expansion gives results only for the sparse phase in the symmetry-broken region. We have here studied in detail two examples of the treatment of exponential random graphs by perturbation theory (and another can be found in Ref. ). The techniques we have used, however, are entirely general, and diagrammatic theories similar to these, with similarly simple "Feynman rules," can be derived for other examples as well.

V. CONCLUSIONS

In this paper we have discussed exponential random graphs, which in both a figurative and a quantitative sense play the role of a Boltzmann ensemble for the study of networks. Exponential random graphs are a formally well-founded framework for making predictions about the expected properties of networks given specific measurements of properties of those networks. We have shown in this paper how they can be derived in moderately rigorous fashion from maximum entropy assumptions about probability distributions over graph ensembles. We have given many examples of particular calculations using exponential random graphs, starting with simple random graph models that have linear Hamiltonians, many of which have been presented previously by other authors, albeit with rather different motivation. In most cases these linear models can be solved exactly, meaning that we can derive the partition function or equivalently the free energy of the graph ensemble exactly in the limit of large system size. For nonlinear Hamiltonians it appears possible to find exact solutions only rarely, but we have been able to find approximate solutions in several cases using a number of different methods.
Taking the particular example of the 2-star model, we have shown how its behavior can be understood using mean-field theory, perturbation theory, and non-perturbative methods based on the Hubbard-Stratonovich transform. We have also given one example, the reciprocity model of Holland and Leinhardt, that is exactly solvable by evaluating its perturbation expansion to all orders. The results presented in this paper are only a tiny fraction of what can be done with exponential random graphs. There are many interesting challenges, both practical and mathematical, posed by this class of models. Exploration of the behavior and predictions of specific models as functions of their free parameters, development of other approximate solution methods, or expansion of those presented here, and the development of models to study network phenomena of particular interest, such as vertex-vertex correlations, effects of hidden variables, effects of degree distributions, and transitivity, are all excellent directions for further research. We hope to see some of these topics pursued in the near future.
Nobiletin as a tyrosinase inhibitor from the peel of Citrus fruit. A tyrosinase inhibitor was isolated from the peel of Citrus fruit by activity-guided fractionation, and identified as 3',4',5,6,7,8-hexamethoxyflavone (nobiletin) by comparison with reported spectral data. Nobiletin (IC50, 46.2 microM) exhibited greater potency than kojic acid (IC50, 77.4 microM), which was used as a positive control, and was found to be a potentially effective inhibitor of melanin production.
The Occupation of Household Financial Management among Lesbian Couples Abstract Occupational science seeks to explicate the everyday activities of individuals within their social and cultural worlds. This research is concerned with the occupation of doing finances within the context of creating a home as a couple. Thirteen couples were interviewed in their homes, to address the question: How do lesbian couples go about creating a home together through engagement in household occupation? Financial management, the specific subject of this paper, was a found topic that emerged spontaneously from the interviewees. Modified grounded theory and narrative approaches were used to analyze data. The major findings include approaches to money management, the dynamic nature of financial management, and how financial management is influenced by being lesbian in today's society. Implications for occupational science are discussed, such as: redefining the division of labor approach to household tasks; concepts of fairness and balance; equal respect for paid and unpaid work; and that people situated in seemingly personal occupations are actually situated in legal, social, and political realms.
Finding a best parking place using exponential smoothing and cloud system in a metropolitan area Finding a vacant place to park cars in the rush hour is time-consuming and can be frustrating for drivers. In some studies, vehicles are equipped with communicative tools called On Board Units (OBUs), which work along with roadside devices known as Roadside Units (RSUs) and allow the drivers to communicate with each other and trace a vacant parking place easily. Previous systems depend on sensors connected all over the road and parking space, and may suffer from long search times, occupancy of an empty parking place before the car reaches the desired location, the requirement for additional hardware, wired network communication, and security issues. In this paper, we propose exponential smoothing and multi-objective decision-making using cloud-based methods to find the best parking place, taking into consideration the parking cost for the driver. The proposed system uses cellular base stations to eliminate the cost of the RSUs and the sensors. The results of our simulations via the NS-2 network simulator confirm the efficiency of the proposed model.
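A minimal sketch of the smoothing component is shown below. This is illustrative only: the occupancy series and the smoothing factor are invented, and the paper's multi-objective decision-making and cloud infrastructure are not modeled.

```python
def exponential_smoothing(series, alpha):
    """Simple exponential smoothing: s_t = alpha*x_t + (1 - alpha)*s_{t-1}.
    Returns the smoothed series; the final value is the one-step forecast."""
    if not 0.0 < alpha <= 1.0:
        raise ValueError("alpha must be in (0, 1]")
    smoothed = [series[0]]
    for x in series[1:]:
        smoothed.append(alpha * x + (1.0 - alpha) * smoothed[-1])
    return smoothed

# hypothetical occupancy ratios of one parking lot in successive intervals
occupancy = [0.60, 0.65, 0.80, 0.90, 0.85, 0.70]
forecast = exponential_smoothing(occupancy, alpha=0.5)[-1]
# a lot whose forecast occupancy is lowest could then be offered to the driver
```

In a full system, one such forecast per candidate lot would feed the multi-objective ranking alongside cost and distance.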
Colin Cowherd has some interesting views when it comes to fatherhood. Back in November of last year, Cowherd went on a strange rant about Washington Wizards point guard John Wall, and how Wall's lack of a father meant that he probably wasn't cut out to run an NBA team. The comments touched a nerve, since a) Wall's father is dead, and b) there was some perceived race-baiting in Cowherd's previous criticisms of Wall (who is black, and who Cowherd criticized for doing "The Dougie," a distinctly black dance move, during a game). So the "You don't have a dad, so you can't lead" thing was the last straw for some people. Well, it would appear that Colin Cowherd, professor of sociology, is back at the microphone. And this time he has a theory about Roger Goodell and NFL players who don't have dads. Cowherd references a column by CBS' Gregg Doyel, in which Doyel compares the NFL commissioner to a strict father and refutes the claim by some players and media outlets that Goodell purposely disciplines black players more than white ones. Cowherd then takes the column and uses it as a jumping-off point for this argument: since 3 out of 4 NFL players are black, and since 71% of African-American men grow up without fathers (I'm not sure where Cowherd got this figure from, but according to the Census Bureau, 56 percent of black children lived in single-parent families in 2004), and since star football players don't get told what to do in high school and college, the NFL is the first time some of these guys have ever had a dad. Which, based on Cowherd's logic, would mean Roger Goodell, who looks like a much bigger version of Bobby from The Brady Bunch, is a dad to roughly 1,017 black NFL players.
I wrote here about Shirley Sherrod’s lawsuit against Andrew Breitbart’s widow in connection with video Andrew posted. In the video, Sherrod regales an NAACP audience with the tale of how, as a public employee, she initially stiffed a white farmer seeking her help. Andrew did not post, presumably because he did not have, the full video in which Sherrod says she eventually saw the error of her ways and helped the farmer. Sherrod is represented by lawyers at the mega-firm Kirkland & Ellis. They are representing Sherrod for free. They are doing so even though, from all that appears, Sherrod is far from indigent thanks at least in part to her prior successful adventures in litigation. Through their representation of Sherrod free of charge in a suit against Breitbart’s widow, the K&E lawyers illustrate how pro bono law has morphed into left-wing lawfare at big law firms. Sherrod’s legal team deposed Christian Adams last week. Adams provides a partial account of the experience, along with his scathing view of Sherrod, her lawyers, and the lawsuit, here. In my opinion, the deposition likely had two purposes. First, it probably was a fishing expedition for evidence that Breitbart posted the video with “malice,” a necessary element for a successful suit by a public figure like Sherrod. In this context, as John has explained, malice means that Andrew knew what he was communicating about Sherrod was false (or cast her in a false light), or believed it was likely false (or cast her in a false light). It’s unlikely that Sherrod can make this showing and unlikely that Adams has knowledge bearing on the question. His only connection to the case is receipt of an email from Andrew containing a link to the video he later posted. Sending the link would not speak to whether Andrew knew or believed the link cast Sherrod in a false light. I had Tweeted out that it was my opinion that Shirley Sherrod was a “greedy redeemed racist.” Sure enough, Sherrod’s lawyer Jonathan F. 
Ganter was armed with the Tweet as an exhibit with questions to follow. Is it simple happenstance that I was one of the few people subpoenaed for a deposition after I wrote a series of articles criticizing the immorality of suing Andrew Breitbart’s widow (a woman who had nothing to do with the Sherrod saga)?. . . . Is it simple happenstance that I was one of the few people subpoenaed for a deposition after I wrote articles noting that Kirkland and Ellis has represented a Nazi camp guard at Treblinka, among other unsavory cases? Given how little came out of my deposition, one wonders. Is Sherrod a “greedy redeemed racist”? I’ll leave it to Adams to answer that question. In my opinion, though, Sherrod the hater remains unredeemed. The Breitbart incident didn’t cost Sherrod her employment or her reputation, except during a brief period until the full video was produced. She came out of the affair smelling fine (the same cannot be said, however, for her NAACP audience whose members took delight in the racism of Sherrod pre-redemption). Thus, for Sherrod to pursue this lawsuit against Andrew’s widow is, as Adams has said, indecent. It’s difficult to imagine that she would be doing it but for the fact that Breitbart was a prominent figure in conservative media. And it’s even more difficult to believe that Kirkland & Ellis would be representing her for free but for that fact. Not enough conservative media outlets are covering the lawsuit against Breitbart. I’m pretty sure that if any of those now-silent outlets were being sued by someone with Sherrod’s background for publishing the statements she made, Andrew Breitbart’s cavalry would have been riding to their defense. I met Andrew only a few times, and didn’t know him well. But I’m pretty sure Adams is right.
Energy-Efficient Routing Protocols for Wireless Sensor Networks: Architectures, Strategies, and Performance Recent developments in low-power communication and signal processing technologies have led to the extensive implementation of wireless sensor networks (WSNs). In a WSN environment, cluster formation and cluster head (CH) selection consume significant energy. Typically, the CH is chosen probabilistically, without considering real-time factors such as the remaining energy, number of clusters, distance, location, and number of functional nodes that boost network lifetime. Based on these real-time issues, different strategies must be incorporated to design a generic protocol suited for applications such as environment and health monitoring, animal tracking, and home automation. Elementary protocols such as LEACH and centralized-LEACH are well proven, but limitations gradually emerged, along with a growing need for modification over time. Since the selection of CHs has always been an important criterion for clustered networks, this paper reviews the modifications to the threshold value for CH selection in the network. With the evolution of bio-inspired algorithms, CH selection has also been enhanced by considering the behavior of the network. This paper includes a brief description of LEACH-based and bio-inspired protocols, their pros and cons, assumptions, and the criteria of CH selection. Finally, the performance factors such as longevity, scalability, and packet delivery ratio of various protocols are compared and discussed.
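For the LEACH-style election the survey describes, the classic threshold takes the form T = P / (1 − P·(r mod 1/P)) for nodes that have not yet served as CH in the current epoch. A minimal sketch follows (parameter values are arbitrary; real deployments add residual-energy and distance terms, as discussed above):

```python
import random

def leach_threshold(P, r):
    """Classic LEACH cluster-head election threshold for an eligible node
    in round r, with desired CH fraction P. Nodes that already served as
    CH in the current epoch of 1/P rounds are ineligible (threshold 0)."""
    return P / (1.0 - P * (r % int(round(1.0 / P))))

def elect_cluster_heads(node_ids, P, r, rng):
    """Each eligible node becomes CH when its uniform draw falls below T."""
    T = leach_threshold(P, r)
    return [n for n in node_ids if rng.random() < T]

P = 0.1
assert abs(leach_threshold(P, 0) - 0.1) < 1e-9
assert abs(leach_threshold(P, 5) - 0.2) < 1e-9   # threshold rises within an epoch
assert abs(leach_threshold(P, 9) - 1.0) < 1e-9   # last round: every eligible node elected
heads = elect_cluster_heads(range(100), P, r=0, rng=random.Random(7))
```

The rising threshold ensures that each node serves as cluster head once per epoch of 1/P rounds, which is exactly the rotation property that the energy-aware modifications surveyed here build upon.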
Five-time Grammy winner Mariah Carey has officially joined the judging panel on Fox's "American Idol." Fox entertainment chief Kevin Reilly announced her signing to a meeting of the Television Critics Association on Monday, and then put Carey on speaker phone to confirm the deal. "I am so excited to be joining 'American Idol,'" she said. "This kind of all just happened really quickly." Carey had been courted by "Idol," and as one of music's best-selling singers she could bring the star power it needs to compete with rivals like "The X Factor," which recently added Britney Spears and Demi Lovato as judges. Earlier Monday, veteran "Idol" executive producer Nigel Lythgoe said there's a slender possibility that Jennifer Lopez might return. His reasoning: Since she told "Idol" host Ryan Seacrest she was 99 percent sure she was leaving, that means there's a 1 percent chance she won't, Lythgoe said Monday. Lopez's representative, Mark Young, said July 13 that she's leaving "American Idol," following rocker Steven Tyler out the door after two years. That leaves original judge Randy Jackson. Fox did not immediately respond to Lythgoe's remarks.
Measurements of Sea Surface Currents in the Baltic Sea Region Using Spaceborne Along-Track InSAR The main challenging problems in ocean current retrieval from along-track interferometric (ATI) synthetic aperture radar (SAR) are phase calibration and wave bias removal. In this paper, a method based on the differential InSAR (DInSAR) technique for correcting the phase offset and its variation is proposed. The wave bias removal is assessed using two different Doppler models and two different wind sources. In addition to the wind provided by an atmospheric model, the wind speed used for wave correction in this work is extracted from the calibrated SAR backscatter. This demonstrates that current retrieval from ATI-SAR can be completed independently of atmospheric models. The retrieved currents, from four TanDEM-X (TDX) acquisitions over the Øresund channel in the Baltic Sea, are compared to a regional ocean circulation model. It is shown that by applying the proposed phase correction and wave bias removal, a good agreement in spatial variation and current direction is achieved. The residual bias, between the ocean model and the current retrievals, varies between 0.013 and 0.3 m/s depending on the Doppler model and wind source used for wave correction. This paper shows that using SAR as a source of wind speed reduces the bias and root-mean-squared-error (RMSE) of the retrieved currents by 20% and 15%, respectively. Finally, the sensitivity of the sea current retrieval to Doppler model and wind errors are discussed.
By Aaron Marshall on Monday, November 21st, 2011 at 6:00 a.m. Just days after a two-to-one margin of victory for a "health care freedom" amendment to Ohio’s constitution on the November ballot, the band of Tea Party activists and other conservatives behind that initiative announced a new amendment they would seek. Next up would be a petition drive to get a new issue on the ballot that the group describes as a "workplace freedom amendment," but which is usually called a right-to-work issue. This proposed amendment would forbid forcing a person to join a union as a condition of employment, and it is similar to a proposed amendment that was stomped at the ballot by Ohio voters in 1958. At the well-attended Statehouse news conference Nov. 10, supporters of the amendment argued that not having a right to work law was hurting Ohio’s economy in competing for jobs and hurting the state’s economic growth. One of those speaking on behalf of the right to work bill was Maurice Thompson, the executive director of the 1851 Center for Constitutional Law, the author of the proposed amendment. During his remarks, Thompson attempted to make the case that economic conditions were better in right to work states than in Ohio, citing the Buckeye State’s unemployment rate. "We’re hovering around 9-10 percent unemployment at any given time, which is significantly higher than the unemployment rate in states which are not forced union states and it’s always been that way," Thompson said. Just minutes later—at a separate statehouse news conference—House Democrats argued much the opposite -- that unemployment rates in right to work states actually aren’t any lower than Ohio’s. So it seemed like a good time to check into Thompson’s statement. Had Ohio’s unemployment rate "always" been higher? It’s fairly easy to compare unemployment rates in different states thanks to the federal Bureau of Labor Statistics, which diligently tracks and compiles the information each month. 
Politifact asked Thompson for a source listing right-to-work states and he sent us a link to the National Right to Work Legal Defense Fund, which lists 22 states, primarily in the south and west, which have right-to-work statutes on the books. The fund has a database of the relevant section of law in each state, so the fund’s information on what states have the laws looked to be unbiased. We compared Ohio’s unemployment rate of 9.1 for September 2011 to those 22 states and found that eight right-to-work states had higher unemployment rates, 13 had lower unemployment rates and one was precisely the same (Arizona). For what it's worth, Ohio’s rate dipped to 9.0 for October, but we’re sticking with the September figures here, as they were the most current numbers available when Thompson made his claim. Three of the 13 states with lower unemployment rates in September were within one percentage point of Ohio’s — Idaho at 9.0 percent, Texas at 8.5 percent and Arkansas at 8.3 percent. If the jobless rate in Ohio is less than one percentage point greater, does that count as "significantly higher," as Thompson said? We’re not convinced. Thus, we are left with 10 of 22 right-to-work states with "significantly" lower unemployment rates than Ohio. The tail end of Thompson’s statement makes a second claim — that Ohio has always had an unemployment rate higher than right-to-work states. To test this portion of his statement, we looked at both the high and low water marks for Ohio’s unemployment rate over the last decade to give a snapshot of how Ohio stacked up against right-to-work states during both good and bad economic times. When Ohio had its lowest point of unemployment in the state’s history — 3.8 percent in January 2001 — there were still eight right-to-work states with lower unemployment. Fourteen had higher unemployment rates. When Ohio had the highest rate of unemployment over the last decade — 10.6 percent in Feb. 2010 — 18 states had lower unemployment rates.
Four right-to-work states had higher unemployment rates. We think it’s likely that Ohio has ranked somewhere between No. 5 and No. 15 in highest unemployment rates when compared to the 22 right-to-work states. Thompson also forwarded information from Stanley Greer, a staffer with the National Right to Work Legal Defense Fund, who argued that on average right-to-work states have slightly lower unemployment rates than states without the laws. Greer also produced statistics that argue that private employment has fallen in non right-to-work states but grown in right-to-work states over the past decade. Frankly, we don’t think that information is pertinent to evaluating the truthfulness of Thompson’s claim, which focused solely on the unemployment rate in Ohio as compared to right-to-work states. So let’s return to that. Thompson flatly stated that Ohio’s unemployment rate was "significantly higher" than the rates in right-to-work states and has "always" been that way. But figures from the Bureau of Labor Statistics show that 12 of 22 right-to-work states have unemployment rates that are very similar to Ohio’s or greater. Less than half have "significantly" lower unemployment rates. As for Ohio "always" having had a higher unemployment rate, a check of Ohio’s rate at its highest and lowest points over the past decade shows 14 of the right-to-work states had higher unemployment rates than Ohio when the rate was at its lowest. And when Ohio’s rate was at its highest, there still were four right-to-work states with higher jobless rates. Ohio’s unemployment rate is not higher than all right-to-work states and very likely has never been. Thompson’s claim is not accurate. On the Truth-O-Meter, we rate his claim False. Published: Monday, November 21st, 2011 at 6:00 a.m. E-Mail correspondence with Maurice Thompson, including forwarded information from Stanley Greer of the National Right to Work Legal Defense Fund, Nov. 10 and Nov. 15, 2011.
Determination of sodium penicillin G in disodium carbenicillin preparations. Aqueous solutions of disodium carbenicillin containing sodium penicillin G (sodium benzylpenicillin) are chromatographed by TLC using silica gel on aluminum foil and acetone-chloroform-acetic acid-water (50:45:5:1 v/v) as the developing solvent. The location of penicillin G is determined by reference to standard strips cut from the edges of the chromatogram and visualized colorimetrically. The appropriate area is removed, and the penicillin is eluted from the silica with phosphate buffer at pH 7.0. The amount of penicillin is determined spectrophotometrically after formation of the penicillenic mercuric mercaptide, produced by heating the penicillin with an imidazole reagent containing mercury.
THINGS went from bad to worse for Daz Spencer tonight when he hit Graham Foster with his car whilst driving drunk. Fans were left shocked as an inebriated Daz was then seen fleeing the scene and calling the police to report his car stolen. It all started when he was seen walking home after an evening with his boss. He spotted his car in a field - it had been dumped there after Noah, Amelia and Leanna had gone joyriding in it. Although he didn't know the kids had taken his car, Daz decided to drive it home - even though he was hammered. Whilst driving in the dark he spotted Noah's phone ringing on the floor of the car; as he tried to reach for it, he hit something. He presumed he had hit an animal and went out to inspect the damage. Just as he did this, another car pulled up and you could hear them say that a person had been hit. Daz quickly fled the scene to avoid being arrested, got home and reported his car stolen. Later the police were seen talking to Noah as his phone was found in the car. The show ended with it being revealed it was Graham who had been hit. Fans immediately took to Twitter to share their shock, with one saying: "Daz, honestly, how stupid do you need to be, drinking and driving! #Emmerdale" While another said: "Daz, digging his own grave. #Emmerdale" This one tweeted: "#Emmerdale Oh it's Graham. You're a dead man Daz." Another one asked: "Will Daz be found out?? #Emmerdale" Emmerdale continues tomorrow night at 7pm on ITV.
Institute of the Environment For over 80 years, the Highlands Biological Station (HBS) has served as a center for research and education focusing on the biodiversity of the southern Appalachians, and has been the mountain field site of the Institute for the Environment (IE) since 2001. Situated in some of the highest mountain country in the eastern United States, the IE Highlands Field Site offers a unique experience for students interested in biodiversity and conservation issues. The region is an ideal "natural laboratory" in which to learn about the historical and ecological processes that shape the biogeography of the rich southern Appalachian biota, and to explore the interplay of land use pressures and conservation concerns facing the region. HBS is a fully equipped scientific field station that also offers a summer field course program and research facilities for visiting scientists. Additionally, it serves as an important community resource with its Nature Center museum and extensive Botanical Garden of native plants. IE students live in a restored home on the Station grounds in the town of Highlands, N.C. (elevation 4,118 feet). The campus provides a convenient center for course offerings and scientific investigations, with classroom and lab space, a computer lab and a library. Students spend the semester becoming intimately familiar with the issues of the Highlands region, much of which lies within the Nantahala National Forest. Coursework is focused on mountain biodiversity and biogeography, theoretical and applied methods (including GIS) for the study of mountain ecology and conservation, and the social, political and ecological history of land use in the southern Appalachians.
The program takes advantage of its proximity to the Great Smoky Mountains National Park, the Qualla Boundary (Reservation of the Eastern Band of Cherokee Nation), the Blue Ridge Parkway and other areas of interest to experience firsthand the complexities of the environmental issues of the southern mountains.
Usability Evaluation of Method Handbook In enterprise modelling and information systems development, methods contribute to systematic work processes and to improving the quality of modelling results. Information Demand Analysis (IDA) is a method that was recently developed for the purpose of optimizing information flow in the field of information logistics. In order to contribute to improvement of the IDA method, the focus of this paper is to evaluate the usability of the IDA method handbook. For this purpose, an approach for usability evaluation of the handbook is proposed and applied. The main contributions are an approach for applying the concept of usability when evaluating a method handbook, experiences from using this approach in a real-world case, and recommendations for improving the IDA method handbook with respect to usability.
I can’t answer what happened. We didn’t turn up for 45 minutes. DAVE JONES apologised to the Cardiff fans after a horror show ended their automatic promotion hopes. All the pre-match talk was about securing a top-two finish. But 21 minutes in shell-shocked Cardiff were already wondering who their play-off opponents would be instead. That summed up their nightmare showing from start to finish, where everything that could go wrong did go wrong for the Welsh side. Their sorry day included having two stonewall penalties for handball turned down, which referee Graham Scott will not enjoy when he sees them again. However, the hosts were the architects of their own downfall as dismal defending allowed Middlesbrough’s early three-goal blitz to stun the Cardiff City Stadium – and end their bid to put pressure on Norwich. Bluebirds boss Jones said: “I can’t answer what happened. We didn’t turn up for 45 minutes. “I didn’t see this coming. The goals we gave away were schoolboy defending. It would be easy to character assassinate the players, but when the expectation was there we didn’t deliver and it’s as simple as that.” A carnival atmosphere – which included a Craig Bellamy pre-match gee up beamed to the fans on the big screen – was transformed within 20 minutes. Boro striker Leroy Lita began the rout when he headed home Tony McMahon’s third-minute cross. And a nervy home crowd were soon stunned again when more abject defending saw Lita cross for Barry Robson to poke home in the 13th minute. Eight minutes later the contest was all over as Richie Smallwood capitalised on more sloppy play to fire in from the six-yard box. The cheers had turned to jeers and home fans were already leaving the ground before Cardiff’s first attempt arrived on the half-hour mark when Jay Bothroyd stabbed wide. The writing was on the wall that it wasn’t to be the home side’s day when Kevin McNaughton and Bothroyd fluffed routine chances just after the break.
Desperate Cardiff piled forward but couldn’t even find a consolation and, with the damage already done, left the field to boos. Jones added: “We know we didn’t play anywhere near what we are capable of. “We can normally score goals, but we could have put a blanket over their goal. “Whether we have to go through the play-offs, we have to keep the belief alive.” Middlesbrough don’t want the season to end after hitting a rich vein of form. It’s come too late for the pre-season promotion favourites, but Tony Mowbray admits the run of one loss in 11 games bodes well for the next campaign. The Boro boss said: “Everyone knows we have to cut our wage bill dramatically. We will wait and see what happens in the summer. “But we have some talented players who are not affecting the wage bill. “These days happen. I was manager of West Brom, who were striving to get out of this league.”
Fluorescent Sensor for Rapid Detection of Nucleophile and Convenient Comparison of Nucleophilicity. Although nucleophiles (Nu) take part in many important chemical reactions, no fluorescence sensors for Nu detection, or even for ranking nucleophilicity, have been reported up to the present. In this study, we developed a fluorescent malononitrile-modified perylenediimide (MAPDI) that can selectively and rapidly react with nucleophiles, such as amines, amino acids, and some inorganic anions, thereby changing its UV-vis absorption and fluorescence emission. Detection limits of MAPDI for different nucleophiles can be calculated to compare their nucleophilic strength. Furthermore, it was found that MAPDI can detect reductive inorganic anions. These results suggest that MAPDI may have great potential in organocatalytic reactions, metal ion-catalyzed reactions, reactions of amines, and other nucleophilic chemical reactions.
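One common way to turn sensor calibration data into a detection limit, and hence compare nucleophiles as the abstract describes, is the 3.3·σ/slope convention. The sketch below is illustrative only: the blank readings, the slope, and the `detection_limit` helper are assumptions, not data or code from the study.

```python
# Hedged sketch: estimating a limit of detection (LOD) from a fluorescence
# calibration as LOD = 3.3 * sigma_blank / slope, where sigma_blank is the
# standard deviation of repeated blank measurements and slope is the
# sensitivity (signal per unit concentration) from a linear fit.
# All numbers below are hypothetical, not values from the paper.
import statistics

def detection_limit(blank_signals, slope):
    """Return LOD in the concentration units implied by the slope."""
    sigma = statistics.stdev(blank_signals)
    return 3.3 * sigma / slope

blanks = [100.2, 99.8, 100.5, 99.9, 100.1]  # hypothetical blank readings (a.u.)
slope = 5.0e6                               # hypothetical signal per mol/L
print(f"LOD ~ {detection_limit(blanks, slope):.2e} mol/L")
```

A lower LOD for one nucleophile than another, under the same calibration protocol, is then the quantitative basis for ranking their reactivity toward the sensor.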
Q: Prove function in little-oh of a big-theta function I'm having trouble with this homework question. Prove that if $f(n)$ is $o(g(n))$ and $g(n)$ is $\Theta(h(n))$, then $f(n)$ is $o(h(n))$. I know I need to use the precise definition of a limit, but I'm not sure how to apply it to this proof. Any help would be appreciated. A: Start from the definitions. $f(n)$ is $o(g(n))$ means: for every constant $c > 0$ there exists an $n_0$ such that $f(n) < c \cdot g(n)$ for all $n \geq n_0 \ -(1)$ $g(n)$ is $\Theta(h(n))$ means: there exist constants $c_1, c_2 > 0$ and an $n_0'$ such that $c_1 \cdot h(n) \leq g(n) \leq c_2 \cdot h(n)$ for all $n \geq n_0' \ -(2)$ To show $f(n)$ is $o(h(n))$, fix an arbitrary $\varepsilon > 0$; we must exhibit an $n_0''$ with $f(n) < \varepsilon \cdot h(n)$ for all $n \geq n_0''$. Apply $(1)$ with the particular constant $c = \varepsilon / c_2 > 0$: there is an $n_0$ such that $f(n) < (\varepsilon / c_2) \cdot g(n)$ for all $n \geq n_0 \ -(3)$ The right side of $(2)$ gives $g(n) \leq c_2 \cdot h(n)$ for all $n \geq n_0' \ -(4)$ Let $n_0'' = \max(n_0, n_0')$. For all $n \geq n_0''$, chaining $(3)$ and $(4)$ yields $f(n) < (\varepsilon / c_2) \cdot c_2 \cdot h(n) = \varepsilon \cdot h(n)$. Since $\varepsilon > 0$ was arbitrary, $f(n)$ is $o(h(n))$. Note: the key point is that little-oh quantifies over every positive constant, so you are free to pick the convenient constant $\varepsilon / c_2$ when applying the hypothesis; a proof that only produces the bound "for some $c$" has not established little-oh.
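As a quick numeric sanity check of the result (not a substitute for the proof), take concrete functions satisfying the hypotheses and watch the ratio f(n)/h(n) shrink:

```python
# Sanity check: f(n) = n is o(n^2), and g(n) = n^2 is Theta(2*n^2),
# so by the theorem f(n) should be o(h(n)) for h(n) = 2*n^2.
# The ratio f(n)/h(n) = 1/(2n) should therefore tend to 0.
def f(n): return n
def g(n): return n * n
def h(n): return 2 * n * n

ratios = [f(n) / h(n) for n in (10, 100, 1000, 10000)]
print(ratios)  # each ratio is 1/(2n), shrinking toward 0
assert all(ratios[i] > ratios[i + 1] for i in range(len(ratios) - 1))
```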
Manitoba is not worried about being the last province standing in a health-funding dispute with Ottawa and will not be rushed into accepting any deals, Premier Brian Pallister said Monday. "We're standing alone. I'm not afraid of that and I'm not anything but proud of the fact we're willing to do that," Pallister told reporters. "I'm not going to be intimidated by a threat and I'm not going to be worried about other people's deadlines. The reality is, for Manitobans, we need (federal) support and partnership." Manitoba became the final holdout last week, when the federal government signed bilateral health agreements with Quebec, Ontario and Alberta after months of heated negotiations. The dispute started last fall, when Ottawa said it would limit annual health-transfer increases to three per cent a year — half the six per cent annual increase set out in the last long-term agreement with the provinces. In the ensuing months, the federal government sweetened the pot by offering extra money for specific projects — the opioid crisis in the case of Alberta and British Columbia, for example. Pallister said the overall transfer increase being offered is not enough to keep up with the rising cost of health care. He said holding out has paid off so far because provinces other than British Columbia and Alberta, for example, will benefit from what they have managed to bargain. "(B.C. Premier) Christy Clark negotiated a side deal on opioids. Good. Good for her, because that benefits all Canadians ... and so what we're trying to do is get the best possible deal we can for Manitoba, and I expect that will benefit every other province as a consequence." Pallister would not provide any details, such as what deadline the federal government may have set or what extras it may be offering. Pallister wrote to Prime Minister Justin Trudeau two weeks ago to ask that any deal include $6 million per year for the next ten years to combat kidney disease.
He also asked for more support dealing with health-care issues in indigenous communities. Pallister would not say Monday whether any progress has been made since then. "Discussions are continuing," he said.
A 19-year-old student plunged from the roof of a dorm building at Columbia University on Monday, Dec. 15. (Credit: CBS2) NEW YORK (CBSNewYork) — A student was hospitalized Monday night after plunging from the roof of a dorm building at Columbia University. The 19-year-old man plunged from the roof of the eight-story building at 411 W. 116th St. around 8:30 p.m. Monday, and landed on a scaffold on the third-floor landing, officials said. Officials told CBS2 the man was a student. He had no clothes on at the time he plunged from the building, officials said. The Wien Hall dormitory is located at the 116th Street address. He was taken to Mount Sinai St. Luke’s Hospital, where he was reported in serious condition, the FDNY told CBS2. He was still breathing when he was hospitalized, the FDNY said. Police were investigating the incident late Monday night.
The first part of the line-up for this year’s Buckingham Literary Festival has been announced. Some of the biggest names from the worlds of literature and politics will be involved in the four-day event in June. Speakers from the world of politics include Lord Adonis, whose most recent book ‘5 Days in May’ charts the creation of the coalition Government, plus Buckingham MP John Bercow and former MP and previous Governor of Hong Kong Chris Patten. There will be the opportunity for the audience to question the authors, meet them in person and get their books signed. Events will be held throughout the town at venues including the Villiers Hotel, the Radcliffe Centre and Buckingham Library between Thursday June 14 and Sunday June 17. Christopher Woodhead, co-founder of Buckingham Literary Festival, said: “This is the third Buckingham Literary Festival and we are thrilled to have once again attracted such a high calibre of speakers. “The Festival has been growing year on year and we’ve been delighted to see it become one of the highlights of the summer.” Tickets for the event will go on general sale on Saturday April 21. Advance bookings for festival friends will be available from April 6 to 20.
Empirical Formulae to Molecular Structures of Metal Complexes by Molar Conductance Molar conductance studies of electrolytic solutions have always been exciting for chemists. The studies of electrolytic behavior of metal complex solutions provide brief insights into their nature and composition. These studies provide a clue of the number of ions present in a particular solution responsible for the conduction of electric current and, thereby, quite significant structural information can be obtained. Molar conductance data are exploited to ascertain electrolytic and non-electrolytic nature of metal complexes. Attempts have been made to summarize molar conductance ranges of metal complexes in various solvents, which might prove useful to researchers and academia. Besides, molar conductance data have been applied to predict geometries of metal complexes. Moreover, efforts have been made to discuss the applications of conductance data for the estimation of the size of structurally relevant complexes. In addition, molar conductance has been applied to determine metal-ligand stoichiometry. Finally, the structural variance of metal complexes in different solvents is discussed in terms of molar conductance measurements.
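The quantity the review is built around is computed from the measured specific conductance as Λm = 1000·κ/c. The sketch below shows that arithmetic; the classification cutoff is a placeholder of my own, since the actual electrolyte ranges tabulated in the review are solvent-specific.

```python
# Hedged sketch of the standard molar-conductance calculation:
#   Lambda_m = 1000 * kappa / c
# with kappa the specific conductance in S cm^-1 and c the concentration
# in mol L^-1, giving Lambda_m in S cm^2 mol^-1. The cutoff used by
# classify() is an illustrative placeholder, not a range from the review.

def molar_conductance(kappa_S_per_cm, conc_mol_per_L):
    """Return molar conductance in S cm^2 mol^-1."""
    return 1000.0 * kappa_S_per_cm / conc_mol_per_L

def classify(lambda_m, cutoff=50.0):
    # Hypothetical single cutoff; real non-electrolyte/electrolyte ranges
    # differ from solvent to solvent (water, DMSO, nitromethane, ...).
    return "electrolyte" if lambda_m > cutoff else "non-electrolyte"

lam = molar_conductance(1.2e-4, 1.0e-3)   # hypothetical kappa and c
print(round(lam, 6), classify(lam))       # 120.0 electrolyte
```

Comparing the computed Λm against the tabulated solvent-specific ranges is then what lets one infer 1:1, 1:2, etc. electrolyte behavior and, from that, the complex's likely formulation.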
Sexual orientation differences in psychological treatment outcomes for depression and anxiety: National cohort study. OBJECTIVE This study investigates whether sexual minority patients have poorer treatment outcomes than heterosexual patients in England's Improving Access to Psychological Therapies (IAPT) services. These services provide evidence-based psychological interventions for people with depression or anxiety. METHOD National routinely collected data were analyzed for a cohort who had attended at least 2 treatment sessions and were discharged between April 2013 and March 2015. Depression, anxiety and functional impairment were compared for 85,831 women (83,482 heterosexual; 1,285 lesbian; 1,064 bisexual) and 47,092 men (44,969 heterosexual; 1,734 gay; 389 bisexual). Linear and logistic models were fitted adjusting for baseline scores, and sociodemographic and treatment characteristics. RESULTS Compared to heterosexual women, lesbian and bisexual women had higher final-session severity for depression, anxiety, and functional impairment and increased risk of not attaining reliable recovery in depression/anxiety or functioning (aORs 1.3-1.4) and reliable improvement in depression/anxiety or functioning (aORs 1.2-1.3). Compared to heterosexual and gay men, bisexual men had higher final-session severity for depression, anxiety, and functioning and increased risk of not attaining reliable recovery for depression/anxiety or functioning (aORs 1.5-1.7) and reliable improvement in depression/anxiety or functioning (aORs 1.3-1.4). Gay and heterosexual men did not differ on treatment outcomes. Racial minority lesbian/gay or bisexual patients did not have significantly different outcomes from their White lesbian/gay or bisexual counterparts. CONCLUSIONS The reasons for treatment outcome inequities for bisexual patients and lesbian women (e.g., 30-70% increased risk of not recovering) need investigation. Health services should address these inequalities.
The Senate Judiciary Committee will have three new Republican members this Congress. The three are Joni Ernst (Iowa), Marsha Blackburn (Tennessee), and Josh Hawley (Missouri). Sen. Hawley is a fantastic addition. He has served as Missouri’s Attorney General. He also clerked for Chief Justice Roberts (and before that, our friend Michael McConnell). It’s difficult to imagine better credentials for serving on the Judiciary Committee. In addition, Hawley is a strong conservative. Ernst and Blackburn are another matter. To the best of my knowledge neither has practiced law or attended law school. A background in law may not be an absolute prerequisite for effective service on the Judiciary Committee, but it certainly helps. All of the recent members (from both parties) whom I consider most effective have that background. I’ve watched Ernst during hearings of the Armed Services Committee. I think she’s performed well, aided, presumably, by the fact that she has extensive background in the military. Perhaps she’ll come through on the Judiciary Committee. Ernst is up for reelection in 2020. Her perch on the Judiciary Committee will enhance her visibility if there’s another high-profile battle before that body. The battle to confirm William Barr won’t produce the drama or the passion of the Kavanaugh hearings, but it has the potential to bring some attention Ernst’s way. Blackburn is a new Senator. Until now, she’s been in the House. Blackburn is a strong, pro-life conservative who excels at sticking to her talking points. These are good qualities, but they don’t necessarily translate into effective service on the Judiciary Committee. Perhaps she has additional strengths that will so translate. Both Ernst and Blackburn voted to enact First Step — the leniency for federal felons legislation that passed in both chambers last month. Ernst was an enthusiastic early supporter and, I believe, backed even more lenient legislation that failed to reach the Senate floor a few years ago.
“Second Step” is coming, I expect, though not until after 2020. That’s another reason why I’m not thrilled that Ernst and Blackburn are joining the Judiciary Committee.
(1) Field of the Invention This invention relates to rotary-file dental treatment devices. (2) Description of Related Art Currently, dental treatment devices that use rotary files in root canals, such as nickel-titanium files of different diameters chosen according to the dental work to be done, have means for measuring torque and/or means for measuring root canal length to limit the breaking of files and/or to stop the rotation of the file in the apical region, but have few if any safety means for avoiding the breaking of files, for monitoring the precision of the stopping of the file in the apical region, or for keeping dental debris from being projected into or under the apical region. For reasons of stability of measurement or design, regardless of the measuring means, the measuring of the root canal position of the file is an averaged and smoothed measurement that makes it possible to eliminate outliers or speed fluctuations of the file. Such a smoothed measurement does not at any moment provide the exact position of the file, so that a sometimes significant delay affects the measured position of the file. This is all the more significant given that the practitioner applies a back-and-forth movement to bring out the dental sludge and that the speed of insertion of the files into the tooth is not controlled. This situation brings about a non-negligible source of error on the depth reached by the file in the canal with regard to a precise setting of the depth to be reached. The documents U.S. Pat. No. 5,902,105 and U.S. Pat. No. 5,980,248 describe dental treatment devices that comprise means that are suitable for gradually slowing the speed of rotation of a motor driving the file before the stopping thereof so that the work of the file decreases with the lowering of the speed of rotation of the motor.
One drawback of this method is that when the motor slows down, the file no longer works normally and in particular can be subjected to torsion impacts, which fatigues the file and reduces its reliability. In addition, when the file is in the apical region, a slowing of the file can cause a screwing of the file into the tooth, which carries with it the risk of piercing the conduit, with debris being projected under the apex, a source of infection, or of breaking the file. A known process for stopping the motor consists of measuring the current drawn by the motor at the terminals of a resistor inserted into the power circuit, amplifying and integrating this signal, then converting it by means of an analog/digital converter so as to compare the result of the conversion to a value that represents a given torque limit before acting on the control integrator and electrically stopping the motor in its motion. According to this process, the motor will stop only when the braking torque has overcome the inertia of the kinematic and logical unit. This process has the additional drawback that the delays introduced by the measuring chain delay the desired stopping action by a non-negligible period of time, which can produce the fracturing or the breaking of the file in the tooth. To avoid the screwing of the file, the document U.S. Pat. No. 5,980,248 also provides for rotating the file backwards at high speed after the apical stop. Such a device has the drawback of sending dental debris back into the apex, which is a source of contamination of the dental cavity. For its part, the document EP 0 966 237 describes a process and a device for measuring in real time the distance between the distal end of an electrode that is inserted in the radicular canal of a tooth and the apex of said canal and the location of the apex of a tooth.
A clinical trial of an experimental drug for treatment-resistant major depression finds that modulation of the endogenous (inner) opioid system may improve the effectiveness of drugs that target the action of serotonin and related monoamine neurotransmitters. In their paper published online in February 2016 in the American Journal of Psychiatry, a multi-institutional research team reports that adding treatment with ALKS-5461, a medication that combines two drugs with complementary effects on different opioid receptors, to serotonin-targeting antidepressant therapy produced significant symptom improvement in patients with persistent depression. ALKS-5461 is being developed by Alkermes, Inc., which sponsored the trial. Opioid drugs produce their effects by binding to receptors in the endogenous opioid system, which the body uses to suppress pain and to reward biologically beneficial activities. Two prominent opioid receptors are the mu and kappa receptors, which have overlapping but somewhat different effects. ALKS-5461 is a combination of buprenorphine, which suppresses kappa receptor activity and weakly activates mu receptors, and samidorphan, which blocks mu receptor activity. While buprenorphine is FDA-approved to help treat opioid addiction by easing withdrawal symptoms, samidorphan is an experimental drug being developed by Alkermes for several potential uses. The combination of the two drugs is an effort to balance opioid system activity while avoiding adverse effects, including the potential for abuse. The current study, a phase 2 clinical trial, enrolled 142 patients with treatment-resistant depression at 31 sites in the U.S. Since depression treatment trials are likely to have a large placebo response, this study used a design developed in 2003 by Fava and David Schoenfeld, PhD, an MGH biostatistician, to reduce the impact of the placebo effect. Using this sequential parallel comparison design (SPCD), the trial was conducted in two stages.
In stage 1, 98 participants were randomized to receive placebo doses while 43 participants received ALKS-5461 in daily dosages containing either 2 mg or 8 mg of each of the two drugs. After the first four-week treatment period, placebo group members who did not show a response to treatment were re-randomized either to receive one of the two dosages of the active drug or to continue receiving a placebo for stage 2. Fava explains that, by manipulating the expectations of both participants and investigators — neither of whom knew whether and when an individual was receiving the active drug — SPCD minimizes the likelihood of a placebo response, while reducing the need for a much larger group of participants. While both dosage levels of ALKS-5461 produced a greater reduction in depression symptoms than did the placebo, as measured by several standard scales, the lower dosage of 2 mg of each drug produced stronger effects that reached statistical significance. Fava notes that it is not unusual for lower doses of psychotropic drugs to be more effective, since higher doses may have more side effects. The most common reported adverse events were nausea, vomiting and dizziness, most of which occurred during the first few days of treatment; and there was no evidence of withdrawal after the treatment period ended or of the likelihood of abuse of ALKS-5461.
“For the substantial percentage of patients who do not respond to monoamine based medications, this combination may represent an important new approach to the treatment of depression.” Alkermes has been conducting three phase 3 studies of ALKS-5461, two of which have been completed but their results not yet reported in scientific journals. Additional co-authors of the American Journal of Psychiatry paper are Michael Thase, MD, University of Pennsylvania; Alexander Bodkin, MD, McLean Hospital; Madhukar Trivedi, MD, University of Texas Southwestern Medical Center; and Asli Memisoglu, ScD, Marc de Somer, MD, MPH, Yangchun Du, PhD, Richard Leigh-Pemberton, MD, Lauren DiPetrillo, PhD, Bernard Silverman, MD, and Elliot Ehrich, MD, Alkermes, Inc.
Whitneyville, Connecticut History Several grist mills were established in what is now the neighborhood as early as 1640. Eli Whitney chose a mill at the base of East Rock, with water power from the Mill River, as the site of his gun factory in 1798, and built a boarding house for unmarried workers nearby, establishing the village of Whitneyville. In 1860 Whitney's son, Eli Whitney Jr., completely rebuilt the factory and increased the height of the mill dam. This provided more water power and created Lake Whitney, the first municipal water supply for New Haven. He was also a financial supporter of Whitneyville Congregational Church, almost a mile north of the factory, and the neighborhood center began to shift there. By 1900 the electric trolley line from New Haven reached Whitneyville, leading to the subdivision of surrounding land for residential development. The neighborhood street network was substantially complete by 1927. Houses continued to be built well into the twentieth century, notably several distinctive modern homes on Deepwood Drive, and the last house in which playwright Thornton Wilder lived. Government The Town of Hamden provides all municipal services for the neighborhood. It is located in Connecticut's 3rd congressional district. Most of the neighborhood is in the 11th state senate district, with a small western portion in the 17th, and most of it is in the 91st state house district, with a small western portion in the 94th. It is in the 5th district of the town legislative council. It is served by the Whitneyville post office with ZIP code 06517. Transportation Whitney Avenue extends north and south through the neighborhood with Putnam Avenue extending to the west from Whitney. The nearest expressway interchanges are on the Wilbur Cross Parkway (Route 15) in Hamden or I-91 in New Haven. CT Transit operates the 228 and 229 bus routes on Whitney Avenue and the 234 route on Putnam Avenue and several other neighborhood streets. 
Farmington Canal Greenway, a segment of East Coast Greenway, extends along the western edge of Whitneyville with an entrance on Putnam Avenue. Education Educational facilities in the neighborhood include Hamden Hall Country Day School, a private, coeducational school for prekindergarten to grade 12, and the Children's Center of Hamden, a residential facility for children and teenagers with learning disabilities and other challenges. The central offices of the Hamden public schools are located in the former Putnam Avenue School building. A small portion of the campus of Albertus Magnus College in New Haven extends into the neighborhood. Recreation There are two town parks in the neighborhood. DeNicola Park has a playground, multisport field, and basketball courts. Villano Park has a playground, splash pad, multisport field, and basketball and tennis courts. The adjoining Rochford Field has baseball and softball fields used by Albertus Magnus.
Family context in pediatric psychology from a transactional perspective: family rituals and stories as examples. Reviewed the transactional model as applied to the family context of pediatric psychology. A three-part sequence of child behavior, parent behavior, and family interpretation was used to describe developmental adaptation and transitions. It was proposed that families are regulated by practices that are proximal to the child's experience and representations that are more distal to the child's experience. Family practices are examined through family routines and rituals. Family representations are examined through family stories. Case examples of low birth weight premature infants and an adolescent referred for repeated hospitalizations due to diabetic ketoacidosis were presented to illustrate the model. Guidelines for the practicing pediatric psychologist are presented to assess family organization through family rituals and family meaning-making in the telling of family stories.
Anti-titin antibodies in myasthenia gravis: tight association with thymoma and heterogeneity of nonthymoma patients. BACKGROUND Titin is the major autoantigen recognized by anti-striated muscle antibodies, which are characteristic of generalized myasthenia gravis (MG). OBJECTIVE To seek a correlation between anti-titin antibodies and other features of MG patients, including histopathology, age at diagnosis, anti-acetylcholine receptor (anti-AChR) autoantibody titers, and clinical severity. METHODS A novel, highly specific radioligand assay was performed on a large group of 398 patients with generalized MG. RESULTS Among thymectomized patients, anti-titin antibodies were present in most patients with thymoma (56/70), contrasting with only a minority of patients with thymus atrophy or hyperplasia (17/165). They were also present in 64 (41%) of 155 nonthymectomized patients who had a radiologically normal thymus. In these patients and in those who had a histologically normal thymus, anti-titin antibodies were associated with a later age at onset of disease and with intermediate titers of anti-AChR antibodies. After controlling for these 2 variables, disease severity was not significantly influenced by anti-titin antibodies. CONCLUSIONS Anti-titin antibodies are a sensitive marker of thymoma associated with MG in patients 60 years and younger, justifying the insistent search for a thymoma in MG patients of this age group who have these antibodies. In nonthymoma patients, anti-titin antibodies represent an interesting marker complementary to the anti-AChR antibody titer, identifying a restricted subset of patients. These clinical correlations should prompt further studies to examine the mechanisms leading to the production of anti-titin antibodies.
For the best part of a decade, the social news site Reddit was neglected. Abusive users ran free in its darkest corners, its look became dated and "100 per cent" ugly, and it failed to adapt to modern technology, lacking both an app and a mobile website. But despite being a bastion of 2009 aesthetics and style, Reddit was far from dilapidated. It had continued to grow in popularity, and by 2015 had more than 100 million monthly active users who were part of an online community renowned for shaping public opinion. What kept Reddit alive for so long when other social networks such as Myspace and Friends Reunited spluttered and expired is, in its founders' eyes, the combination of its tens of thousands of different "communities" and its anonymous usernames. READ MORE: * Founder not proud of what Reddit became * A beginner's guide to Reddit * Reddit finally launches official app Alexis Ohanian, one of the site's co-founders, says: "When we left in 2009, there was enough that was right with Reddit to keep it growing in spite of no changes whatsoever. It's pretty miraculous. I can't point to any other company, let alone a technology company, that has managed to not evolve for the best part of a decade and still grow." Ohanian and his fellow co-founder Steve Huffman, realising that its potential was being left to wither, returned to the helm last year after being away from the company for six years. "The chance to get back control of a company we started a decade before without any experience was a once in a lifetime opportunity," Ohanian says. "We felt it had a lot more potential." Although the social network has been mired in accusations of fostering hate speech and nurturing abuse, Ohanian thinks Reddit's 250m users form a microcosm of public opinion. "I think I, like a lot of people who live on the coast, was not really as in touch or aware as we should have been," he says. 
"What we saw online was an expression of what was to come at the polls, which is tens of millions of people, if not more, really feeling not listened to and like they weren't being represented in mainstream media. Now their voice has been heard." COMMITMENT TO FREE SPEECH Among the voices on Reddit that received disproportionate amounts of coverage throughout the election was the alt-right. The site is said to have facilitated the growth of the nationalist movement that favoured Donald Trump and fuelled his campaign, partly because of its use of pseudonyms and its community feel. Reddit's commitment to free speech has helped the site gain a reputation as a breeding ground for online vitriol and a meeting place for hateful trolls. Abuse was one of the first problems Ohanian and Huffman decided to tackle on their return, introducing a new trust and safety team, as well as warning messages. For example, the first post on the /r/Politics channel now warns: "In general, don't be a jerk. Don't bait people, don't use hate speech, etc. Attack ideas, not users." It seems to be working, says Ohanian, given that only 0.02 per cent of all posts are reported by users. Ohanian and Huffman founded Reddit in the summer of 2005 shortly after graduating from university, at the ripe age of 22. "We sold Reddit after 16 months because we thought we were getting the deal of a lifetime," says Ohanian. "We'd just graduated from college and it seemed ludicrous that we'd been given this much money for 16 months' worth of work." Ohanian spent the next six years investing in upstart technology companies and writing a book, while Huffman co-founded a travel comparison site called Hipmunk. But it became difficult to watch their brainchild continue to grow and yet receive little improvement, so they both decided to return to the company: Ohanian first as executive chairman in 2014, then Huffman as chief executive in 2015. "We're not going to make the mistake of selling twice. 
We're very, very much in it for the long term," Ohanian says. Since the pair rejoined the company, it has doubled in size. It now boasts a mobile app and website, 240 members of staff and almost 250 million users a month. And, for the first time in its history, the company is focused on revenue generation, with early advertising experiments proving "incredibly successful". A PLAN TO TAKE ON FACEBOOK But Reddit still needs to grow significantly if it is to compete with its closest rival Twitter and its self-declared competition Facebook. "Facebook is the only company we think about," says Ohanian. "They obviously have 1.8 billion users so we still have a way to go with our quarter of a billion." Ohanian is optimistic that Reddit can trump Mark Zuckerberg's social behemoth with its rudimentary commitment to authenticity and free speech. "Reddit offers the opportunity for us as humans to connect on a much deeper, broader level because users have an alter ego and aren't tied to a social network of friends with whom they want to share how perfect their lives are," he says. Ohanian is positive that his and Huffman's initial goals for Reddit can easily be met: after all, they're picking the "low-hanging fruit", the problems created by half a decade's neglect. For now, the focus is on refreshing the site and introducing more native advertising. Then, in the not too distant future, it can start working towards that sought-after billion.
Obsessive-Compulsive Symptoms in an Adolescent Appearing after Cerebellar Vermian Mass Resection. Obsessive-compulsive symptoms have been reported in frontal lobe tumours and basal ganglia lesions. We report here the case of an adolescent who had a vermian cystic mass for which he underwent excision surgery. Three months after surgery, family members noticed that he had started repeated hand washing and developed an abnormal walking pattern. He also developed nocturnal bedwetting. He was given a clinical diagnosis of obsessive-compulsive disorder (OCD) and nocturnal enuresis following cerebellar mass removal, which improved with fluoxetine and imipramine, respectively.
Liverpool have made an official application to have 'The Normal One' trademarked under European law. The phrase was coined by Liverpool manager Jurgen Klopp to describe himself after the German took over from Brendan Rodgers in October 2015. The 48-year-old declared himself 'The Normal One' in his first press conference at Anfield. After Jose Mourinho famously dubbed himself 'The Special One' in his first press conference as manager of Chelsea, Klopp was pushed on how he would choose to describe himself. “I am a normal guy from the Black Forest and I do not compare myself with the geniuses,” said Klopp. “I don’t describe myself. Does anyone in this room think I can do wonders? Let me work. I am the normal one.” The phrase sparked a wealth of Liverpool merchandise with the slogan, including hats, banners and t-shirts. A 25-foot banner is regularly unfurled at Liverpool's Anfield home games as a tribute to the so-called 'Normal One'. T-shirts emblazoned with Klopp's famous cap-and-glasses combo with the 'Normal One' phrase beneath are currently on sale in the Liverpool megastore for £15.99. Now Liverpool, as uncovered by writer Dave Phillips, want to officially trademark the phrase in the UK and Europe, enabling them to feature it exclusively on future Liverpool merchandise. Jurgen Klopp has become a vocal supporter of the campaign for a winter break after eleven of his Liverpool players were ruled out injured. Klopp has been forced to bring QPR defender Steven Caulker into the club on an emergency loan deal. The former Borussia Dortmund manager has warned Pep Guardiola, who is set to join the Premier League next season, that the fixture pile-up will be unlike anything he has dealt with as a manager in Germany. “Pep is so experienced, for sure. I’m sure he will buy a few players and have a good team, have 35 players or whatever. I don’t have to tell Pep Guardiola anything because he is that experienced. 
“I have had a similar situation before with injuries, but with the winter break, they come back. The number of games is the biggest difference. When I came here, I didn’t know there were two rounds in the semi-final of the Capital One Cup," said Klopp.
Kendall Jenner is taking fashion inspiration from the guys and it's awesome. Last night, the Keeping Up With the Kardashians star stepped out for a trip to an amusement park in the Netherlands with Bella Hadid and Gigi Hadid in a ribbed tank top, acid wash jeans with a black belt and black sneakers—a look typically associated with menswear. Paired with luminous makeup, a subtle cat-eye and a shoulder bag, it's an effortless look that works on both women and men. At first glance, it appears that the model is wearing boyfriend jeans, a relaxed and popular style of women's denim. Instead of looking for a menswear-inspired version, however, the model sourced her style straight from the men's section. She's wearing the Levi's 501 Original Fit Men's Jeans with an acid wash (another throwback trend she's bringing back into style). If you're ready to break out of the boundaries of men's and womenswear, the jeans have been reduced from $60 to $35 on the e-commerce site Tilly. Hurry and get them before they're gone!
You can call it a comeback. Over the past several years, the Miami real estate market has gone from serious lows (thousands of unsold condos, for example) to serious highs (the in-contract rooftop penthouse of the Faena House that will reportedly sell for a record $60 million). The new development side is also exciting; in Greater Downtown Miami alone, nearly 18,000 units across 70 towers have been proposed since 2011, according to CraneSpotters. Here’s what else is happening. With beachfront running low in Miami and construction costs lower slightly north in Fort Lauderdale, Miami heavy-hitters Fortune International and the Related groups are teaming up here with the Fairwinds Group to build a 171-unit luxury beachfront condo billed as the first of its kind. “Fort Lauderdale is primed for Miami-style high-quality projects, which will be enhanced by the charm and beauty Fort Lauderdale always offered,” says Fortune International’s Edgardo Defortuna. Dubbed the Auberge Beach Residences & Spa (the property and resort-style amenities will be managed by hotel operator Auberge Resorts Collection), this development will span two buildings. Units will measure 1,500 to 5,000 square feet and have two to five bedrooms, with prices ranging from $1.5 million to $8 million, and all will have ocean views. Some units will have oversized terraces with private pools. Construction will kick off in 2015. But in Miami, ads are out for the 60-story Auberge Residences & Spa at 1400 Biscayne Boulevard, which the Related Group is developing. Paris’ posh Maison&Objet decorative fair is coming to Miami in May with seriously chic expectations. Maison&Objet — the long-standing Paris trade fair dedicated to high-end lifestyle, decoration and design — makes its stateside debut in Miami Beach on May 12. But this solo show happens at the end of the season — months after Art Basel and Design Miami draw hordes of deep-pocketed folks to the city. 
The annual fair will set up shop at the Miami Beach Convention Center. The intent is to expand the show to lure the “sizable” and “untapped” markets of the Americas and Caribbean. The show has already grown internationally after launching in Singapore this past March. “Miami is an American city that pulses with the energy of Latin America, and has already proven itself to be conducive to the convergence of art and business,” says Philippe Brocart, managing director of SAFI — the organization that owns and operates Maison&Objet. Faena Group’s the Residences at Faena Hotel Miami Beach, formerly the Saxony Hotel, is the latest development in the rapidly expanding Faena District and will have a total of 13 fully furnished penthouses, and one of them has broken a new barrier. The six-bedroom, nine-bathroom duplex has listed for a cool $55 million, making it the most expensive condo up for grabs in Miami. Its features also pack a punch: roughly 9,780 square feet of interior space and 4,435 square feet of terraces with panoramic views of the ocean and Downtown skyline. Designed by film director Baz Luhrmann and costume designer Catherine Martin, the condos are set to be finished next fall. The debut should coincide with the opening of the Faena Hotel and Faena Forum — an art-filled center designed by Rem Koolhaas of Holland. Privacy is a standout amenity. This is why developers are now including separate single-family villas in their overall plans. “It is a very unique product in the Miami Beach marketplace to have the services and maintenance-free lifestyle of a condominium while still enjoying the privacy of a stand-alone home,” says Ophir Sternberg of Lionheart Capital. The Ritz-Carlton Residences in Miami Beach, which Lionheart is developing, will have 15 unattached villas, eight of which are waterfront. These three- to four-bedroom properties will run 3,400 to 5,000 square feet, with prices spanning $5 million to $7 million. 
Oceana Key Biscayne includes 12 detached villas (four are still for sale), each of which totals 5,187 square feet. The villas have balconies of 374 square feet, plus 1,636-square-foot terraces with heated pools, and prices spanning $5.4 million to $6 million. Meanwhile, Don Peebles’ planned Bath Club Estates will offer something neat: two side-by-side oceanfront “villas” — massive private duplex units, each with 9,200 interior square feet and 4,126 square feet of outdoor space, and each asking $22.5 million. Best of all, they can easily be combined. With celeb residents like photographer Bruce Weber and designer Tommy Hilfiger, Golden Beach has emerged as Miami’s ’hood of the moment. That’s because it’s the only place in town with homes directly on the Atlantic Ocean. Last year, Hilfiger scored a 15,000-square-foot modernist manse for $25 million. Although new homes are rare in Golden Beach, architect Chad Oppenheim is designing a 20,000-square-foot home at 699 Ocean Blvd expected to fetch some $36 million, says Douglas Elliman broker Oren Alexander, who will rep the listing. “The home’s spa will take up its entire first floor,” says Alexander. Oppenheim is also working on a smaller, more modestly priced home there, while architect Rene Gonzalez is designing a seriously grand compound for a billionaire Latin American family. The latest Miami real estate must-have is on-site urban farms and gardens. Related’s Brickell Heights will include plots designed by urban gardening specialists Ready-to-Grow for residents to raise their own plants and vegetables. On South Beach, LeFrak’s 1 Hotel & Homes on 23rd Street and Collins Avenue will have an herb garden for the property’s food and beverage operations, serving both hotel and condo guests and overseen by Tom Colicchio. And TSG Paragon’s Cassa Brickell will have a residents’ garden on the rooftop sun deck. Garden specialist Le Petite Fleur Miami will oversee the greenery. 
Actor Adrian Grenier is part of the creative team behind Filling Station Lofts showroom in Miami, which opened in late October. Developer NR Investments, which is working to grow a Downtown arts and entertainment district, teamed up this fall with film producer Peter Glatzer and actor/filmmaker Adrian Grenier to curate a showroom in NR’s Filling Station Lofts with art and sustainable furnishings to promote the vision of the pair’s lifestyle platform SHFT.com. This helps push what the developer envisions for the area, which is a creative hub in which residents can live healthier. Few Miami ’hoods have recovered from 2008’s real estate crash as strongly as Sunny Isles, where 17 new condo towers are slated for construction. Three towers stand out for their sheer audacity. There’s developer Gil Dezer’s 60-floor Porsche Design Tower, whose 132 units will include private garages within the condos themselves. Prices will reach $32.5 million. Nearby, the 46-floor Mansions at Acqualina should finish next year and include a 15,000-square-foot, $50 million penthouse. Acqualina’s developers have also just launched The Estates at Acqualina with 90 units priced from $5.9 to $40 million. Finally, there’s the 57-floor Jade Signature from Swiss architects Herzog & de Meuron. The 192-unit tower will feature futuristic design and underground parking. “This allows for unobstructed views of lush landscape and direct beach access,” says project developer Edgardo Defortuna of Fortune International Group. Prices range from $3 million to $30 million.
The Kremlin, which generally opposes Western attempts to tighten United Nations sanctions, criticised Iran for starting to enrich uranium at up to 20 per cent purity inside a previously secret plant. This facility, located at Fordow near the city of Qom, is buried beneath a mountainside and could be invulnerable to military attack. A statement from the Russian foreign ministry said that Moscow has "with regret and worry received the news of the start of work on enriching uranium at the Iranian plant". Iran only declared Fordow to the International Atomic Energy Agency after the facility was discovered by western intelligence agencies. Russia was also kept in the dark – a fact that damaged the Kremlin's relations with Iran. However Tehran retains a firm ally in the shape of President Hugo Chavez of Venezuela, who publicly mocked the west's concerns about its nuclear ambitions on Tuesday. Later this month, the EU is expected to maximise the economic pressure on Iran by agreeing an embargo on the country's oil. A meeting of foreign ministers that will take this decision, previously scheduled for Jan 30, has been brought forward to Jan 23. An official said this was to avoid clashing with a summit of European heads of government and had nothing to do with the situation regarding Iran. Nonetheless, no EU member state is believed to disagree with the principle of an oil embargo. The debate is solely about the practicalities of how to impose this measure. In total, the EU buys about 450,000 barrels of Iranian oil every day, with almost all of this going to Greece, Italy and Spain. They are believed to be seeking a delay that would allow them to find alternative supplies and adapt their refineries to deal with crude oil from other sources. An informed source said the outcome was likely to be a phased embargo, with an agreed delay before it comes fully into effect. 
Britain, France and Germany are understood to be seeking a transitional period of only three months, while Greece is believed to want an interval of about a year. A possible compromise would see a delay of between six and nine months. This would also give Iran another chance to return to the negotiating table. If Tehran were to decline, the EU oil embargo would come into force and the western powers would also be in a better position to convince Russia and China to allow more United Nations sanctions. Timothy Geithner, the US Treasury secretary, began an official visit to China yesterday with the partial aim of persuading Beijing to place more pressure on Iran's economy. But the Chinese state media said the government should ignore this pressure. "Global Times", a newspaper backed by the Communist party, said: "China has made clear its opposition to further sanctions against Iran. Despite pressure from the US and European countries, China should continue trading with Iran." The newspaper added: "If Chinese companies are sanctioned by the US due to their legal trade with Iran, China should take counter-measures." Meanwhile, President Mahmoud Ahmadinejad of Iran, on a visit to Venezuela, shared jokes about his country's nuclear project with one of his closest foreign allies, President Chavez. "One of the targets that Yankee imperialism has in its sights is Iran, which is why we are showing our solidarity," said Mr Chavez during a joint press conference with the Iranian president in Caracas. Scorning western fears of Iran's nuclear ambitions, Mr Chavez added: "That hill will open up and a big atomic bomb will come out." Mr Ahmadinejad responded by praising his host as the "champion in the war on imperialism". * Additional reporting by Peter Simpson in Beijing
Application of a passively mode-locked quantum-dot Fabry-Perot laser in 40 Gb/s all-optical 3R regeneration. The application of a mode-locked quantum-dot Fabry-Perot (QD-FP) laser in a wavelength-preserving all-optical 3R regenerator is demonstrated at 40 Gb/s. The 3R regenerator consists of a QD-FP laser for low-timing-jitter clock recovery, cross-phase modulation based retiming, and self-phase modulation based reshaping. The performance of the all-optical 3R regenerator is assessed experimentally in terms of the Q-factor, timing jitter and bit-error ratio.
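The Q-factor and bit-error ratio used to assess the regenerator are linked, under the usual Gaussian-noise assumption, by BER = 0.5 * erfc(Q / sqrt(2)). A short sketch of that conversion (an illustration of the standard relation, not a calculation from the paper):

```python
import math

def q_to_ber(q: float) -> float:
    # Gaussian-noise approximation relating Q-factor to bit-error ratio
    return 0.5 * math.erfc(q / math.sqrt(2))

# Q = 6 corresponds to a BER near 1e-9, a common "error-free" benchmark
# in 40 Gb/s transmission and regeneration experiments.
for q in (5, 6, 7):
    print(f"Q = {q}: BER ~ {q_to_ber(q):.1e}")
```

This is why regeneration results are often quoted either way: a measured Q-factor improvement translates directly into a predicted BER improvement under the same noise assumption.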
Attachment and Associational Dimensions in the Architecture of Historical Building Conversion in Thailand Between 1997 and 2012. The research on The Initial Survey of Evolution of Adaptive Reuse of Historic Buildings in Thailand is aimed at constructing knowledge for the module named Reuse and Rehabilitation of Historic Buildings. This module is part of the curriculum of the Bachelor of Architecture, which studies the role of architecture fabricated within historical buildings. In the period between 1997 and 2012 in Thailand, there was a notable transformation from conservation to contemporary conversion. A review of conservation perspectives indicates a combined multi-disciplinary cooperation between architectural design and conservation. To establish issues regarding the knowledge of conversion, a research question is raised: how do architectural elements play their role through a change of use? Aimed at understanding the complexity among conversion ideologies, issues surrounding the architectural elements of historical buildings are rationally explored. Based on significant conversion projects from 1996 to 2012, architectural elements were examined with reference to how concepts and objectives were associated. Qualitative research was conducted through a study of primary sources, survey and classification of representative samples, and secondary documents, records and architectural drawings. The controlling significance of the buildings led to a discussion and an analysis of the architectural designs through new additions and amendments made on the historical fabric. Included in this discussion are the principles of conversion as they relate to the architecture of historic buildings and the ideology of the modification. 
It is found that understanding a change to architectural elements through an ontological perspective, that of attachment and associational approaches, could clearly reveal the construction programme that facilitates the historical building for which conservation or adaptation is determined. A dialogue on the relevant contexts surrounding amendments of architectural elements demonstrated that a strong emphasis on particular objectives of use could inadvertently harm the architectural dimensions of the historical building. This leads to the notion that architecture for adaptive reuse should include knowledge of the original construction, and a balance among the conditions of the existing building, its programming and further habitation.
VEGETATION DYNAMICS IN COMMUNITY RESOURCE MANAGEMENT AREAS: A MEASURE OF PROGRESS. Ghana's Community Resource Management Areas (CREMAs) are established to reduce biodiversity degradation through the promotion of communal responsibility to conserve resources for sustainable benefits. This study was conducted to assess vegetation dynamics in three CREMAs in the northern savanna zone of Ghana through the application of remote sensing techniques and field observation. The findings showed the vegetation cover of all three study areas improved over the period between 1990 and 2010. There were indications of succession from lower-tier vegetation classes to higher ones. The riparian vegetation of the study sites changed from open savanna woodland to closed savanna woodland, mainly through the grazing activities of the hippopotamus (Hippopotamus amphibius) and management practices that restrict farming, livestock grazing and charcoal production, and suppress wildfires. The suppression of wildfires has resulted in a considerable amount of fuel load, which must be managed to prevent severe, intense fires in the future. Notwithstanding the general improvement in vegetation cover, there was also a considerable increase in the coverage of bare surface/built-up areas, indicating that economic activity also increased over the period.
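The abstract does not name its remote sensing techniques beyond land-cover classification, but vegetation-cover change studies of this kind commonly rest on spectral indices such as NDVI, which separates green vegetation from bare or built-up surfaces. A minimal sketch of the index (an assumed illustration, not the study's method):

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

# Toy reflectance values: closed savanna woodland reflects strongly in the
# near-infrared, while bare or built-up surfaces do not, so NDVI computed
# from two image dates can flag succession between vegetation classes.
for label, nir, red in [("closed woodland", 0.50, 0.08),
                        ("bare/built-up", 0.28, 0.24)]:
    print(f"{label}: NDVI = {ndvi(nir, red):.2f}")
```

Comparing per-pixel NDVI between the 1990 and 2010 scenes is one standard way a shift from open to closed woodland, or from vegetation to bare surface, would be detected.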
The E3 Ubiquitin Ligase SMAD Ubiquitination Regulatory Factor 2 Negatively Regulates Krüppel-like Factor 5 Protein* Background: The pro-proliferative Krüppel-like factor 5 (KLF5) is posttranslationally regulated. Results: SMAD ubiquitination regulatory factor 2 (SMURF2) interacts with, ubiquitinates and degrades KLF5. Conclusion: SMURF2 negatively regulates KLF5. Significance: The findings increase the understanding of the mechanisms by which KLF5 is regulated posttranslationally. The zinc finger transcription factor Krüppel-like factor 5 (KLF5) is regulated posttranslationally. We identified SMAD ubiquitination regulatory factor 2 (SMURF2), an E3 ubiquitin ligase, as an interacting protein of KLF5 by yeast two-hybrid screen, coimmunoprecipitation, and indirect immunofluorescence studies. The SMURF2-interacting domains in KLF5 were mapped to its carboxyl terminus, including the PY motif of KLF5 and its zinc finger DNA-binding domain. KLF5 protein levels were reduced significantly upon overexpression of SMURF2 but not catalytically inactive SMURF2-C716A mutant or SMURF1. SMURF2 alone reduced the protein stability of KLF5 as shown by cycloheximide chase assay, indicating that SMURF2 specifically destabilizes KLF5. In contrast, KLF5, a KLF5 amino-terminal construct that lacks the PY motif and DNA binding domain, was not degraded by SMURF2. The degradation of KLF5 by SMURF2 was blocked by the proteasome inhibitor MG132, and SMURF2 efficiently ubiquitinated both overexpressed and endogenous KLF5. In contrast, knocking down SMURF2 by siRNAs significantly enhanced KLF5 protein levels, reduced ubiquitination of KLF5, and increased the expression of cyclin D1 and PDGF-A, two established KLF5 target genes. 
Consistently, SMURF2, but not the E3 ligase mutant SMURF2-C716A, significantly inhibited the transcriptional activity of KLF5, as demonstrated by dual luciferase assay using the PDGF-A promoter, and suppressed the ability of KLF5 to stimulate cell proliferation as measured by BrdU incorporation. Hence, SMURF2 is a novel E3 ubiquitin ligase for KLF5 and negatively regulates KLF5 by targeting it for proteasomal degradation. Protein ubiquitination is a key form of posttranslational modification central to eukaryotic regulation. As a main mechanism of controlling the stability and turnover of transcription factors, proteasomal degradation triggered by ubiquitination is pivotal to transcriptional control. The specific effects from ubiquitination-triggered degradation are mainly achieved by E3 ubiquitin ligases, of which there are hundreds, and often determine the substrate availability and specificity of the proteasomal destruction. A given protein can be targeted by multiple E3 ubiquitin ligases, whereas the same E3 ubiquitin ligase can target multiple substrates, demonstrating a highly complex and dynamic regulation. Hence, identifying new targets for these ubiquitin ligases and, conversely, new ubiquitin ligases for a given target, will improve our understanding of the dynamic regulation of cellular functions by ubiquitination. SMURF2 is an E3 ubiquitin ligase recently grouped into the Nedd4 family of HECT ubiquitin ligases. It contains WW domains, which directly bind to a PPXY motif (also known as PY motif) in its targets. This interaction is further stabilized by the PY tail, a six-amino acid stretch immediately carboxyl-terminal to the PPXY motif, although additional interactions exist. As a HECT E3 ubiquitin ligase, SMURF2 catalyzes ubiquitination at specific lysine residues in its targets, which triggers subsequent degradation by proteasomes. SMURF2 has a relatively broad spectrum of targets and is involved in diverse signaling pathways and cellular processes (2, 6-12). Thus, the identification of new targets for SMURF2 may provide further insights into the mechanisms by which the SMURF family of ubiquitin ligases regulates cellular functions. 
KLF5 is a zinc finger-containing transcription factor that promotes cell proliferation and plays important roles in development, differentiation, tumorigenesis, and embryonic stem cell renewal. The expression and protein activity of KLF5 are tightly regulated at both transcriptional and posttranscriptional levels (5, 7). A primary mechanism by which KLF5 is posttranslationally regulated is through ubiquitination and subsequent degradation of KLF5, as mediated by a number of E3 ubiquitin ligases, including WWP1 and FBW7. The interaction between KLF5 and WWP1 involves the PPXY motif of KLF5 (PPPSY) and the WW domains in WWP1. However, whether additional ubiquitin ligases for KLF5 exist and how KLF5 is regulated by various ubiquitin ligases are not clearly defined. Here we present evidence for a novel interaction between KLF5 and SMURF2 and demonstrate that SMURF2 negatively regulates KLF5 by targeting KLF5 for ubiquitination and degradation. This report therefore presents KLF5 as a target for SMURF2 and SMURF2 as a ubiquitin ligase that regulates KLF5. Yeast Two-hybrid Screen and Assay-A yeast two-hybrid screen was performed as described previously. A yeast two-hybrid assay was performed at extremely high stringency with the Matchmaker Gold Yeast two-hybrid system (Clontech). Briefly, the indicated KLF5, SMURF2, or vector control constructs were transformed in the Saccharomyces cerevisiae Y2HGold strain, and specific interaction was verified under selection with leucine, tryptophan, adenine, histidine, and aureobasidin A in the absence or presence of X-α-Gal according to the manufacturer's instructions. Small Interfering RNA-siRNA against SMURF2, in the form of either a mixture of three siRNAs targeting different regions of SMURF2 (Origene, company-guaranteed Trilencer-27 siRNA duplex kit, catalog no. SR312096), two individual siRNAs (Origene, catalog nos. SR312096A/452087 and SC312096B/452091), or the negative control siRNA included in the kit (Origene, catalog no. 
SR30004) was transfected into 25% confluent COS-1 cells with Lipofectamine RNAiMAX (Invitrogen, catalog no. 13778-150) according to the manufacturer's instructions. Three days later, cells were subjected to Western blotting, immunoprecipitation, or quantitative RT-PCR analysis. Quantitative RT-PCR-siRNAs against SMURF2 (Origene, catalog no. SR312096) or the negative control siRNA (Origene, catalog no. SR30004) was transfected into 25% confluent COS-1 cells with Lipofectamine RNAiMAX. Three days later, total RNA was isolated with TRIzol (Ambion/Invitrogen), and quantitative real-time RT-PCR was performed in four triplicates with primer sets specific for SMURF2, SMURF1, KLF5, cyclin D1, PDGF-A (Qiagen, QT00079961, QT00031689, QT00074676, QT00495285, and QT01664488), and the control gene GAPDH (forward, ACCCAGAAGACTGTGGATGG and reverse, TTCTAGACGGCAGGTCAGGT). Products were amplified and detected with the Power SYBR Green RNA-to-CT 1-Step kit (Applied Biosystems) on an Eppendorf REAL-PLEX epgradient S real-time PCR Mastercycler according to the manufacturer's instructions. Relative changes in expression were calculated based on the comparative CT (ΔΔCT) method after normalization with the GAPDH control. For ubiquitination of endogenous KLF5 after SMURF2 knockdown, siRNAs against SMURF2 (Origene, catalog no. SR312096) or the control siRNA (Origene, catalog no. SR30004) were transfected into 25% confluent COS-1 cells with Lipofectamine RNAiMAX. Two days later, cells were transfected with HA-ubiquitin. The next day, cells were treated with 20 μM MG132 (Sigma) for 1 h and disrupted in the lysis buffer. The lysates were denatured by boiling, diluted in the dilution buffer, and immunoprecipitated with either control rabbit IgG or the mixture of rabbit KLF5 antibody and commercial KLF5 antibody (Santa Cruz Biotechnology), followed by incubation with protein A beads. 
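The comparative CT calculation referenced above follows the standard 2^-ΔΔCT formula: each target CT is first normalized to the reference gene (here GAPDH), the control condition's ΔCT is subtracted, and the result is exponentiated. A minimal sketch with hypothetical CT values (the numbers are illustrative, not the paper's data):

```python
def relative_expression(ct_target_s: float, ct_ref_s: float,
                        ct_target_c: float, ct_ref_c: float) -> float:
    """Comparative CT (delta-delta CT) method: fold change = 2 ** -(ddCT).

    s = sample of interest (e.g. SMURF2 knockdown), c = control siRNA;
    the reference gene (here GAPDH) normalizes for input amount.
    """
    d_ct_sample = ct_target_s - ct_ref_s    # normalize sample to GAPDH
    d_ct_control = ct_target_c - ct_ref_c   # normalize control to GAPDH
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** -dd_ct

# Hypothetical CT values for a KLF5 target gene in knockdown vs. control
# cells, each normalized to GAPDH. A lower CT means earlier amplification,
# i.e. more transcript.
print(relative_expression(24.0, 18.0, 26.0, 18.0))  # prints 4.0 (4-fold up)
```

A fold change above 1 indicates higher expression in the sample than in the control, which is the direction reported here for cyclin D1 and PDGF-A after SMURF2 knockdown.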
The immune complexes were washed four times and Western blotted with mouse HA and β-actin and rabbit SMURF2 (Upstate, catalog no. 07-249) and KLF5 antibodies. Cycloheximide Chase Assay-A cycloheximide chase assay was performed as described. Briefly, COS-1 cells were transfected with the indicated plasmids or vector alone, treated with 100 μg/ml cycloheximide for the indicated times, lysed, boiled in Laemmli buffer containing complete protease inhibitor mixture, and subjected to SDS-PAGE and Western blotting with rabbit KLF5 and mouse Myc, FLAG (Sigma), and β-actin antibodies. BrdU Incorporation Assay-The BrdU incorporation assay was performed as described. Briefly, COS-1 cells were transfected overnight with pMT3-HA-KLF5 and pCMV-Myc-SMURF2 at a 10:1 plasmid ratio to ensure cotransfection and detection, under which HA-KLF5 was not completely degraded by Myc-SMURF2 in the majority of cells. Cells were fixed and permeabilized with methanol, treated with HCl, neutralized, and blocked with 2% BSA in PBS. Cells were then incubated with mouse BrdU (BD Pharmingen), chicken HA (Chemicon), and rabbit Myc (Upstate) antibodies, and with Cy5-conjugated anti-mouse, donkey FITC-conjugated anti-chicken, and RRX-conjugated anti-rabbit antibodies (Jackson ImmunoResearch Laboratories, Inc.). The percentages of transfected cells stained positive for BrdU were then calculated. RESULTS KLF5 Interacts with SMURF2-A previous yeast two-hybrid screen with KLF5 as bait revealed a number of proteins that interact with KLF5.
When HA-tagged KLF5 and Myc-tagged SMURF2 were cotransfected in COS-1 cells stabilized with MG132, a proteasome-specific inhibitor, immunoprecipitation with an HA antibody followed by Western blotting against Myc indicated that Myc-SMURF2 coimmunoprecipitated with HA-KLF5 (Fig. 1C). This interaction was not detected without MG132 treatment, presumably because of constant degradation of KLF5 in the immune complexes (data not shown). Endogenous KLF5 and SMURF2 also interacted with each other, as demonstrated by coimmunoprecipitation of endogenous SMURF2 with KLF5 immunoprecipitated from COS-1 cells treated with MG132 (Fig. 1D). This interaction is specific for SMURF2, as the interaction of endogenous SMURF1 with KLF5 was not detected (Fig. 1D). Thus, KLF5 specifically interacts with SMURF2. We also mapped the domains that mediate the association of KLF5 with SMURF2 using the two-hybrid system and deletion constructs of KLF5. KLF5 has a known PPXY motif (codons 314-317 in mouse and 325-328 in human KLF5), which is absolutely conserved across all the available species (Fig. 1B). Its PY tail, the six residues immediately following the PPXY motif that help stabilize SMURF2 binding, is also highly conserved (Fig. 1B). Consistent with the presence of this SMURF2-interacting PY motif in KLF5, SMURF2 binds efficiently to a portion (amino acids 308 to 360) of mouse KLF5 that spans the PPXY motif and PY tail (Fig. 1A). In addition, SMURF2 interacts with the flanking zinc finger DNA-binding domain at the very carboxyl terminus of mouse KLF5 (amino acids 361 to 446) (Fig. 1A). This interaction is specific, as it prefers the zinc finger DNA-binding domain of KLF5 to that of KLF4 (amino acids 350-483 in mouse KLF4) (Fig. 1A), although both KLF5 and KLF4 contain similar C2H2-type zinc fingers (Fig. 7A).
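The PPXY motif mapping described above lends itself to a simple sequence scan. Below is a minimal sketch in Python, using a short hypothetical peptide fragment rather than the real KLF5 sequence; the scan reports each PPXY core together with the six-residue PY tail that follows it:

```python
import re

# Sketch: locate PPXY motifs (PP, any residue, Y) in a protein sequence and
# report the six-residue "PY tail" that follows each match. The fragment below
# is hypothetical and is NOT the actual KLF5 sequence.
def find_ppxy(seq):
    """Return (1-based position, motif, PY tail) for each PPXY occurrence."""
    return [(m.start() + 1, seq[m.start():m.end()], seq[m.end():m.end() + 6])
            for m in re.finditer(r"PP.Y", seq)]

fragment = "GSAPPPSYDSLTVKDE"   # hypothetical fragment containing a PPSY core
print(find_ppxy(fragment))
```

A scan like this only flags candidate motifs; whether a given PPXY core actually recruits a WW-domain ligase must, as in the experiments above, be established biochemically.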
Therefore, the minimal domains in KLF5 that mediate its association with SMURF2 are localized to the carboxyl terminus of KLF5, including its PY motif and DNA-binding domain. Consistent with the interaction between KLF5 and SMURF2, the two proteins colocalize in both the nucleus and the cytoplasm. After COS-1 cells were transfected with HA-KLF5 and Myc-SMURF2 and treated with MG132 to stabilize HA-KLF5, indirect immunofluorescence demonstrated that both HA-KLF5 and Myc-SMURF2 were primarily colocalized to the nucleus (Fig. 2, A and B). Endogenous KLF5 and SMURF2 also colocalized primarily in the nucleus (Fig. 3, A and B), and the colocalization was especially visible at higher magnification and resolution, often excluding nucleolar-shaped subnuclear structures (Figs. 2A and 3C). We reported previously that a fraction of KLF5 is also localized to the cytoplasm. In the small fraction of cells with considerable cytoplasmic localization of KLF5, the cytoplasmic KLF5 also colocalized with SMURF2 (Fig. 2A, center and right columns). Combining the results of all these experiments, KLF5 apparently interacts with SMURF2 in cells. SMURF2 Degrades KLF5-A consequence of the physical interaction between SMURF2 and KLF5 is the ability of SMURF2 to degrade both exogenous and endogenous KLF5. In cells cotransfected with HA-KLF5 and Myc-SMURF2, the level of HA-KLF5 was considerably lower than in cells transfected with HA-KLF5 and vector (Fig. 4A), suggesting that SMURF2 triggers KLF5 degradation. Similarly, when cells were transfected with Myc-SMURF2 alone, the abundance of endogenous KLF5 was lower than that in vector-transfected cells (Fig. 4B), indicating that SMURF2 degrades endogenous KLF5 as well. Lending further support that SMURF2 facilitates KLF5 degradation, treatment with MG132 prevented the ability of Myc-SMURF2 to degrade HA-KLF5 (Fig. 4C, compare lanes 1 and 2). Furthermore, a KLF5 deletion construct (KLF5-N) that contains only the amino terminus of KLF5 and lacks the PY motif and DNA-binding domain was unresponsive to Myc-SMURF2-mediated degradation (Fig. 4D), indicating that the interaction between KLF5 and SMURF2 is important for SMURF2 to degrade KLF5.
FIGURE 1. KLF5 interacts with SMURF2. A, SMURF2 binds KLF5 in a yeast two-hybrid assay through the carboxyl terminus of KLF5. The indicated SMURF2 or full-length or truncated KLF5 constructs (or the corresponding vector alone) were transformed into yeast and selected with leucine, tryptophan, adenine, histidine, and aureobasidin A. The top panel is a schematic showing the various KLF5 constructs and the relative locations of the PY motif and DNA-binding domain of KLF5 (Zinc Fingers). B, high degree of conservation of the PY motifs of KLF5 from different species. The six-residue PY tail immediately following the PPXY core is also shown. C, coimmunoprecipitation of SMURF2 with KLF5. COS-1 cells were cotransfected with HA-KLF5 and Myc-SMURF2, treated with MG132, and immunoprecipitated (IP) with a mouse HA antibody. The coprecipitated Myc-SMURF2 was revealed by Western blotting with a rabbit Myc antibody. The proteins in lysates and immunoprecipitates were revealed by blotting with rabbit Myc and HA and mouse β-actin antibodies. D, KLF5 coimmunoprecipitates with endogenous SMURF2 but not SMURF1. COS-1 cells were treated with MG132 and immunoprecipitated with rabbit KLF5 or control antibodies. The coprecipitated SMURF2 was revealed by Western blotting with a rabbit SMURF2 antibody. The proteins in lysates and immunoprecipitates were also revealed by blotting with rabbit SMURF2 and mouse SMURF1, KLF5, and β-actin antibodies.
SMURF2 Regulates KLF5
The degradation of KLF5 by SMURF2 requires the E3 ubiquitin ligase activity of SMURF2 and is highly specific because the catalytically inactive SMURF2-C716A mutant failed to reduce the steady-state protein level (Fig.
4E, compare lanes 3 and 4) of KLF5, and overexpression of SMURF1 or SMURF1-C699A did not significantly affect the steady-state protein level of KLF5 (Fig. 4E, lanes 1 and 2). To reinforce the conclusion that SMURF2 specifically degrades KLF5, the effects of SMURF2, SMURF1, and their catalytically inactive mutants on the stability of endogenous KLF5 were investigated by cycloheximide chase assay, a standard method for measuring the stability of proteins, including KLF5. The half-life of KLF5 in COS-1 cells was significantly reduced upon transfection of SMURF2 as compared with vector-transfected cells (Fig. 4F). In addition, the half-life of KLF5 in cells transfected with wild-type SMURF2 was significantly lower than in cells transfected with SMURF2-C716A, wild-type SMURF1, or SMURF1-C699A (Fig. 4G). Taken together, these results clearly indicate that SMURF2 specifically destabilizes KLF5 in a manner that is dependent on the E3 ubiquitin ligase activity of SMURF2. KLF5 Is Ubiquitinated by SMURF2-Consistent with our observations that SMURF2 degrades KLF5 and acts as an E3 ligase to ubiquitinate target proteins, SMURF2 promotes the ubiquitination of KLF5. We overexpressed KLF5 together with Myc-SMURF2 and HA-tagged ubiquitin in HEK293T cells. Immunoprecipitation under denaturing conditions with rabbit KLF5 antibodies followed by Western blotting with mouse HA demonstrated ubiquitination of immunoprecipitated KLF5 (Fig. 5A). Relatively little ubiquitination was detected in the control immunoprecipitation, in which KLF5 was not included in the transfection (Fig. 5A, lane 1), because of the relatively low amount of endogenous KLF5 in these cells. However, transfection of KLF5 and HA-ubiquitin resulted in a detectable ladder of ubiquitinated KLF5 (Fig. 5A, lane 2), and this ladder of ubiquitinated KLF5 was further enhanced when Myc-SMURF2 was included in the transfection (Fig.
5A, lane 3), despite the amount of input KLF5 in the presence of Myc-SMURF2 cotransfection being lower than that in its absence (Fig. 5A, compare KLF5 in lanes 2 and 3). These results indicate that SMURF2 catalyzes the ubiquitination of KLF5.
NOVEMBER 18, 2011 VOLUME 286 NUMBER 46
The ubiquitination of KLF5 by SMURF2 was also detected for endogenous KLF5 without any MG132 treatment. COS-1 cells, in which a reasonable amount of endogenous KLF5 is detected (Fig. 4), were transfected with HA-ubiquitin and either Myc-SMURF2 or vector alone and disrupted under denaturing conditions, followed by immunoprecipitation of endogenous KLF5 (Fig. 5B). In cells transfected with HA-ubiquitin, some ubiquitination of endogenous KLF5 was detectable (Fig. 5B, lane 2). In Myc-SMURF2-cotransfected cells, the ubiquitination was increased further, despite the level of endogenous KLF5 in the presence of Myc-SMURF2 transfection being significantly lower (Fig. 5B, lane 3). The increase was especially evident for the high molecular weight, polyubiquitinated forms of KLF5 and occurred even in the absence of MG132, which inhibits proteasomes and stabilizes highly polyubiquitinated proteins (Fig. 5B, lane 3). Thus, endogenous KLF5 is ubiquitinated by SMURF2. Lending further support that SMURF2 ubiquitinates KLF5, ubiquitination of endogenous KLF5 was reduced significantly (Fig. 5D) when the level of endogenous SMURF2 was diminished by siRNA interference against SMURF2 (Fig. 5C). Taken together, these results indicate that SMURF2 degrades KLF5 through the ubiquitin-proteasome pathway. SMURF2 Does Not Target Any Single Lysine Site in KLF5-We next determined whether there is a predominant site targeted by SMURF2 in KLF5. As SMURF2 is a HECT ubiquitin ligase known to target lysine residues in its substrates, we substituted every lysine residue in mouse KLF5 with arginine.
Each KLF5 lysine-to-arginine (K-to-R) single mutant was cotransfected with Myc-SMURF2 to see whether any mutant was resistant to destabilization by SMURF2. We initially tested lysine residues within the zinc finger DNA-binding domain of KLF5 because of its interaction with SMURF2, its proximity to the PY motif of KLF5, and its abundance of lysine residues (eight of a total of 19 lysine residues in full-length KLF5) (Fig. 6A). Given the specific interaction of SMURF2 with the DNA-binding domain of KLF5 rather than that of KLF4, we first tested Lys-393 and Lys-420, the two lysine residues unique to KLF5 and not conserved between KLF5 and KLF4. Both the KLF5-K393R and K420R mutants were efficiently degraded by Myc-SMURF2 (Fig. 6B), indicating that neither Lys-393 nor Lys-420 is the predominant target site. We further tested the other six KLF5 lysine mutants in the DNA-binding domain. All of them were efficiently degraded by SMURF2 to extents comparable with wild-type KLF5 (Fig. 6C), indicating that SMURF2 does not target a single lysine site within the carboxyl-terminal DNA-binding domain of KLF5. We also tested whether SMURF2 primarily targets a lysine site amino-terminal to the DNA-binding domain of KLF5. We first tested the two SUMOylation sites in KLF5, Lys-151 and Lys-202, given that SUMOylation and ubiquitination may interplay and share identical target lysine residues. However, both the partially SUMOylated K202R and the SUMOylation-deficient K151R/K202R double mutants were efficiently degraded by SMURF2 (Fig. 7B), indicating that the SUMOylation sites in KLF5 are not targeted by SMURF2. We next tested Lys-324 and Lys-358, given their proximity to the PY motif and the high degree of conservation of the residues around them, especially Lys-324, which is immediately juxtaposed to the PPXY motif and PY tail of KLF5 (Fig. 7A). However, both K324R and K358R were effectively degraded by SMURF2 (Fig. 7C). Lastly, we tested the remaining seven lysine residues within the amino terminus of KLF5, and none of these K-to-R mutants resisted SMURF2-triggered degradation (Fig. 7D). Altogether, these results indicate that SMURF2 does not target a single lysine residue in KLF5.
FIGURE 5. The corresponding lysates were denatured and immunoprecipitated (IP) with rabbit KLF5 antibodies, followed by Western blotting (IB) of the immune complexes with a mouse HA antibody. The input proteins in cell lysates were also probed with the indicated antibodies. B, ubiquitination of endogenous KLF5 by SMURF2. COS-1 cells were cotransfected with HA-ubiquitin and either vector (V) or Myc-SMURF2, lysed, denatured, and immunoprecipitated with either control rabbit IgG (Con) or rabbit KLF5 (KLF5) antibodies. Western blotting was performed with mouse HA and β-actin and rabbit KLF5 antibodies. The asterisk indicates a nonspecific band in the immunoprecipitates from COS-1 cell lysates in the absence of MG132 treatment, presumably from slight sticking to beads. C, siRNA interference of endogenous SMURF2. COS-1 cells were transfected with either control siRNA or two individual SMURF2 siRNAs (Origene, catalog nos. SR312096A and SC312096B). Lysates from the transfected cells were subjected to Western blotting with rabbit antibodies against SMURF2 and KLF5 and a mouse β-actin antibody. D, ubiquitination of endogenous KLF5 was significantly reduced after SMURF2 depletion. COS-1 cells were transfected with either control siRNA or the Trilencer siRNAs against SMURF2 (Origene, catalog no. SR312096). Two days later, cells were transfected with HA-ubiquitin. The next day, cells were lysed, denatured, and immunoprecipitated with rabbit KLF5 antibodies, followed by immunoblotting with HA antibodies. Western blotting was also performed on the lysate input with mouse HA and β-actin and rabbit SMURF2 and KLF5 antibodies.
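The lysine-scanning mutagenesis logic above, substituting each lysine with arginine one at a time, can be sketched programmatically. A minimal example, using a short hypothetical peptide rather than the actual mouse KLF5 sequence:

```python
# Sketch of the K-to-R scanning logic: generate every single lysine-to-arginine
# substitution of a protein sequence. The peptide below is hypothetical and is
# NOT the real KLF5 sequence.
def k_to_r_mutants(seq):
    """Yield (1-based position, mutant sequence) for each K->R single mutant."""
    for i, aa in enumerate(seq):
        if aa == "K":
            yield i + 1, seq[:i] + "R" + seq[i + 1:]

peptide = "MAKPPPSYKQ"  # hypothetical; contains a PPPSY-like core
for pos, mut in k_to_r_mutants(peptide):
    print(f"K{pos}R -> {mut}")
```

Each generated mutant corresponds to one construct in the panel tested against Myc-SMURF2; a sequence with 19 lysines, as in full-length mouse KLF5, yields 19 such single mutants.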
SMURF2 Negatively Regulates the Biological Activities of KLF5-Consistent with the degradation of KLF5, SMURF2 inhibits the transcriptional and pro-proliferative activities of KLF5. Endogenously, depletion of SMURF2 increased the expression of two major KLF5 target genes, cyclin D1 and PDGF-A. This was established by determining the relative transcript levels of cyclin D1 and PDGF-A following SMURF2 knockdown. siRNA directed against SMURF2 specifically reduced the transcript level of SMURF2 but not SMURF1 (Fig. 8A). In fact, the SMURF1 level actually increased after SMURF2 knockdown, suggesting that SMURF2 negatively regulates SMURF1 expression, a result consistent with a previous report. After the SMURF2 siRNA treatment, the transcript levels of the two major KLF5 target genes, cyclin D1 and PDGF-A, were both increased (Fig. 8A), indicating that SMURF2 negatively regulates the activity of KLF5. The knockdown did not affect KLF5 mRNA expression, reinforcing the conclusion that SMURF2 regulates KLF5 at the posttranslational level. The effect of SMURF2 on the transcriptional activity of KLF5 was also examined by dual luciferase reporter assay using the PDGF-A promoter. The ability of KLF5 to transactivate PDGF-A was suppressed significantly by SMURF2, whereas this inhibitory effect was abolished when the inactive SMURF2-C716A mutant was used instead (Fig. 8B). KLF5-N (KN), which lacks the PY motif and DNA-binding domain that interact with SMURF2, failed to transactivate PDGF-A (Fig. 8B). Finally, we examined the effect of SMURF2 on the pro-proliferative activity of KLF5 using a BrdU incorporation assay, an assay used previously to study the ability of KLF5 to stimulate cell proliferation. Although KLF5 significantly stimulated COS-1 cell proliferation, this activity was suppressed by SMURF2 (Fig. 8C). These results clearly indicate that SMURF2 inhibits the transcriptional and pro-proliferative activities of KLF5.
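The relative transcript levels underlying these fold changes were computed with the comparative CT (ΔΔCT) method described under quantitative RT-PCR. A minimal sketch of that calculation, using hypothetical CT values (not data from this study) and assuming ~100% amplification efficiency:

```python
# Sketch of the comparative CT (delta-delta-CT) calculation: normalize the
# target gene to a reference gene (here GAPDH), then express the treated
# condition relative to the control. CT values below are hypothetical.
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Fold change by the 2^(-ddCT) rule, assuming ~100% PCR efficiency."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalize to reference
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control                 # relative to control
    return 2.0 ** (-dd_ct)

# A target crossing threshold 2 cycles earlier (after normalization) is
# ~4-fold more abundant:
fold = relative_expression(24.0, 18.0, 26.0, 18.0)
print(round(fold, 2))  # → 4.0
```

In practice, replicate CT values are averaged before this calculation, and significance is assessed on the replicate fold changes, as in the t tests reported in Fig. 8A.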
DISCUSSION In this report, we demonstrate a novel interaction between KLF5 and SMURF2. Consistent with the presence of the SMURF2-interacting PY motif in KLF5, SMURF2 binds to a region of KLF5 that contains the PPXY motif and PY tail (Fig. 1A). Interestingly, SMURF2 also efficiently interacts with the DNA-binding domain of KLF5. This interaction is specific, as it prefers the DNA-binding domain of KLF5 to that of KLF4 (Fig. 1A). Sequence alignment shows that the main differences between the DNA-binding domains of KLF5 and KLF4 lie in the three areas covering the amino-terminal halves, i.e. the C2 portions, of the three C2H2 zinc fingers (Fig. 6A). Because these areas coordinate binding to the zinc cofactor and mediate direct DNA binding, it remains to be determined whether the SMURF2 interaction directly affects the ability of KLF5 to recognize its DNA targets and activate transcription. Recently, another SMURF family protein, SMURF1, was reported to bind and degrade KLF2, and the zinc finger DNA-binding domain of KLF2 is sufficient and efficient for SMURF1 binding. It is also of interest to note that although it interacts with SMURF2, KLF5 does not bind to SMURF1 (Fig. 1D). Hence, specific SMURFs can recognize specific Krüppel-like factors despite the highly conserved nature of the DNA-binding domains of Krüppel-like factors. These results also provide evidence of KLFs as a novel family of transcription factors regulated by SMURFs and demonstrate a new structural basis of substrate recognition for the SMURF family of ubiquitin ligases. Herein, we present SMURF2 as an ubiquitin ligase that degrades KLF5 but does not target KLF5 at a single lysine site (Figs. 6 and 7). SMURF2 therefore likely targets multiple lysine sites within KLF5. This is not only supported by the mutagenesis studies in this work but is also consistent with the observation that SMURF2 predominantly polyubiquitinates KLF5 to high molecular weight forms (Fig. 5).
Thus, effects from loss of ubiquitination at one site may be largely compensated by ubiquitination at another site. The requirement of multiple lysines in KLF5 for degradation hints at the potential complexity of SMURF2 in regulating KLF5.
FIGURE 7. A, alignment of all the lysine residues preceding the DNA-binding domain of KLF5 from various species. Numbers indicate the positions of these lysine residues, which are shown in red. Conserved residues adjacent to these lysines are also shown. The two SUMOylation site lysine residues are underlined. The PPXY (PPPSY) motif is bracketed. B, the two SUMOylation site lysines are not targets of SMURF2. HEK293T cells were transfected with the indicated KLF5 SUMOylation mutants and either Myc-SMURF2 (S) or vector alone (V), and whole cell lysates were subjected to Western blotting with mouse HA, Myc, and β-actin antibodies. K, lysine; R, arginine; Myc, Myc-SMURF2. C, individual mutations at Lys-324 and Lys-358, the two lysines adjacent to the PY motif of KLF5, do not block the degradation of KLF5 by SMURF2. HEK293T cells were transfected with K324R or K358R and either Myc-SMURF2 or vector alone, and whole cell lysates were subjected to Western blotting. D, mutation at each of the other lysine residues in the N terminus of KLF5 does not block the degradation of KLF5 by SMURF2. HEK293T cells were transfected with each indicated mutant and either Myc-SMURF2 or vector alone, and whole cell lysates were subjected to Western blotting.
The interaction and degradation of KLF5 are quite specific within the SMURF family, as they are mediated preferentially by SMURF2 rather than by its closely related homolog, SMURF1 (Figs. 1 and 4). Hence, the posttranslational regulation of KLF5 by E3 ubiquitin ligases appears to be highly complex and dynamic. For instance, besides SMURF2, KLF5 is degraded by a related ubiquitin ligase, WWP1, which also targets KLF2. WWP1 belongs to the Nedd4 family of HECT ubiquitin ligases, which also includes the SMURFs.
WWP1 and SMURF2 share identical types and orientations of major structural domains. The destabilization of KLF5 by WWP1 and SMURF2 is highly specific, as most Nedd4 family members, including Nedd4-1, Nedd4-2, AIP4/Itch, WWP2/AIP2, and SMURF1, failed to degrade KLF5. It is currently unknown how KLF5 is selectively regulated by these two E3 ligases. WWP1 and SMURF2 exhibit similar inhibitory activities toward KLF5. For instance, both utilize similar binding domains and catalytic mechanisms to degrade KLF5, both are blocked to a similar extent by the proteasomal inhibitor MG132, and both exhibit highly comparable activity in suppressing the PDGF-A promoter. In theory, WWP1 and SMURF2 could compete for KLF5. Thus, it would be interesting to determine whether these two ubiquitin ligases act at different stages of growth and development or in response to distinct signaling pathways. KLF5 is also targeted by another E3 ligase, FBW7, an F-box ubiquitin ligase. FBW7 utilizes a very different mechanism for substrate recognition and catalysis. Whereas SMURF2 and WWP1 bind to KLF5 through WW domains and PY motifs, FBW7 binds to KLF5 through the WD40 domain of FBW7 and the phospho-binding motifs, called CDC4 phosphodegrons, of KLF5. Thus, KLF5 undergoes multiple layers of regulation by identical or diverse families of ubiquitin ligases. These results reflect the highly complex and dynamic regulation of KLF5 by ubiquitination. Among the multiple functions attributed to SMURF2 (2, 6-12), it regulates cell polarity (10-12). The intestinal epithelium is a highly polarized system containing terminally differentiated epithelial cells at the villi and proliferating and progenitor cells in the crypts. As the KLF5 protein is highly abundant in the crypt cell population, with a diminishing gradient as cells migrate toward the villus, SMURF2 may form an opposite gradient along the crypt/villus axis or contribute to establishing the polarity of the intestinal tract.
In summary, we illustrate a novel and specific interaction between KLF5 and SMURF2 and expand the list of ubiquitin ligases that dynamically control the turnover and activity of KLF5 by demonstrating that SMURF2 ubiquitinates, destabilizes, and negatively regulates KLF5. These results endorse KLFs as a new family of targets of the SMURF family of ubiquitin ligases and SMURFs as a new group of ubiquitin ligases that regulate KLF transcription factors.
FIGURE 8. SMURF2 inhibits the transcriptional and pro-proliferative activities of KLF5. A, depletion of SMURF2 increases the expression of KLF5 target genes. COS-1 cells were transfected with either control siRNA or the Trilencer SMURF2 siRNA mixture from Origene (catalog no. SR312096). Three days later, total RNA was isolated, and quantitative RT-PCR was performed for SMURF2, SMURF1, KLF5, and the KLF5 target genes cyclin D1 and PDGF-A (n = 12). Shown are the average ratios of mRNA levels after and before SMURF2 knockdown. N.S., not significant. *, p < 0.05; **, p < 0.01; ***, p < 0.001 by two-tailed Student's t test. B, SMURF2 suppresses the transcriptional activity of KLF5. The ability of KLF5 to transactivate the PDGF-A luciferase reporter was inhibited by SMURF2 but not by the catalytically inactive SMURF2-C716A mutant. RKO cells were transfected with the PDGF-A reporter and Renilla control plasmids plus either vector alone (Vec), pMT3-KLF5 (K), pCMV-FLAG-SMURF2 (S), pCMV-FLAG-SMURF2-C716A (S-), pMT3-KLF5 and pCMV-FLAG-SMURF2 (KS), pMT3-KLF5 and pCMV-FLAG-SMURF2-C716A (KS-), or pMT3-KLF5-N (KN), and dual luciferase reporter assays were performed. Shown are the mean ± S.D. of four independent experiments. *, p < 0.05; **, p < 0.01; ***, p < 0.001 by two-tailed Student's t test. C, SMURF2 suppresses the pro-proliferative activity of KLF5.
COS-1 cells were transfected with vector alone, pMT3-HA-KLF5, pCMV-Myc-SMURF2, or both pMT3-HA-KLF5 and pCMV-Myc-SMURF2, and a BrdU incorporation assay was performed as described under "Experimental Procedures." Shown are the mean ± S.D. of four independent experiments. *, p < 0.05; **, p < 0.01 by two-tailed Student's t test.
Stability and Horizon Formation during Dissipative Collapse

We investigate the role played by density inhomogeneities and dissipation on the final outcome of collapse of a self-gravitating sphere. By imposing a perturbative scheme on the thermodynamical variables and gravitational potentials we track the evolution of the collapse process, starting off with an initially static perfect fluid sphere which is shear-free. The collapsing core dissipates energy in the form of a radial heat flux, with the exterior spacetime being filled with a superposition of null energy and an anisotropic string distribution. The ensuing dynamical process slowly evolves into a shear-like regime with contributions from the heat flux and density fluctuations. We show that the anisotropy due to the presence of the strings drives the stellar fluid towards instability, with this effect being enhanced by the density inhomogeneity. An interesting and novel consequence of this collapse scenario is the delay in the formation of the horizon.

A proper understanding of this phenomenon should be vital to our understanding of the workings of the universe. The pioneers in the research of gravitational collapse, Oppenheimer and Snyder, studied a spherically symmetric matter distribution in the form of a dust sphere undergoing collapse. They obtained the first solution for the adiabatic collapse of a dust ball with a Schwarzschild exterior. Vaidya obtained an exact solution to the Einstein field equations which describes the exterior field of a radiating, spherically symmetric fluid by noting that a radiating collapsing mass distribution has outgoing energy, so its exterior spacetime is no longer a vacuum but contains null radiation. The next step in improving the model was accomplished by Santos, who derived the junction conditions for a collapsing spherically symmetric, shear-free non-adiabatic fluid sphere with heat flow.
The combination of these contributions allowed for the matching of the interior and exterior spacetimes of a collapsing star, which led the way for studying non-adiabatic, isotropic as well as anisotropic dissipative gravitational collapse. A disturbance or perturbation of a system initially in static equilibrium results in a change in stability which typically renders the system dynamic. The ability of a system to retain its initial stable state once perturbed is then referred to as its dynamical (in)stability. Hence the issue of stability is vital in the study of self-gravitating objects, as a static stellar model which evolves towards higher instability is of little physical significance. The dynamical instability of a spherically symmetric mass with isotropic pressure was first investigated by Chandrasekhar. He showed that for a system to remain stable under collapse, the adiabatic index Γ must be greater than 4/3. Subsequently, Herrera et al. showed that for a non-adiabatic sphere in which relativistic corrections were imposed to address heat flow, the unstable range of Γ decreased, rendering the fluid less unstable. Chan et al. investigated the stability criteria by deviating from the perfect fluid condition in two ways: they considered radiation in the free-streaming approximation, and they assumed local anisotropy of the fluid. Herrera et al. also examined the dynamical instability of expansion-free, locally anisotropic spherical stellar bodies. The application of Einstein's field equations to increasingly more complex gravitating systems with additional parameters and degrees of freedom depends on computational techniques, as is the case when perturbative theories in which higher order terms arise are employed. The generalization of systems, such as the inclusion of a string density field, can increase the complexity of the expressions obtained; however, to first order, we aim to introduce the temporal behaviour and hence the evolution of the collapse process.
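Chandrasekhar's criterion quoted above reduces, in the Newtonian limit, to a simple threshold test on the adiabatic index. A minimal sketch of that check; the sample values of Γ are illustrative only, and the relativistic corrections discussed by Herrera et al. would shift the critical value away from exactly 4/3:

```python
# Sketch of the Newtonian dynamical stability criterion: a self-gravitating
# sphere with isotropic pressure is stable under radial perturbations only if
# the adiabatic index Gamma exceeds 4/3 (Chandrasekhar). Sample Gammas below
# are illustrative, not values from the paper.
def is_dynamically_stable(gamma, critical=4.0 / 3.0):
    """True if the adiabatic index exceeds the critical value."""
    return gamma > critical

for gamma in (5.0 / 3.0, 4.0 / 3.0, 1.2):
    print(f"Gamma = {gamma:.4f}: stable = {is_dynamically_stable(gamma)}")
```

Note that Γ = 4/3 itself is marginal (not stable), which is why a monatomic ideal gas with Γ = 5/3 is comfortably stable while a radiation-dominated fluid sits at the threshold.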
The perturbative approach is well established. Compact objects such as neutron stars, black holes and the more recently proposed dark-fluid stars and strange stars composed of quark matter invite the addition of a more complex, non-empty stellar exterior. The Vaidya metric which is commonly used for describing the exterior spacetime would then require modification to include both the radiation field and a so-called string field, as initially put forward by Glass and Krisch. In this more generalized Vaidya exterior, the mass function is augmented to acquire both temporal and spatial dependence. In 2005, Maharaj and Govender showed that the stellar core is more unstable than the outer regions by investigating gravitational collapse with isotropic pressure and vanishing Weyl stresses. More recently, Maharaj et al. showed the impact of the generalized Vaidya radiating metric on the junction conditions for the boundary of a radiating star. Their results describe a more general atmosphere surrounding the star, described by the superposition of a pressure-free null dust and a string fluid. The string density was shown to reduce the fluid pressure at the surface of the star. The usual junction conditions for the Vaidya spacetime are regained in the absence of the string fluid. In this study, a spherically symmetric static configuration undergoing radiative collapse under shear-free, isotropic conditions is considered. A boundary condition of the form (p_r)_Σ = (qB)_Σ − ρ_s is imposed, where ρ_s is the string density and qB is the heat flux. This is the basis for developing the temporal behaviour of the self-gravitating system. The structure of this paper is as follows: In §2 the field equations describing the geometry and matter content for a star undergoing shear-free gravitational collapse are introduced.
In §3 the exterior spacetime and the junction conditions necessary for the smooth matching of the interior spacetime with Vaidya's exterior solution across the boundary are presented. In §4 the perturbative scheme is described and the field equations for the static and perturbed configurations are stated. In §5 we develop the new temporal equation employed in the perturbative scheme, which includes the effect of the string field. In §6 we develop a radiating model from an interior Schwarzschild static model. In §7 dissipative collapse is discussed, the perturbed quantities are expressed in terms of two unspecified quantities, and an equation of state which presents the perturbed quantities in terms of the radial coordinate only is introduced. The stability of the collapsing model in the Newtonian and post-Newtonian approximations is explored in §8. The physical analysis of the results and the conclusion are presented in §9. The acknowledgements follow in §10.
Stellar Interior
In order to investigate the evolution of the radiative collapse we adopt a spherically symmetric shear-free line element in simultaneously comoving and isotropic coordinates,

ds² = −A²(r, t) dt² + B²(r, t)[dr² + r²(dθ² + sin²θ dφ²)],

where A(r, t) and B(r, t) are the dynamic gravitational potentials. We should highlight the fact that the stability of the shear-free condition may hold only for a limited period of the collapse process. Herrera and co-workers have shown that shear-free collapse can evolve into a dynamical process mimicking shear. The shear-like contributions can develop from pressure anisotropy and density inhomogeneities. The stellar material of the interior is described by an imperfect fluid with heat flux, with energy-momentum tensor

T_ab = (ρ + p_t) u_a u_b + p_t g_ab + (p_r − p_t) χ_a χ_b + q_a u_b + q_b u_a,

where ρ is the energy density, p_r the radial pressure, p_t the tangential pressure, q^a the heat flux vector, u^a the timelike four-velocity of the fluid, and χ^a a spacelike unit four-vector along the radial direction.
These quantities must satisfy u^a u_a = −1, u^a q_a = 0, χ^a χ_a = 1 and χ^a u_a = 0. In co-moving coordinates these vectors take their canonical forms. The nonzero components of the Einstein field equations for the line element with this energy-momentum tensor follow, where dots and primes represent partial derivatives with respect to t and r respectively.

Exterior Spacetime and Matching Conditions

Since the star is radiating, the exterior spacetime can be described by the generalized Vaidya metric, which represents a mixture of null radiation and strings, where m(v, r) is the mass function representing the total energy within a sphere of radius r. This is what distinguishes the generalized Vaidya solution from the pure radiation Vaidya solution, which has m = m(v), where v is the retarded time. The corresponding energy-momentum tensor involves null vectors l^a and n^a such that l^a l_a = n^a n_a = 0 and l^a n_a = −1. This energy-momentum tensor can be interpreted as the matter source for the exterior atmosphere of the star, which is a superposition of pressureless null dust and anisotropic null strings, characterised by the energy density of the null dust radiation, the string energy density ρ_s, and the string pressure P. We assume that the string diffusion is equivalent to point-particle diffusion, where the number density diffuses from higher to lower values subject to the continuity equation, with D the positive coefficient of self-diffusion. Following de Oliveira et al., we obtain the boundary conditions, which include a string density ρ_s. This relation represents the conservation of momentum flux across the stellar boundary, which we will employ in §5 to determine the temporal evolution of our model. The total energy entrapped within a radius r is given by the mass function; evaluated at the boundary, it is included as a boundary condition.

Perturbative Scheme

Following the method in Herrera et al., as well as the works of Chan et al. and Govender et al., we present our model in this section.
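The point-particle diffusion picture invoked here can be illustrated numerically. The sketch below evolves a hypothetical 1D number density under the diffusion equation ∂n/∂t = D ∂²n/∂x² with an explicit finite-difference scheme; the paper's actual continuity equation is spherically symmetric and posed in the exterior coordinates, so this is a toy stand-in, not the model itself.

```python
import numpy as np

# Explicit FTCS sketch of the self-diffusion equation dn/dt = D d2n/dx2.
# Illustrative only: a flat periodic 1D grid stands in for the paper's
# spherically symmetric setting.
D = 1.0                            # positive coefficient of self-diffusion
dx, dt, steps = 0.1, 0.004, 500    # dt chosen so D*dt/dx**2 <= 0.5 (stability)

x = np.arange(0.0, 10.0, dx)
n = np.exp(-((x - 5.0) ** 2))      # initial number density, peaked at x = 5

for _ in range(steps):
    lap = (np.roll(n, 1) - 2 * n + np.roll(n, -1)) / dx**2
    n = n + D * dt * lap           # density diffuses from higher to lower values

print(round(float(n.max()), 3))    # the peak has spread out and decreased
```

With periodic boundaries the scheme conserves the total number density exactly, matching the continuity-equation interpretation in the text.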
To begin, we will assume that the fluid is in static equilibrium. The system is then perturbed and undergoes slow shear-free dissipative collapse. Thermodynamical quantities in the static system are represented by a zero subscript, while those in the perturbed fluid are represented by an overhead bar. The metric functions A(r, t) and B(r, t) are taken to have the same temporal dependence, which extends to the perturbed material quantities. The time-dependent metric functions and material quantities are expanded in a parameter ε, where we assume that 0 < ε << 1. We observe that the temporal dependence of the perturbative quantities, T(t), is the same for both the gravitational potentials and the thermodynamical variables. The imposition of spherical symmetry alone implies that we have a very large gauge (coordinate) freedom to write the line element. In adopting the chosen form of the line element we exhaust all coordinate freedom with the exception of re-scaling the radial and/or temporal coordinates; it is clear that such re-scaling would not change the form of the metric. The choice of the perturbed variables in the perturbative scheme is not unique. However, once the line element has been chosen, the choice of the perturbed variables cannot be varied to produce the same physics. The Einstein field equations for the static configuration follow, and the perturbed field equations up to first order in ε can then be written down. The total energy enclosed within the boundary is obtained by separating the static and time-dependent (perturbed) components. In the case where the radial and tangential stresses are equal, p_r = p_t, the condition of pressure isotropy for the static model is p_r0 = p_t0, while the pressure isotropy condition for the perturbed model is p̄_r = p̄_t. This completes the outline of the perturbative scheme as applied to our choice of metrics. In the next section we will examine the temporal aspect more closely.
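The expansion underlying the scheme, f(r, t) = f_0(r) + ε f_1(r) T(t) with 0 < ε << 1, discards all terms of second order in ε. The snippet below checks this numerically for a hypothetical potential A_0(r) and profile a(r) (both invented for illustration; they are not the paper's solutions):

```python
import math

# Sketch of the perturbative scheme: A(r, t) = A0(r) + eps * a(r) * T(t),
# with products of first-order quantities discarded. All profiles below
# are hypothetical choices purely for illustration.
eps = 1e-3
A0 = lambda r: 1.0 / (1.0 + r**2)    # static potential (hypothetical)
a  = lambda r: r**2                  # radial perturbation profile (hypothetical)
T  = lambda t: -math.exp(0.5 * t)    # temporal mode (hypothetical)

def A(r, t):
    return A0(r) + eps * a(r) * T(t)

# Linearised square of A: keep only terms up to first order in eps.
def A_squared_linear(r, t):
    return A0(r)**2 + 2.0 * eps * A0(r) * a(r) * T(t)

r, t = 0.5, 0.0
exact = A(r, t)**2
lin = A_squared_linear(r, t)
print(abs(exact - lin) < eps**2)   # discarded remainder is O(eps^2): True
```

This is the sense in which the perturbed field equations "up to first order in ε" are obtained: the neglected remainder is quadratic in the small parameter.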
Explicit Form of the Temporal Function

We employ the junction conditions derived by Maharaj et al. to determine the temporal evolution of our model. It is important to point out that the junction condition holds only at the boundary of the star. We require that the static pressure vanishes at the surface via the condition (p_r0)_Σ = 0, so that an equation in T(t) is obtained whose coefficients involve ρ_s0, the constant string density, evaluated at the boundary. It should be noted that p_r0 vanishes at the boundary. The diffusion equation has been extensively studied and several exact solutions have been obtained. One such solution of the diffusion equation, for which the string density is a function of the external radial coordinate, is given in terms of two constants ρ_0 and k. This string density profile was utilised by Naidu et al. to study the effect of an anisotropic atmosphere on the temperature profiles during radiative collapse. The above choice of string profile generalizes earlier work by Govender and Thirukannesh and Govender et al., in which the string density was constant (k = 0). The choice of a constant string density not only makes the problem mathematically tractable but also simplifies the underlying physics: a constant string distribution gives rise to pressure anisotropy in the exterior while any inhomogeneities are suppressed. Our choice allows for pressure anisotropy and inhomogeneities due to density fluctuations. At the boundary of the star the string density can be written down by invoking the junction condition. It is necessary to highlight the connection between r and r̄ at this point. The boundary of the collapsing star divides spacetime into two distinct regions, M− and M+. The coordinates in the interior spacetime M− are (t, r, θ, φ) while the coordinates in M+ are (v, r̄, θ, φ). The boundary Σ is a time-like hypersurface described by a line element endowed with coordinates ξ^i = (τ, θ, φ) and R = R(τ).
Note that the time coordinate τ is defined only on the surface Σ. The junction condition is a consequence of requiring the smooth matching of the interior and exterior line elements across Σ, where dots represent differentiation with respect to τ, with a corresponding relation holding for M+. These conditions relate r and r̄. We complete the expression for the temporal function T(t) by solving the boundary equation. This gives a solution which, together with the sign conditions on its constants, describes a system in static equilibrium that starts to collapse at t = −∞ and continues to collapse as t increases.

Dynamical Model

In order to investigate the properties of the extended form of the temporal function, we make use of the simple interior Schwarzschild metric in isotropic coordinates, where c_1 and R are constants. The constant R is easily determined from the initial static energy density, and the parameter c_1 is obtained by evaluation at the boundary. Restrictions on r_Σ as given by Santos are noted. We also note that in the case p_r = p_t the anisotropy parameter ∆ vanishes.

Radiating Collapse

We note that the perturbed potentials contain two unspecified quantities, namely a(r) and b(r), which modulate the temporal part of the gravitational potentials. Thus it is important that these are determined carefully in order to obtain a physically meaningful dynamical model. Following Chan et al. we adopt a form for b(r) that has been widely used to investigate the stability of radiating stars undergoing dissipative collapse in the form of a radial heat flux. Furthermore, following the literature, we choose the form f(r) = r^2. Using this choice, we obtain an explicit form for a(r), where c_2 and c_3 are constants of integration. These may be set by considering the work of Govender et al., with a simple relationship between a(r) and b(r) being employed. At this stage, we point out that the radial and temporal evolution of our model is fully determined.
We use the numerical data given in Table 1 for the graphical analyses of stability and horizon formation which follow.

Luminosity

The luminosity for an observer at rest at infinity is given in terms of the retarded time v and the proper time τ defined on Σ, with A and B the gravitational potentials. For our model this is calculated explicitly.

Horizon Formation

The luminosity vanishes at the time of formation of the horizon as the collapse process proceeds over −∞ < t ≤ t_H, which corresponds to −∞ < v < ∞. For our model the luminosity vanishes in the following instances: either the temporal factor vanishes or g_r = 0. Case 1: the vanishing of the temporal factor forces, via its derivative, a chain of vanishing conditions which, from the expression for T(t), is only possible if ρ_s0 = 0 (i.e. vanishing of the string density). We observe that removing the string density gives a pure radiation solution and the horizon is able to form; however, the inclusion of strings inhibits the formation of the horizon.

Stability Analysis

In order to provide insight into the stability of the star, we begin with the second law of thermodynamics and follow the standard approach in the literature. This leads to the adiabatic parameter Γ, the ratio of specific heats at constant pressure and constant volume, taken to be constant throughout the distribution (or at least in the region being studied). In the literature this ratio is considered an indicator of stability and is called the stability factor. From the expressions for Γ given below, it is clear that pressure anisotropy and the presence of radiation within the stellar core affect the stability factor. For example, if the sign of the anisotropy parameter ∆ = (p_t0 − p_r0) changes, the stellar core becomes unstable.
We observe this in the Newtonian limit: in agreement with classical fluid dynamics, the fluid sphere becomes more unstable (the unstable range of Γ increases) as a result of the Newtonian contribution due to dissipation. Relativistic contributions from the energy density lead to a stability factor different from its Newtonian counterpart. The unstable range of Γ is increased by the Newtonian term due to dissipation, as in the Newtonian limit. Furthermore, the unstable range of Γ is increased by the relativistic correction due to the static background fluid configuration; however, the relativistic correction due to dissipation decreases the unstable range of Γ. Bonnor et al. state that dissipation, by diminishing the total mass entrapped inside the fluid sphere, renders the system less unstable. In order to investigate the stability of our model in both the Newtonian and post-Newtonian limits, we graphed Γ for the cases of pure radiation (absence of string density), radiation plus constant string density (k = 0), and radiation plus inhomogeneous string density (ρ_s ≠ 0, k ≠ 0). Since our static model is described by the interior Schwarzschild solution, the anisotropy parameter ∆ = p_t0 − p_r0 vanishes. The modified effects due to pure string density and inhomogeneity are encoded in the temporal function. We have also graphed the luminosity as a function of time for both the pure and generalized Vaidya exteriors. It is important to note that the graphs are plotted using geometrized units, where G and c are taken to be unity. Figure 1 shows the stability factor when the star is close to hydrostatic equilibrium in the Newtonian limit. We observe that the different matter configurations exhibit instability with Γ < 4/3, which signifies the onset of collapse. The inclusion of the string field drives the stellar fluid towards instability, with this effect being enhanced by inhomogeneity (k > 0). Figure 2 displays Γ for the post-Newtonian regime.
It is clear that the collapse process drives the fluid towards stability. The presence of the strings and their associated anisotropy and inhomogeneity makes the fluid more stable at late times. This could be due to trapping of heat within the stellar core by an inhomogeneous atmosphere, resulting in higher core temperatures. An increase in the core temperature results in an increase in outward pressure, thus hindering gravitational collapse. In Figure 3 we note that inclusion of the string density field promotes an earlier time of horizon formation (the luminosity vanishes), with this effect being particularly sensitive to string inhomogeneity. The effect of the string density on the time of formation of the horizon was also observed by Govender, who reasoned that the presence of the anisotropic strings in the exterior lowered the rate of heat dissipation to the exterior, leading to a lower heat production rate within the core. This results in a lower outward radial pressure, allowing gravity to dominate within the stellar interior, a higher collapse rate, and eventually an earlier formation of the horizon. Luminosities for collapsing neutron star models were studied by de Oliveira et al., who considered profiles in the γ-ray, X-ray and visible bandwidths for core masses of 2M_⊙ and 3M_⊙ and a radius of 10 km. The luminosity profiles for all three bandwidths are similar to the profiles depicted in Figure 3. They noted that the radiation pulses do not occur simultaneously for an observer placed at infinity from the collapsing body. Furthermore, they show that nearly all the energy is emitted in the form of γ-rays. It is well known that the luminosity profile depends on the increasing gravitational redshift as the star collapses and on the increase in the effective temperature. Our study provides a possible mechanism to explain the temperature changes within the core, which manifest as the luminosity profiles displayed in Figure 3.
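The Γ < 4/3 instability criterion, and the way dissipative and relativistic corrections shift the unstable range, can be sketched as a simple threshold test. The shift parameters below are hypothetical placeholders, not the explicit correction terms of the paper:

```python
# Newtonian stability check: a fluid sphere is unstable when the adiabatic
# parameter Gamma falls below the critical value 4/3. The shift arguments
# are hypothetical stand-ins for the dissipative and relativistic
# contributions discussed in the text.
GAMMA_CRIT_NEWTONIAN = 4.0 / 3.0

def critical_gamma(dissipative_shift=0.0, relativistic_shift=0.0):
    """Critical Gamma; positive shifts enlarge the unstable range."""
    return GAMMA_CRIT_NEWTONIAN + dissipative_shift + relativistic_shift

def is_unstable(gamma, **shifts):
    return gamma < critical_gamma(**shifts)

# A configuration with Gamma = 1.36 is stable in the pure Newtonian limit...
print(is_unstable(1.36))                            # False
# ...but a positive dissipative shift pushes it into the unstable range.
print(is_unstable(1.36, dissipative_shift=0.05))    # True
```

The sign convention mirrors the discussion above: corrections that enlarge the unstable range of Γ enter as positive shifts, while the relativistic correction due to dissipation would enter with a negative sign.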
Physical Analysis and Conclusion

It is important to note that while the dynamical (in)stability of radiating spheres has been extensively studied in the Newtonian and post-Newtonian approximations, our investigation is the first attempt at considering the dynamics of the collapse process with a generalised Vaidya exterior. The generalised Vaidya atmosphere alters the temporal evolution of the model, which impacts on the stability and the time of formation of the horizon. While our study has considered an imperfect fluid interior with heat dissipation and pressure anisotropy, it would be interesting to include shear viscosity within the framework of extended irreversible thermodynamics.

Acknowledgments

NFN wishes to acknowledge funding from the National Research Foundation (Grant number: 116629) as well as the DSI-NRF Centre of Excellence in Mathematical and Statistical Sciences (CoE-MaSS). RB and MG acknowledge financial support from the office of the DVC: Research and Innovation at the Durban University of Technology. MG is indebted to Dr A Barnes for providing the excellent facilities at Glenwood Boys High School where part of this work was completed.
Attacking information overload in software development The productivity of software developers is under constant attack due to a continual inundation of information: source code is easier and easier to traverse and to find, email inboxes are stuffed to capacity, RSS feeds and tweets provide a continual stream of technology updates, and so on. To enable software developers to work more effectively, tools are often introduced that provide even more information. The effect of more and more tools producing more and more information is placing developers into overload. To combat this overload, we have been building approaches rooted in structure and inspired by human memory models. As an example, the Mylyn project packages and makes available the structure that emerges from how a programmer works in an episodic-memory inspired interface. Programmers working with Mylyn see only the information they need for a task and can recall past task information with a simple click. We have shown in a field study that Mylyn makes programmers more productive; the half a million programmers now using Mylyn seem to agree. In this talk, I will describe the overload faced by programmers today and discuss several approaches we have developed to attack the problem, some of which may also pertain beyond the domain of software development.
Fock quantization of the Dirac field in hybrid quantum cosmology: Relation with adiabatic states We study the relation between the Fock representations for a Dirac field given by the adiabatic scheme and the unique family of vacua with a unitarily implementable quantum evolution that is employed in hybrid quantum cosmology. This is done in the context of a perturbed flat cosmology that, in addition, is minimally coupled to fermionic perturbations. In our description, we use a canonical formulation for the entire system, formed by the underlying cosmological spacetime and all its perturbations. After introducing an adiabatic scheme that was originally developed in the context of quantum field theory in fixed cosmological backgrounds, we find that all adiabatic states belong to the unitary equivalence class of Fock representations that allow a unitarily implementable quantum evolution. In particular, this unitarity of the dynamics ensures that the vacua defined with adiabatic initial conditions at different times are unitarily equivalent. We also find that, for all adiabatic orders other than zero, these initial conditions allow the definition of annihilation and creation operators for the Dirac field that lead to some finite backreaction in the quantum Hamiltonian constraint and to a fermionic Hamiltonian operator that is properly defined in the span of the n-particle/antiparticle states, in the context of hybrid quantum cosmology.
Carbonic anhydrase activators. Activation of isozymes I, II, IV, VA, VII, and XIV with l- and d-histidine and crystallographic analysis of their adducts with isoform II: engineering proton-transfer processes within the active site of an enzyme. Activation of six human carbonic anhydrases (CA, EC 4.2.1.1), that is, hCA I, II, IV, VA, VII, and XIV, with l- and d-histidine was investigated through kinetics and by X-ray crystallography. l-His was a potent activator of isozymes I, VA, VII, and XIV, and a weaker activator of hCA II and IV. d-His showed good hCA I, VA, and VII activation properties, being a moderate activator of hCA XIV and a weak activator of hCA II and IV. The structures as determined by X-ray crystallography of the hCA II-l-His/d-His adducts showed the activators to be anchored at the entrance of the active site, contributing to extended networks of hydrogen bonds with amino acid residues/water molecules present in the cavity, explaining their different potency and interaction patterns with various isozymes. The residues involved in l-His recognition were His64, Asn67, Gln92, whereas three water molecules connected the activator to the zinc-bound hydroxide. Only the imidazole moiety of l-His interacted with these amino acids. For the d-His adduct, the residues involved in recognition of the activator were Trp5, His64, and Pro201, whereas two water molecules connected the zinc-bound water to the activator. Only the COOH and NH moieties of d-His participated in hydrogen bonds with these residues. This is the first study showing different binding modes of stereoisomeric activators within the hCA II active site, with consequences for overall proton-transfer processes (rate-determining for the catalytic cycle). 
The study also points out differences of activation efficiency between various isozymes with structurally related activators, convenient for designing alternative proton-transfer pathways, useful both for a better understanding of the catalytic mechanism and for obtaining pharmacologically useful derivatives, for example, for the management of Alzheimer's disease.
Natural orifice specimen extraction for colorectal surgery: Early adoption in a Western population Natural orifice specimen extraction (NOSE) challenges the limits of minimally invasive colorectal surgery by exploiting a natural opening for specimen delivery. Technically challenging, it is less painful, requires smaller wounds and abolishes the possibility of incisional hernia. These advantages of NOSE are seen in the obese (body mass index >30 kg/m2). This audit aims to demonstrate the feasibility of NOSE colectomy in an Australian population.
Geothematic open data in Umbria region

Detailed information about geology, hydrogeology and seismic hazard issues for the Umbria region is contained in a spatial database available in open data formats (shapefile or KMZ) and distributed through the regional open data portal Open Data Umbria ( http://dati.umbria.it ), where 297 datasets have been produced by Umbria Region until now, most of them by the Geological Survey. The Geological Survey of Regione Umbria carried out a 20-year program to produce 276 geological maps at 1:10.000 reference scale with an accurate geological model of the regional surface, providing millions of geological data. The key concept is the characteristic index of the single geologic unit. The characteristic index, expressed as a percentage, is the ratio between the surface of a geologic unit and its thickness. The thickness value for each geologic unit is based on rank level and calculated as the weighted average of the thickness of that geologic unit.

Standardization of geological data and data availability

Detailed information about geology, hydrogeology and seismic hazard issues for the Umbria region is contained in a Geological DataBase (GDB from now on), a spatial database available in open data formats (shapefile or KMZ) and distributed through the regional open data portal Open Data Umbria (http://dati.umbria.it/dataset/cartageologica-dell-umbria), where 297 datasets have been produced until now, most of them by the Geological Survey of Regione Umbria. Development of the standardized regional geologic database took about two years, starting in 2010, to manage the huge set of information contained in the 276 former geologic maps covering the whole territory of Umbria. As a result of the migration to the GDB, 231 distinct geologic units were found for the Umbria Region territory, represented by about 47,000 polygon features.
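The characteristic index described above (the surface of a geologic unit compared with a rank-based, weighted-average thickness) can be sketched as follows. The area weighting and the sample figures are assumptions for illustration, not the Geological Survey's exact procedure:

```python
# Sketch of the characteristic-index idea: the ratio between the surface of
# a geologic unit and its thickness, with the unit thickness taken as a
# weighted average over its member polygons. Area weighting and the sample
# numbers below are hypothetical.

def weighted_mean_thickness(polygons):
    # polygons: list of (area_km2, thickness_m) pairs for one geologic unit
    total_area = sum(a for a, _ in polygons)
    return sum(a * t for a, t in polygons) / total_area

def surface_to_thickness_ratio(polygons):
    # Extensive sheet-like deposits (large area, small thickness) score high;
    # thick but localised bodies score low.
    return sum(a for a, _ in polygons) / weighted_mean_thickness(polygons)

alluvial = [(900.0, 10.0), (566.0, 6.0)]      # hypothetical current alluvium
limestone = [(400.0, 800.0), (300.0, 650.0)]  # hypothetical thick unit
print(surface_to_thickness_ratio(alluvial) > surface_to_thickness_ratio(limestone))
```

This captures the contrast drawn later in the text: alluvial deposits occupy a large share of the territory yet have a low geological representativeness because of their pellicular thickness.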
The total land area of Umbria, 8,475 km² wide, is divided in the GDB into 46,982 different geological areas. Analysis of the information contained in the GDB is preliminary to the creation of more geothematic layers and custom maps, and led us to define an item describing the geological units in a comprehensive way: the geological representativeness and the characteristic index of the single geologic unit. This means evaluating geological units or domains not just for their 2D extent but in three dimensions. The characteristic index, expressed as a percentage, is indeed the ratio between the surface of a geologic unit and its thickness. The thickness value for each geologic unit is based on rank level and calculated as the weighted average of the thickness of that geologic unit. Examples are shown in figures 1 and 2: the areas occupied by the current and terraced alluvial deposits and those of the ancient and very ancient alluvial deposits of Pliocene-Pleistocene age. Figure 1 shows, in light blue, the areas occupied by the current and terraced alluvial deposits, which cover 17.3% of the region, and, in yellow, the ancient and very ancient alluvial deposits, which cover 19% of the region. Globally, therefore, the alluvial deposits of different ages occupy 36.3% of the region, over 1/3 of the entire territory. The geological representativeness of the current and terraced alluvial deposits is 0.56%, while that of the ancient and very ancient alluvial deposits is 26%. These numbers indicate that the current alluvial deposits have a geologic representativeness very different from the more ancient ones: the former deposits' extent is large but their thickness is pellicular, with a form factor of approximately 1/45; the latter, on the other hand, show a small thickness compared to their extent. Figure 2 shows at regional level the situation for merged and reclassified geological units compared to the percentage of total land area, and Figure 3 the geological representativeness of the geological domains for the

IV OPEN SOURCE GEOSPATIAL RESEARCH & EDUCATIONAL SYMPOSIUM, October 12-14 2016, Perugia, Italy. The open discussion version of this paper is available at: Motti A, Natali N. Geothematic open data in Umbria region. PeerJ Preprints 4:e2096v2 https://doi.org/10.7287/peerj.preprints.2096v2. Please cite this paper as: Andrea Motti, Norman Natali, Geothematic open data in Umbria region. In Marchesini I. & Pierleoni A. (Eds.) Proceedings of the 4th Open Source Geospatial Research and Education Symposium (OGRS2016). Perugia, 12-14 October 2016. https://doi.org/10.30437/ogrs2016_paper_45