Forget Baby Einstein. Today's sophisticated, tech-savvy fetus Twitters from the womb! But score another point for both the micro-blogging phenomenon and parents with too much money, who have teamed up in the quest to totally piss me off. Meet Kickbee. This fashionable, flesh-colored belt, embellished with various wires and microchips, wraps around an expectant mother's pregnant belly. Then, whenever the fetus kicks, the apparatus uses Bluetooth technology to signal a nearby computer that this momentous event has taken place. The computer registers the kick and -- don't ask me how -- automatically posts it on Twitter. So, as Urlesque notes, the blog's subscribers are treated to a barrage of near-identical updates: "I kicked Mommy at 09:14 on Wed, December 10, '08!" That little exclamation point may be the worst part of the whole Kickbee enterprise. Now, instead of waiting to brag that their 10-month-old can speak in full sentences and their 3-year-old is reading "Anna Karenina," yuppie parents can boast of their child's in-utero activity, too! Despite my annoyance, there is one part of this Kickbee business that I do find unintentionally hilarious. What the device actually allows parents-to-be to celebrate is their fetus' violence toward its mother. And come to think of it, if your mother is the kind of lady who buys this sort of crap, perhaps kicking her in the stomach is a noteworthy accomplishment.
Change in a Post-Bush Era: Revolution or Maintaining the Neoliberal Legacy Under conditions in the USA in which hope has been generated by the election of a president of African descent, the author expresses concern that the neoliberal education policies of the past 20 years will not be changed and are not being questioned. Examples are provided that include the continued support for the privatization of public schools in post-Katrina New Orleans and the various ways that NCLB privileges capitalism.
Development environment construction of the medical imaging software 3D Slicer. 3D Slicer is an open source software platform for medical image informatics, image processing and 3D visualization. Because medical functional requirements differ, a 3D Slicer build tailored to a doctor's needs is an attractive choice. However, developing with open source software presents difficulties, particularly in setting up the environment. In this paper, after repeated attempts, a literature review and research, a workable procedure was found: downloading the source from GitHub, compiling with CMake, and generating the project with the appropriate version of Visual Studio (VS). This procedure quickly builds a 3D Slicer programming environment, which is of great significance for the subsequent modification and development of 3D Slicer.

Introduction. With the rapid development of the information age, open source software has also developed rapidly. However, building its environment remains extremely difficult, and so far there is no fixed way to set up the running environment of such software smoothly, so many people who want to learn from open source software cannot. This paper takes the 3D Slicer software as an example and discusses in detail the process from source code download, to CMake compilation and generation, and finally to packaging the files. The method is simple and easy to understand, and it provides a convenient reference for constructing software environments in the future.

Source code download. Source code is usually downloaded from an open source platform. In modern software development, GitHub, as a newer open source service platform, has attracted more and more attention. GitHub is a global developer social network that attracts tens of millions of developers. To make it easy for users to participate in the development of open source projects, the GitHub platform provides a Pull Request mechanism (hereinafter referred to as PR) and an Issue Tracker System. The PR mechanism allows any user to participate in the development of any project: the user simply forks the target project into his or her own code base, completes the changes there, and merges the changes into the target code base by initiating a PR. The Issue Tracker System is mainly used to record and track new functional requirements, development tasks and bugs. In the GitHub open source community, every user can create a new Issue for any project where issue tracking is enabled, thus allowing other users to participate indirectly in the development of the project. In addition, GitHub also functions as a social platform: users can follow the platforms or projects they are interested in, and GitHub regularly pushes the latest activity of the followed projects to them, so that users can understand project progress more quickly or get involved in development. To download open source software from GitHub, you first need to configure Git. Git is the world's most advanced distributed version control system, and cloning a project with it is very fast. Every developer can clone a local repository from the master, commit code to the local repository, view logs, create project branches, and so on, even without a network connection. First, install the software. After the installation is complete, go to "Git" → "Git Bash" in the start menu; a command line window pops up, indicating that Git has been installed successfully. The initialization window is shown in Fig. 1. Then you need to set up the machine information, which will be used by all Git repositories on this machine.
The commands are as follows:
$ git config --global user.name "username"
$ git config --global user.email email@example.com
The next step is to create the version library. The repository is the Git directory, which holds many things, the most important of which are the staging area called the stage (or index), the first branch, master, that Git automatically creates for us, and a pointer to master called HEAD.
1) Create an empty directory:
$ mkdir mymenu
$ cd mymenu
$ pwd
/Users/hxk/mymenu
2) Initialize the repository with git init (the command turns this directory into a repository that Git can manage):
$ git init
Initialized empty Git repository in /Users/hxk/mymenu/.git/
Then log on to GitHub, find the source code to download, copy the source code address, and enter git clone followed by the source code address in the Git command window to download the source code. Here is a simple example: first, search for the desired source code in the search box on https://github.com/, find the repository, and choose "Clone or download", as shown in Fig. 2. Copy the download link, then open Git Bash, type git clone after the $ prompt, paste the URL you want to download, and press Enter to start the download. The download process is shown in Fig. 3.

Configuration of environment. After downloading the source code, it must be compiled and generated; that is, the software development environment must be configured. The first step is to download and install the software used for compiling, including Qt, VS, Git, CMake, SVN and NSIS. Qt is a cross-platform C++ graphical user interface application development framework developed by the Qt Company. It can be used to develop both GUI and non-GUI programs, such as consoles and servers. 3D Slicer uses Qt for its overall interface design, so Qt is indispensable; if you want to further optimize or extend 3D Slicer, fluent use of Qt is necessary. During the installation of Qt, the installation components need to be selected; the MSVC component is chosen according to the required VS version, and VS 2015 (64-bit) is recommended. The other required components are shown in Fig. 4. VS is short for Microsoft Visual Studio. VS is a fairly complete set of development tools that includes most of the tools needed across the software lifecycle, such as UML, code control and an IDE. Developing 3D Slicer requires the VS + Qt programming approach. After installing VS, the Qt plug-in needs to be installed to meet the programming requirements of 3D Slicer: open Tools in the VS interface, select "Extensions and Updates", and search for "Qt Visual Studio Tools" to download and install it. Git is a free, open source distributed version control system that can handle projects from small to large quickly and efficiently. Git is easy to learn, takes up little space, and is lightning fast. It goes beyond configuration management tools like Subversion, CVS, Perforce, and ClearCase with features such as cheap local branches, a convenient staging area, and multiple workflows. CMake is a cross-platform installation (compilation) tool that can describe the installation (compilation) process for all platforms in simple statements. It can output various makefiles or project files and can test the C++ features supported by the compiler, similar to automake under UNIX. SVN, which stands for Subversion, is an open source version control system that is managed effectively through a branch management system.
In short, it allows multiple people to develop the same project, share resources, and ultimately achieve centralized management. The SVN client needs to be downloaded and installed, not merely downloaded and unzipped; otherwise, CMake cannot find SVN when compiling. The full name of NSIS is Nullsoft Scriptable Install System, an open source Windows installer system. It provides installation, uninstallation, system settings, file decompression and other functions. NSIS describes the behavior and logic of the installer through its scripting language, which is designed for applications such as installers.

Source code compilation. After the above software is prepared, the source code is compiled. Use the CMake GUI to generate the VS project files: open the CMake GUI and set the relevant paths. "Where is the source code" is set to the path of the Slicer source code just downloaded; "Where to build the binaries" is set to where the project and generated code are stored. Click "Add Entry" to add the path of Qt5; for the specific path, refer to the Qt5 installation path shown in Fig. 5. During configuration, various problems may occur, such as the missing Patch_EXECUTABLE shown in the figure, which means CMake has not found the executable named patch. On investigation, the patch executable is shipped with Git and its path can be located; add the corresponding path for Patch_EXECUTABLE in CMake. Other minor problems may appear; resolve each entry reported in red by correcting the corresponding path and continuing to Configure. Wait until Configure completes before proceeding to the next step. Click Generate, and the Slicer project files will be generated. Click Open Project, or directly open Slicer.sln in the directory E:\Slicersc\Slicer-build. Right-click ALL_BUILD and select Build to start the long build. Note that this build must be connected to the Internet, because the process downloads additional source code for compilation.

Source code debugging. After compilation completes, press Win+R on the desktop to open the Run window, enter CMD and click OK to open the administrator command window, and enter the command appropriate to the location of the generated code, as shown in Fig. 9. Other requirements: 2) the Qt 5.10 release; 3) a CMake version of 3.13.4 or above.

The software package. Select Release when generating with VS, and then generate. Find the Slicer.sln file under the Slicer-build folder and open it. If the mode is not switched to Release, packaging will fail. When switching modes you can work from the existing files, which saves a great deal of build time; however, you cannot adjust the installation interface. The package files can be found under Slicer-build\_CPack_Packages\win-amd64\NSIS. The folder contains the installation package and the unzipped installation files, which can be used as needed. Advanced Installer is an excellent package builder: the files can be unpacked into Advanced Installer to refine the installation interface or to apply some software encryption. When packaging is complete, the software should be further encrypted for security. Super Dog is an excellent hardware encryption product. Super Dog encryption uses two USB dongles, one "child" and one "father". The father dongle is the licensing end, which grants the child dongle permission to use the software in various ways, for example limiting the number of uses or the usage time.
The encrypted software can be used normally only when a child dongle with the required permissions is present. This encryption is implemented by adding Super Dog statements to the open source software. These statements check whether the dongle is present at the critical moments when the software starts and runs, or write an encrypted password to the dongle and read it back the next time. If the dongle is absent, or the password cannot be written, cannot be read, or reads back incorrectly, the software terminates, thereby protecting the software.

Conclusions. Following the above process to build the environment avoids detours and a great deal of wasted time, and it provides a convenient basis for the secondary development of Slicer. Only when the tools a doctor uses are well suited to the task can the doctor exercise his or her full abilities, and secondary development of the medical imaging software 3D Slicer meets the needs of doctors in related fields. As is well known, the most basic and most difficult step of secondary development is constructing the environment; many people spend time and energy on it without finding their way. The author, too, arrived at a convenient, understandable, and easy-to-operate route only through long-term practice and many attempts. Recalling the hardships of that construction, I hope this article can help people with the same needs avoid detours and make some contribution to the secondary development of Slicer. Building strictly according to the above process quickly establishes the operating environment of 3D Slicer and facilitates its secondary development. It is a relatively quick and simple method found through continuous research, and it has practical significance. Successful cases of secondary development of the medical imaging software 3D Slicer include: 1) Radiopharmaceutical Imaging and Dosimetry: RPTDose, a 3D Slicer-based application that streamlines and integrates quantitative imaging analysis and dose estimation techniques to guide and optimize the use of radiopharmaceutical therapy agents in clinical trials; 2) Xoran Technologies: an image-guided platform for deep brain stimulation surgery. Now and in the future there will be more secondary-development software better suited to doctors, so a simple and efficient approach to environment construction is necessary as a basis for secondary development.

Jincai Chang received his Ph.D. degree in 2008 from Dalian University of Technology; he is now a professor at North China University of Science and Technology. His main research interests include theories and methods in mathematical modelling and scientific computation, numerical approximation and computational geometry. In this paper Jincai Chang provided financial support and project management. Jianzhong Cui received his Ph.D. degree from Hebei Medical University. He is currently the vice president and director of neurosurgery of Tangshan Workers' Hospital. His research interests include precise surgical diagnosis and treatment of cerebrovascular diseases and intracranial tumors. In this paper Jianzhong Cui provided resources and supervision.
Atlético Español F.C. History. Atlético Español Fútbol Club came into being on September 19, 1971, when a group of Spanish entrepreneurs and business people decided to buy the franchise of what was then known as Necaxa and change its name to Atlético Español. The reason for the name change was their nationality and their ideology in the world of business. The rebranded club was modeled on the defunct team Real Club España, with a similar name and kit to its predecessor. The team immediately acquired the nickname Toros because its badge showed a bull next to a football along with the initials AE. Español was characterized as a fighting team, but its start was far from ideal: in the 1971–72 season the club finished just one point clear of relegation to the 2nd division. Other teams fighting to avoid relegation that season were Club de Fútbol Torreón, Irapuato FC and Club Veracruz; in the end Irapuato FC was the team relegated. In the 1972–73 season Atlético Español reached the semifinals against Club León, losing 5–4 in a third leg that went all the way to penalties. In 1973–74 they reached the final against a strong Cruz Azul. They played a two-legged tie in which Atlético Español won the first leg 2–1 but lost the second 3–0, finishing as runner-up of the league. In 1975 Atlético Español won their first and only international title, the CONCACAF Champions' Cup 1975, defeating Transvaal of Suriname 5–1 on aggregate in the final. In 1976 they disputed the Copa Interamericana against Club Atlético Independiente of Argentina. Both games were played in Buenos Aires, and after an aggregate score of 2–2 a penalty shootout was needed, which Español lost to Independiente 4–2. In the 1980–81 season Español once again reached the liguilla, finishing 4th in group A led by Cruz Azul, who would lose the final against Pumas de la UNAM. In the 1981–82 season they disputed their last liguilla, reaching the quarterfinals against Club de Fútbol Atlante, a round they lost 5–3 on aggregate. After 11 years as Atlético Español, the owners of Toros unexpectedly decided to sell the franchise back to Mexican ownership, namely the Televisa network. With that, the colors and the name of Necaxa returned, in the hope of reviving the tradition the team had left in Mexico City. The name change of July 21, 1982 did not play out as the Mexican ownership expected, since by then many fans in Mexico City had turned to support other football clubs such as Cruz Azul (which gained the most), América, UNAM, Atlante, Toluca and others.
Organ preservation solutions increase endothelial permeability and promote loss of junctional proteins. OBJECTIVE To investigate the effects of the organ preservation solutions UW and Plegisol on endothelial permeability; occludin and vascular endothelial (VE)-cadherin content in human umbilical vein endothelial cells (HUVEC); and junctional localization of these proteins after exposure to these solutions. SUMMARY BACKGROUND DATA Organ preservation for transplantation is limited by several challenges, including loss of tissue function, tissue injury, and tissue edema. Occludin and VE-cadherin are responsible for maintaining and regulating the endothelial solute barrier. Several studies have noted organ edema and dysfunction with preservation, as well as gaps between endothelial cells suggesting that disorganization of junctional proteins (e.g., occludin and VE-cadherin) is responsible for interstitial edema. METHODS HUVEC monolayers were treated with 4 degrees C UW and Plegisol for 3 and 6 hours and then reperfused with normal buffer. Permeability was examined using FITC-dextran tracer during the reperfusion phase. Occludin and VE-cadherin content at different time points was measured by Western blotting. Treated groups were also examined by immunofluorescence for occludin, VE-cadherin, and F-actin. RESULTS Compared with untreated controls, cold preservation for 3 and 6 hours increased endothelial permeability after rewarming, which appears to depend on the duration of cold exposure. Monolayers exposed to 3 hours of cold preservation did not have increased permeability in the first hour after rewarming but had significantly increased permeability after the first hour and all subsequent time points. Monolayers exposed to 6 hours of cold preservation had increased permeability after the first hour and at all later time points. Western blotting demonstrated that occludin content was decreased to a similar extent with all solutions after 3 hours of cold preservation. Six hours of cold preservation in Plegisol reduced the occludin content significantly compared with UW and control. VE-cadherin content was unchanged after 3 hours of cold preservation but was dramatically reduced in all groups at 6 hours. Immunofluorescent staining demonstrated junctional gap formation and discontinuous staining of occludin and VE-cadherin with all cold preservation protocols; changes in F-actin organization were observed at 3 and 6 hours after cold preservation. CONCLUSION The changes in occludin, VE-cadherin, and F-actin content and organization and increased permeability associated with cold storage demonstrate that alterations of the tight and adherens junctions may underlie organ edema associated with cold organ preservation. These data also suggest that novel strategies to maintain the content and integrity of endothelial junctional proteins may provide an important therapeutic avenue for organ preservation.
The Internet continues to make available ever-increasing amounts of information which can be stored in databases and accessed therefrom. Additionally, with the proliferation of portable terminals (e.g., notebook computers, cellular telephones, personal data assistants (PDAs), smartphones and other similar communication devices), users are becoming more mobile, and hence, more reliant upon information accessible via the Internet. For example, many users are interested in using the vast information base of the Internet to locate driving directions to a destination address or to locate businesses in close proximity to a pre-selected location. As polygon geometry storage and query systems (e.g., mapping applications) continue to evolve with respect to the Internet, there is an ongoing demand from users to locate additional focused and targeted information. Conventionally, mapping applications have been used primarily to provide users with directions to and/or from a particular location. As well, conventionally, these applications oftentimes provide additional generic information about the particular destination location. By way of example, when planning for a vacation, a user can use a mapping application to easily request driving directions from one location to another. Additionally, these mapping applications can be employed to provide other information about a destination location. For example, many applications can assist a user to research a destination location with regard to “must see” locations. Today, as these mapping applications continue to evolve, uses for the underlying information also continue to evolve. For example, it is not uncommon for a user to search for specific information based upon a reference point. By way of specific example, today, a user can search for specific establishments within a defined radius of a reference point. Similarly, it is sometimes useful for a user to define an area in order to locate a specific group of targeted items that fall within the defined area. For example, one use of this area-based analysis would be directed to a targeted advertising campaign. Another common example would be directed to a political campaign. In either of these scenarios, it is oftentimes desirable to be able to locate a demographic characteristic with respect to an identified region, either arbitrary or defined (e.g., county line, state).
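As a rough, hypothetical illustration of the radius-based query mentioned above (not part of the original disclosure), the following Python sketch filters a list of establishments to those lying within a given distance of a reference point using the haversine great-circle distance. The data layout, the coordinates, and the function names are assumptions made purely for illustration.

import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    earth_radius_km = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * earth_radius_km * math.asin(math.sqrt(a))

def within_radius(establishments, center, radius_km):
    """Return the establishments whose (lat, lon) lies within radius_km of center."""
    clat, clon = center
    return [e for e in establishments
            if haversine_km(clat, clon, e["lat"], e["lon"]) <= radius_km]

# Hypothetical data: candidate establishments near a reference point.
shops = [
    {"name": "A", "lat": 47.6101, "lon": -122.2015},
    {"name": "B", "lat": 47.9000, "lon": -121.5000},
]
print(within_radius(shops, center=(47.6062, -122.3321), radius_km=15))

In practice a system of the kind described would likely push this filter into the database through a spatial index rather than scanning client-side, but the distance test itself is the same.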
Kemp said he plans to present the ordinance on May 24 and hopes the council will vote on it on June 5. He said there are exemptions — including for private clubs and tobacco stores — but bars are included. Officials in Hopkinsville have mulled a smoking ban for years, but Kemp said he thinks the timing is right for a ban to pass because surveys and polls show support for the measure. Kemp said he has received mostly positive comments about the proposal from businesses. Health Department Director Mark Pyle said a ban would reduce the prevalence of secondhand smoke. "That's what we're talking about here: the right of non-smokers to breathe clean air," Pyle said. "From a health standpoint, it's really a no-brainer." Kemp said under current law, non-smokers can choose to bypass businesses that allow smoking and smokers can bypass establishments that don't allow it, but he said there is more to it than just a matter of individual rights. "I just think it's such an overriding health concern that it trumps the individual-rights argument," Kemp said.
Melatonin promotes hepatic differentiation of human dental pulp stem cells: clinical implications for the prevention of liver fibrosis Melatonin's effect on hepatic differentiation of stem cells remains unclear. The aim of this study was to investigate the action of melatonin on hepatic differentiation as well as its related signaling pathways of human dental pulp stem cells (hDPSCs) and to examine the therapeutic effects of a combination of melatonin and hDPSC transplantation on carbon tetrachloride (CCl4)-induced liver fibrosis in mice. In vitro hepatic differentiation was assessed by periodic acid-Schiff (PAS) staining and mRNA expression for hepatocyte markers. Liver fibrosis model was established by injecting 0.5 mL/kg CCl4 followed by treatment with melatonin (5 mg/kg, twice a week) and hDPSCs. In vivo therapeutic effects were evaluated by histopathology and by means of liver function tests including measurement of alanine transaminase (ALT), aspartate transaminase (AST), and ammonia levels. Melatonin promoted hepatic differentiation based on mRNA expression of differentiation markers and PAS-stained glycogen-laden cells. In addition, melatonin increased bone morphogenic protein (BMP)-2 expression and Smad1/5/8 phosphorylation, which was blocked by the BMP antagonist noggin. Furthermore, melatonin activated p38, extracellular signal-regulated kinase (ERK), and nuclear factor-κB (NF-κB) in hDPSCs. Melatonin-induced hepatic differentiation was attenuated by inhibitors of BMP, p38, ERK, and NF-κB. Compared to treatment of CCl4-injured mice with either melatonin or hDPSC transplantation alone, the combination of melatonin and hDPSCs significantly suppressed liver fibrosis and restored ALT, AST, and ammonia levels. For the first time, this study demonstrates that melatonin promotes hepatic differentiation of hDPSCs by modulating the BMP, p38, ERK, and NF-κB pathways. Combined treatment of grafted hDPSCs and melatonin could be a viable approach for the treatment of liver cirrhosis.
Six persons were killed and several others were injured in a chemical tank blast in Bijnore district of Uttar Pradesh on Wednesday, police said. The incident took place at the Mohit Chemical and Metro factory situated on the Dehat Marg in the morning. Senior officials including the district magistrate Atal Rai and Superintendent of Police (SP) Umesh Kumar Singh rushed to the scene of the incident and are overseeing the relief and rescue operations, a home department official informed IANS. One labourer is missing. Workers at the factory allege that the chemical tank had been leaking for many days, but despite their pleas the management turned a blind eye. The incident happened while labourers were working to plug the leak by welding it. Those who were killed were all labourers and have been identified as Chetram, Vikrant, Lokendra, Kamalveer, Balgovind and Ravi. More details on the missing are awaited, an official said.
81, of Kailua, Hawai'i, died in Kailua on January 23, 2019. She was born in Honolulu, Hawai'i. Visitation: 9:00 a.m.; Services: 10:30 a.m. on Saturday, March 2, 2019 at St. Anthony of Padua Kailua. Inurnment: 9:00 a.m. on Saturday, March 9 at Valley of the Temples.
Quantifying concept drifting in network traffic using ROC curves from Naive Bayes classifiers Concept drift poses a real challenge for network models that depend on statistical heuristics learned from a data stream, for example anomaly-based detection/prevention systems. These models tend to become inconsistent over time as the underlying data stream, such as network traffic, changes and is affected by evolving concept drift. Change in network traffic patterns is inevitable, and it impacts enterprises that are dynamic in nature, especially cloud-centric enterprises. These changes in the network pattern can last for a short period or persist for a longer duration. A change in the network traffic pattern is not always caused by malicious activity; changes can be benign and yet still degrade the performance of the IDS/IPS model. There is therefore a need to quantify concept drift and incorporate it into the model. In this paper we propose a supervised learning model to quantify concept drift in network traffic. The proposed model uses adaptive learning strategies with a fixed training window to constantly evolve the model. Classification of the data is done by a Naive Bayes classifier, and the ROC curve generated from the Naive Bayes classifier is used as the de facto method for identifying concept drift. Classifications were carried out on the entire dataset and also on specific flow attributes such as source IP, destination IP, source port, destination port, flags and protocol. In this paper we demonstrate the capabilities of the proposed model to identify drift in the network pattern and also to determine which flow attributes contributed to the concept drift, using the ROC curve.
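To make the approach concrete, the following is a minimal, self-contained Python sketch of the general idea rather than the authors' implementation: a Gaussian Naive Bayes classifier is trained on a fixed window of labeled flows, and the ROC AUC computed on each subsequent batch is compared with the training baseline; a sustained drop is treated as evidence of concept drift. The synthetic features, the label rule, and the 0.10 AUC tolerance are assumptions for illustration only.

import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_batch(n, w):
    # Synthetic flow features (stand-ins for attributes such as ports, flags and
    # packet counts); the weight vector w encodes the current labeling "concept".
    X = rng.normal(size=(n, 4))
    y = (X[:, :2] @ np.asarray(w) + 0.3 * rng.normal(size=n) > 0).astype(int)
    return X, y

# Fixed training window under the original concept.
w_train = (1.0, 0.5)
X_train, y_train = make_batch(2000, w_train)
clf = GaussianNB().fit(X_train, y_train)
baseline_auc = roc_auc_score(y_train, clf.predict_proba(X_train)[:, 1])

# Score successive batches; a sustained drop in ROC AUC flags concept drift.
for t, w in enumerate([w_train, w_train, (1.0, -1.5), (-1.0, -0.5)]):
    X_b, y_b = make_batch(1000, w)
    auc = roc_auc_score(y_b, clf.predict_proba(X_b)[:, 1])
    print(f"batch {t}: AUC = {auc:.3f}, drift = {auc < baseline_auc - 0.10}")

In the paper's setting, the same comparison would be repeated per flow attribute (source IP, destination IP, ports, flags, protocol) to see which attributes contribute most to the drift.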
The invention relates to a cleaning apparatus for a photosensitive member of an electrophotographic machine, and more particularly, to such apparatus in which a developing toner which remains attached to a photosensitive member carrying an electrostatic latent image is removed by utilizing a rotating cleaning brush. As is well recognized, for an electrophotographic copying machine of the dry type which utilizes a developing toner in the form of powder, a cleaning apparatus is used to remove unnecessary developing toner which remains attached to a photosensitive member carrying an electrostatic latent image after a toner image has been transferred onto a record sheet. A variety of cleaning apparatus have been proposed in the prior art. However, such apparatus generally comprises a rotating cleaning brush disposed for contact with the photosensitive member, in combination with a toner trap associated with the brush for collecting the toner removed by the brush. The trap includes a filter which prevents the removed toner from dispersing into and outside the machine, and suction means. A cleaning apparatus of such kind is associated with a striker rod disposed for abutment against the cleaning brush so that the toner attaching to the brush may be shaken off therefrom. The rod strikes the brush to remove the toner therefrom, and is provided because the efficiency of toner removal from the photosensitive member degrades and the dispersion of the toner tends to increase as the cleaning apparatus continues to be used over a prolonged period of time. However, it is found that as the striker rod becomes contaminated by the deposition of the toner, there occurs a phenomenon called filming on the surface of the photosensitive member, resulting in a substantial degradation in the image quality achieved. In the cleaning apparatus described above, the cleaning brush contacts the surface of the photosensitive member while rotating in order to remove any residual toner from such surface. However, the cleaning action does not remain perfect, but after a given period of use, a thin film of toner is formed on the surface of the photosensitive member to cause an adverse influence upon the copying operation. Such formation of a toner film is referred to as a filming phenomenon. The purpose of the striker rod is to prevent the occurrence of such a filming phenomenon. However, experiments have shown that a strong adherence of the toner onto the surface of the rod or a local abrasion produced in the rod results in a decreased cleaning effect leading to the occurrence of a filming phenomenon. It is also found by experiments that the striker rod may be temporarily rotated to change the surface adapted to engage the brush in order to avoid the above disadvantage. The number of revolutions of the rod may be as low as one revolution per one thousand copies. An electrophotographic copying machine of the type described is usually designed to provide a multiple copy operation in which a single electrostatic latent image formed by one exposure is utilized to produce a plurality of copies. During a multiple copy operation, a toner developing step and a transfer step are repeated with a common latent image, and hence an exposure unit, a charger, a cleaning apparatus and a neutralizer lamp are left inoperative.
The cleaning brush is also arranged to be movable into contact with or away from the photosensitive member so that it may be left inoperative during such operation in order to avoid any degradation in the quality of the latent image. To this end, the rotatable brush is supported by a pair of rockable arms, so that the arms may be rocked to move the brush into contact with or away from the photosensitive member.
ANCHORAGE, Alaska (KTUU) — With just one more weekend left for the Alaska high school football season, this week's KTUU Tailgate Tour heads to Wasilla for the 40th annual Potato Bowl between Palmer and Wasilla. The match up between crosstown rivals dates all the way back to 1979, and the Moose own a 29-10 series lead according to the Mat-Su Valley Frontiersman. Palmer won last year’s game 27-26 in overtime, giving Palmer's Rod Christiansen an impressive 19-8 record as head coach in Potato Bowl games. The 2018 Potato Bowl kicks off at 7:00 p.m. on Sept. 28 at Wasilla High School. As the regular season winds down, playoff opportunities are aplenty across the prep football landscape. The Alaska Sports Broadcasting network has the East Thunderbirds leading their Division I poll. Friday’s game between East and Service will serve as the CIC championship game, with the winner taking the conference number one seed into the state playoffs. South High School will return to the playoffs for the first time since their state championship season in 2014. At the bottom of the conference, Dimond and West will kick off for the fourth and final playoff berth. In the Chugach Conference, Bartlett locked up the conference crown with a blowout victory over Wasilla. The Golden Bears jumped two spots in the ASBN Division I poll this week, winning 40-0 over the Warriors, while Colony fell to third place after defeating Juneau-Unified 49-0. In the ASBN Div. II/III polls, Eagle River jumps from fifth to second after defeating Chugiak 34-21 for the first time in school history. Soldotna remains atop the small schools poll heading into the last week of the season, taking on rival Kenai High School.
Transcriptome analysis of human dermal fibroblasts following red light phototherapy Fibrosis occurs when collagen deposition and fibroblast proliferation replace healthy tissue. Red light (RL) may improve skin fibrosis via photobiomodulation, the process by which photosensitive chromophores in cells absorb visible or near-infrared light and undergo photophysical reactions. Our previous research demonstrated that high fluence RL reduces fibroblast proliferation, collagen deposition, and migration. Despite the identification of several cellular mechanisms underpinning RL phototherapy, little is known about the transcriptional changes that lead to anti-fibrotic cellular responses. Herein, RNA sequencing was performed on human dermal fibroblasts treated with RL phototherapy. Pathway enrichment and transcription factor analysis revealed regulation of extracellular matrices, proliferation, and cellular responses to oxygen-containing compounds following RL phototherapy. Specifically, RL phototherapy increased the expression of MMP1, which codes for matrix metalloproteinase-1 (MMP-1) and is responsible for remodeling extracellular collagen. Differential regulation of MMP1 was confirmed with RT-qPCR and ELISA. Additionally, RL upregulated PRSS35, which has not been previously associated with skin activity, but has known anti-fibrotic functions. Our results suggest that RL may benefit patients by altering fibrotic gene expression. We performed high-throughput RNA sequencing (RNA-Seq) to identify genes and pathways associated with RL. RNA-Seq allows for unbiased discovery of therapeutic targets and whole transcriptome gene expression analysis. Results. Transcriptomic profiling of human dermal fibroblasts. Principal component analysis (PCA) demonstrated that samples segregate according to donors, which is characteristic of analyses of human subjects (Table 1 and Fig. 1A). The heat map of the top 30 genes with maximum variance values, calculated for all samples, shows similar segregation by donor line (Fig. 1B). When PCA was performed separately for each donor, samples collected at the 0-h timepoint clustered apart from those collected at 4, 12, and 24 h (Fig. 1C,D). Samples treated with 640 J/cm² clustered separately from controls for 3 out of 4 donor lines (Fig. 1D). It should be noted that the changes between treatment and control were in the same direction in the dimensionality-reduced gene expression space. To account for interpersonal variation in gene expression, DESeq2, which is capable of paired analysis, was used to identify differentially expressed genes (DEGs) 26. Analysis of the samples revealed 859 DEGs following RL with a twofold change in expression and false discovery rate (FDR) < 0.05 across all timepoints. The complete list of DEGs from the analysis, with fold change and FDR, can be found in the supplemental dataset. Confirmation of differential expression and ontology. Verification of MMP1 protein and gene expression was performed (Fig. 3A-D). MMP1 produces matrix metalloproteinase-1 (MMP-1), also known as collagenase, which is capable of enzymatically degrading the collagen found in fibrosis 32,33. Changes in gene expression between RT-qPCR and RNA-Seq were in the same direction and had similar fold changes at the 0, 4, 12, and 24 h time points (Fig. 3A). A Pearson correlation test performed to determine the consistency between RT-qPCR and RNA-Seq found R = 0.98 and a significant P-value of 0.015 (Fig. 3B).
At 4, 12, and 24 h post-RL irradiation, the supernatant was collected from the control and RL-treated samples. MMP-1 protein expression was quantified using ELISA. By 24 h post-treatment, RL-treated samples released significantly more MMP-1 compared to control (Fig. 3C). A Pearson correlation statistical test was performed to determine the consistency between MMP-1 ELISA and RNA-Seq differential expression. RNA-Seq and ELISA were highly (R = 0.98), but not significantly (P = 0.14), correlated (Fig. 3D). We previously found that RL increased intracellular ROS generation and inhibited cell proliferation in HDF donor 1 25. GO pathway analysis revealed enrichment of cellular responses to oxygen-containing compounds and regulation of cell proliferation in all 4 donors treated with 640 J/cm² (Fig. 2F). We confirmed that ROS increases and cell count decreases in donors 1-4 when treated with RL. 320 and 640 J/cm² RL increased ROS in a dose-dependent manner at 0 h post-treatment (Fig. 3E). 640 J/cm² RL decreased cell count by 48 h post irradiation (Fig. 3F). Discussion. We present whole transcriptome gene expression analysis of HDFs at 0, 4, 12, and 24 h after treatment with 320 or 640 J/cm² RL. There were more DEGs within 4 h of treatment than by 12 and 24 h after RL treatment (Fig. 2A). This is consistent with our previous finding that RL-mediated effects on HDF migration dissipated within 12 h 25. MMP1 expression was confirmed with RT-qPCR and ELISA for MMP1. Additionally, PRSS35, which produces a serine protease with collagen 1-degrading potential, was found to have greater than 30-fold increased expression at 4, 12, and 24 h after RL treatment, and thus may impart anti-fibrotic effects via collagen degradation 34. This study provides foundational insights for future investigation into photobiomodulation and fibrosis. We have previously shown that RL directly decreases collagen protein expression 25. GO pathway analysis demonstrated enrichment of genes related to the organization of the extracellular matrix (Fig. 2F). As a result, we sought to investigate genes and pathways associated with anti-fibrotic activity. Other researchers have found that skin MMP-1 expression and secretion increase in response to UV and visible light 35,36. Li et al. found that 3 J/cm² RL increased the expression of multiple MMPs, including MMP-1 37. We confirmed similar increases in MMP-1 expression and secretion in HDFs treated with high fluence RL (Fig. 3). This is significant as RL is not associated with skin cancer and aging like UVA phototherapy. PRSS35 is a serine protease that may have collagen-1-degrading function 34. PRSS35 has been previously linked to gonadal function, but human epididymis protein 4 (HE4), an inhibitor of PRSS35, was highly expressed in fibrotic kidneys 34,38,39. We observed a greater than 30-fold increase of PRSS35 expression in the RL-treated HDFs, suggesting that PRSS35 induction by RL may reduce collagen. The increased expression of collagen-1-degrading enzymes may highlight the anti-fibrotic mechanisms of RL through the degradation of extracellular collagen. To further characterize the potential of RL in fibrosis, DEGs in the TGF-β signaling pathway were analyzed (Figure S3) 40. We have previously shown that 640 J/cm² RL decreases SMAD-2 phosphorylation in TGF-β1-induced HDFs within 4 h of irradiation 41. Phosphorylated SMAD 2/3 translocate to the nucleus and increase the expression of collagen.
In our RNA-Seq analysis, HDFs were not TGF-β induced, but there was nevertheless downregulation of profibrotic SMAD3 at 4, 12, and 24 h post-RL treatment. TF analysis confirmed that SMAD3 was likely involved with regulation of cellular activity following RL phototherapy (Fig. 2G). SMAD-4 is pro-fibrotic, but its expression was increased at 4 h post-RL treatment 45. SMAD-7 has anti-fibrotic properties and was slightly downregulated in our RNA-seq data 46,47. There were other highly expressed DEGs involved with SMAD regulation. RANBP3L produces a protein that acts as a nuclear export factor for SMAD-1 and other proteins known to have anti-fibrotic properties as part of the TGF-β pathway 48. MECOM produces a protein called EVI-1, which acts as a transcriptional regulator that can inhibit SMAD protein activity 49,50. We previously performed RNA-Seq for miRNA from HDF donor line 1 and found that miRNA-21, miRNA-23, and miRNA-31 were decreased, while miRNA-29, miRNA-196a, and let-7a were increased 51. These microRNAs have been identified as mediators of skin fibrosis 52. We repeated our miRNA analysis with HDF donor lines 1-4 and confirmed that miRNA-21 (Mir21) expression was significantly downregulated following RL irradiation (Figure S2). miRNA-21 regulates TGF-β/SMAD signaling, and decreased expression of miR-21 is anti-fibrotic 52. Additionally, miRNA-145 expression was decreased. miRNA-145 is increased in hypertrophic scars and has been identified as a therapeutic target for anti-fibrotic therapies 53. The FOS/JUN family of proteins (i.e., FOSL1, FOSL2, JUNB, and JUN) was identified as TFs regulating the response of fibroblasts to RL phototherapy. JUN proteins form homodimers or heterodimers with FOS proteins to increase the expression of AP1 28,30. AP1 can regulate cell cycle progression and extracellular matrix organization 28. c-Jun (JUN) is phosphorylated by c-Jun N-terminal kinases (JNKs) in response to cellular stress, growth factors, or cytokines 28. c-Jun is important in IL-17-mediated production of MMP-1 in HDFs 28,29. Similarly, inhibition of JNK in HDFs prevented the upregulation of MMP-3 and MMP-1 in response to UVB 28,31. Fibrotic responses to FOS/JUN activity can change based on cell type and conditions. c-Jun is upregulated in the skin of patients with systemic sclerosis and smooth muscle actin-positive (SMA+) HDFs 30. In fibrotic mouse models, phosphorylation of c-Jun is associated with profibrotic cellular responses via activation of AKT 30. However, we previously found that increased AKT phosphorylation by RL in HDFs inhibited migration and was associated with decreased collagen deposition 25,28. RL and near-infrared radiation stimulate cytochrome c oxidase in the mitochondria, altering mitochondrial membrane potential and increasing intracellular ATP and free-radical ROS 20,22. We confirmed that ROS increases following irradiation with 320 and 640 J/cm² RL using flow cytometry (Fig. 3E). ROS may alter the activity of fibrotic pathways, including TGF-β, mTOR, and AKT 22,25. GO analysis demonstrated the enrichment of cellular responses to oxygen-containing compounds (GO:1901701, p-value: 6.53 × 10⁻⁵, q-value: 1.67 × 10⁻²). Transcription factor co-regulatory network analysis indicated that RELA, which codes for the p65 subunit of NF-κB, is predicted to regulate gene expression in RL-treated HDFs (Fig. 2G) 27,54. The heterodimer of RELA and p50 is the most abundant form of NF-κB 27,54. NF-κB is involved with inflammation and cellular responses to stress, including ROS 27,54.
NF-κB has previously been implicated in the photobiomodulatory mechanism, since low fluences of near-infrared light activated NF-κB in mouse embryonic fibroblasts 55. Our primary objective was the discovery of RL-induced transcriptome modulation. Two previous studies by Kim et al. and Li et al. examined the effects of RL on HDF transcription using RNA-Seq and demonstrated similar regulation of genes/pathways related to MMPs, FOS, JUN, NF-κB, SMAD1/7, oxidative stress, and inflammation 37,56. Our comprehensive data set is a strength of this study and may serve as a reference for future research. This objective was limited by sparse prior research on photobiomodulation transcriptomics. This limitation restricts our enrichment analysis because photobiomodulation pathways may be under-represented in the major databases such as KEGG and GO. Our and others' research may contribute to the identification of photobiomodulation pathways 37,56. Another limitation of our study was that we did not use HDFs isolated from fibrotic tissue. However, fibrotic HDFs may lose their fibrotic phenotype after being removed from their in vivo fibrotic niche. Non-fibrotic HDFs may be a good analog as RL has similar anti-fibrotic effects in normal and keloid-derived HDFs 23,24. In future research, the transcriptomic effects on fibrotic skin from reconstructed three-dimensional, animal, and clinical models should be assessed 57,58. In vivo and tissue culture models may respond to RL phototherapy differently. In conclusion, we identified several genes that may contribute to the mechanism of RL treatment of fibrosis. MMP1 is a critical mediator of fibrotic disease that was modulated by RL treatment. We identified PRSS35 as a potential mechanism of RL anti-fibrosis due to its 30-fold increased expression. PRSS35 could be the focus of future photobiomodulation studies because of its previously limited characterization and profoundly differential expression in RL-treated samples. Our results suggest that RL has the potential to benefit patients with fibrosis by altering gene expression. Methods. Cell culture. Normal HDFs were obtained from the American Type Culture Collection (CRL-2617, CRL-2697, and CRL-2796) and Coriell Biorepository (AG13145). HDF donor samples were used per relevant guidelines and regulations. HDFs were sub-cultured in DMEM (Invitrogen; Carlsbad, CA) supplemented with 10% FBS (R&D Systems; Minneapolis, MN) and 1% antibiotic/antimycotic (Invitrogen). Cells were maintained in a humidified incubator with 5% CO₂ and 20% O₂. RNA was collected from 35 mm tissue culture dishes (Corning, Corning, NY) that were initially seeded at low confluency (2 × 10⁴ cells total; 4,000 cells per 1.77 cm² surface area) between passages four and seven 59. Twenty-four hours after seeding, samples were treated with RL, and RNA was collected at 0, 4, 12, and 24 h time points. HDF donors. RNA-seq was performed with total RNA samples collected from four commercially available HDF cultures obtained from three different anatomical sites: two from the abdomen, one from the forearm, and one from the lower leg (Table 1). RL treatment. HDFs were treated with RL as previously described. Briefly, an LED unit (Omnilux; Globalmed Technologies, Napa, CA) was utilized for all experiments. The LEDs have a rectangular aperture with dimensions 4.7 cm × 6.1 cm and emit visible red light at a wavelength of 633 ± 30 nm in the electromagnetic spectrum.
The light has a power density of 872.6 W/m² at room temperature and 10 mm from the bottom of the plastic culture dish. Cell cultures were treated with 320 J/cm² or 640 J/cm² of RL (3667 s for 320 J/cm² and 7334 s for 640 J/cm²) at approximately 34 °C. During RL treatments, the cells were exposed to environmental 20% O₂ and 412 parts per million CO₂ concentrations outside of the incubator. Controls were placed on plate warmers set to 34 °C and protected from light with aluminum foil to match the environmental conditions of the RL-treated samples. RNA isolation. Total RNA from HDFs was collected at 0, 4, 12, and 24 h after RL treatment. The miRNeasy kit (Qiagen; Germantown, MD) was used to isolate RNA from cell cultures following the manufacturer's suggested protocol. To briefly summarize, Qiazol reagent (Qiagen) was used to lyse cells, followed by chloroform extraction. The aqueous layer was obtained and mixed with 100% ethanol (Sigma; St. Louis). Spin columns further aided the separation of RNA and impurities from samples. Samples were treated with RNase-free DNase (Qiagen) to ensure no genomic DNA contamination. Finally, RNase-free water was used to elute sample RNA. All samples had RNA quality assessed by Tapestation 2200 (Agilent Technologies; Santa Clara, CA). All samples had RNA integrity number values of 9.9 or 10.0. Library preparation and sequencing. Figure 3. (A) RNA for RT-qPCR and RNA-Seq was collected separately; red and blue bars represent fold change for MMP1 from RT-qPCR and RNA-Seq, respectively. (B) Pearson correlation of MMP1 differential expression between RT-qPCR and RNA-Seq shows a high (R = 0.98) and significant correlation (p < .05). (C) MMP-1 protein secretion confirmation of RNA-Seq: culture supernatant was collected from RL and control samples from all four donors at 4, 12, and 24 h post-irradiation, and MMP-1 protein secretion was quantified using ELISA. (D) Pearson correlation between MMP-1 ELISA and RNA-Seq shows a high (R = 0.98), but not significant, correlation (p > .05). (E) 320 and 640 J/cm² RL immediately increased ROS generation as assessed by rhodamine-123 MFI: following RL phototherapy, HDFs were stained with DHR-123 (which converts to rhodamine-123 in the presence of ROS) for 30 min, then collected, and MFI was measured using flow cytometry. (F) 640 J/cm² decreased cell counts as assessed by crystal violet elution: following RL, HDFs were fixed and stained with crystal violet, and the optical density of eluted crystal violet served as a proxy for cell count. For each donor, the MMP-1 ELISA, ROS flow cytometry, and cell count experiments were performed with a technical repeat of at least 3. Relative (RL/control) MMP-1 expression, rhodamine-123 MFI, and cell counts were pooled from the 4 donor lines and compared to a hypothetical mean of 1 (indicating no difference between RL and control) using a one-sample t-test; P < .05 (*) was considered significant. Mapping and identification of differentially expressed genes. Sequencing reads were mapped to the UCSC human reference genome (GRCh37/hg19), and the resulting read counts were evaluated by STAR (version 2.5.2) 60. Gene expression level normalization and differential expression analysis were carried out by the DESeq2 (version 1.6.3) Bioconductor R package 26. To compare samples before and after treatment for the different cell lines, a multifactor design was applied in DESeq2, controlling for the effect of cell line differences.
Differential expression p-values were corrected for multiple testing using the false discovery rate (FDR) method. Enrichment analysis was performed with Enrichr 61,62. RT-qPCR. RT-qPCR experiments used materials and equipment from Bio-Rad (Hercules, CA). 100 ng of RNA was synthesized into cDNA with the iScript reverse transcription kit using a C1000 thermal cycler. RT-qPCR was performed with 1 ng of cDNA on the Bio-Rad CFX96 using SYBR green. ELISA. At 4, 12, and 24 h following RL irradiation, we quantified total human MMP-1 in collected HDF culture media using ELISA (R&D Systems) according to the manufacturer's guidelines. For each sample, the concentration of released MMP-1 was indexed to the total intracellular protein. We quantified collected intracellular protein using Bradford reagent (Bio-Rad). Optical density was measured for ELISA and protein concentration using a 96-well plate reader (Synergy 2, Biotek; Winooski, VT). For each donor, the experiment was performed in technical triplicate. Relative MMP-1 expression (RL/control) was pooled from the 4 donor lines and compared to a hypothetical mean of 1, indicating no difference between RL and control, using a one-sample t-test. P < 0.05 was considered significant. ** indicated p < 0.01. Cell count. Cell counts were assessed using crystal violet 63. Following treatment with RL, experimental and control samples were placed in a humidified incubator for 48 h. Cells were fixed with 4% formaldehyde (Sigma) and stained with 0.1% crystal violet (Thermo Fisher Scientific; Waltham, MA). 10% acetic acid (Thermo Fisher Scientific) was used to elute the crystal violet. Optical density of eluted crystal violet was quantified with a plate reader at 595 nm. For each donor, the experiment was performed with a technical repeat of n = 3-5. Relative counts (RL/control) were pooled from the 4 donor lines and compared to a hypothetical mean of 1 (indicating no difference between RL and control), using a one-sample t-test. P < 0.05 was considered significant (*). Free radical reactive oxygen species generation. For free radical ROS generation, HDFs were assayed using dihydrorhodamine-123 (DHR-123; Thermo Fisher Scientific). Cells were irradiated with RL and then treated with DHR-123 for 30 min. Non-fluorescent DHR-123 converts to fluorescent rhodamine-123 in the presence of ROS. RL-treated and control cells were detached with 0.25% trypsin-EDTA (Thermo Fisher Scientific), collected, and analyzed with flow cytometry (Fortessa; BD; San Jose, CA). Intracellular ROS generation was assessed immediately following irradiation (0 hours). Positive control cells were treated with 0.6 mM hydrogen peroxide (Thermo Fisher Scientific) for 30 min. ROS was quantified as the median fluorescent intensity (MFI) of rhodamine-123 using FlowJo Software (BD). For each donor, the experiment was performed with a technical repeat of n = 4 or 5. Relative MFIs of rhodamine-123 (RL/control) were pooled from the 4 donor lines and compared to a hypothetical mean of 1 (indicating no difference between RL and control), using a one-sample t-test. P < 0.05 was considered significant (*). Data availability. The datasets generated during and/or analysed during the current study are included within the manuscript or available from the corresponding author on reasonable request.
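The statistics described above repeatedly pool relative (RL/control) values from the four donor lines, compare them to a hypothetical mean of 1 with a one-sample t-test, and assess agreement between measurement platforms with a Pearson correlation. The short Python sketch below illustrates those two steps with made-up numbers; it does not use or reproduce the study data.

import numpy as np
from scipy import stats

# Illustrative (not measured) relative values, one per donor line: each entry is
# the RL/control ratio for a given readout, e.g. rhodamine-123 MFI.
relative_mfi = np.array([1.6, 1.4, 1.9, 1.5])

# One-sample t-test against the hypothetical mean of 1
# (1 = no difference between RL-treated and control samples).
t_stat, p_value = stats.ttest_1samp(relative_mfi, popmean=1.0)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Pearson correlation between two fold-change estimates for the same gene,
# e.g. MMP1 by RT-qPCR versus RNA-Seq at four time points (values illustrative).
qpcr_fold_change = np.array([1.0, 6.5, 9.0, 7.5])
rnaseq_fold_change = np.array([1.1, 6.0, 9.8, 7.0])
r, p = stats.pearsonr(qpcr_fold_change, rnaseq_fold_change)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")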
Long-term follow up after coronary sirolimus drug-eluting stent implantation for cardiac transplant arteriopathy. There has been limited published experience with the use of sirolimus drug-eluting stents in the setting of cardiac transplant vasculopathy. Systemic sirolimus has been shown to protect against progressive transplant vasculopathy, and sirolimus-eluting stents have shown to reduce the risk of in-stent restenosis in native coronary arteries. We report a case of acute myocardial infarction in the setting of advanced transplant vasculopathy, and describe the long-term results of sirolimus drug-eluting stent implantation.
Matchup Scheduling with Multiple Resources, Release Dates and Disruptions This paper considers the rescheduling of operations with release dates and multiple resources when disruptions prevent the use of a preplanned schedule. The overall strategy is to follow the preschedule until a disruption occurs. After a disruption, part of the schedule is reconstructed to match up with the preschedule at some future time. Conditions are given for the optimality of this approach. A practical implementation is compared with the alternatives of preplanned static scheduling and myopic dynamic scheduling. A set of practical test problems demonstrates the advantages of the matchup approach. We also explore the solution of the matchup scheduling problem and show the advantages of an integer programming approach for allocating resources to jobs.
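As a toy illustration of the kind of integer program the abstract alludes to for allocating resources to jobs (a sketch only, not the paper's formulation, and omitting the timing and release-date constraints of the full matchup problem), the following Python/PuLP model assigns each job to exactly one machine so as to minimize a generic reassignment cost subject to machine capacities. All job names, costs, and capacities are hypothetical.

from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

# Hypothetical data: jobs to be reassigned after a disruption.
jobs = ["J1", "J2", "J3", "J4"]
machines = ["M1", "M2"]
# cost[j, m]: assumed cost (e.g., deviation from the preschedule) of running j on m.
cost = {("J1", "M1"): 2, ("J1", "M2"): 4,
        ("J2", "M1"): 3, ("J2", "M2"): 1,
        ("J3", "M1"): 5, ("J3", "M2"): 2,
        ("J4", "M1"): 1, ("J4", "M2"): 6}
capacity = {"M1": 2, "M2": 2}  # max jobs per machine before the matchup point

prob = LpProblem("matchup_reassignment", LpMinimize)
x = {(j, m): LpVariable(f"x_{j}_{m}", cat=LpBinary) for j in jobs for m in machines}

# Objective: minimize total reassignment cost.
prob += lpSum(cost[j, m] * x[j, m] for j in jobs for m in machines)
# Each job is assigned to exactly one machine.
for j in jobs:
    prob += lpSum(x[j, m] for m in machines) == 1
# Machine capacity up to the matchup time.
for m in machines:
    prob += lpSum(x[j, m] for j in jobs) <= capacity[m]

prob.solve()
for (j, m), var in x.items():
    if var.value() == 1:
        print(f"{j} -> {m}")

Solving this with the bundled CBC solver prints one machine per job; a real matchup model would add timing variables so that the reconstructed schedule rejoins the preschedule at the chosen matchup point.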
Odd species of Nepticulidae (Lepidoptera) from the American rainforest and southern Andes. In addition to numerous new species that can be placed to genera, our recent study of a large collection sample of Nepticulidae (Lepidoptera) from Central and South America revealed a few odd-looking new species, the taxonomic position of which seems rather problematic and, therefore, preliminary. Here we describe three new species of pygmy moths (Nepticulidae) from the Amazonian rainforest (Venezuela) and southern Andes (Chile and Argentina) possessing uncommon morphology. We also provide the first photographic documentation of the Central American Acalyptris argentosa (Puplesis Robinson, 2000) with rather odd and hitherto unknown hindwing scaling. All species treated in the paper are extensively illustrated with drawings and (or) photographs of the adults and genitalia.
Summer was easy in childhood. For one, everything could be contained to one block. I spent many summers at my grandparents’ home in the Austin neighborhood while my parents worked during the day. As I got older, I began to spend more and more time inside, not because there was little to do outside, but because the containment of the block was no longer satisfactory. But as a child, it was easy to acquire as much of the goodness of summer in one block as it was to spend time moving from neighborhood to neighborhood and activity to activity. I’m thinking about the women who put up snow cone stands. I can’t remember how much they cost (probably not a lot). What I do remember is how sticky and messy they were and how that seemed unique to summer as well. Summer is a time of cold, meltable treats. A sweet relief comes in the form of sticky fingers and a messiness that was more or less acceptable. As an adult, I still face that: the sloppiness of summer, the ways in which we are subject to the heat in all of its glory and frustration. We wait all winter and spring for the heat and then, when it arrives, we forget everything else that comes with it. I’m thinking about barbecues as well. We are nearing the Fourth of July. Although Summer has technically just started, in our minds, it truly hits around Memorial Day. That is when the laziness of extra-long days feels acceptable. Barbecues are a means of indulgence in a manner that feels in opposition to the indulgence of winter holidays. For one, in summer, it feels easy to “keep going.” Once I stop eating during a holiday dinner, my relatives rarely question my actions. But in the summer, another plate can be found just waiting. Summer is a time of being young. And if that is out of grasp, it is a time of feeling young, too. When I think about my favorite songs of summer, I remember my favorite films nd television shows about youth. These films might not have taken place in summer, but they successfully invoked the best feelings of being young, and so songs like Space’s “Female of the Species” (as heard in My Mad Fat Diary) or Supergrass’ “Alright” (as heard in Clueless), feel especially right for the right now. And waiting too feels inherent to the spirit of the season. I sometimes forget that fall is quietly waiting. And so I too wait – in lines, in stores, at parties, for transportation – all for the chance to experience the things that were taken from us the other nine months of the year. What are those things? Maybe the sticky sweetness (like mentioned above) of gelato. A Sunday evening is sometimes spent waiting for a scoop or three of Black Dog Gelato, but it all feels worth it. Treats are available throughout the year, but warmth trumps deliciousness. Maybe too I’ll wait for a party or performance in a way that seems unfeasible in the winter. “I do NOT wait in lines,” I used to say to friends, not because I thought I deserved better treatment, but because the cold was too much to bear for a good time. But in the summer evenings, when the breeze is just right, I can wait in line longer than expected, all for the sort of magic that only brews when the days and nights are equally enjoyable.
Prevalence of malaria in HIV positive and HIV negative pregnant women attending antenatal clinics in south eastern Nigeria Introduction Globally, malaria in pregnancy is a public health challenge. Malaria and HIV are among the two most important diseases contributing to the global health burden of our time. HIV positive pregnant women are at increased risk of all the adverse outcomes of malaria in pregnancy. Objective The objective of this study was to compare malaria parasitaemia between HIV positive and HIV negative pregnant women attending antenatal clinics offering Preventing Maternal to Child Transmission (PMTCT) services in Enugu metropolis, south-eastern Nigeria. Methods A descriptive cross sectional study was conducted among 200 HIV positive and 200 HIV negative pregnant women attending antenatal clinics in Enugu. Two out of five hospitals that provide PMTCT services were selected through balloting. Finger pricked blood samples were collected and thick blood films were examined for malaria parasite using giemsa expert microscopy. A structured interviewer administered questionnaire was used for data collection. Data was analysed using SPSS version 22. Results The HIV positive pregnant women (76%) and HIV negative women (68.5%) studied were mostly in the age range of 25-34 years. Mean gestational age of HIV positive and HIV negative participants were 23.4±10.7 and 23.2±10.1 weeks respectively (P=0.001). The prevalence of malaria infection among HIV positive pregnant mothers was 81% (162/200) and 75% (150/200) among HIV negative pregnant women (P < 0.001). The HIV positive mothers had more moderate parasitaemia (86/200: 53.1%) compared to 43/200: 28.7% in HIV negative mothers (P<0.001). Even though more HIV positive mothers (54.5%) used insecticide treated nets ITNs during pregnancy compared to 41.5% in HIV negative mothers, moderate malaria parasitaemia was higher in HIV positive mothers. HIV positive nulliparous pregnant women had the highest rate of malaria parasitaemia (32/36: 88.9%). Conclusion Moderate malaria parasitaemia was higher among HIV positive pregnant women. All malaria preventive strategies should be intensified in pregnancy as ITNs provided little protection.
Dynamic portfolio rebalancing through reinforcement learning Portfolio management in financial markets involves risk management strategies and opportunistic responses to individual trading behaviours. Constructed optimal portfolios aim to have minimal risk with the highest accompanying investment returns, regardless of market conditions. This paper focuses on providing an alternative view on maximising portfolio returns using Reinforcement Learning (RL) by considering dynamic risks appropriate to market conditions through dynamic portfolio rebalancing. The proposed algorithm is able to improve portfolio management by introducing the dynamic rebalancing of portfolios with actively adjusted risk through an RL agent. This is done while accounting for market conditions, asset diversification, risk and returns in the global financial market. Studies have been performed in this paper to explore four types of methods, with variations in full portfolio rebalancing and gradual portfolio rebalancing, combined with and without the use of a Long Short-Term Memory (LSTM) model to predict stock prices for adjusting the technical indicator centring. Performances of the four methods have been evaluated and compared using three constructed financial portfolios, including one portfolio with global market index assets of different risk levels, and two portfolios with uncorrelated stock assets from different sectors and risk levels. Observed from the experiment results, the proposed RL agent for gradual portfolio rebalancing with the LSTM model for price prediction outperforms the other three methods, as well as the returns of the individual assets in these three portfolios. The improvements in returns using the RL agent for gradual rebalancing with the prediction model are about 27.9–93.4% over those of full rebalancing without the prediction model. It has demonstrated the ability to dynamically adjust portfolio compositions according to the market trends, risks and returns of the global indices and stock assets. Introduction In modern portfolio theory, portfolio optimisation is one of the objectives: to maximise the returns of a portfolio while minimising risks using diversification methods. Financial market risk analysis and behavioural risk studies are involved in optimising portfolios. There are two common strategies for financial asset allocation within a portfolio to manage market risk. One is strategic asset allocation (SAA), which attempts to balance risks and returns with different weightages for target asset allocations. The other is tactical asset allocation (TAA), which attempts to switch portfolio allocations to the most attractive asset proportions when certain financial market trend changes are predicted using market forecasting tools. In practice, a combination of SAA and TAA strategies is deployed in tandem to maximise their advantages by Asset Management firms (AMF), such as JPMorgan and Goldman Sachs. By combining SAA and TAA strategies, the asset allocation consists of a fixed percentage amount and a variable percentage amount in a portfolio depending on market conditions. In behavioural risk management, studies have been conducted to investigate how the returns of a portfolio may be affected by the risk adversity of individuals, such as portfolio managers in AMF. The risk adversity of an individual can range along a risk spectrum from risk averse at one end to risk seeking at the other end, with varying degrees of loss aversion and sensitivity.
According to prior research outcomes on the loss aversion, people usually are more sensitive to losses than gains. It will affect individual decision-makings and portfolio asset prices in financial markets. As such, the risk adversity of individuals may affect the financial market volatility. It is beneficial to explore techniques in artificial intelligence (AI) and machine learning algorithms as portfolio construction strategies in fund management involving SAA and TAA approaches. AI and machine learning algorithms have been utilised to maximise the returns of constructed portfolios with self-learning and less human interventions, such as evolutionary computation, genetic algorithms (GA), particle swarm optimization algorithm, fuzzy logic, reinforcement learning (RL), and recurrent reinforcement learning (RRL). Fuzzy neural network is also used for market risk prediction, with such information being useful for portfolio construction. Serrano presents the research work using random neural network (RNN) to predict stock market index prices. RL is a type of machine learning algorithm being used in various applications. The RL agent has learning capability through interaction with its environment. An action is decided by the RL agent, according to the current state in the environment. The action is going to change the current state into the next state. A reward is given to the RL agent for each action. A new action will be decided by the RL agent in the new state of the environment. The iteration repeats for the RL agent, aiming to achieve maximised total rewards. In some scenarios, multiple agents can learn and work collaboratively to change the environment to suit certain needs. RL is reported to solve problem statements of financial industry, such as pricing strategy optimization in insurance industry, bank marketing campaigns offering credit card services, and portfolio managements. RL has been utilised for trading of financial assets on the stock and foreign exchange market. Almahdi and Yang introduce RRL-based portfolio management method for computing and optimizing investment decisions with time efficiency by incorporating past investments actions in time-stacks. The experiments have been conducted for a portfolio with ten stocks selected from different sectors of S&P 500 in time frame of January 2013 to July 2017. Deep reinforcement learning (DRL) models with adjustable trading policy and stock performance indicator data have been presented for active portfolio management. A portfolio management system is depicted using RL with multiple agents, each of which trades its own subportfolio under different policy in current market states. The different actions of the multi-agent and diversified portfolios aim to diversify the risks and maximize the rewards of each agent. RL agents with two policy algorithms with Q-function for four states and five actions are reported, to re-allocate portfolio with two assets; one asset as S&P 500 Exchange Traded Funds (ETF), and the other as Barclays Capital U.S. Aggregate Bond Index (AGG) or a 10-year U.S. T-note. The discrete RL agent's actions to re-balance the two assets in the portfolio are taken quarterly, semi-annual or yearly without considering transaction costs and taxes. The performance of annual rebalancing frequency is observed to have better investment returns comparing to quarterly and semi-annual rebalancing frequency. 
A DRL and rule-based policy approach is presented for different versions of agents trading against each other in a continuous virtual environment. The signals of the relationships between actions and market behaviours are generated by risk curiosity-driven learning to improve the quality of actions. Its performance and profitability are analysed through experiments using eight financial assets individually. Park et al. derive a DRL trading strategy for multiple assets, with experiments performed using two portfolios: one consisting of three ETF assets from the U.S. stock market, and the other of three ETF assets from the Korean stock market. It reports that discrete action space modelling has more positive effects than continuous action space modelling, in terms of a lower turnover rate, and is more practical in real-world applications. The DRL model is utilised for algorithmic trading through learning from features of environmental states and financial signal representations to improve action decision-making. The weights of the features, including current financial product features, related financial product features and technical indicators, are adjusted and re-assigned based on the learning outcomes to enhance the accuracy. A framework and trading agent implemented with DRL are employed for algorithmic trading with a generic action set to adjust trading rules by learning the market conditions. The effectiveness of the framework is evaluated through individual experiments on three stock and two index assets separately. Many of the reported methods have certain assumptions, such as not considering transaction costs, whereas the profitability of an algorithm will be significantly impacted by transaction costs in practical scenarios. Most research in RL for stock trading normally predicts trading strategies and decisions that trade a fixed number of shares, according to various trading signals, trends, features and market conditions. As another challenge in RL trading research, it is usually more difficult to predict a varied number of shares for trading actions under different market conditions. An RL algorithm with continuous time and discrete states for policy optimisation has been introduced for managing a financial portfolio, which is characterised by transaction costs involving time penalisation. Portfolio rebalancing is performed through performance measurement using the RRL method and an adjusted objective function with consideration of transaction costs and the coherent risk of market conditions. Buy or sell signals are generated and asset allocation weights are adjusted according to market volatility. Actions to sell stocks and a stop-loss strategy are taken when market volatility is high, while actions to buy stocks are taken on newly generated re-enter signals. A portfolio consisting of five ETF assets in the time frame of January 2011 to December 2015 is selected for the experiments. Jeong and Kim combine RL and a deep neural network (DNN) for prediction by adding a DNN regressor to a deep Q-network, with experiments conducted on four different stock indices individually: S&P 500, KOSPI, HSI and EuroStoxx 50. It enables predictions with a different number of shares for each asset for the first time, compared to the fixed number of shares traded in other methods, which increases the trading profits.
A deep Q-learning framework is introduced for portfolio management consisting of a global agent and multiple local agents, each of which handles the trading of a single asset in the portfolio. The global agent manages the reward function for each local agent. The experiments are performed using a crypto portfolio consisting of four cryptocurrency assets, Bitcoin, Litecoin, Ethereum and Ripple, in the time frame from July 2017 to December 2018. The RL approach is also reported to be combined with the RNN to simulate the investment decisions of asset bankers on profit making in specific asset markets under different variables and configurations. This paper focuses on providing an alternative view on maximising portfolio returns using RL by considering dynamic risks appropriate to market conditions and transaction costs through portfolio rebalancing. The proposed RL agent aims to improve the returns of the portfolio Net Asset Value (NAV) by exploring four methods using a combination of full portfolio rebalancing and gradual portfolio rebalancing, without a price prediction model and with Long Short-Term Memory (LSTM) price prediction models. These four approaches will be presented and compared using three constructed financial portfolios. One of the portfolios consists of three global market indices with different risk levels. The other two portfolios consist of stock assets from different sectors of the NYSE and NASDAQ markets with the presence of mixed market trends, including bullish, bearish and stagnant conditions. The assets in the portfolios are as uncorrelated as possible. The performances of these four RL approaches for portfolio rebalancing will be discussed in this paper. Insights from this research will help portfolio managers to systematically improve the performance of portfolios by dynamically rebalancing asset allocations in tandem with changing market trends. The main contributions of this paper are as follows: (1) the portfolio rebalancing is performed by considering asset diversification, risks and returns using the combination of SAA and TAA strategies, and the investment allocations to each asset in the portfolio are dynamically adjusted by the RL agent at run time; (2) market information lags are usually caused by lagging technical indicators that are computed using historical price data, and the impacts of such lags on market trend detection have been analysed and mitigated using LSTM price prediction models in the portfolio rebalancing methods; (3) in the event of a wrong portfolio rebalancing action predicted by the proposed RL agents, the impacts of transaction/commission fees are analysed and compared using two different methods: full portfolio rebalancing and gradual portfolio rebalancing. The remaining parts of the paper are organised as follows: Sect. 2 describes the proposed RL portfolio rebalancing methodology. Section 3 presents the proposed RL agent for dynamic portfolio rebalancing and the corresponding experiments. Section 4 concludes the paper. 2 Configurations of RL The decision-making of RL agents is based on Q values. An RL agent aims to determine a policy π and maximise the long-term reward through a series of actions interacting with its environment. The accumulated return is R_t = r_{t+1} + γ r_{t+2} + γ² r_{t+3} + ⋯ + γ^{T−t−1} r_T, where R_t is the accumulated sum of rewards up to the terminal time T, γ is the discount rate in the range [0, 1], and r_t is the reward received at time t.
For the action a_t taken in the state s_t at time t, the Q-value under a policy π (i.e. the value of a state–action pair) is given by the corresponding expected return, Q^π(s, a) = E_π[R_t | s_t = s, a_t = a], where Q^π(s, a) is known as the action-value function, i.e. the Q-function. The set-up of the RL agent defined in this paper is presented in this section. Observable period The observable period used by the proposed RL agent is the market prices in the range 2014–2018. Actions of portfolio rebalancing are taken according to market trend reversals. A market trend reversal indicates the potential of a market to experience an up-trend or down-trend, together with the associated magnitude of the reversal. The direction of the trend is indicated by the sign of the trend-reversal potential function, which can be derived by means of the exponential moving average (EMA) and the Moving Average Convergence Divergence (MACD) of share prices. The price signal y_t is measured on trading day t. The EMA imparts a higher weightage ω on recent prices near the current trading day t_c: the weight applied to the price observed k trading days before t_c decays geometrically as ω(1 − ω)^k, where k denotes the number of past trading days from the current trading day. Generally, the EMA is capable of providing a responsive indication of price trends and fluctuations. Trend reversals are detected using the crossovers (i.e. intersections) of the MACD line and the signal line. The MACD line is the difference between the short-term EMA (12 periods) and the long-term EMA (26 periods), and the signal line is the 9-period EMA of the MACD line. The crossovers are monitored to obtain insights into the price trends. During a buy crossover, where the MACD line intersects the signal line upwards, it indicates that the market is entering a bullish period. Conversely, during a sell crossover, where the MACD line intersects the signal line downwards, it indicates that the market is entering a bearish one. An advantage of the MACD is that both the momentum and the trend can be determined in a single indicator. State An RL agent interacts with its environment at each time step t. The environment is represented in the form of a state s_t ∈ S, where S is the set of all available states. The state vector observed by the RL agent at time t is s_t = [EMA_1, EMA_2, MACD_1, MACD_2, Δt], where EMA_1 is the standardised 6-day EMA of 15 days of the high risk index; EMA_2 is the standardised 6-day EMA of 15 days of the medium risk index; MACD_1 is the normalised 6-day difference between the MACD line and the signal line of the high risk index; MACD_2 is the normalised 6-day difference between the MACD line and the signal line of the medium risk index; and Δt is the difference in the number of days from the previous market trend reversal.
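The exact standardisation and normalisation of the state components are only partially specified above, so the following is a minimal sketch under stated assumptions: a 6-day EMA over the last 15 closing prices z-scored with scikit-learn, the last 6 days of the MACD histogram (MACD line minus signal line) min–max normalised, and only the latest value of each indicator kept in the state. The function and variable names are illustrative, not taken from the authors' code.

```python
# Sketch of the state construction s_t = [EMA_1, EMA_2, MACD_1, MACD_2, dt].
# Assumptions (not fully specified above): z-scored 6-day EMA over the last 15
# closes, min-max normalised MACD histogram over the last 6 days, and only the
# most recent value of each indicator is used.
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler, StandardScaler

def macd_histogram(close: pd.Series) -> pd.Series:
    ema_fast = close.ewm(span=12, adjust=False).mean()
    ema_slow = close.ewm(span=26, adjust=False).mean()
    macd_line = ema_fast - ema_slow
    signal_line = macd_line.ewm(span=9, adjust=False).mean()
    return macd_line - signal_line

def build_state(high_risk_close: pd.Series, medium_risk_close: pd.Series,
                days_since_reversal: int) -> np.ndarray:
    state = []
    for close in (high_risk_close, medium_risk_close):
        ema6 = close.tail(15).ewm(span=6, adjust=False).mean().values.reshape(-1, 1)
        state.append(StandardScaler().fit_transform(ema6).ravel()[-1])   # EMA term
        hist6 = macd_histogram(close).tail(6).values.reshape(-1, 1)
        state.append(MinMaxScaler().fit_transform(hist6).ravel()[-1])    # MACD term
    state.append(days_since_reversal)                                    # delta-t term
    return np.asarray(state, dtype=np.float32)
```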
Action With the current state s_t as input, the RL agent takes an action a_t ∈ A(s_t), where A(s_t) is the set of possible actions that can be taken in the state s_t. For each action, a reward r_t is received to evaluate the action outcome, while the environment moves into the next state s_{t+1}. The actions of the RL agent are limited to four different portfolio adjustments: (1) increase the high risk assets' portfolio composition rate and, at the same time, reduce the composition of the other assets; (2) increase the medium risk assets' portfolio composition rate while reducing the composition of the other assets; (3) increase both the high and medium risk assets' portfolio composition rates while reducing the composition of the low risk assets; and (4) increase the low risk assets' portfolio composition rate while reducing the composition of the other assets. Reward structure In the environment, rewards are defined per step of the training. Each step refers to a market trend reversal detected in the period under study. The NAV reward comprises two components: the NAV change reward and the current NAV reward. The first component, the NAV change reward, is computed from the difference between the Changed composition and the Original composition, scaled by the TSF, where the Changed composition refers to the NAV obtained after a period of 10 days using the changed portfolio composition according to the actions of the RL agent; the Original composition is the NAV obtained after a period of 10 days using the original portfolio composition; and TSF denotes the Time Scaling Factor. The process is as shown in Fig. 1, where the difference between the Changed composition and the Original composition is visualised at time t = 2. The TSF takes values between 0.5 and 1.0 and reduces the degree of reward achievable over time. This places more emphasis on initial actions, as they have a greater impact on the final NAV due to the compounding effect. The second component, the current NAV reward, is computed by simple division of the current NAV by a constant of 10,000,000 to normalise the reward, where the current NAV is the NAV obtained after a period of 10 days using the changed portfolio composition. The total reward per step combines these two components. Therefore, by referring to the rewards received per step, the RL agent gains insight into the performance of its immediate action based on the current state of the environment. RL agent For the RL agent, a Q network is set up to determine the Q values of the actions in each state. The Q network is shown in Fig. 2; it consists of one input layer, one hidden layer and one output layer, and the size of the hidden layer is 100 neurons. The Q value of an action predicted by the Q network is determined by the Bellman equation, Q(s, a) = r(s, a) + γ · P(s′ | s, a) · max_{a′} Q(s′, a′), where s′ is the next state in the set S; a′ is the next action; Q(s′, a′) is the Q value of the next state s′; r(s, a) is the reward of the current state–action pair; the discount rate γ discounts the next Q value and is set to 0.99 in this paper; and P(s′ | s, a) is the probability of the state s′ occurring given s and a, which is set to a value of 1. Since the optimal policy is explored and followed by the agent, it aims to determine the best possible next action a′ in the state s′ that maximises the Q value.
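The exact forms of the reward equations are not reproduced above, so the sketch below makes two assumptions: the NAV change reward is the relative 10-day NAV difference scaled by a TSF that decays linearly from 1.0 at the first step to 0.5 at the last step, and the total reward is the sum of the two components. The Bellman target follows the settings stated above (discount 0.99, P(s′ | s, a) = 1).

```python
# Sketch of the per-step reward and the Bellman training target.
# Assumptions: relative 10-day NAV difference, TSF decaying linearly 1.0 -> 0.5,
# total reward = NAV change reward + current NAV reward.
def time_scaling_factor(step: int, total_steps: int) -> float:
    return 1.0 - 0.5 * step / max(total_steps - 1, 1)

def step_reward(nav_changed_10d: float, nav_original_10d: float,
                step: int, total_steps: int) -> float:
    nav_change_reward = ((nav_changed_10d - nav_original_10d) / nav_original_10d
                         * time_scaling_factor(step, total_steps))
    current_nav_reward = nav_changed_10d / 10_000_000  # normalisation constant from the text
    return nav_change_reward + current_nav_reward

def q_target(reward: float, next_q_values) -> float:
    # Q(s, a) = r(s, a) + gamma * P(s'|s, a) * max_a' Q(s', a'), with gamma = 0.99 and P = 1
    return reward + 0.99 * max(next_q_values)
```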
3 Proposed RL for dynamic rebalancing The proposed RL agent is used to dynamically rebalance a portfolio. In order to study and evaluate the performance of the RL agent, four different combinations of portfolio switching and price prediction models are considered as follows. Firstly, we present the proposed RL agent with full portfolio rebalancing and without price prediction. The full portfolio rebalancing method refers to changing the asset allocation completely, in a single step, to the highest composition setting for the asset with the highest potential. It aims to achieve a better return within a single trading day, and it is done whenever trend reversals are detected. Secondly, the proposed RL agent with full portfolio rebalancing can be improved using asset price prediction achieved by LSTM models. The LSTM price prediction models are used to re-centre the technical indicators and shift away the market information lag. Thirdly, the proposed RL agent is revised to incorporate a gradual portfolio rebalancing method without price prediction. The gradual portfolio rebalancing method changes the asset composition rates by k% per trading day, instead of a complete change to the highest composition rate for an asset in a single day. Fourthly, the RL agent with the gradual portfolio rebalancing method is further improved using the LSTM price prediction models. These four combinations of methods for the proposed RL agent will be described together with the corresponding experimental result analyses. Experiment set-up In order to better illustrate the performance analysis, the proposed RL agent is used to dynamically rebalance three portfolios with good diversification. One portfolio consists of global indices with assets at different risk levels. The other two portfolios comprise stock assets from different sectors in the U.S. market. These three portfolios are as follows. The first portfolio includes three market index assets with varying degrees of market risk, namely the IBOVESPA Index (BVSP), which is a Brazilian stock index, the TSEC weighted index (TWII), which is a Taiwanese stock index, and the NASDAQ Composite Index (IXIC), which is a U.S. stock index. The second portfolio consists of three stock assets from the S&P 500: WMT, AXP and MCD. The third portfolio consists of three stock assets from the NASDAQ market: MNDT, UNIT and UMBF. The time frames are selected due to the mixture of different market trends of the assets in the portfolios. This enables clearer observation of the dynamic portfolio rebalancing process of the proposed algorithm, so that the performance can be better evaluated in the experiments. Trend reversal periods are generated using the MACD, for which a buy crossover indicates an upward trend reversal while a sell crossover indicates a downward trend reversal. The experiments are set up with an initial NAV of $300,000 for each portfolio, initially split evenly by allocating $100,000 to each asset. The rebalancing approach is achieved through the combination of SAA and TAA strategies, with a base composition rate (BCR) given to each asset in the portfolio to prevent the portfolio from holding only a single asset and thereby preserve its diversification. In the experiments, the BCR is set to 0.1 for each asset in the portfolio. A 0.125% commission rate is imposed on each transaction made during the execution of portfolio rebalancing. TensorFlow is used in the following RL experiments, and the scikit-learn library is used for feature scaling in data preprocessing. Proposed RL with full portfolio rebalancing method The flowchart of the RL agent for the full portfolio rebalancing method is shown in Fig. 4. The adjustments of the portfolio composition rates are completed in a single day once a market trend reversal is detected. As indicated in the flowchart, the action policy of the RL agent for full portfolio rebalancing is set as follows: construct a portfolio consisting of n assets with the initial NAV equally allocated to each asset; upon the detection of an upward trend reversal for the i-th asset (1 ≤ i ≤ n), increase the composition rate of this asset to 1 − BCR × (n − 1) in a single trading day, where BCR refers to the base composition rate mentioned in Sect. 3.1 and is set to 0.1 in the experiments, so that full allocation is given to the i-th asset; and reduce the composition rate of each of the remaining assets to the BCR. This sets the allocation of the remaining assets at the BCR setting.
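A minimal sketch of this one-day full rebalancing rule is shown below; the function name is illustrative, not from the authors' code.

```python
# Full portfolio rebalancing in a single trading day: the asset whose upward
# trend reversal is detected is set to 1 - BCR*(n-1) and every other asset to
# the BCR (0.1 in the experiments).
def full_rebalance(n_assets: int, selected: int, bcr: float = 0.1) -> list[float]:
    rates = [bcr] * n_assets
    rates[selected] = 1.0 - bcr * (n_assets - 1)
    return rates

print(full_rebalance(3, selected=0))  # [0.8, 0.1, 0.1], as in the three-asset experiments
```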
In the experiments to dynamically rebalance the portfolios of this paper, there are three assets in each portfolio, i.e. n = 3. As such, when an upward trend reversal of one asset is detected, the action policy switches the composition rate of the corresponding asset to 0.8, while the composition rates of the remaining two assets of the portfolio are reduced to 0.1 each. The experiments have been performed for the RL agent with full portfolio rebalancing and no predictive model on the three portfolios. For the first portfolio, consisting of three market index assets, the experimental results are shown in Fig. 5a, with the black plots representing the portfolio NAV rebalanced by the RL agent. It is observed that the performance of the RL agent is not good, being about 20–25% lower than those of the BVSP and IXIC indices. The RL agent fails to exploit the bullish trends present in the BVSP and IXIC indices; it fails to rebalance the portfolio to increase the composition rates of the BVSP and IXIC indices. For the second portfolio, consisting of three stock assets from the S&P 500, the experimental results of full portfolio rebalancing without predictive modelling are shown in Fig. 5b. It is observed that the results of the RL agent in the black plots are not comparable to those of the individual stock assets in this portfolio: the NAV performance of the RL agent is about 17–28% lower than those of WMT, AXP and MCD. For the third portfolio, comprising three stock assets from the NASDAQ market, the experimental results of full portfolio rebalancing are shown in Fig. 5c. The NAV results of the RL agent in the black plots suffer losses due to the very different market trends of each stock asset. The performance of the RL agent with full portfolio rebalancing is worse than that of the individual stock assets in this portfolio, being about 16–116% lower than those of MNDT, UNIT and UMBF. The experimental results are not satisfactory, and the model of the proposed RL agent for full portfolio rebalancing needs to be further examined and improved. Market information lag After inspection, one of the reasons for the unacceptable performance of the RL model could be the information lag in the indicators used to detect market trend reversals. As shown in Fig. 6, the price trend prediction is performed based on the EMA of the historical prices over the past seven trading days, i.e. 0 ≤ t ≤ 6. The notation C1 models the EMA window used in the experiment. The true centre of the computed EMA is actually at t = 3 instead of t = 6, the day of action. Therefore, the derived price trend of the market at t = 6 in C1 is actually the true price trend of the market at t = 3, which is a 3-day lag in information. As such, for the indicator to give a better and more accurate estimate of the true market trend at t = 6, it is required to predict the prices for t = 7, t = 8 and t = 9. The true EMA of stock prices at t = 6 can then be computed judiciously, as shown in C4 of Fig. 6.
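As a concrete illustration of the MACD-based trend-reversal detection used above (12/26-period EMAs with a 9-period signal line), the following sketch flags buy and sell crossovers on a price series; the column names are illustrative. Because each EMA here is computed from past prices only, the indicator at day t effectively describes the trend near day t − 3, which is the information lag that the price prediction in the next subsection is meant to remove.

```python
# MACD trend-reversal detection: buy crossover when the MACD line crosses above
# the signal line, sell crossover when it crosses below.
import pandas as pd

def macd_crossovers(close: pd.Series) -> pd.DataFrame:
    ema_fast = close.ewm(span=12, adjust=False).mean()
    ema_slow = close.ewm(span=26, adjust=False).mean()
    macd_line = ema_fast - ema_slow
    signal_line = macd_line.ewm(span=9, adjust=False).mean()
    above = macd_line > signal_line
    return pd.DataFrame({
        "macd": macd_line,
        "signal": signal_line,
        "buy_crossover": above & ~above.shift(1, fill_value=False),
        "sell_crossover": ~above & above.shift(1, fill_value=False),
    })
```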
LSTM price prediction model In order to verify the hypothesis that the decreased performance of the RL agent is due to the information lag in the market, a further study is conducted. In our research, LSTM models are trained to predict the stock prices of the next three trading days, i.e. t = 7, t = 8 and t = 9. The prediction reduces the time lag of the EMA and MACD indicators used in the state of the RL environment and improves the information provided to the RL agent for its actions. The LSTM model for price prediction is shown in Fig. 7. In this model, 32 LSTM units are used between the input and output layers, with a lookback period of seven days and a learning rate of 0.001. The loss is computed using the mean squared error approach, and all features are normalised before being passed to the input layer. In the experiments of full portfolio rebalancing for the three portfolios, the LSTM models are trained using the historical prices in the range 2014–2018, and the prices of the next three days are predicted for each asset in the portfolios during the process of portfolio rebalancing. Using this method, the information lag is reduced, which improves the performance of the portfolio rebalancing.
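A minimal Keras sketch of the described model follows (32 LSTM units, 7-day lookback, learning rate 0.001, mean squared error loss). The optimiser, the single price feature and the joint three-day output layer are assumptions, since they are not stated explicitly above.

```python
# LSTM price-prediction sketch: 7 days of normalised prices in, predicted
# prices for the next three trading days out.
import tensorflow as tf

LOOKBACK, N_FEATURES = 7, 1  # a single price feature is an assumption

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(LOOKBACK, N_FEATURES)),
    tf.keras.layers.Dense(3),  # prices at t+1, t+2, t+3
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), loss="mse")
# model.fit(X_train, y_train, epochs=...)  # X_train shape: (samples, 7, 1)
```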
Full portfolio rebalancing with LSTM price prediction The set of experiments has been performed again for the three portfolios using the full portfolio rebalancing method with the LSTM prediction models, removing the 3-day market information lag. The experimental results with the LSTM price prediction model for the first portfolio are shown in Fig. 8a: the NAV of full rebalancing with the prediction model improved by about 5% compared to that of full rebalancing without the prediction model. The RL agent is able to better rebalance the portfolio according to market trends, e.g. at the beginning of 2015 and of 2018. Therefore, the reduced time lag improves the performance of the full portfolio rebalancing. However, the performance of the RL agent is still below those of the BVSP and IXIC indices by about 15–20%. For the second portfolio, the experimental results of full rebalancing with the price prediction model are shown in Fig. 8b. It is observed that the RL agent performs better than the method of full rebalancing without the prediction model. The NAV results are better than those of the individual stock assets from January 2017 to October 2017, but the performance drops in 2018 and becomes worse than those of AXP, MCD and WMT by about 11–21%. For the experiments of full rebalancing with the prediction model for the third portfolio, the results are shown in Fig. 8c. It is observed that the performance of the RL agent is improved by about 11% compared to the method of full rebalancing without the prediction model. Its NAV results are better than those of MNDT, but worse than those of UMBF by about 70%. Further improvements to the proposed RL model are required to enhance the performance. One aspect of the proposed RL agent to be looked into is the policy of actions. Currently, a full portfolio rebalancing method is used when a trend reversal is detected, where the composition rate of an asset can change drastically in a single step from the BCR to the value of 1 − BCR × (n − 1), and vice versa. For example, if the portfolio consists of three assets, the composition rate of an asset changes drastically from 0.1 to 0.8 for the selected action. For the full portfolio rebalancing method, if the proposed RL agent chooses a wrong action, the penalty will be maximised in periods of uncertainty due to the high commission fees, i.e. transaction costs, incurred by a big change in asset composition. Gradual portfolio rebalancing Therefore, it would be better if the RL agent adopted a gradual change in composition rate instead of a full switch in composition from the BCR to the value of 1 − BCR × (n − 1) within a single day. This reduces the penalty of mistakes and improves results. When a market trend reversal in the portfolio is detected, the revised RL actions for the gradual portfolio rebalancing method are as follows: (1) increase the high risk asset's portfolio composition rate by k% per day, where 0 < k% < 1 − BCR × (n − 1), and reduce the composition of the other assets, until the BCR is reached or another trend reversal is detected; (2) increase the medium risk assets' portfolio composition rate by k% per day and reduce the composition of the other assets, until the BCR is reached or another trend reversal is detected; (3) increase both the high and medium risk assets' portfolio composition rates by 0.5 × k% per day and reduce the composition of the low risk assets, until the BCR is reached or another trend reversal is detected; and (4) increase the low risk assets' portfolio composition rate by k% per day and reduce the composition of the other assets, until the BCR is reached or another trend reversal is detected. Using the gradual portfolio rebalancing method, when a market trend reversal is detected, the composition rate of the corresponding asset is changed by k% per day. The portfolio rebalancing continues for a few days, until this asset reaches the value of 1 − BCR × (n − 1), or the remaining assets reach the BCR, or another trend reversal is detected. As such, changes in the portfolio composition are less sudden. Consecutive trend reversals within short time intervals incur a lesser penalty in commission charges due to the slower composition change. Additionally, mistakes made by the RL agent are less costly, and less commission is incurred, if the next trend reversal is close by. However, using this gradual portfolio rebalancing method, the reactions to significant short-term bullish and bearish trends are slower. The proposed RL agent may not be able to rebalance the portfolio fast enough either to exploit a large bullish trend or to protect against a large bearish trend. Therefore, there is a trade-off between the value of the k% change per day and the reaction latency. After a few experiments to fine-tune the value of k with three assets in the portfolio, the gradual portfolio rebalancing method exhibits better performance when k% is set at 30%. As such, in the experiments of the gradual portfolio rebalancing method in this paper, the change of the composition rate for the corresponding asset is set to 30% per day.
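A minimal sketch of one day of this gradual adjustment follows. How the reduction is distributed among the other assets is not specified above, so a proportional reduction above the BCR floor is assumed, and the function name is illustrative.

```python
# One day of gradual rebalancing: move the selected asset by at most k towards
# 1 - BCR*(n-1); take the increase from the other assets, never below the BCR.
def gradual_rebalance_step(rates: list[float], selected: int,
                           k: float = 0.30, bcr: float = 0.1) -> list[float]:
    n = len(rates)
    target = 1.0 - bcr * (n - 1)
    new_selected = min(rates[selected] + k, target)
    freed = new_selected - rates[selected]
    others = [i for i in range(n) if i != selected]
    reducible_total = sum(max(rates[j] - bcr, 0.0) for j in others)
    new_rates = list(rates)
    new_rates[selected] = new_selected
    for i in others:  # proportional reduction above the BCR floor (assumption)
        share = max(rates[i] - bcr, 0.0) / max(reducible_total, 1e-12)
        new_rates[i] = rates[i] - freed * share
    return new_rates

print(gradual_rebalance_step([1/3, 1/3, 1/3], selected=0))  # ~[0.633, 0.183, 0.183]
```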
Two more sets of experiments have been conducted for the gradual portfolio rebalancing method: one without the LSTM price prediction model, and the other with the price prediction model to remove the 3-day market information lag. The experiments of the RL agent for the gradual portfolio rebalancing method without the prediction model on the three portfolios are discussed in the next three paragraphs, with the results shown in Fig. 9 (a: first portfolio, b: second portfolio, c: third portfolio, each with gradual rebalancing without the prediction model). For the first portfolio, Fig. 9a shows that the RL agent is able to adopt a more risk-averse approach by increasing the composition rate of the IXIC index asset at the end of 2014, which prevents further decreases in the portfolio NAV. Additionally, at the beginning of 2018, it adopts a risk-seeking stance by increasing the portfolio composition of the BVSP index asset, which leads to a substantial increase in NAV. However, at the end of 2018, it does not rebalance to the IXIC index asset in time to exploit the rapid rise in the IXIC index. Overall, the RL agent has an improved performance and a better risk profile, as its movements are less volatile than those of the BVSP index in the first portfolio. The experimental results of the second portfolio are shown in Fig. 9b. The NAV results of the RL agent (black plots) for gradual portfolio rebalancing without the prediction model are better than those of full portfolio rebalancing without the prediction model by about 12%, and marginally better than those of full portfolio rebalancing with the prediction model. For the experiments of the third portfolio, the results are shown in Fig. 9c. It is observed that the RL agent performs better than its results for full portfolio rebalancing without the prediction model by about 41%, and marginally better than those of full portfolio rebalancing with the prediction model. Similarly, for the gradual portfolio rebalancing method with the prediction model, the experiments performed for the three portfolios are presented in the next three paragraphs. The experimental results of the first portfolio using gradual portfolio rebalancing with the LSTM prediction model are shown in Fig. 10a. It is observed that the proposed RL agent using the prediction model further improves the portfolio NAV by around 5% over that achieved without the LSTM prediction model. Figure 10a shows a tendency for the RL agent to increase the portfolio composition of the IXIC index asset, which leads to the similarity in shape between the portfolio NAV curve and the IXIC index. For the second portfolio managed by the RL agent using gradual portfolio rebalancing with the prediction model, the experimental results are shown in Fig. 10b. It is observed that the RL agent performs better than all individual stock assets in this portfolio by about 8–14%. The RL agent successfully rebalances the composition rates of the portfolio to ride the upward market trends in 2017 and after February 2018. For the experiments of the third portfolio, the results of the RL agent using gradual portfolio rebalancing with the prediction model are shown in Fig. 10c. The RL agent outperforms each individual stock asset in 2018 by about 9–100%. Observed from Fig. 10a–c, the benefit of the reduced commission fees is larger than the effect of the RL agent not being able to exploit bullish trends and protect against bearish trends. Discussions As discussed previously in Sect. 3, there are four different methods of the proposed RL agent, i.e. full portfolio rebalancing without the LSTM prediction model, full portfolio rebalancing with the LSTM prediction model, gradual portfolio rebalancing without the LSTM prediction model, and gradual portfolio rebalancing with the LSTM prediction model. Experiments have been performed using the three portfolios consisting of market index assets and stock assets with different risk levels from different sectors.
It is observed from the experimental results that the RL-rebalanced portfolios are able to switch among assets according to the market trends of each asset to increase profits, while considering the corresponding market risks in the experiment period. In order to better visualise the quantitative performance enhancements, the comparison results of these four RL methods for the first portfolio are shown in Table 1, and the comparison results for the second and third portfolios are shown in Tables 2 and 3, respectively. The percentage of NAV return at the end of 2018 refers to the increment of the portfolio NAV at the end of 2018 relative to the initial NAV at the beginning of the experiment period; it is computed as NAV return % = (NAV at the end of 2018 − initial NAV) / initial NAV × 100%, where the initial NAV is $300,000. The other variable shown in Tables 1, 2 and 3 is the percentage of NAV max drop in the experiment period, which refers to the percentage of the maximum NAV decrement within the time frame. It is observed from Tables 1, 2 and 3 that the performance of the portfolio NAV managed by the four different types of RL agent keeps improving, with better NAV returns and smaller maximum NAV drops over the experiment period. As observed in Table 1 for the 1st portfolio, although their NAV return percentages are lower than that of the BVSP index, the RL agent methods show a better percentage of NAV max drop and much lower volatility. Here, we compare the RL rebalancing strategies against simple Buy and Hold strategies for the three underlying assets, namely BVSP, TWII and IXIC. This illustrates the capability of the proposed RL agent to maximise profits and to handle high risk assets well, such as the BVSP index asset. As shown in Table 2 for the 2nd portfolio, the four RL agents exhibit the same trend of improving the NAV return of the portfolio. The gradual portfolio rebalancing with the LSTM prediction model achieves the best return, at 63.3%, higher than those of the individual assets AXP, MCD and WMT in this portfolio, and also better than the previously reported results for AXP, MCD and WMT when considering their hourly trading returns and corresponding portfolio weights. The 3rd portfolio results shown in Table 3 indicate the same trend in NAV returns for the four RL agents, with the gradual portfolio rebalancing with the LSTM prediction model obtaining the best returns. It is also observed that this portfolio exhibits larger volatility, with negative returns and larger percentages of NAV drops for the portfolio assets, caused by the mixture of different market trends of the individual stock assets in the experiment time frame. The RL-based approach to rebalancing a portfolio consisting of assets with different risks allows opportunistic attempts to benefit from market trend reversals. The main disadvantage of such an opportunistic strategy is the commission leakage, which reduces the overall NAV. It can fail if any of the underlying assets experiences a prolonged bullish market trend, where a simple Buy and Hold strategy works better over that period since it incurs less commission leakage. The gradual portfolio rebalancing with the prediction model suffers the least maximum NAV drop when compared against the simple Buy and Hold strategy for the three constituent assets, as well as against the other three variants of the portfolio rebalancing strategies, as shown in Tables 1, 2 and 3.
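For concreteness, the two evaluation metrics reported in Tables 1, 2 and 3 can be computed as in the following sketch. The NAV return follows the formula above; the exact max-drop formula is not reproduced in the text, so a running-peak drawdown is assumed.

```python
# Evaluation metrics: NAV return at the end of the period relative to the
# initial NAV ($300,000), and maximum percentage NAV drop (assumed here to be
# a running-peak drawdown).
def nav_return_pct(nav_series: list[float], initial_nav: float = 300_000) -> float:
    return (nav_series[-1] - initial_nav) / initial_nav * 100

def nav_max_drop_pct(nav_series: list[float]) -> float:
    peak, max_drop = nav_series[0], 0.0
    for nav in nav_series:
        peak = max(peak, nav)
        max_drop = max(max_drop, (peak - nav) / peak * 100)
    return max_drop
```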
Conclusions In this paper, the proposed RL agent has demonstrated the ability to dynamically adjust the portfolio composition rates according to the market trends, risks and returns of each asset throughout the periods under study. Four different methods for the proposed RL agent are discussed and evaluated, including full and gradual portfolio rebalancing, without and with price prediction models for technical indicator centring. The experiments for these four methods have been conducted and evaluated using three portfolios: one portfolio of market index assets, and two portfolios of stock assets from the NYSE and NASDAQ markets. Observed from the experimental results presented in Figs. 5, 8, 9 and 10 and Tables 1, 2 and 3, the RL agent using gradual portfolio rebalancing with the LSTM prediction model performs the best, as it uses appropriate trading behaviours in gradually adjusting the portfolio composition instead of switching portfolio compositions in a single trading day. The performance improvements of the gradual rebalancing with the prediction model are about 27.9–93.4% over the full rebalancing without the prediction model. Its performance is also higher than most of the individual assets in these three portfolios, except for the BVSP market index. The experiments illustrate that a properly tuned RL agent, with or without an LSTM price prediction model to centre the technical indicators, can utilise dynamic rebalancing with adjusted risks to improve portfolio returns. Thus, the strategy of dynamic portfolio rebalancing with actively adjusted risk, coupled with the concepts of the SAA and TAA strategies, is shown to work well by the proposed algorithm. Future work on the RL agent can aim to improve the stock prediction model used to reduce the time lag of the technical indicators, to examine the issue of optimising the number of trades to reduce commission leakage, and to examine the effectiveness of other RL techniques such as Actor–Critic, Experience Replay or Double Q-learning in dynamic portfolio rebalancing. Funding Not applicable. Availability of data and material Data are available upon request. Code availability Code is available upon request. Declarations Conflict of interest The authors declare no conflict of interest. Consent for publication Yes. All authors agree to the publication. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
CYBERJAYA, Feb 12 (Bernama) -- Animation not only features entertaining characters and movements but has become one of the most important medium of today's world to attract audiences and facilitate message delivery. Realizing that, the Communications and Multimedia Consumer Forum of Malaysia (CFM) is providing an opportunity to challenge budding and established local animation enthusiasts who want to showcase their talent by participating in CFM ANIMATCH 2019 contest which offers a cash prize of RM8,500 for the champion. Second and third prize winners will receive a cash prize of RM5,000 and RM3,000 respectively. The 7 finalists will each receive a cash prize of RM500.
Emerging from his self-imposed cone of silence, RNC Chairman Michael Steele gave an interview Wednesday in which he said he would consider a run for president and claimed to have strategically planned his recent confrontation with radio talk host Rush Limbaugh. Speaking to CNN's Don Lemon, Steele said he "would think about" running for the White House but only if "that is where God wants me to be at that time." "I would have to have a very long conversation with the wife and kids," he said, "because this business is not a fun thing. Our politics today does not incite or inspire someone to make that sacrifice, because the way our politics is played out today, in all honesty, is very ugly." That statement, while curious considering Steele's early, rocky tenure at the RNC, was not the only newsworthy proclamation made. Later, he was asked to look back at his tiff with Limbaugh, in which he claimed that the bombastic radio personality was "incendiary" and "ugly" only to backtrack on his words. It looked embarrassing for the Maryland Republican at the time. But Steele contended that it was all a part of a grand "strategic" plan. "I'm very introspective about things. I'm a cause-and-effect kind of guy. So if I do something, there's a reason for it... It may look like a mistake, a gaffe. There is a rationale, there is a logic behind it," he said. "I want to see what the landscape looks like. I want to see who yells the loudest. I want to know who says they're with me but really isn't." "It helps me understand my position on the chess board. It helps me understand, where, you know, the enemy camp is and where those who are inside the tent are," Steele added. "It's all strategic."
Caitlin Johnstone: The face of the Alt-Left Over the past few days, Banter writer Jeremy Fassler and I have been engaged in a war of words with far left, self described "rogue" journalist Caitlin Johnstone. Fassler mentioned Johnstone, a writer and one-time astrologer from Melbourne, Australia, as an example of the increasingly insane Alt-Left community continuing to propagate provably false conspiracy theories about Russia's interference with the U.S. election and the monstrously offensive lie that the DNC had Seth Rich murdered because he had leaked emails to Russia. Her writing can basically be characterized as an astonishing display of emotionally manipulative narcissism, devoid of logic, fact or sanity that would be laughed out of any reputable media outlet. But this is the era of the internet, and apparently anyone can make a buck pretending to be a journalist. Johnstone believes herself to be a major player in the war against the evil, amorphous "Deep State" trying to blow up the world (yes, really) by creating a nuclear conflict with Russia. Johnstone was absolutely livid about the piece Jeremy published, and accused the Banter of threatening her life by accusing her of writing for Russian propaganda outlets like Russia Insider (because of course she is so vital to the cause). In fairness to Johnstone, it appears she hadn't actually written for Russian propaganda outlets specifically, so we amended the article to reflect the fact that she had instead knowingly allowed them to publish her work there (which she denied, despite her email clearly showing she knew). Either way, it was a trivial point and not the purpose of Fassler's original article, which was to expose Johnstone's ludicrous "journalism" and the vile conspiracy theories she is helping perpetuate in far left (and right) wing circles. We've received dozens of emails from Johnstone angrily threatening to sue us for lying about her, each one more unhinged than the last. As well as tweeting out private emails from me (apparently showing me "abusing" her), she is now promising to make taking us down her "life's work", a threat that would be funny if it wasn't so sad. Generally speaking, I don't pay too much attention to low hanging fruit like Johnstone, but I do feel her writing provides an almost perfect example of the type of paranoid conspiracy fueled nonsense creating great instability in journalism today. The half-baked, lazily researched fear mongering she engages in is this era's iteration of the 9/11 conspiracy theories that plagued the internet in the early 2000's. Unsurprisingly, Johnstone is herself part of the 9/11 conspiracy theory movement, having penned articles for the '9/11 Truth Action Project', a site dedicated to promoting ridiculous conspiracies about the US government's involvement in the attacks on 9/11 (we'll wait for Johnstone to angrily deny she knowingly wrote for them shortly after this is published). This all might seem like a complete waste of time, but I do think it is worth looking at how the mind of a paranoid conspiracy theorist manipulates evidence to fit their disturbed world view and attract other vulnerable people to their cause.
Incidentally, I don't think Johnstone's intentions are bad -- she likely believes herself to be a noble crusader for the truth, waking up the sheeple brainwashed by the corporate media with her searing insights and fearlessness. I actually sympathize with her somewhat, as we do live in highly uncertain times with an bewildering number of information sources vying for our attention, and she clearly isn't handling it all that well. However, she has chosen to spread her paranoid delusions on the internet in an act of sheer narcissism to a relatively wide audience, and has displayed shockingly little self awareness and astonishing levels of aggression when called out for her bullshit. Let's take a look at the mind blowing opening sentence of her truly deranged piece propagating the Seth Rich conspiracy theory: A statement purportedly authored by Seth Rich’s parents has been published in the CIA-funded Washington Post titled “We’re Seth Rich’s parents. Stop politicizing our son’s murder.” I don’t really know what is meant by this slogan about “politicizing” Seth Rich’s murder which mainstream outlets keep repeating and which Rich’s parents have oddly begun parroting, but speaking for myself I am not pushing any political agenda at all by reporting on the Seth Rich case. I’m pushing the prevention of a world-ending nuclear holocaust. Right off the bat, Johnstone wants her readers to believe that the CIA -- apparently in collusion with the murderous DNC -- directed the Washington Post to print a fake news story quoting Seth Rich's parents to direct people away from the shocking truth that their son leaked emails from the DNC, not the Russian government. Not only that, "rogue journalist" Caitlin Johnstone is not only here to save stupid Americans from themselves, but the whole world! The evidence for this? Amazon's Jeff Bezos, who bought the Washington Post, got a contract for Amazon Web Services to build servers for the CIA. That's it. According to Johnston this means the CIA now controls the Washington Post, and forced the globally respected, award winning media outlet to print fake news stories to cover up their murders. Of course Johnstone isn't actually saying this overtly, but that's what she wants her susceptible readers to think. Let's be completely clear about this. Seth Rich was not murdered by the DNC and was not the email leaker. This is not debatable anymore as the evidence for this is completely overwhelming. Only halfwits like Johnstone and Sean Hannity are continuing to spread the monstrous lie, and if there was justice in the world, they would be sued into oblivion for profiting off of it. The rest of Johnstone's piece is equally as insane, devoid of context and atrociously researched. Johnstone is courageously playing daring truth seeker over 10,000 miles away in another country, and her addled mind simply wanders from one conspiracy theory to the next, tying them together to create a grand narrative pitting the evil forces of moderate liberalism, MSNBC and people who work in government against brave truth seekers like herself: The still unproven accusation that the DNC emails released by WikiLeaks were originally taken by Russian hackers was what began the manufacturing of support for these escalations. Americans generally didn’t think much about Russia until the mainstream media started telling them to, but now even local town halls which have nothing to do with foreign policy are dominated by this dangerous Russia hysteria. 
It was these hacking allegations that manufactured support for Obama's provocative sanctions and increase of troops along Russia's border at the end of his term, which Rachel Maddow has openly said cannot be pulled back without making Trump appear guilty of collusion with the Kremlin. Do you see how this works? Does anyone get this? The fact-free Russia hysteria is being used to pressure Trump into maintaining these omnicidal tensions in the Baltic region, Ukraine and Syria which could blow up any second and lead to a chain of events which see a nuclear warhead being deployed by either side accidentally, on purpose, or a mixture of the two in the chaos of armed conflict, and once one goes off, they all do.

There is little point trying to convince someone like Johnstone of what every sane person in the world knows to be almost certainly true: that Russia hacked the DNC and attempted to swing the election in favor of Donald Trump. Why? Because she so desperately doesn't want it to be true that she will accept no evidence, no matter how compelling, that interferes with her narrative. Johnstone and other conspiracy theorists appear to genuinely believe that repeating that the Russia hacking story is "falling apart" over and over and over again will magically make it so. It isn't falling apart -- the evidence grows stronger by the day, and Johnstone and Co. are looking more idiotic by the minute. The truth is, Caitlin Johnstone is a fantasist playing sleuth detective from her bedroom in Australia, and enjoying all the attention she is getting. She will no doubt use this article to portray herself as a victim of a grand conspiracy -- a brave soldier in the war against the McCarthyite press, and a forceful advocate for Truth, Justice and the Australian(?) Way. Her paying fans will lap it all up, and she'll march on spouting more nonsensical garbage to a crowd that has now completely isolated itself from objective reality. This would be amusing if it weren't causing so much damage to real journalists dedicated to doing real journalism. I'm no fan of the corporate media, and I believe it is in dire need of a radical shake-up. But screeching lunatics like Johnstone do not have the public's best interest at heart -- particularly when they don't even live in the country they spend all day blogging about -- and do not care about honestly sifting through evidence and putting forward intellectually honest arguments, as an actual journalist would do. They care about themselves and the attention they can get from dark sermons about nefarious forces that only they understand. This isn't about "the truth"; it's about Caitlin Johnstone and her little vanity project on the internet. If Johnstone wants "to go to war" with us and do as much damage as she possibly can to the Banter, she's more than welcome. We have far bigger fish to fry, so I won't be paying her any more attention after this. And as every conspiracy theory loon will find out at some point, the only person they're really doing damage to is themselves.

Update: A previous version of this article stated that Johnstone had not trained as a journalist. According to her, she graduated with a degree in journalism in Australia in 2003, although there is no mention of this on her Medium account or profiles on any of the sites she has written for. Regardless, the Banter maintains that Johnstone is not, by any meaningful definition of the word, a journalist.
Just Smart or Just and Smart Cities? Assessing the Literature on Housing and Information and Communication Technology ABSTRACT Housing issues, including affordability, instability, and the search for available units, present ongoing challenges for urban inhabitants. Supporters claim information and communication technology (ICT) can solve housing problems through increased efficiency, transparency, and the creation of smart cities. However, little is known about the actual use and application of ICT data on housing issues. This article reviews and assesses recent empirical research involving housing and ICT data. Using Web of Science to identify relevant articles, we find most studies focus on housing search and prices or home sharing, which partly reflects the availability of data in these domains. Few articles use ICT data to explore housing challenges for economically vulnerable, historically disadvantaged, or marginalized groups. We discuss concerns about representation in ICT data related to housing and argue for more attention to the needs of vulnerable groups to help build more inclusive smart cities.
SARS-CoV-2 neutralizing activity of ozone on porous and non-porous materials

The COVID-19 pandemic has generated a major need for non-destructive and environmentally friendly disinfection methods. This work presents the development and testing of a disinfection process based on gaseous ozone for SARS-CoV-2-contaminated porous and non-porous surfaces. A newly developed disinfection chamber was used, equipped with a CeraPlas™ cold plasma generator that produces ozone during plasma ignition. A reduction of more than log 6 of infectious virus could be demonstrated for virus-contaminated cotton and FFP3 face masks as well as glass slides after exposure to 800 ppm ozone for 10–60 min, depending on the material. In contrast to other disinfectants, ozone can be produced quickly and cost-effectively, and its environmentally friendly breakdown product oxygen does not leave harmful residues. Disinfection with ozone could help to overcome delivery difficulties of personal protective equipment by enabling safe reuse with further applications, thereby reducing waste generation, and may allow regular disinfection of personal items with non-porous surfaces.

Introduction

SARS-CoV-2, the causative coronavirus agent of the infectious disease COVID-19, is mainly transmitted via respiratory droplets and aerosols. Direct contact with virus-contaminated surfaces such as cell phones, computer keyboards or door handles can also lead to infections, and fecal-oral transmission has been reported. The survival time of SARS-CoV-2 on various surfaces has been described, with recovery of infectious virus for up to 28 days when dried on non-porous surfaces such as glass or metal at 20 °C and 50 % relative humidity. Another study recovered SARS-CoV-2 from plastic surfaces for up to 28 days at room temperature (RT) and 40–50 % relative humidity. On the outer layer of surgical masks infectious virus could be recovered after 7 days. One of the strategies for protecting against transmission is the use of personal protective equipment (PPE), including face masks and eye protection. During certain phases of the COVID-19 pandemic, face masks became sparsely available. The single use of PPE created a great demand, resulting in critical delivery delays of weeks and months. On the other hand, the extensive use of face masks and other protective items has led to a new form of massive waste generation. The disposal of huge amounts of used PPE components is an organizational challenge and a previously underestimated hazard for the environment. Due to the high stability and rapid transmission of the virus, the shortage of PPE and the environmental pollution, an easy-to-use and sustainable disinfection method is needed that can make an important contribution to combating the pandemic and protecting the environment through the safe recycling of PPE. Ultraviolet (UV) irradiation, vaporized hydrogen peroxide, moist heat, microwave-generated steam processing and liquid chemicals have all been reported to sterilize PPE, but each method has its disadvantages. Any kind of mask comprises a combination of various materials, each of which is differently sensitive to chemical or radiation treatments. For example, liquid treatments such as alcohols require drying time, may cause oxidation, e.g. at metal clamps, and lead to a loss of filtering performance.
Some materials are sensitive to heat or chemicals, and UV light is unsuitable for materials with complex structures because areas shaded from the UV light will not be disinfected. (Abbreviations: BSL3, biosafety level 3; FFP3, filtering facepiece 3; FCS, fetal calf serum; PPE, personal protective equipment; VI, virus input; PFU/mL, plaque-forming units per mL.) Some masks also include electrostatically charged filter materials that may be adversely affected or even discharged by various treatments. An alternative is the use of the strongly oxidizing gas ozone. Industrially, ozone is produced on a large scale for water purification, paper and pulp processing, disinfection of plant and animal products as well as sterilization of medical supplies. It reacts with most elements of the periodic system except for noble metals, fluorine and the inert gases. The antimicrobial activity of ozone has been reported for a broad range of bacterial targets on surfaces such as glass, plastic or steel. Its antiviral activity has been demonstrated for targets in enveloped and non-enveloped viruses, including the viral capsid, specific viral attachment epitopes and viral DNA/RNA. Ozone disinfection of an N95 respirator has already been reported for Pseudomonas aeruginosa, which was selected as a test organism because of its spore-forming capacity and high resistance to disinfection processes. It was assumed that SARS-CoV-2 would likely be more susceptible to ozone disinfection than the other species tested. Exposure to ozone did not show significant changes in the filtering capacity of the N95 respirator after 10 cycles. Ozone disinfection of artificially SARS-CoV-2-contaminated KF94 face masks, which are similar to N95 respirators, has been reported recently. Moreover, the virus-inactivating activity of ozone was demonstrated for different metals contaminated with a corona pseudovirus and HuCoV-229E, such as stainless steel, nickel and copper, as well as glass. Since ozone dosages for disinfection of different surfaces vary, a process is needed that is suitable for hard materials and PPE. To address the need for safe and easy disinfection an experimental disinfection chamber has been developed (TDK Electronics GmbH & Co OG, Deutschlandsberg, Austria). The heart of the chamber is the patented cold plasma generator (CeraPlas™ element), the function of which is based on piezoelectric direct discharge (PDD). This converts a low periodic input signal into a high output voltage via piezoelectric coupling effects and enables the ionization of the surrounding gas at atmospheric pressure and ambient temperature. The generator provides a high ionization rate and an efficient ozone generation rate. Advantages are the low energy consumption during the ignition of the cold plasma in air at atmospheric pressure, the low thermal load for test materials (below 50 °C) and the compact dimensions. In addition, ozone generation enables the elimination of unpleasant odours. Cold plasma applications have increased in a variety of different fields over the last decades, including the automotive industry, medical devices, biomedical applications, skin care and surface treatments. A relatively new application of cold plasma is the field of virus inactivation.

[Displaced figure caption (apparently Fig. 2): Effect of ozone treatment on cotton masks. Heat-drying (5 min at 40 °C) alone or in combination with ozone treatment for 10 min, compared to positive controls (pos_Ctrl; cells infected with the same virus copy numbers as loaded on the masks). Cq values of 6 samples for each condition (5'_40 °C, 5'_40 °C_10'_ozone, pos_Ctrl) are shown. Non-infected cells (2 samples) served as negative controls (neg_Ctrl). Nd: not detectable.]

The aim of this study was to evaluate the disinfection capacity of ozone, generated within a newly developed disinfection chamber, against SARS-CoV-2 on cotton and FFP3 face masks as examples of porous materials, and on glass as an example of a non-porous material, to be used on a small scale, e.g. in homes, companies, offices, etc.

Ozone generation process for disinfection

The disinfection chamber developed (TDK Electronics GmbH & Co OG, Deutschlandsberg, Austria) is an experimental device to investigate the inactivation of SARS-CoV-2 on porous and non-porous surfaces. The prototype is an aluminium chamber with a nominal capacity of 1450 mL, containing a plasma generator and a microcontroller to regulate the disinfection process. The virus-contaminated matrices undergoing the disinfection process are positioned on the metallic sample holder in the disinfection chamber, which is closed by a screw cap on the top (Supplementary Fig. S1). The plasma generator (CeraPlas™ element, Relyon Plasma GmbH, Regensburg, Germany) produces a cold plasma inside the disinfection chamber and ozone is generated as a side effect of the plasma generation. The ozone concentration was monitored before the experimental series to evaluate the reactive environment inside the disinfection chamber. During the experiments, no external air was supplied. After a time period of 10 or 60 min the disinfection chamber was opened in a laminar flow cabinet and was flushed with ambient air to terminate the disinfection process.

Preparation of SARS-CoV-2 virus stock

All experimental procedures with SARS-CoV-2 were performed in a biosafety level (BSL)-3 laboratory at room temperature (RT, 22–24 °C) and 45 % relative humidity. The experimental series were performed using a SARS-CoV-2 virus isolate (Human 2019-nCoV Isolate ex China Strain: BavPat1/2020) originating in the city of Wuhan (Hubei province, China). The virus was obtained under a licence agreement from the Charité University Hospital, Berlin, Germany (Institute of Virology, Prof. Drosten). Virus stocks were prepared by infecting VeroE6 cells with the virus isolate and incubating them at 37 °C and 5% CO2 for 72 h. The cell culture supernatants were collected, centrifuged for 10 min at 3000 × g and sterile filtered using 0.2 µm syringe filters (Thermo Fisher Scientific, Waltham, USA). Adherent cells in the cell culture flask were frozen with fresh medium, thawed again and scraped from the surface to release intracellular viral particles. Cell lysate samples were centrifuged for 10 min at 3000 × g to remove cell debris and sterile filtered using 0.2 µm syringe filters. Supernatants and cell lysates were pooled and stored at −80 °C. The virus titer was determined via the Spearman–Kärber method. In brief, VeroE6 cells were seeded in 48-well cell culture plates and infected with the serially diluted virus stock (6 wells for each dilution) for 1 h at 37 °C and 5% CO2. After infection, cells were washed twice with MEM, then MEM supplemented with 2% FCS was added to each well and the cells were incubated at 37 °C and 5% CO2 for 72 h. All wells were observed under the microscope to estimate the highest dilution at which all showed a cytopathic effect (CPE).
The TCID50 titer was calculated using the Spearman–Kärber formula, log10 (50 % endpoint titer) = x0 − d/2 + d × Σ(ri/ni), where x0 = log10 of the reciprocal of the highest dilution at which all wells showed CPE, d = log10 of the dilution factor, ni = number of replicates used in each individual dilution, and ri = number of positive wells (out of ni). Summation was started at dilution x0. The resulting TCID50 titer per mL was multiplied by 0.7 to predict the number of PFU/mL.

Face mask preparation and ozone treatment

Cotton face masks (100 % cotton, white, Bro Handel GmbH, Villach, Austria) and FFP3 face masks (Blautex, Produktions-u. Vertriebsges.m.b.H., Salzburg, Austria) were cut into circular pieces of 2 cm in diameter and were positioned in Petri dishes (60 mm diameter, Merck KGaA, Darmstadt, Germany). Each tested group consisted of 6 different pieces. The mask pieces were dried for 1 h at 40 °C in a BSL2 laboratory to reduce residual moisture prior to virus application. Thereafter, they were immediately taken to the BSL3 laboratory to be used for the virus neutralization assay. 50 µL of virus suspension (3.89E+04 PFU/mL), buffered with 25 mM HEPES at pH 7.4 (Thermo Fisher Scientific), were spotted on and quickly absorbed by each mask piece. Subsequently, the samples were heat-dried for 5 min at 40 °C in an incubator and treated with ozone in the disinfection chamber for a total exposure time of 10 min. For the first 5 min, ozone was generated in the disinfection chamber until a concentration of 800 ppm was reached. This was then followed by 5 min additional exposure time in the closed chamber. Ozone decomposition to 750 ppm occurred over this time. Control mask pieces underwent the same procedure including heat-drying at 40 °C, but remained in the closed Petri dishes for the duration of the experiment and were not treated with ozone. For recovery of viral particles, the mask pieces were placed into 2 mL safe-lock tubes (Eppendorf Austria GmbH, Vienna, Austria) containing 1 mL serum-free OptiPro cell culture medium and were vortexed for 10 s. Samples were centrifuged for 10 min at 1500 × g and sterile filtered using 0.45 µm syringe filters (Merck KGaA, Darmstadt, Germany). 140 µL of each sample was collected to determine the recovered virus particles that served as virus input (VI) for the neutralization assay by viral RNA extraction and RT-qPCR (Supplementary Tables S1–S3).

Glass slide preparation for the SARS-CoV-2 stability test

Complementary to the procedure above for porous materials (face masks), the stability of SARS-CoV-2 was also tested on non-porous glass surfaces. Glass slides (LACTAN, Chemikalien und Laborgeräte Vertriebsgesellschaft m.b.H & Co KG, Graz, Austria) were cut to 2 × 2 cm and cleaned with 70 % ethanol. Each tested group consisted of 3 glass slides. The slides were pre-treated for 20 s with cold plasma, produced with a Piezo brush PZ2® (Relyon Plasma GmbH, Regensburg, Germany), to avoid droplet formation. Twenty µL of virus suspension (3.89E+04 PFU/mL), buffered at pH 7.4 with 25 mM HEPES, was spotted onto each plasma pre-treated glass slide. The slides were heat-dried at 40 °C for 10 min and then kept in closed Petri dishes at RT for 0, 1, 24 and 48 h, respectively. The dried virus suspensions on the slides were recovered after the different time periods by washing each slide with 500 µL of serum-free cell culture medium. Samples were collected to determine the VI before cell infection. Vero CCL-81 cells were infected with virus samples for 1 h at 37 °C and 5% CO2.
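To make the Spearman–Kärber titer calculation described above more concrete, here is a minimal worked sketch in Python. The dilution series and positive-well counts are hypothetical illustration values, not data from this study; only the endpoint formula and the 0.7 conversion factor to PFU are taken from the text, and normalization to the inoculated volume is omitted for brevity.

```python
def spearman_karber_log10_tcid50(x0, d, positives, replicates):
    """log10 of the 50 % endpoint titer:
    x0 - d/2 + d * sum(ri/ni), summing from the highest dilution
    at which all wells showed CPE (as described in the Methods above)."""
    s = sum(r / n for r, n in zip(positives, replicates))
    return x0 - d / 2 + d * s

# Hypothetical 10-fold series, 6 wells per dilution:
# 6/6 wells positive at 10^-3 (so x0 = 3), 3/6 at 10^-4, 0/6 at 10^-5.
log10_tcid50 = spearman_karber_log10_tcid50(
    x0=3, d=1, positives=[6, 3, 0], replicates=[6, 6, 6]
)
tcid50 = 10 ** log10_tcid50   # 50 % endpoint titer per inoculated volume
pfu = 0.7 * tcid50            # conversion factor used in the Methods

print(f"log10 TCID50 = {log10_tcid50:.2f}, predicted PFU ~ {pfu:.0f}")
```

With these made-up counts the 50 % endpoint falls at the 10^-4 dilution (log10 TCID50 = 4.0), and the 0.7 factor predicts roughly 7 × 10^3 PFU per inoculated volume.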
Cell culture supernatants were collected 48 h after each infection to determine viral copy numbers via viral RNA isolation and RT-qPCR (below).

Glass slide preparation for ozone treatment

Glass slides were cut to 2 × 2 cm, cleaned with 70 % ethanol and placed in 60 mm diameter Petri dishes. To avoid droplet formation on the surface, slides were pre-treated for 20 s with a Piezo brush PZ2® (Relyon Plasma GmbH, Regensburg, Germany). Each tested group consisted of 6 glass slides. Twenty µL of virus suspension (3.89E+04 PFU/mL) buffered with 25 mM HEPES was spotted onto each glass slide. All samples were heat-dried for 10 min at 40 °C. Glass slides were treated with ozone in the disinfection chamber for a total exposure time of 10 or 60 min. Ozone was initially generated for 5 min until a concentration of 800 ppm was reached, followed by 5 or 55 min exposure in the closed chamber, during which ozone decomposition to 750 ppm (5 min) or 400 ppm (55 min) occurred. Control samples were heat-dried at 40 °C and kept in closed Petri dishes without ozone treatment for the duration of the experiment. Slides were then washed with 500 µL of serum-free OptiPro cell culture medium for virus recovery. Samples were collected for determination of the VI and Vero CCL-81 cells were infected with the virus suspensions. Cell culture supernatants were collected after the incubation period to determine viral copy numbers via viral RNA isolation and RT-qPCR (below).

Infection assays

For infection assays 30,000 Vero CCL-81 cells per well were seeded into 48-well cell culture plates (Corning Incorporated, Kennebunk, ME, USA) 24 h prior to virus infection. The cells were infected with the virus recovered from the samples (VI), prepared as described above, for 1 h at 37 °C and 5 % CO2. For infection assay controls the same amount of virus suspension as applied onto the different matrices was mixed with serum-free OptiPro cell culture medium and was applied to the cells for infection. Non-infected cells served as negative controls. After infection the cells were washed twice with phosphate buffered saline (PBS) (Thermo Fisher Scientific). 440 µL serum-free cell culture medium was added to the cells and 140 µL supernatant of each well was collected subsequently to determine the timepoint 0 (t0) values. SARS-CoV-2 replicates rapidly in Vero CCL-81 cells and reaches peak titers between 48 and 72 h post infection; thus incubation times of 48 or 72 h at 37 °C and 5% CO2 were chosen. After the incubation period, 140 µL cell culture supernatant was removed from each well to determine the virus copy numbers at the timepoints (t48, t72) post infection via viral RNA extraction and RT-qPCR (below).

RNA isolation and quantitative real-time PCR (RT-qPCR)

Viral RNA was isolated using the QIAamp® Viral RNA Mini Kit (Qiagen GmbH, Hilden, Germany) according to the manufacturer's recommendations. RNA samples were eluted with 40 µL Milli-Q water and stored at −80 °C. RT-qPCR was performed using a Rotor-Gene Q thermal cycler (Qiagen) and the QuantiTect® Probe PCR Kit (Qiagen). Primers and probe sequences for the SARS-CoV-2 nucleocapsid N1 region were used as recommended by the Centers for Disease Control and Prevention (CDC) in February 2020 and obtained from Eurofins Genomics (Ebersberg, Germany). Primer and probe sequences, PCR mastermix components and thermal profile are shown in Supplementary Information, Tables 1–3.

Immunohistochemistry (IHC)

For the immunohistochemical detection of SARS-CoV-2 in infected cells, 48-well plates were fixed for 30 min with 4% neutral-buffered formalin and were washed 3 times with PBS.
Plates were incubated with 0.1 % Triton X-100 (Merck KGaA) for 10 min, washed 3 times with PBS and incubated for 30 min in 3% H2O2 (Merck KGaA) dissolved in methanol (Merck KGaA, Darmstadt, Germany). After a further PBS washing step, 100 µL of the primary antibody, SARS-CoV-2 (2019-nCoV) Nucleocapsid Antibody (rabbit monoclonal antibody (Mab); Sinobiological, China, Cat# 40143-R019), diluted 1:1000 in REAL Antibody Diluent (Agilent Dako, Carpinteria, CA, USA, Cat# S202230-3), was added to each well. The plates were washed 3 times with PBS after 1 h incubation at RT. The ready-to-use detection system reagent EnVision™+ Dual Link System HRP (Agilent Dako, Cat# K5007) was added for 30 min, followed by washing with PBS 3 times. AEC Substrate Chromogen (Agilent Dako, K346430-2) was applied to each well and incubated for 3 min, and the reaction was stopped by adding PBS. Wells were washed again with PBS to remove reagent and fresh PBS was added to keep the wells humid. Images were taken by light microscope (Nikon Eclipse TS100; Nikon Europe BV, Amsterdam, Netherlands) equipped with a JENOPTIK GRYPHAX® camera (Breitschopf, Innsbruck, Austria). SARS-CoV-2 infected cells appear red after antibody staining.

[Displaced figure caption fragment: No infection can be seen in all six samples (same experiment and sample labeling as shown in Fig. 2).]

To calculate viral copy numbers based on the RT-qPCR cq values, a calibration curve (Supplementary Figure S3) based on a certified RNA standard (ATCC VR-1986D™) was used. This standard contains 4.73 × 10^3 genome copies per 1 µL. Viral copy numbers of VI, t48 and t72 were calculated using the resulting equation y = 1.422x + 35.079.

Ozone generation for disinfection

Ozone was generated by the cold plasma source inside the disinfection chamber.

[Displaced figure captions (apparently Figs. 7, 8 and 10): No virus can be seen in either sample (same experiment as shown in Fig. 2); magnification 150×. Results are shown from two independent experimental series with 6 samples per experimental condition each. A: experimental series 1 showing virus copy numbers from heat-dried (5 min at 40 °C) and 10 min ozone-treated samples compared to the positive control (cells infected with the same copy numbers as loaded on the masks), P value <0.0001****; B: experimental series 2, P value 0.0021** (t test); data plotted in log10 intervals. After drying and 60 min ozone treatment a reduction of infectious virus by a factor of > log 6 was obtained; all groups consisted of 6 replicates each; data plotted in log10 intervals; P value <0.0001**** (one-way ANOVA).]

The ozone-based disinfection process is divided into two phases: generation (phase 1) and chemical decomposition (phase 2). The disinfection process of the virus-contaminated surfaces begins in phase 1. The exponential increase of ozone concentration inside the disinfection chamber over time is shown in Fig. 1A. After the cold plasma source is terminated, the disinfection chamber remains closed and the chemical decomposition phase begins. Ozone concentration inside the chamber decreases in the sealed container during the chemical decomposition time, as shown in Fig. 1B. In this study an ozone generation time of 5 min was used for all surfaces tested, resulting in a maximal concentration of 800 ppm inside the chamber. The additional exposure of 5 min for FFP3 and cotton face masks led to a decomposition of ozone to 750 ppm.
The 55 min additional exposure for the glass surfaces resulted in decomposition to 400 ppm (Supplementary Information, Fig. S2). After phase 2 the disinfection test chamber was opened in a laminar flow cabinet and flushed with ambient air, which terminates the disinfection process.

SARS-CoV-2 inactivation by ozone treatment of cotton face masks

Cotton masks loaded with SARS-CoV-2 showed that heat-drying for 5 min in combination with ozone treatment of 10 min (Fig. 2; 5'_40 °C_10'_ozone) led to virus reduction by a factor of >8.99 × 10^6 in all 6 samples compared to the positive controls. In contrast, heat-drying without ozone treatment (Fig. 2; 5'_40 °C) led only to minor inactivation of up to log 1 in the tested samples and the virus particles remained infectious, as indicated by the low cq values reflecting the high number of virus particles released into the cell culture medium during 72 h post-infection. As positive controls, cells were infected with the same virus copy numbers as loaded on the masks, with non-infected cells as negative controls. The amount of virus recovered from the mask pieces after ozone treatment and from control masks served as the virus input (VI) for cell infection (VI is shown in Supplementary Fig. S4). Recovery of virus particles was similar for each tested group, underlining that the cells were infected with approximately the same number of particles. Moreover, virus particles were measured at t0 (after infection, washing the cells with PBS and addition of fresh cell culture medium, Supplementary Fig. S5) to obtain a baseline value, and the increase in measured virus particles after the cultivation period (e.g. t72 − t0) was used to calculate virus replication. In order to calculate virus copy numbers from cq values obtained by RT-qPCR, a standard curve using a certified SARS-CoV-2 RNA standard was generated (Supplementary Figure S3). Results demonstrate that heat-drying in combination with ozone treatment yielded more than log 7 inactivation of SARS-CoV-2 compared to the positive controls (infected cells), as shown in Fig. 3. The effect of ozone treatment of cotton masks on virus inactivation was also analyzed by using an antibody to the SARS-CoV-2 nucleocapsid protein as an independent read-out (Figs. 4–6). SARS-CoV-2 infected cells appear red after IHC staining. Stained cell culture plates demonstrate that virus recovered from heat-dried masks without ozone treatment was still infectious (Fig. 4). The absence of SARS-CoV-2 infected cells is confirmed by antibody staining in all 6 wells that were incubated with virus recovered from masks dried at 40 °C for 5 min and then treated with ozone for 10 min (Fig. 5). Cells infected with the same virus copy number as applied to the mask pieces were used as positive controls. In this case infection can be seen in all 6 wells (Fig. 6). In Fig. 7, non-infected cells stained with SARS-CoV-2 antibody serving as negative control are shown. In both samples no virus infection can be seen.

SARS-CoV-2 inactivation by ozone treatment of FFP3 masks

In line with the results for the cotton masks, the combination of heat-drying and ozone treatment of FFP3 masks resulted in virus inactivation by a factor of log 6 in all tested samples. Two independent experimental series were carried out using 6 mask pieces for each testing condition as described above. Viral copy numbers for heat-dried and ozone-treated samples (Fig. 8; 5'_40 °C_10'_ozone) were compared to positive controls (Fig. 8; pos_Ctrl).
A virus inactivation of > log 6 was achieved in both experimental series.

SARS-CoV-2 stability and inactivation on glass surfaces

In order to test the stability of SARS-CoV-2 on non-porous surfaces, glass slides were loaded with virus and recovery of infectious virus was tested after different time periods. Immediately after loading and heat-drying at 40 °C, a mean viral copy number of 2.82 × 10^6 of infectious SARS-CoV-2 virus was recovered (Fig. 9). After 1 h at RT, detectable infectious virus was reduced to 2.68 × 10^5. Interestingly, the mean viral copy number did not decrease further after 24 h or 48 h at RT. Thus, SARS-CoV-2 is stable and infectious on non-porous surfaces for at least 48 h at RT.

SARS-CoV-2 inactivation by ozone treatment of glass surfaces

Glass slides were loaded with SARS-CoV-2 suspension and Vero CCL-81 cells were infected with recovered virus to test virus inactivation. After drying at 40 °C for 10 min, infectious virus was reduced by a factor of log 1 compared to the positive control, while drying and ozone treatment for 10 min led to a further reduction by a factor of log 1. Drying and ozone treatment for 1 h led to virus inactivation by a factor of > log 6 (Fig. 10). The virus-neutralizing activity was confirmed by antibody staining, which showed no infected cells in wells of the slides dried and ozone-treated for 1 h (data not shown).

Discussion

The wearing of face masks and other PPE is an important measure to protect from SARS-CoV-2 infection. Disadvantages of these single-use products are supply chain and production difficulties due to the enormous demand during the pandemic. This has led to shortages in production capacity, particularly for masks with higher protection levels, such as FFP2 and FFP3, which are needed for healthcare professionals and were also required for the general population in several countries in the SARS-CoV-2 wave in spring 2021. As a further consequence, immense amounts of waste were generated, which became a major environmental issue. Safe and sustainable recycling methods could be an alternative to discarding products after single use. This work describes a highly efficient disinfection process for SARS-CoV-2 that can be applied to porous and non-porous surfaces, demonstrated with different types of masks as well as glass slides, to exemplify a non-porous surface found on a variety of personal items such as mobile phones, tablets, watches or glasses. The combination of heat-drying and ozone treatment resulted in a virus reduction of more than log 6 for cotton and FFP3 masks as well as for glass slides. According to the guideline of the German Association for the Control of Virus Diseases (DVV) and the Robert Koch Institute (RKI), a reduction of at least log 4 is required for virus-inactivating disinfectants. Hence, the combination of drying and ozone treatment appears to represent a suitable method for decontamination of various surfaces. However, different properties of surfaces for stabilization and inactivation of virus have to be considered and tested for specific applications. For example, Zucker and co-workers demonstrated different surface tensions of virus-containing droplets on different metal surfaces and glass, which was mirrored by different virus-inactivating activities of ozone. Whether the drying step included in our experiments overcomes or mitigates this effect remains to be determined.
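As an aside on the log-reduction criterion just cited (at least log 4 under the DVV/RKI guideline), the reduction factor is simply the log10 ratio of infectious virus recovered without treatment to that recovered after treatment. A minimal sketch of that arithmetic, using hypothetical copy numbers rather than the study's measured values:

```python
import math

def log10_reduction(untreated, treated):
    """Log10 reduction factor: log10(recovered titer without treatment
    divided by recovered titer after treatment)."""
    return math.log10(untreated / treated)

# Hypothetical recovered viral copy numbers (illustrative only)
control_copies = 2.8e6   # e.g. a heat-dried control sample
treated_copies = 2.0     # e.g. after drying plus 60 min of ozone

rf = log10_reduction(control_copies, treated_copies)
print(f"reduction factor ~ log {rf:.1f}")       # ~log 6.1
print("meets the log 4 criterion:", rf >= 4)
```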
The virus concentration used in this work for the application on the different materials was 3.89 × 10^4 PFU/mL, which corresponds to a mean cq value of 18.5 for positive controls after sample treatment and recovery, as shown for the cotton face mask (Supplementary Figure S4). When calculated as viral copy numbers based on a reference standard, this corresponds to 1.16 × 10^5 viral copies. A single cough from a person with a high viral load in respiratory fluid may generate >10^5 viral copies. The viral load used for sample loading in this work therefore corresponds to amounts that could be transmitted by contagious individuals. The different ozone treatment times of the materials described in this work can be explained by their structures. Cotton masks are not intended for medical use and are not certified. However, they prevent the spread of potentially contagious droplets. This type of mask is reusable after washing, but disinfection with ozone would save time and be less damaging to the fabric compared to daily washing. FFP3 masks provide the most effective protection for users, with a minimum filtration percentage of 99 % for very fine particles. According to the manufacturers, FFP3 masks have a limited lifespan of a few hours, after which their filtration efficiency can no longer be guaranteed and they should be discarded. Cotton fabrics and layered FFP3 masks have much larger surfaces than glass slides. These masks absorb virus suspensions quickly, leading to a large surface distribution in the materials. As a result, the contact surface for ozone increases and the samples are disinfected within short time periods. Liquids on glass surfaces form droplets due to the non-porosity of the surface. Preliminary experiments showed that if treated with ozone, virus particles in droplets cannot be inactivated because of the surface tension. To overcome this effect, test glass slides were pre-treated with cold plasma. This surface treatment was chosen because a liquid film represents a more realistic setting than a large droplet. The surface energy of inorganic materials like glass is increased by plasma treatment. Without treatment, glass has a surface energy of 47 mN/m. After treatment with a Piezo brush PZ2®, the surface energy increased to >67 mN/m. If the surface energy of the glass slide is the same as or greater than that of the liquid, the liquid will spread on the surface, resulting in virus contamination resembling that of aerosols. This makes the wetting of non-porous surfaces possible and allows the distribution of the virus suspension on the glass surfaces for testing of disinfection procedures. The resulting liquid film can then be heat-dried in 10 min. Disinfection with ozone for 10 min is effective for some, but not all, treated samples. This is most likely due to the non-uniform drying of the liquid film. However, after an ozone treatment of 60 min an efficient disinfection (>log 6 reduction of infectious virus) can be achieved for all glass slides treated with a combination of heat and ozone (Fig. 10). In contrast to other disinfection methods, gaseous ozone has many advantages. Its high reactivity makes it useful as a disinfectant for bacterial and viral contaminations, which has been described in detail. Ozone penetrates complex objects such as layered face masks from all sides and reaches every surface of the sample, which cannot be achieved with, e.g., alcohol or UV light. It is thermodynamically unstable and its breakdown product, oxygen, is environmentally harmless.
However, when inhaled, ozone can cause adverse respiratory effects such as shortness of breath, chest pain, wheezing, coughing and airway inflammation, while long-term exposure has been linked to the development of asthma and may be associated with lung cancer [14]. Consequently, the direct opening of the disinfection chamber is only possible under laboratory conditions and in a laminar flow cabinet. To guarantee maximal safety for the future use of ozone-producing disinfection devices outside of the laboratory, the installation of a catalyst for the active chemical decomposition of ozone before opening the device is easily implemented. One commonly used catalyst is manganese oxide; other options are platinum-group metals, which are less widely used due to their high cost.

Conclusion

The results of this work present a highly efficient combined heat-drying and ozone treatment process that is suitable for the disinfection of various porous and non-porous SARS-CoV-2-contaminated surfaces. A reduction of more than log 6 for SARS-CoV-2 was demonstrated for cotton and FFP3 masks and for glass slides. The method could help to overcome delivery difficulties of face masks and reduce waste caused by single-use PPE. The disinfection of other personal items of daily life is also within scope, as are industrial applications.

Funding

This work was funded by TDK Electronics GmbH & Co OG.

Declaration of Competing Interest

RelyOn Plasma is a subsidiary of the TDK company and provided the apparatus for the study. The co-authors M. Puff and A. Melischnig are employees of TDK Electronics and co-inventors of the technology, which is owned by TDK Electronics. They declare no conflict of interest in relation to this study.
Firefox developers are considering making Web plugins like Adobe Flash an opt-in feature. Although there is still a long way to go before it's ready for Firefox proper, switching to an opt-in, "click-to-play" approach for plugins could help make Firefox faster, more secure, and a bit easier on the laptop battery. A very early version of the "click-to-play" option for plugins is now available in the Firefox nightly channel. Once that's installed you'll need to type about:config in your URL bar and then search for and enable the plugins.click_to_play flag. Once that's done, visit a page with Flash content and it won't load until you click on it. While HTML5 reduces the need for Flash and other plugins, they're still a big part of the Web today. Even where HTML5 has had great success—like the video tag—it hasn't yet solved every publisher's problems and remains incapable of some of the things Flash can do. That means Flash will likely remain a necessary part of the Web for at least a few more years. At the same time, Flash and other plugins are often responsible for poor performance and security vulnerabilities. So if something is necessary, but can slow down your browser and can be the source of attacks, what do you do? One popular solution is the click-to-play approach that Mozilla developers are considering. It's not a new solution, as Chrome already offers the option, but so far no Web browser has yet made it the default behavior. It's also how extensions like Flashblock already work: visit a webpage with embedded Flash content when Flashblock or something similar is installed and you'll see a static image where the Flash movie would normally be playing. Click the image and then the plugin loads. Whether or not the click-to-play approach that Mozilla is considering will ever become the default behavior for Firefox remains to be seen. This very early release is rough around the edges and nowhere near ready for prime time, but the goal is to have it be part of—disabled, but part of—Firefox 14.
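If you'd rather not flip the flag by hand on every machine, the same preference can also be set from a user.js file in your Firefox profile, assuming the nightly build you're running still recognizes the experimental flag. Here's a rough helper sketch; the profile path is a placeholder you'd need to adjust for your own setup.

```python
# Hypothetical helper: append the experimental click-to-play pref to a profile's user.js.
# Firefox reads user.js at startup and applies its prefs on top of prefs.js.
from pathlib import Path

# Placeholder profile path -- substitute your own profile folder name
profile = Path.home() / ".mozilla" / "firefox" / "abcd1234.default"
user_js = profile / "user.js"

pref_line = 'user_pref("plugins.click_to_play", true);\n'

existing = user_js.read_text() if user_js.exists() else ""
if pref_line not in existing:
    with user_js.open("a") as f:
        f.write(pref_line)
    print("Added plugins.click_to_play to", user_js)
else:
    print("Pref already present in", user_js)
```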
Characterization of the Effects of Borehole Layouts in Geo-exchange

In a ground-source heat pump (GSHP) system, when the heating and cooling loads are not balanced, the ground temperature may migrate up or down after a few years of operation. This change in ground temperature can lower system efficiency because of the ineffective heat transfer temperatures. The present work contributes to fundamental understanding of thermal imbalance in borehole design. Long-term ground temperatures were simulated using finite element methods to model the performance of GSHP systems. Borehole field configurations were explored and different aspect ratios of borehole layouts were compared. In addition, an alternative borehole configuration was studied, which involves alternating the length of individual boreholes within a single system. The results of the studies showed potential for alleviating the effects of thermal imbalance by changing the borehole field layout and for reducing borehole separation distance by altering individual borehole lengths.
Quantitative Real-Time Reverse Transcription-PCR Analysis of Deformed Wing Virus Infection in the Honeybee (Apis mellifera L.) ABSTRACT Deformed wing virus (DWV) can cause wing deformity and premature death in adult honeybees, although like many other bee viruses, DWV generally persists as a latent infection with no apparent symptoms. Using reverse transcription (RT)-PCR and Southern hybridization, we detected DWV in all life stages of honeybees, including adults with and without deformed wings. We also found DWV in the parasitic mite Varroa destructor, suggesting that this mite may be involved in the transmission of DWV. However, the detection of the virus in life stages not normally associated with mite parasitism (i.e., eggs and larvae) suggests that there are other modes of transmission. The levels of DWV in different life stages of bees were investigated by using TaqMan real-time quantitative RT-PCR. The amounts of virus varied significantly in these different stages, and the highest levels occurred in pupae and in adult worker bees with deformed wings. The variability in virus titer may reflect the different abilities of bees to resist DWV infection and replication. The epidemiology of DWV is discussed, and factors such as mite infestation, malnutrition, and climate are also considered.
Dynamical Cluster Analysis of Cortical fMRI Activation

Localized changes in cortical blood oxygenation during voluntary movements were examined with functional magnetic resonance imaging (fMRI) and evaluated with a new dynamical cluster analysis (DCA) method. fMRI was performed during finger movements with eight subjects on a 1.5-T scanner using single-slice echo planar imaging with a 107-ms repetition time. Clustering based on the similarity of the detailed signal time courses requires no assumptions, beyond the chosen distance measure, about the spatial location and extent of activation sites or about the shape of the expected activation time course. We discuss the basic requirements on a clustering algorithm for fMRI data. It is shown that DCA outperforms the standard k-means algorithm with respect to easy adjustment of the quantization error and reproducibility of the results. In contrast to currently used clustering methods for fMRI, like k-means or fuzzy k-means, DCA extracts the appropriate number and initial shapes of representative signal time courses from data properties during run time. With DCA we simultaneously calculate a two-dimensional projection of cluster centers (MDS) and data points for online visualization of the results. We describe the new DCA method and show, for the well-studied motor task, that it detects cortical activation loci and provides additional information by discriminating different shapes and phases of hemodynamic responses. Robustness of activity detection is demonstrated with respect to repeated DCA runs, and the effects of different data preprocessing are shown. As an example of how DCA enables further analysis, we examined activation onset times. In the SMA, M1, and S1 areas, simultaneous and sequential activation (in the given order) was found.
1. Field of the Invention Exemplary embodiments of the present general inventive concept relate to a fusing device to fix an image to a recording medium by applying heat to the image and an image forming apparatus having the same. 2. Description of the Related Art Image forming apparatuses are devised to print an image on a recording medium. Examples of image forming apparatuses include printers, copiers, fax machines, and devices combining functions thereof. In an electro-photographic image forming apparatus, after light is irradiated to a photosensitive member charged with a predetermined electric potential to form an electrostatic latent image on a surface of the photosensitive member, a developer is fed to the electrostatic latent image, forming a visible image. The visible image, formed on the photosensitive member, is transferred to a recording medium. The visible image transferred to the recording medium is fixed to the recording medium while passing through a fusing device. A generally widely used fusing device includes a heating roller having a heat source therein, and a press roller arranged to come into close contact with the heating roller so as to define a fusing nip. When a recording medium onto which an image has been transferred enters the fusing nip between the heating roller and the press roller, the image is fixed to the recording medium under the influence of heat and pressure inside the fusing nip. With the recent tendency of higher print speed of an image forming apparatus, it may be necessary to improve fusing performance via more effective heat transfer to a recording medium.
Air travel can be expensive for one person, and if you're planning a trip for your whole family, the cost adds up fast. With many people planning their summer vacations, now is the best time to consider some tips that can help you save big when it comes to buying airline tickets. Finding cheap flights is easy when you go online to a travel site like Travelocity.com. Simply type in your travel dates and locations and you'll quickly be able to compare different airlines' flight times and ticket costs. This can save you lots of time and money. If you're able, be flexible with your travel dates. By altering the day you leave or the day you come home by several days, you could save hundreds of dollars. For example, if it really doesn't matter whether you get home on a Sunday or a Monday, this is a great option for finding cheap airline tickets. If you are traveling to a popular destination, it's not uncommon for the area to be served by multiple airports. If you can't find a low fare at one, try another one that is close. The cost difference might be worth it. Need a hotel in addition to your flight? By booking together, you'll often save when you search for cheap flights online. Maybe you don't need a hotel, but you'd like to rent a car? That is another package option that could help you save some cash. If you're flying to a location where you're likely to have a layover, shop around and compare booking options. Sometimes buying the two flights for the trip separately can cost you much less, but not always. That's why it's smart to check both options and choose whichever one saves you the most. Some people will tell you that Tuesday afternoon or Wednesday morning is the best day to buy tickets. Others say to wait until the weekend, when fewer people are searching on their computers. There is a theory and a reason for every day of the week, so the general rule of thumb is: if you haven't found the right price and your travel date is still several weeks out, don't be afraid to wait a day or two and try again.
Abducens Nerve Palsy Associated With Pseudomeningocele After Lumbar Disc Surgery: A Case Report

Study Design. Clinical case report and review of the literature. Objective. To highlight the importance of including cerebrospinal fluid leak and pseudomeningocele in the differential diagnosis in a patient presenting with diplopia due to abducens palsy after spine surgery and to highlight the possibility of cure after successful surgical repair of the dural defect. Summary of Background Data. Abducens nerve palsy after spine surgery is extremely rare, with only 3 reported cases in the literature. We report the first case of abducens nerve palsy associated with a clinically evident pseudomeningocele, which was completely cured by successful repair of the dural defect. Methods. A 53-year-old male patient with diabetes presented 6 weeks after lumbar disc surgery with persistent headache, a fluctuant swelling at the operated site, and diplopia secondary to left abducens nerve palsy. Clinical examination revealed a left abducens nerve palsy and magnetic resonance imaging showed a pseudomeningocele due to a dural tear at L4–L5. He underwent exploration, and the dural defect was repaired using 6-0 Vicryl and reinforced with a fibrin sealant. Results. After dural closure, pseudomeningocele and headache resolved completely and diplopia improved partially. At 4-week follow-up, there was complete resolution of diplopia. Clinical examination showed full recovery of the lateral rectus function, indicating resolution of the abducens palsy. Magnetic resonance imaging showed complete resolution of pseudomeningocele. Conclusion. Although uncommon, abducens nerve palsy after cerebrospinal fluid leak should be considered in the differential diagnosis of diplopia developing in a patient who has undergone spine surgery. After confirmation of pseudomeningocele radiologically, early surgical intervention with repair of the dural defect can result in complete recovery of the abducens nerve palsy.
Effect of Oxide Supports on the Activity of Pd Based Catalysts for Furfural Hydrogenation

We investigated the effect of oxide supports on the hydrogenation of furfural over Pd catalysts on various supports (Al2O3, SiO2, TiO2, CeO2, and ZrO2). Pd catalysts (5 wt%) were prepared by chemical reduction on the various supports. The dispersion and uniformity of Pd were affected by the properties of the support and by the nucleation and growth of Pd. The conversion of furfural was enhanced by greater Pd dispersion. The selectivity for cyclopentanone and tetrahydrofurfuryl alcohol was affected by the physicochemical properties of the Pd catalyst and by the reaction parameters. High Pd dispersion and high acidity of the catalyst led to greater C=C hydrogenation, thereby generating more tetrahydrofurfuryl alcohol. The Pd/TiO2 catalyst showed a higher cyclopentanone yield than the other catalysts. The Pd/TiO2 catalyst exhibited >99% furfural conversion, 55.6% cyclopentanone selectivity, and 55.5% cyclopentanone yield under the optimal conditions: 20 bar of H2 at 170 °C for 4 h with 0.1 g of catalyst.
Franklin Chepkwony Franklin Chepkwony (born 15 June 1984), sometimes spelled Frankline, is a Kenyan professional long-distance runner. He has a marathon personal best time of 2:06:11 hours. Chepkwony's first marathon run came at the 2011 Nairobi Marathon, where he placed second in a time of two hours and eleven minutes. He made his international marathon debut at the 2012 Zurich Marathon, winning with a time of 2:10:57. Later that year, Chepkwony set a personal best of 2:06:11 when finishing second at the Eindhoven Marathon in October, 25 seconds behind compatriot Dickson Chumba. This time ranked him 26th in the world over the distance for that year. In 2013, Chepkwony won the Seoul International Marathon in 2:06:59, taking home $80,000 for winning the race under 2:10:00. He ran his second marathon in the Netherlands in October, but did not perform as well as he had in Eindhoven, coming seventh in the Amsterdam Marathon in a time of 2:09:53 hours. In November he won the Boulogne-Billancourt Half Marathon in France, setting a course record of 1:00:11. He opened the following season at the Santa Pola Half Marathon, coming second. In April, Chepkwony finished third in the 2014 Boston Marathon, his first top finish in the World Marathon Majors, behind American Meb Keflezighi and Kenyan Wilson Chebet. Chepkwony is a training partner of Dennis Kimetto and Geoffrey Mutai.
There's a disease spreading around the United States right now, a disease that until recently we'd considered eliminated — something that wasn't spreading anymore except in places where it was imported from outside the country. But as the measles outbreak that started in Disneyland and has spread to at least 14 states shows, measles is something we have to worry about again. And while in most people this virus causes a rash and a fever, in some it does more. In one out of 20 children who catch it, it causes pneumonia. In a smaller number, one in 1,000, it can cause a sort of encephalitis, a tragic brain illness that can leave its victims deaf, blind, or brain damaged — if they survive at all. And when you are talking about one of the most infectious viruses known to humankind, one where every infected person spreads the virus to an average of 15 people, that's scary, especially since some people have stopped doing the one thing we know will protect against this virus almost completely. That thing is getting vaccinated against diseases that we have vaccines for, a medical procedure so safe and so transformative that it's listed at the top of the CDC's list of top 10 public health achievements of the 20th century. We hope to develop vaccines for new diseases, but as an article published in September 2014 in Science Translational Medicine notes, we also need to keep making sure people get the vaccines that exist now. Only 5% of kids worldwide receive all 11 vaccinations recommended by the World Health Organization. In many places around the globe, this reflects a lack of access or an inability to afford these vaccines, but vaccination rates are also declining in some wealthy communities in the U.S. This unnecessarily and irresponsibly puts lives at risk. This is what's happening with measles right now. Here are seven facts, mostly gathered by researchers in that 2014 study in Science Translational Medicine, that show why vaccines are one of the most important inventions in human history. 1. Vaccines save tens of thousands of American children every year. Vaccination has eliminated or reduced a wide range of once-common diseases in the U.S. Without current vaccines, approximately 42,000 of the 4.1 million children born in the U.S. in 2009 would die early deaths. For that same group of kids, researchers estimate that vaccines have prevented and will prevent 20 million cases of disease. That alone is a huge impact, and those are estimates for kids born in one year in one country. 2. Vaccines wipe out deadly diseases. Between 1900 and 1979, smallpox killed approximately 300 million people and disfigured millions more — that's more deaths than occurred in all wars and conflicts in the 20th century. By 1979, vaccination programs had wiped out smallpox in the wild around the planet. Without those programs, people would still be suffering from the disease today. We've seen massive reductions in infections from other awful illnesses as a result of vaccines too. Polio has also been eradicated in much of the world, though pockets of the disease persist in places where it's hard to implement vaccination programs. In the U.S., smallpox is far from our only vaccine success story. We've also had a 98% reduction in cases of other vaccine-preventable diseases including measles, mumps, rubella, and tetanus. 3. Vaccines prevent chronic diseases, including certain cancers.
Vaccines are primarily known for offering protection against infectious diseases, like measles and smallpox, but that's not all they do. The Hepatitis B vaccine significantly reduced chronic hepatitis and hepatocellular carcinoma cases in Taiwan, where the virus was common. Not only did this eliminate 90% of the virus among the newly vaccinated population, it also cut liver cancer rates by 50%. Places with relatively high levels of HPV vaccine coverage, like Australia, have seen significant decreases in HPV infections. This is expected to drastically reduce rates of cervical and throat cancer. If HPV vaccine rates were to rise around the world, it could eliminate hundreds of thousands of cases of these cancers every year. 4. Vaccines save billions of dollars every year. Treating someone who is already sick is expensive, and the sicker they are, the worse those costs can be. Diseases that disable or kill people can require lifelong treatment. Not only is that treatment expensive, but disability and death also reduce or eliminate lifetime earnings. Take, for example, that same group of 4.1 million kids born in the U.S. in 2009. Researchers estimate that the vaccines they receive will save $13.5 billion in health treatment costs and almost $70 billion when measuring other costs to society, like lost productivity. The global health costs saved by another three vaccines — pneumococcal, rotavirus, and Haemophilus influenzae type b — are expected to total $63 billion between 2011 and 2020. 5. Vaccination prevents millions of cases of death and disease. Annual vaccines for kids already save up to 3 million lives a year around the globe. The pneumococcal, rotavirus, and Haemophilus influenzae type b vaccines — just three of many — are expected to prevent 102 million illnesses and 3.7 million deaths between 2011 and 2020. Measles vaccinations reduced disease cases in the European Union by 90% between 1993 and 2007. Vaccinations cut rubella infections in that same region by 99% between 2001 and 2010. 6. Getting vaccinated is better for a person and their community than treating an illness. Vaccines are more effective than most treatments for a disease that occur after infection. Many of the illnesses that vaccines have eradicated or seriously reduced used to leave people scarred for life — and those are the ones who survived. A few doses of a vaccine can usually provide long-term or even life-long protection. Many of the illnesses that we now inoculate people against, like measles, are also highly contagious. But vaccination not only helps prevent these diseases in the people who receive the vaccine, it also helps stop these diseases from infecting people who cannot get the vaccine in the first place, including those who are too young. However, this "community protection" is only effective if enough people get the vaccine in the first place. 7. Vaccines will save even more lives in the future — especially if we continue to invest in them around the world. Vaccines work, and they're safe. However, some places still don't have full access to them, and some diseases can't be vaccinated against yet. If currently available vaccines were accessible around the world, their life- and money-saving benefits would be extended to all. And as vaccines are developed for other deadly diseases, those benefits have the potential to go even further. Access needs to be expanded, but people need to keep getting the vaccines they have access to as well.
After all, as the researchers write, "vaccines that remain in the vial are 0% effective."
Murine infection with lymphocytic choriomeningitis virus following gastric inoculation Laboratory studies of arenaviruses have been limited to parenteral routes of infection; however, recent epidemiological studies implicate virus ingestion as a natural route of infection. Accordingly, we developed a model for oral and gastric infection with lymphocytic choriomeningitis virus to enable studies of mucosal transmission and vaccination by this additional route.
Light-dependent regulation of ascorbate in tomato by a monodehydroascorbate reductase localized in peroxisomes and the cytosol. Ascorbate is a powerful antioxidant in plants, and its levels are an important quality criterion in commercial species. Factors influencing these levels include environmental variations, particularly light, and the genetic control of its biosynthesis, recycling and degradation. One of the genes involved in the recycling pathway encodes a monodehydroascorbate reductase (MDHAR), an enzyme catalysing reduction of the oxidized radical of ascorbate, monodehydroascorbate, to ascorbate. In plants, MDHAR belongs to a multigene family. Here, we report the presence of an MDHAR isoform in both the cytosol and peroxisomes and show that this enzyme negatively regulates ascorbate levels in Solanum lycopersicum (tomato). Transgenic lines overexpressing MDHAR show a decrease in ascorbate levels in leaves, whereas lines in which MDHAR is silenced show an increase in these levels in both fruits and leaves. Furthermore, the intensity of these differences is light dependent. The unexpected effect of this MDHAR on ascorbate levels cannot be explained by changes in the expression of Smirnoff-Wheeler pathway genes, or by the activity of enzymes involved in degradation (ascorbate peroxidase) or recycling of ascorbate (dehydroascorbate reductase and glutathione reductase), suggesting a previously unidentified mechanism regulating ascorbate levels.
The power supplier is a power conversion device used to drive the operation of electronic equipment and to convert externally applied electricity into a power mode (e.g., AC power or DC power) suitable for the electronic equipment. The general power supplier, as shown in FIG. 1, utilizes a housing 1 to protect and accommodate the internal electronic components or circuits. The housing 1 has an electric connecting slot 2 and a switch on/off unit 3 respectively mounted thereon, wherein the electric connecting slot 2 is used to electrically connect to the external power source, and the switch on/off unit 3 is connected with the internal electronic circuits for controlling the power conduction. However, because consumer electronic products are becoming thinner and thinner, the power supplier is also required to have a reduced volume. To achieve this purpose, current designs integrate the electric connecting slot 2 and the switch on/off unit 3 together to reduce the occupied space. However, owing to the narrow space, the switch on/off unit 3 may be touched inadvertently while the user connects the power source to the electric connecting slot 2, so that the power supply is suddenly switched off, and the internal electronic components may also be damaged by the resulting burst.
Tom Clancy's The Division launched its second high-profile update just yesterday, unleashing a plague of game breaking bugs across multiple platforms. Some users are even reporting missing characters. The big release — called update 1.2 or Conflict — contains new player-versus-player and player-versus-environment content, as well as a second centerpiece raid called Clear Sky. Many users, including Polygon staff, are experiencing persistent issues with stability and matchmaking in multiple game modes. One of the PvE systems, called High Value Targets, relies on a new currency called Intel. But when some players turn in their hard-won Intel, HVT missions refuse to unlock. When they do unlock, named bosses often fail to materialize, leaving players empty-handed. The only solution seems to be attempting the mission again and again. We started, and completed, our first HVT mission three times yesterday afternoon before the named enemy actually showed up at the mission's end. More frustrating are the new raid's matchmaking woes. Many users are reporting that when they attempt to start Clear Sky the raid simply resets itself endlessly. Sometimes players are returned to the start of the mission, other times all the way back to the nearest safehouse. After these resets, the user interface appears to brighten or double and the game itself slows to a crawl. We spent an hour last night trying to get into the Clear Sky raid only to be turned back again and again by these glitches. Most troubling are reports that some players are missing one or more of their characters, and that daily missions are failing to refresh. Similar issues cropped up after the last update. VG24/7 reports that Ubisoft is aware of this specific issue and is currently working on a fix. The good news is that the time we have spent with the update so far is promising. Search and destroy and HVT missions, when they function correctly, add a much needed solo option for players looking to gather end-game items, including matching gear sets. They also give players a good reason to explore the nooks and crannies of the map. Clear Sky — what little we've been able to play — seems to be more dynamic than Falcon Lost's uninspiring, 15-wave endurance test. But after wrestling with the broken matchmaking system we've decided to wait it out until stability improves. The team behind The Division has already tried to make amends for previous technical issues by giving every player a fixed number of Phoenix Credits, the game's most valuable currency. No word yet if a similar gift is in the pipeline for dedicated players upset with the 1.2 update so far. The Division was a huge sales success for publisher Ubisoft, which claimed the title broke all previous sales records in its first 24 hours. The game's reception gave much needed momentum to Ubisoft, which earlier this year was the target of a hostile takeover attempt by media giant Vivendi. While no official player numbers are available for consoles, Steam Charts — a service that tracks player populations using Valve Software's publicly available Steam Web API — seems to indicate a nearly 60 percent decline in The Division's player base on Windows PC over the last 30 days. We'll have more on Tom Clancy's The Division and any hot fixes when they're announced.
Oral Supplementation Effect of Iron and its Complex Form With Quercetin on Oxidant Status and on Redistribution of Essential Metals in Organs of Streptozotocin Diabetic Rats Abstract Background and aims: Quercetin is a polyphenolic antioxidant compound. It is able to form complexes with metal ions such as iron and exerts a broad range of biological activities, such as improving metabolic disorders. This research investigates the effect of oral supplementation of iron (2.5mg Fe/Kg/day) and of its complex form (molar ratio 1:5; 2.5mg/25mg/Kg/day) with quercetin (25mg/Kg/day) on lipid metabolism, oxidant status and trace element contents in organs of Wistar diabetic rats (45 mg/kg/rat, i.p., of streptozotocin) over eight weeks of experimentation. Materials and methods: Liver and adipose tissue enzyme activities, as well as plasma NO, O2−, TBARS and protein carbonyl levels, were analysed. Metal (Cu, Fe, Mg, Zn) contents of organs were determined by inductively coupled plasma atomic emission spectroscopy. Results: Iron supplemented alone induced noticeable disturbances in lipids, lipoproteins, lipases and oxidant status. It also caused an imbalance in the redistribution of metals in the organs of diabetic and non-diabetic rats. The iron-quercetin complex was shown to be less harmful and more beneficial than iron supplemented alone. Conclusions: This complex could reverse the oxidative stress and iron deficiency largely caused by the diabetic disease, but at the same time it induces an imbalance in the redistribution of other essential metals.
On CNN’s Reliable Sources this morning, The Intercept’s Jeremy Scahill went off on the “atrocious” media coverage of the Syria missile strikes, even calling out CNN’s Fareed Zakaria in particular. Stelter came to his colleague’s defense and said there was more to what Zakaria said––including criticism of the president––both last week and today. Lara Setrakian, co-founder of Syria Deeply, said that there has been a “bizarre mix” in the media with either ignoring Syria or “fetishizing” it. She said there’s a lot of questions that the press isn’t asking that it absolutely needs to.
Michael Capellas quit as the No. 2 executive at Hewlett-Packard, just hours after a newspaper reported he is a leading candidate to take over troubled WorldCom. Capellas, who came to HP (HPQ) as part of its $19 billion acquisition of Compaq, is leaving to "pursue other career opportunities," according to an HP statement. The Wall Street Journal reported that Capellas, who had been Compaq's CEO before becoming Hewlett-Packard's president, had become the front-runner to succeed WorldCom CEO John W. Sidgmore. The Journal also said other executives are still being considered by WorldCom's search committee, including XO Communication's (XOXO) chairman and CEO Dan Akerson and BellSouth's vice chairman Gary Forsee. Pocket over Palm: For its emerging line of handheld computers, Dell has spurned the handheld software offered by Palm, and instead chosen Microsoft's Pocket PC software. The new Dell Axim X5 handhelds, which will soon go on sale for under $300 after a $50 rebate, aim to take market share from devices made by competitors HP (HPQ), Toshiba and Casio that run Microsoft (MSFT) software. Analysts say Dell's (DELL) ability to keep prices low and still make a profit, unlike competitors such as Palm (PALMD) or Sony (SNE), is a very big deal in the handheld industry. Dell's forthcoming device is similar to others selling for $500 or more. "It probably is the biggest singular event since Microsoft introduced Pocket PC 2002 to the market," said Todd Kort, a handheld analyst. "Dell is going to really upset the entire market." Telecom takeover: Sweden's Telia extended its $5.9 billion takeover offer for Sonera of Finland by a week, but said it was on the verge of success. The new company, to be known as TeliaSonera, would be the largest carrier in Scandinavia, with market-leading positions in Finland and Sweden. It would also have significant stakes in operators in the Baltics, Russia, Turkey and central Asia. The merger of the two state-controlled carriers, if completed, would be the first cross-border union of former European telecom monopolies in Europe. 3-D chips: IBM said it has developed a way to build three-dimensional integrated circuits that can increase the amount of computing power on a microchip. IBM (IBM) said that in its 3-D circuit, it builds separate layers of transistors at the same time and then connects them together electrically. It also decreases the length of the wires that connect different parts of the chip, which adds performance. Intel (INTC) is also working on 3-D circuits and in September said that it had developed a tri-gate transistor, which has three gates rather than the one gate per transistor that is the current industry standard. Laptop processors: AMD has introduced its fastest processor, one designed for laptop computers, as the company seeks to take a larger piece of one of the more profitable parts of the personal computer industry. AMD, which is Intel's principal rival in the microprocessor market, said its Athlon XP 2200+ processor is immediately available in Europe in laptops made by Fujitsu Siemens. AMD (AMD) and Intel (INTC) have targeted the mobile PC market more aggressively in the past several years because notebooks are the fastest growing segment of an otherwise largely stagnant PC industry. Those chips are also more profitable. Battery life boost: National Semiconductor (NSM) and ARM Holdings (ARMHY) will team up to develop and sell technology to extend the life of cellular phone batteries, ultimately by up to five times current levels. 
Those improvements could mean six to nine hours of battery life on a single charge in the technology's initial rollout, said Ravi Ambatipudi, a senior manager in the portable power systems group of National Semiconductor. The technology will let cell-phone makers reduce the battery's size and cut the size and cost of phones, he said. You've got services: VeriSign said it will provide e-mail, website and domain name services to members of AOL's new small-business Internet service. The partnership, whose terms were not disclosed, lets VeriSign (VRSN), which provides Internet domain names, telecom and e-commerce security services, offer Web services to small businesses looking to create an online presence. Next-gen remote: Universal Electronics has reached a deal with RadioShack to sell a remote control billed as the next step in the developing market for devices that can control a room full of electronics. Universal Electronics (UEIC), which owns a database of 130,000 codes that let remote controls speak to virtually any device, will sell its Kameleon remote technology in the United States at RadioShack (RSH).
Aqueous biphasic systems formed in (zwitterionic salt+inorganic salt) mixtures Abstract The manuscript reports on a new class of aqueous biphasic systems (ABSs) formed in mixtures of inorganic salts (ISs) and zwitterionic salts (ZWSs). Aqueous ternary phase diagrams characterized by a binodal curve were determined for systems consisting of four ISs, K3PO4, K2HPO4, K2HPO4/KH2PO4, and K2CO3, and three structurally similar ZWSs differing in hydrophobicity. Comparison of phase behaviour of ABSs composed of ZWSs, ionic liquids (ILs) and zwitterions was provided. Potential of ZWSs based systems for extraction of aromatic molecules and amino acids, such as glycine, L-tryptophan, DL-phenylalanine, eugenol, and phenol was examined. Feasibility and limitations of isolation of products after partition and recovery of ZWS were discussed.
CASSELBERRY -- A gun store and shooting range where a mother killed herself and her son three weeks ago suspended all firearms rentals after another customer died at the business Monday. A lawyer for Shoot Straight on U.S. Highway 17-92 said rental of firearms would not resume until the state lets gun ranges conduct background checks on customers. Police arrived at Shoot Straight shortly after 5:40 p.m. and found Jason Kevin McCarthy, 26, of Winter Springs with a massive wound to the head, caused by a single gunshot from a rented gun. McCarthy's death was "an apparent suicide," Casselberry police Lt. Dennis Stewart said. "The family indicated that Mr. McCarthy has a history of mental-behavior issues," Stewart said. "But this was not expected." On April 5, Marie Moore, 44, of Altamonte Springs used a rented gun to shoot her son Mitchell Moore, 20, and then turned the gun on herself. He died at the scene. She died at Florida Hospital Altamonte. Police said the woman had a history of mental problems. In notes and tapes made before the shooting, she described herself as the "Antichrist" and said God wanted her to kill her son and herself to save her family and the world from violence. She apologized in advance to police and store owners, saying the shooting needed to take place in a public location to get media attention. Afterward, the owners of Shoot Straight, who run three stores in Central Florida, tried to conduct the same background checks on gun renters that state law requires them to conduct on buyers, store attorney Joerg Jaeger said. But the Florida Department of Law Enforcement, which runs background checks, said it was not allowed and would not do it. Monday's shooting is another sign that stores should have the ability to conduct checks on gun renters, Jaeger said. "If it costs us money, it costs us money," Jaeger said of the store's decision. "The safety of individuals is more important." An FDLE spokesman said he could not provide comment Monday night because he needed to gather information on the situation. The FDLE can check criminal records, but the agency does not have access to medical or mental-health records. Shoot Straight already asks rental applicants whether they have been found mentally incompetent, are taking medication, have committed domestic violence or are a convicted felon, but there's no way to confirm whether the responses are correct, Jaeger said. People taking "mood-altering" drugs are not permitted to rent a gun, he said.
U.S. Sen. Harry Reid, D-Nev., the Senate majority leader, wants to keep nuclear waste in our neighborhoods, not his. In 2008, in testimony before the Senate Commerce Committee, Reid said, “It’s time to keep Americans safe by keeping nuclear waste where it is.” That is, at our Waterford nuclear plant and all the others. He stated that “Yucca Mountain’s price tag is (in 2008) $96 billion,” money spent on a hole that was supposed to be ready for nuclear waste in 1998. In 1987, a $4 billion study designated Yucca the best of 10 sites. He’s been sinking money into constructing that “Hole in his Mountain,” where 904 atomic bomb tests had been performed, instead of into his friend’s health care initiative. Reid has successfully thwarted the objectives of Congress’ waste disposal policy. His Nevada jobs program has employed his constituents to dig a hole that he will not permit the nation to fill. He believes we should protect a hundred sites instead of one in a dangerous world of terrorist threats. He should resign not for an ignorant remark, but for his monumental nonperformance: his failure, over 28 years in Congress, to safely implement the Nuclear Waste Policy Act of 1982.
The practice of intestinal stoma, transitory or permanent, has a series of implications of a physiological, pharmacological, psychological and community character that must be addressed in an integrated and individualised way for each patient. Frequently, the ostomised patient is subjected to pharmacological therapy. However, the foreseeable effect of the medicines administered can be affected by factors related to the stoma. Thus, extensive resections of the ileum have been described that affect the oral absorption of medicines, especially in enteric-coated, delayed-release and tablet formulations. This can allow the unabsorbed portion of the active principle to reach the collecting device in the faeces, with a possible alteration of the duration and intensity of the pharmacological effect. On the other hand, pharmacovigilance studies have revealed that numerous active principles produce changes in intestinal motility, either through their fundamental mechanism of action (laxatives, antidiarrhoeals, prokinetics) or as a collateral or secondary effect (antacids, antidepressants, antihistamines, opioid analgesics). The appearance of constipation and, especially, of diarrhoea can be disturbing and worrying for ostomised patients, and is particularly serious in ileostomised patients because of the dehydration to which it can give rise. Similarly, changes in the colour and odour of faeces secondary to the administration of medicines (ferrous salts, aluminium hydroxide, bismuth compounds) can needlessly alarm patients who detect them in the ostomy collecting device (pouch). All these factors can create difficulties for the patient's adherence to the prescribed treatment and, as a result, affect its success. However, they can be avoided, corrected or explained with good counselling by the health professionals involved in caring for the enterostomised patient.
We, the birds in the field - This Can't Be Happening! A bird flies up from the tall grass when I enter the field. Me, barefoot. Bird, flying up. I would not be able to find her nest. Any more than I need to worry about rhyming. As the tractor mows closer and closer. To save one. Go around. Where there are groups of wild daisies. Because I know that nature is sentient. And I know that everything wants to live.
An Interaction-Driven Many-Particle Quantum Heat Engine: Universal Behavior A quantum heat engine (QHE) based on the interaction driving of a many-particle working medium is introduced. The cycle alternates isochoric heating and cooling strokes with both interaction-driven processes that are simultaneously isochoric and isentropic. When the working substance is confined in a tight waveguide, the efficiency of the cycle becomes universal at low temperatures and governed by the ratio of velocities of a Luttinger liquid. We demonstrate the performance of the engine with an interacting Bose gas as a working medium and show that the average work per particle is maximum at criticality. We further discuss a work outcoupling mechanism based on the dependence of the interaction strength on the external spin degrees of freedom. Universality plays a crucial role in thermodynamics, as emphasized by the description of heat engines, which transform heat and other resources into work. In paradigmatic cycles (Carnot, Otto, etc.) the role of the working substance is secondary. The emphasis on universality thus seems to preclude alternative protocols that exploit the many-particle nature of the working substance. In the quantum domain, this state of affairs should be revisited, as suggested by recent works focused on many-particle quantum thermodynamics. The quantum statistics of the working substance can substantially affect the performance of a quantum engine. The need to consider a many-particle thermodynamic cycle arises naturally in the effort to scale up thermodynamic devices and has prompted the identification of optimal confining potentials as well as the design of superadiabatic protocols, first proposed in a single-particle setting. The divergence of energy fluctuations in a working substance near a second-order phase transition has also been proposed to engineer critical Otto engines. In addition, many-particle thermodynamics can lead to quantum supremacy, whereby quantum effects boost the efficiency of a cycle beyond the classically achievable bound. The realization of superadiabatic strokes with ultracold atoms using a Fermi gas as a working substance has been reported. Quantum technologies have also uncovered novel avenues to design thermodynamic cycles. Traditionally, interactions between particles are considered to be "fixed by Nature" in condensed matter. However, a variety of techniques allow interparticle interactions to be modified in different quantum platforms. A paradigmatic example is the use of Feshbach and confinement-induced resonances in ultracold atoms. Digital quantum simulation similarly allows interactions to be engineered in trapped ions and superconducting qubits. 
In this Letter, we introduce a novel thermodynamic cycle that exploits the many-particle nature of the working substance: it consists of four isochoric strokes, alternating heating and cooling with a modulation of the interparticle interactions. This cycle resembles the four-stroke quantum Otto engine, in that expansion and compression strokes are substituted by isochoric processes in which the interparticle interactions are modulated in time. Using Luttinger liquid theory, the efficiency of the cycle is shown to be universal in the low-temperature regime of a one-dimensional (1D) working substance. We further show that when an interacting Bose gas is used as such, the average work output is maximum at criticality. Interaction-driven thermodynamic cycle.-We consider a quantum heat engine (QHE) with a working substance consisting of a low-dimensional ultracold gas tightly confined in a waveguide. Ultracold gases have previously been considered as the basis of quantum cycles in which work is done via expansion and compression processes, both in the non-interacting and interacting regimes [7, …]. We propose the implementation of a quantum cycle consisting of four isochoric strokes, in which heating and cooling strokes are alternated with isentropic interaction-driven processes. In the latter, work is done onto and by the working substance by increasing and decreasing the interatomic interaction strength, respectively. This work can be transferred to other degrees of freedom, as we shall discuss below. The working substance consists of N particles with interparticle interactions parameterized by the interaction strength c. Both the particle number N and the system size L are preserved throughout the cycle, and any equilibrium point is parameterized by a pair (c, T) indexed by the temperature T and interaction strength c. Specifically, the interaction-driven quantum cycle, shown in Fig. 1 for the 1D Lieb-Liniger gas (Fig. 1 caption: Interaction-driven quantum cycle. The working substance is driven through four sequential isochoric strokes, alternating heating and cooling processes at different temperatures T with isentropes in which the interparticle interaction strength c is ramped up and down. The dashed lines correspond to the thermal entropy calculated from the thermodynamic Bethe ansatz equation of the Lieb-Liniger gas.), involves the following strokes: Interaction ramp-up isentrope (A → B): The working substance is initially in the thermal state A parameterized by (c_A, T_A) and decoupled from any heat reservoir. Under unitary evolution, the interaction strength is enhanced to the value c_B and the final state is non-thermal. Hot isochore (B → C): Keeping c_B constant, the working substance is put in contact with the hot reservoir at temperature T_C and reaches the equilibrium state (c_B, T_C). Interacting Bose gas as a working substance.-Consider as a working substance an ultracold interacting Bose gas tightly confined in a waveguide, as realized in the laboratory; see Fig. 1. The effective Hamiltonian for N particles is that of the Lieb-Liniger model, where x_{jℓ} = x_j − x_ℓ, with 2m = ħ = 1. The spectral properties of the Hamiltonian can be found using the coordinate Bethe ansatz. We consider a box-like trap, in which any energy eigenvalue can be written as E = Σ_i k_i² in terms of the ordered quasimomenta 0 < k_1 < k_2 < ⋯ < k_N. The latter are the (Bethe) roots {k_i} of coupled algebraic equations determined by the sequence of quantum numbers {I_i} with i = 1, 2, …, N. 
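For readers who want to see the structure of these coupled equations concretely, the short Python sketch below solves them for a handful of bosons. It is only an illustration under stated assumptions: the paper works in a box-like (hard-wall) trap, whereas the sketch uses the standard periodic-boundary form of the Lieb-Liniger Bethe equations, L k_i + 2 Σ_{j≠i} arctan[(k_i − k_j)/c] = 2π I_i, and the particle number, system size and interaction values are arbitrary choices rather than the parameters used in the paper.

import numpy as np
from scipy.optimize import fsolve

def bethe_residuals(k, I, c, L):
    # Log-form periodic Lieb-Liniger Bethe equations (hbar = 2m = 1):
    #   L*k_i + 2*sum_{j != i} arctan((k_i - k_j)/c) = 2*pi*I_i
    k = np.asarray(k)
    phases = 2.0 * np.arctan((k[:, None] - k[None, :]) / c)  # diagonal terms vanish
    return L * k + phases.sum(axis=1) - 2.0 * np.pi * np.asarray(I)

def lieb_liniger_energy(c, N=4, L=1.0, I=None):
    # Ground-state quantum numbers: I = (-(N-1)/2, ..., (N-1)/2).
    if I is None:
        I = np.arange(N) - (N - 1) / 2.0
    k0 = 2.0 * np.pi * np.asarray(I) / L               # Tonks (c -> infinity) initial guess
    k = fsolve(bethe_residuals, k0, args=(I, c, L))
    return float(np.sum(k**2)), k                      # E = sum_i k_i^2

for c in (2.0, 10.0, 100.0):
    E, _ = lieb_liniger_energy(c)
    print(f"c = {c:6.1f}  ->  ground-state energy E = {E:.4f}")

Excited states follow by choosing other ordered sets of quantum numbers I, which is the enumeration that the finite-N calculation described below relies on.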
As a function of c and T, the 1D Bose gas exhibits a rich phase diagram. We first consider the strong-coupling regime and use a Taylor series expansion in 1/c. For a given set of quantum numbers I_n = {I_i^(n)}, the corresponding energy eigenvalue takes the form ε_n(c) = λ_c ε_n^F, i.e., the corresponding free-fermion eigenvalue ε_n^F rescaled by an interaction-dependent factor λ_c. The spectrum of a strongly interacting Bose gas is thus characterized by eigenvalues with scale-invariant behavior, i.e., ε_n(c)/ε_n(c′) = λ_c/λ_{c′}. In this regime, the efficiency thus becomes independent of the temperatures of the heat reservoirs. Beyond the strongly-interacting and low-energy regimes, we resort to a numerically exact solution of Eqs. for finite particle number N. We enumerate all the possible sets I_n of quantum numbers for low-energy states and solve Eqs. numerically for a given interaction c. With the resulting quasimomenta {k_{n,1}, k_{n,2}, …} and the corresponding energy eigenvalues ε_n = Σ_i k_{n,i}², the probability that the Bose gas at temperature T is found with energy ε_n is set by the Boltzmann weights p_n = e^(−ε_n/T) / Σ_m e^(−ε_m/T) (with k_B = 1). The equilibrium energy of the states A and C is set by thermal averages of the form ⟨E⟩ = Σ_n p_n ε_n, which in turn yield the expressions for Q_2 and Q_4. Here, a proper cutoff of the possible sets {I_n} can be determined by requiring p_n/p_G ≪ 1, where p_G is the probability for the working substance to be in the ground state. The numerical results for the efficiency and output work W are shown in Fig. 2 as a function of the interaction strength. For fixed values of T_C and T_A, the maximum work is studied as a function of c_A while keeping c_B constant; see Fig. 2. The efficiency is well reproduced at strong coupling by Eq., which also captures a monotonic decay with increasing interaction strength. The efficiency is found to be essentially independent of the temperature in the strong-interaction regime, whereas the work output is governed by the temperature and interaction strength. In the thermodynamic limit (where N and L → ∞ with n = N/L kept constant), the equilibrium state of the 1D Bose gas is determined by the Yang-Yang thermodynamics. The pressure is then given in terms of the "dressed energy" ε(k), which is determined by the thermodynamic Bethe ansatz (TBA) equation. The particle density n and entropy density s can be derived from the thermodynamic relations, in terms of which the internal energy density reads E = −p + μn + Ts. Both interaction-driven strokes are considered to be adiabatic. As a result, the heat absorbed during the hot isochore stroke (B → C) follows from the corresponding change in the internal energy. Here, T_B and T_D can be determined from the entropy by setting s(c_A, T_A) = s(c_B, T_B) and s(c_B, T_C) = s(c_A, T_D), where T_A and T_C are the temperatures of the cold and hot reservoirs, respectively. The efficiency and work can then be obtained by numerically solving the TBA equation. Universal efficiency at low temperature.-The low-energy behaviour of 1D Bose gases is described by the Tomonaga-Luttinger liquid (TLL) theory, in which the free energy density takes the standard TLL form, where E_0 is the energy density of the ground state and v_s is the sound velocity, which depends on the particle density n and the interaction strength c. The entropy density can be obtained as the derivative of the free energy, s = −∂F/∂T = T/(3v_s). The expressions for the heat absorbed and released then follow, where s_i and v_s^(i) denote the entropy density and the sound velocity of the state i, respectively. 
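The finite-N bookkeeping just described (thermal Boltzmann weights at the end points, populations frozen during the interaction strokes) fits in a few lines of Python. This is a hedged sketch rather than the authors' code: eps_A and eps_B stand for low-lying eigenvalues at the couplings c_A and c_B, obtained however one likes (for instance from a Bethe ansatz solver), and the toy spectrum at the end simply uses the scale-invariant strong-coupling form ε_n(c) ≈ λ(c) ε_n^F with placeholder values for the scaling factor.

import numpy as np

def boltzmann(eps, T):
    # Gibbs weights p_n = exp(-eps_n/T)/Z with k_B = 1; shift by the minimum
    # energy for numerical stability.
    w = np.exp(-(eps - eps.min()) / T)
    return w / w.sum()

def interaction_cycle(eps_A, eps_B, T_A, T_C):
    # Four strokes A(c_A,T_A) -> B -> C(c_B,T_C) -> D -> A, with the
    # occupation probabilities frozen on the two interaction-driven strokes.
    p_A = boltzmann(eps_A, T_A)       # thermal state at the cold end
    p_C = boltzmann(eps_B, T_C)       # thermal state at the hot end
    E_A, E_B = p_A @ eps_A, p_A @ eps_B
    E_C, E_D = p_C @ eps_B, p_C @ eps_A
    Q_in = E_C - E_B                  # heat absorbed on the hot isochore (Q_2)
    Q_out = E_A - E_D                 # heat released on the cold isochore (Q_4 < 0)
    W = Q_in + Q_out                  # net work output
    return Q_in, Q_out, W, W / Q_in   # last entry is the efficiency

# Toy spectrum: free-fermion-like levels rescaled by placeholder factors
# lambda(c_A) = 0.6 and lambda(c_B) = 0.9 (illustrative numbers only).
eps_F = np.array([m**2 for m in range(1, 40)], dtype=float)
print(interaction_cycle(0.6 * eps_F, 0.9 * eps_F, T_A=1.0, T_C=5.0))

For scale-invariant spectra like the toy example, the printed efficiency equals 1 − λ(c_A)/λ(c_B), independent of the reservoir temperatures, which matches the strong-coupling statement above.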
Using the fact that the strokes A → B and C → D are isentropes, the temperatures T_B and T_D are fixed, and as a result the efficiency and work output follow. As the TLL theory describes the universal low-energy behavior of 1D many-body systems, these expressions provide a universal description of the efficiency and work of quantum heat engines with a 1D interacting working substance at low temperatures, applicable to any cycle whose work strokes and heat-exchanging strokes are separated. In particular, in the strongly interacting regime, the sound velocity of 1D Bose gases is given by v_s ≃ 2n(1 − 4n/c + 12n²/c²). In this regime, the result thus reduces to Eq.. On the other hand, in the weak-interaction regime, the sound velocity is given by v_s ≃ 2n[(c/n) − (1/(2π))(c/n)^(3/2)]^(1/2), and the resulting efficiency indicates an enhancement of the performance with respect to the strongly interacting case. Quantum critical region.-We next focus on the performance of an interaction-driven QHE at quantum criticality. The 1D Bose gas displays a rich critical behavior in the plane of temperature T and chemical potential μ; see Fig. 3. In the region of μ ≪ 0 and T ≫ n², i.e., when the mean distance between atoms is much larger than the thermal wavelength, the system behaves as a classical gas (CG). A quantum critical region (QC) emerges between two critical temperatures (see the white dashed lines in Fig. 3), in which quantum and thermal fluctuations have the same power-law dependence on temperature. In the region with μ > 0 and temperatures below the right critical temperature, the quantum and thermal fluctuations reach an equal footing: this is the TLL region discussed above. For fixed cycle parameters (c_A, c_B, T_A, and T_C), we study the performance of the engine across the QC region by changing n. We numerically calculate the efficiency and the average work W/N by using the TBA equation, and show that near the quantum critical region W/N has a maximum value; see Fig. 3. We set c_A = 1, c_B = 3, T_A = 1, and T_C = 5 for the heat engine and let the density n increase from 0.1 to 23. The red, green, and blue dashed lines in Fig. 3 correspond to the densities n ≃ 0.2, 1.4, and 6.2, respectively. In order to understand the maximum of W/N, we also plot these three engine cycles (A → B → C → D → A) on the phase diagram of the specific heat in the T−μ plane in the right panel of Fig. 3. When the engine operates near the boundary between the QC and TLL regions, the average work is maximal. Discussion.-The modulation of the interaction strength c is associated with the performance of work, which has recently been investigated for different working substances. An experimentally realizable work outcoupling mechanism can be engineered whenever the value of the coupling strength depends specifically on the configuration of other degrees of freedom. This is analogous to the standard Carnot or Otto cycles where the working substance is confined in a box-like, harmonic, or more general potential. The latter is endowed with a dynamical degree of freedom that is assumed to be slow (massive), so that in the spirit of the Born-Oppenheimer approximation it can be replaced by a parameter. Similarly, the modulation of the coupling strength in an interaction-driven cycle can be associated with the coupling to external degrees of freedom. As an instance, consider the choice of an interacting SU(2) 1D spinor Fermi gas in a tight waveguide as a working substance. The Hamiltonian can be mapped to the Lieb-Liniger model with an operator-valued coupling strength that depends on the spinor degrees of freedom through the spin operators (σ_j · σ_ℓ). 
The latter can be reduced to a real number c = c( j ℓ ) making use of a variational method, an approximation corroborated by the exact solution in a broad range of parameters. Thus, the dependence of the interaction strength on the spin degrees of freedom provides a possible work outcoupling mechanism. An alternative relies on the use of confinementinduced resonances. The scattering properties of a tightly confined quasi-1D working substance in a waveguide can be tuned by changing the transverse harmonic confinement of the waveguide. The frequency ⊥ of the latter directly determines the interaction strength c, i.e., c = c( ⊥ ). In this case, the role of ⊥ parallels that of the box-size or harmonic frequency in the conventional Carnot and Otto cycles with a confined working substance. Finally, we note that an interaction-driven cycle can also be used to describe QHEs in which the interaction-driven strokes are substituted by processes involving the transmutation of the particle quantum exchange statistics, e.g., a change of the statistical parameter of the working substance. The Hamiltonian can be used to describe 1D anyons with pair-wise contact interactions with coupling strengthc and statistical parameter characterizing the exchange statistics, smoothly interpolating between bosons and fermions. The spectral properties of this Lieb-Liniger anyons can be mapped to a bosonic Lieb-Liniger model with coupling strength c =c/ cos( /2). The modulation of c can be achieved by the control of the particle statistics, tuning as proposed in. Conclusions.-We have proposed an experimental realization of an interaction-driven quantum heat engine that has no single-particle counterpart: It is based on a novel quantum cycle that alternates heating and cooling strokes with processes that are both isochoric and isentropic and in which work is done onto or by the working substance by changing in the in-teratomic interaction strength. This cycle can be realized with a Bose gas in a tight-waveguide as a working substance. Using Luttinger liquid theory, the engine efficiency has been shown to be universal in the low temperature limit, and set by the ratio of the sound velocities in the interaction-driven strokes. The optimal work can be achieved by changing the ratio of the sound velocity, e.g., by tuning the interaction strength. An analysis of the engine performance across the phase diagram of the Bose gas indicates that quantum criticality maximizes the efficiency of the cycle. Our proposal can be extended to Carnot-like interaction driven cycles in which work and heat are simultaneously exchanged in each stroke. Exploiting effects beyond adiabatic limit may lead to a quantum-enhanced performance. The use of non-thermal reservoirs and quantum measurements constitutes another interesting prospect. Our results identify confined Bose gases as an ideal platform for the engineering of scalable many-particle quantum thermodynamic devices. Consider a system with N identical bosons with a contact interaction confined in a one-dimensional (1D) hard-wall potential. This system is described by the Hamiltonian where g 1D =h 2 c/m is the 1D coupling constant and c parametrizes the interaction strength. Hereafter, we seth = 2m = 1. This model can be exactly solved by the Bethe ansatz, and the wavefunction is given by where the sum is taken over all N! permutations P of N integers such that P : (1, 2,, N) → (P 1, P 2, P N ), and i takes ±1. 
Here, k P i is the the wave number which satisfies the Bethe equations and the energy of this state is given by Taking the logarithm of the Bethe equations, one obtains where {I i } are integer quantum numbers, which are arranged in ascending order: 1 ≤ I 1 ≤ ≤ I N. Thus, the wave numbers {k i } are also in ascending order, i.e., 0 < k 1 < < k N. Scaling invariant behavior We next focus on the low energy excitations which dominate the thermodynamics in the strongly-interacting Bose gas at low temperature. A Taylor expansion of the rhs of Eq. for c ≫ k N yields the following asymptotic solution Note that, in this regime, the dependence of the quasimomentum k i on the other quasimomenta k j with j = i is of higher order in k N /c. In this sense, the algebraic Bethe ansatz equations decouple in this limit. The energy eigenvalue n (c) for a given set of quantum numbers I n ≡ {I Thus, up to corrections of order 1/c 2, the energy eigenvalue can be written in terms of that of free fermions rescaled by an overall factor resulting from the interactions. We refer to this factor as the generalized exclusion statistics parameter, in view of, and define it as Using it, the energy eigenvalues of a strongly-interacting 1D Bose gases simply read and exhibit the scale-invariant behavior ENERGIES OF EQUILIBRIUM AND NON-EQUILIBRIUM STATES The equilibrium state of the system at temperature T can be characterized by the partition function where the summation is taken over all the quantum states. The average energy of the system is given by with p n (c, T ) being the probability measure of the n-th eigenstate in the Gibbs ensemble given by During the interaction-driven strokes, the system generally deviates from the initial equilibrium state as a result of the modulation of the interaction strength c. However, since the energy gap for low-energy excitations is nonzero in the system with finite N bosons, a sufficiently slow ramping process becomes adiabatic and keeps the probability distribution p n (c, T ) unchanged. Therefore, for the non-equilibrium state resulting from an adiabatic ramping process, the average energy of the system at the interaction parameter c is given by Due to the scale-invariant behavior at strong coupling, the energy of the non-equilibrium state at the end of the interaction-ramp isentropic process is found to be Furthermore, the scale-invariant behavior Eq. indicates that we can relate a non-equilibrium state in adiabatic interactiondriven processes to an equilibrium state with an effective temperature T by noting that where the effective temperature reads QUANTUM HEAT ENGINE For the interaction-driven quantum heat engine (see Fig. 1 in the main text), the heat absorbed from the hot reservoir is given by while the heat released from the engine to the cold reservoir reads Therefore, the efficiency and the work W done by the engine are given by The efficiency and work W can also be obtained numerically by solving Eq.. In our calculations, we take a proper cutoff for excited states whose probability p n given by Eq. is much smaller than that of the ground state. THERMODYNAMIC LIMIT In the laboratory, a Bose gas confined in an effectively 1D trap typically consists of thousands of particles. It is generally considered that such system is well described by the thermodynamic limit, i.e., N and L → ∞ with n = N/L being kept constant. The equilibrium states can then be described by the Yang-Yang thermodynamic equation. 
The pressure p, particle density n, entropy density s and internal energy density E can then be found as described in the text. Note that we take the thermodynamic limit after the slow ramping limit to guarantee that no diabatic transitions occur during the ramping processes. Thus, the probability p n (c, T ) is unchanged during the ramping processes. In these processes, the entropy should also be constant because no heat is transferred in or out of the working substance. Given the expressions for the heat absorbed during the hot isochore stroke (B to C) and the heat released during the cold isochore stroke (D to A) the efficiency and work output W are given by Here, T B and T D can be determined using the fact that the interaction-driven strokes are isentropic, namely where T A and T C denote the temperature of the cold and hot reservoir, respectively. The efficiency can thus be obtained numerically by solving Eq.. Universality at low energies The low-energy physics of 1D Bose gases can be well captured by the Luttinger liquid theory. The free energy density F is given by F where E 0 is the energy density of the ground state, v s is the sound velocity which depends on the particle density n and the interaction strength c. The entropy density s is given by the derivative of the free energy, namely, The heat absorbed from the hot resevoir in the hot isochore stroke (B to C) is given by and the heat released to the cold reservoir in the cold isochore stroke (D to A) is Here, namely, To simplify, we introduce the following two dimensionless parameters: Since T A < T B < T C, we get The work can be extracted from the heat engine is given by An interesting question is what is the optimal work extracted from this heat engine when the temperatures of the two reservoirs, T A and T C, are fixed. It is easy to show that the work output W always has a maximum value (see the right panel of Fig. 4) at = c with 52) and, respectively, where for a given c A is obtained by numerically calculating the sound velocities v A s and v B s at zero temperature. The black dashed line corresponds to c ≈ 0.69 for = 0.5 given by Eq.. In our numerical calculation, we set n = 1, c B = 2, and L = 1. where a = 27 2 1 + ( 2 /27) − 1 ≈ 4 /2. Thus for small ≪ 1, we get The efficiency of the heat engine is Especially, for the strong interaction case, the sound velocity is v s ≈ 2n 1 − 4(n/c) + 12(n/c) 2, and thus the efficiency is given by which agrees with Eq. in the thermodynamic limit. For the weak interaction case, the sound velocity is v s ≈ 2n (c/n) − (2) −1 (c/n) 3/2 1/2, and the efficiency is given by The work output and the efficiency given by Eqs. and are universal at low energies for 1D system. Here we shall take 1D Bose gases as a platform to test these universal properties. For the interaction-driven engine with fixed particle number N and the length L studied in the present work, the sound velocity is changed during the two isoentropic strokes by changing the interaction strength. We numerically calculate the work W and efficiency for = 0.5 and n = N/L = 1. The interaction strength c B is fixed at c B = 2, which corresponds to the sound velocity v B s ≈ 2.50. From Eq., the optimal work is obtained at c ≈ 0.69 for = 0.5. The numerical results are shown by the blue lines in Fig. 4. In the low temperature region, the numerical results are well explained by the results of the Luttinger liquid theory given by Eqs. and, which are shown by the red solid lines. 
In the high temperature region, these two results deviate, which indicates the breakdown of the Luttinger liquid theory. FIG. 5. Phase diagram of the 1D Bose gas. The color contour shows the specific heat. With increasing the chemical potential, the system undergoes a crossover from the classical gas (CG) to the Tomonaga-Luttinger liquid (TLL) across the quantum critical (QC) region. Quantum criticality The 1D Bose gases show rich critical properties in the plane of the temperature T and the chemical potential ; see Fig. 5. In the so-called the Tomonaga-Luttinger liquid (TLL) region, where > 0 and the temperature is below the right critical temperature, the magnitude of quantum and thermal fluctuations are comparable. In this region, the low-energy properties can be well captured by the Luttinger liquid theory and the excitation spectrum is given by E = v s |k|, where v s and k are the sound velocity and the wave vector, respectively. In the region of /T ≪ 0, i.e., the mean distance between atoms is much larger than thermal wave length, the system behaves as a classical gas (CG). A quantum critical (QC) region emerges between two critical temperatures (see the white dashed lines in Fig. 5) fanning out from the critical point c = 0. In this region, quantum fluctuation and thermal fluctuation have the same power of the temperature dependence. The density n is monotonically increasing with the chemical potential for a given temperature T and the interaction strength c. Therefore, one can go across the QC region from the CG to TLL regions only by increasing the density n of the working substance. Our numerical calculation shows W /N takes a maximum value in the crossover region between QC and TLL (see Fig. 3 in the main text). 1D SPINOR FERMI GAS AS WORKING SUBSTANCE Denoting byP s i j = 1 4 − i j andP t i j = 3 4 + i j are the projectors onto the subspaces of singlet and triplet functions of the spin arguments ( i, j ) for fixed values of all other arguments, the Hamiltonian of the system is given by Here, v o is a strong, attractive, zero-range, and odd-wave interaction that is the 1D analog of 3D p-wave interaction. Similarly, g e denotes the even-wave 1D coupling constant arising from 3D s-wave scattering. The Hamiltonian of the spinor Fermi gas can be mapped to the Lieb-Liniger Hamiltonian by promoting the coupling strength to an operator dependent on the spin degrees of freedom where the coupling constants {c o, c e } are set by g e and v o. The operator-valued coupling strength can however be reduced to a real number c = (3c o + c e )/4 + (c o − c e ) i j making use of a variational method that combines the exact solution of the LL model with that of the 1D Heisenberg models. As it turns out, this approximation is corroborated by the exact solution of in a broad range of parameters. Thus, the dependence of the interaction strength on the spin degrees of freedom provides a possible work outcoupling mechanism.
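As a closing illustration of the low-temperature universality discussed above, the sketch below evaluates the Luttinger-liquid estimate of the cycle. It assumes an entropy density proportional to T/v_s, in which case the isentropes fix T_B = T_A v_B/v_A and T_D = T_C v_A/v_B and the efficiency reduces to η = 1 − v_A/v_B; this is a reconstruction consistent with the ratio-of-sound-velocities universality stated in the text, not a quotation of the paper's equations. The strong-coupling velocity is the expansion quoted above, v_s ≃ 2n(1 − 4n/c + 12n²/c²), and the overall constant multiplying the work is left as a free prefactor.

def v_s_strong(n, c):
    # Strong-coupling sound velocity of the 1D Bose gas (hbar = 2m = 1).
    return 2.0 * n * (1.0 - 4.0 * n / c + 12.0 * (n / c) ** 2)

def tll_efficiency(n, c_A, c_B):
    # Low-temperature efficiency eta = 1 - v_A/v_B (assumes c_B > c_A,
    # so that v_B > v_A and the efficiency is positive).
    return 1.0 - v_s_strong(n, c_A) / v_s_strong(n, c_B)

def tll_work(n, c_A, c_B, T_A, T_C, prefactor=1.0):
    # Work per unit length: W = eta * Q_in with Q_in proportional to
    # (T_C**2 - T_B**2)/v_B and T_B = T_A*v_B/v_A; 'prefactor' absorbs the
    # overall constant of the TLL thermal energy density (assumption).
    v_A, v_B = v_s_strong(n, c_A), v_s_strong(n, c_B)
    T_B = T_A * v_B / v_A
    Q_in = prefactor * (T_C**2 - T_B**2) / v_B
    return (1.0 - v_A / v_B) * Q_in

print("efficiency:", tll_efficiency(n=1.0, c_A=10.0, c_B=20.0))
print("work (arb. units):", tll_work(n=1.0, c_A=10.0, c_B=20.0, T_A=1.0, T_C=5.0))

Within this approximation the performance depends on the interaction strengths only through the two sound velocities, which is the sense in which the low-temperature behavior is universal.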
A disproportionation reaction and/or a trans-alkylation reaction of aromatic hydrocarbons in production of benzene and xylene by disproportionation of toluene, production of xylene by trans-alkylation of toluene and trimethylbenzene or the like is an industrially important reaction, and a large number of catalyst systems have been so far proposed. In recent years, crystalline aluminosilicate zeolites such as faujasite and mordenite have been found to be effective catalysts. Especially, mordenite has a high disproportionation activity or trans-alkylation activity of aromatic hydrocarbons. However, U.S. Pat. No. 3,729,521 discloses that mordenite alone is not satisfactory with respect to an activity and a catalytic life and a combination of mordenite with metals belonging to the VIB Group, such as chromium, molybdenum and tungsten or metals belonging to the VIII Group, such as iron, cobalt, nickel and platinum is used to improve the activity and the catalytic life. Further, Japanese Patent Publication No. 45,849/1987 discloses a catalyst composed substantially of a mordenite component and a rhenium component. Nevertheless, this catalyst does not exhibit a satisfactory catalytic activity in a disproportionation reaction and/or a trans-alkylation reaction by which to produce xylene from feedstock containing aromatic hydrocarbons. Further, there is a process for industrially producing xylene from C.sub.9 aromatic hydrocarbons as feedstock with the aid of an amorphous silica-alumina catalyst (PETROTECH, 2(12) 1160, 1970). This process is problematic in that the catalyst has to be continuously regenerated using a moving bed because the yield and the activity are notably decreased over the course of time. A trans-alkylation reaction of feedstock containing C.sub.9 alkyl aromatic hydrocarbons with the aid of a zeolite catalyst has been reported [I. Wang, T. -C. et al., Ind. Chem. Res. 29 (1990) 2005]. However, the yield of xylene formed is not necessarily high. There has been so far no efficient process for producing xylene from C.sub.9 aromatic hydrocarbons.
Young people like Katy Perry. Voting? Not so much. (Wade Payne/Invision via Associated Press) We like to think everyone in the United States reads The Fix at least once daily. Unfortunately, the data tell a different story. In fact, as we discussed this morning, a striking percentage of the American public can't even name their most basic constitutional rights. But there's a difference between a lack of a civic understanding and complete apathy. And this latter group is a particularly interesting (non)political animal. So just who are these monsters? A new political typology study from the Pew Research Center refers to them as the "bystanders." These are the 10 percent of Americans who aren't registered to vote and don't really follow political news. Almost all of them (96 percent) have never made a political contribution in their lives. Put plainly: They really don't give a rip. And their apathy is hurting the Democratic Party. So why do these people matter? Because politics is as much about who doesn't participate as who does. American politics is dominated by the wealthy, the old and the educated — because they're the ones playing the game. The "bystanders," as you might imagine, are not wealthy, old or educated. They're also disproportionately Hispanic. Hispanics' share of the "bystanders" (32 percent) is about 2½ times as large as their share of the entire population (13 percent), and young people's share of the most apathetic group (38 percent) is nearly twice their share of the populace (22 percent). These "bystanders," as a whole, also tend to favor the Democratic Party and a liberal ideology — to the extent that they even care, of course. Pew Research Center So it's pretty clear which side this apathy hurts the most. And as much as this apathy is representative of the larger Hispanic and youth vote in 2014, it shows the difficult task Democrats have in turning out what should be their base. Unfortunately for Democrats, these folks are too busy playing "Call of Duty" and reading about Miley Cyrus to care. From Pew's report: Asked about their interest in a number of topics, 73% of Bystanders say they have no interest in government and politics, and two-thirds (66%) say they are not interested in business and finance. So what topics do interest them? Health, science and celebrities: 64% of Bystanders are interested in celebrities and entertainment (vs. 46% of the public). And, in a sign of their youth, they are drawn to video games: 35% call themselves a “video or computer gamer” (vs. 21% of the public). If Democrats could get these groups to care a little less about Tom Cruise and a little more about Ted Cruz, maybe they'd have a better voting coalition for the midterms.
The dissemination of the paparazzi-like coverage of Mrs. Obama's wardrobe have led to serious payoff, beyond the obvious rise in name recognition and general cachet. According to a study by N.Y.U finance and business professor David Yermack in the Harvard Business Review, when the First Lady wears a look, the company's stock price increases. We're all familiar with Jason Wu's rise from indie darling to a household name when Mrs. Obama wore his one-shouldered, white chiffon gown to the Inaugural Ball. And, we're well-versed in the First Lady's penchant for bold pieces designed by other high-end up-comers, such as Peter Som, Naheem Kahn, and Thakoon. These aren't public companies, but Saks, who sells such labels, is, and its stock price trended seriously upwards, well above the S&P 500. Yermack's article comes complete with graphs illustrating "The First Lady Index." In pure dollar value, he estimates that a single public appearance by Mrs. Obama generates an average of 14 million dollars. Interestingly, it's not an instantaneous spike either - the woman's got longer term influence; her choice of frocks affects stock prices three-weeks out, and help companies realize long-term gains. This is at least partially due to the proliferation of articles and sites devoted to her style, and the ease of which e-commerce allows the public to purchase. It's not just under-the-radar brands that are benefitting. Mainstream companies like J. Crew and Talbots—two entities, we might add, enjoying a major retail renaissance of late—are among the labels that profited as a result of being worn by the First Lady. As for what drives her style power, it's not the President. Even when his approval ratings are down, Mrs. Obama's style score soars.
Dissociation of plasma and spinal fluid ACTH in Nelson syndrome. Nine years after a bilateral adrenalectomy for Cushing syndrome and three years after a craniotomy for a chromophobe adenoma, a 51-year-old white woman with marked hyperpigmentation was reexamined because of recurrent visual field losses. Serum and cerebrospinal fluid adrenocorticotropic hormone (ACTH) levels were elevated. Following intravenous administration of hydrocortisone, the serum ACTH levels were only partially suppressed. Unexpectedly, the cerebral spinal fluid ACTH level increased and remained elevated. ( JAMA 228:491-492, 1974)
Today, I want to celebrate a milestone in my journey. It has been more than 12 years of hard work and determination, but I can finally say, that we have overcome most of our major challenges. If you have been following my blog, you know a bit about my journey as an entrepreneur. After college, I went looking for a job and when no one would hire me, I decided to start my own business. And I failed at it. Then, in 2011, I got some depressing news. My son was diagnosed with autism. His speech was not developing and he had no social skills. I found myself being a single mom, raising a son with autism and a wannabe entrepreneur. For the first time in my life, I started to think that I would have to give up on my dreams of owning a successful business. I needed something more secure for me and my son. Not only did my dreams of owning a successful business seem to be over, but having a healthy family life seem to an impossible feat as well. My son could not verbalize his needs and his frustrations manifested in the wildest tantrums. I could not take him anywhere. It was depressing, I felt isolated and cut off from society. People had mean things to say and the harsh criticisms did not make the burden any less. I felt like I was drowning in a sea of impossible. It looked hopeless. But one day, I asked God to show me where to apply my faith. I needed a solution to my problem and I knew that I would have to take action. I was a college graduate, and a single mom raising a son with autism. Getting a job would have given us some degree of financial “security”, but where would I leave my son if I should go to work? WHO would look after him? Would they be able to cope? Then one afternoon, I was noticing my son’s speech pattern as he was trying to tell me something. He would open and close his mouth, while making sounds, but he was not moving his lips to form his words. That’s when the idea came to me, to use the camera on my cellphone to record my mouth while saying his name and other words he would use at his age. I would let him sit and watch the video for hours. He would not only mimic the sounds, but he would mimic the movement of my lips as well. It took some time, but he began to say the words very clearly. We were making progress. It was good, I felt great… my son felt even better. We could start showing our face again. This was awesome. But as time progressed, I noticed that my son was outgrowing the little videos I made on the cellphone. He needed more, he needed to understand the variation in sentence structure. This was when the idea came to me to develop computer games that would ask him specific questions and he would be able to select his response. I thought it was genius. I had one small problem though… I didn’t know anything about building computer games. In fact, I have a degree in Math and Physics and I have never taken any formal computer lessons in my life. I decided to make good on my bright idea. I headed over to YouTube for some lessons in game development and 4 weeks later, I created my first game using Adobe Flash Professional. Not only did the games work, but I was also able to create several businesses selling educational games and mobile apps. I also won several awards for my tech start-ups. You can learn more from my about page.
On the Capacity of Radio Communication Systems with Diversity in a Rayleigh Fading Environment In this paper, we study the fundamental limits on the data rate of multiple antenna systems in a Rayleigh fading environment. With M transmit and M receive antennas, up to M independent channels can be established in the same bandwidth. We study the distribution of the maximum data rate at a given error rate in the channels between up to M transmit antennas and M receive antennas and determine the outage probability for systems that use various signal processing techniques. We analyze the performance of the optimum linear and nonlinear receiver processor and the optimum linear transmitter/receiver processor pair, and the capacity of these channels. Results show that with optimum linear processing at the receiver, up to M/2 channels can be established with approximately the same maximum data rate as a single channel. With either nonlinear processing at the receiver or optimum linear transmitter/receiver processing, up to M channels can be established with approximately the same maximum data rate as a single channel. Results show the potential for large capacity in systems with limited bandwidth.
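As a rough numerical companion to these statements (not taken from the paper, which analyzes specific linear and nonlinear processors), the sketch below estimates the distribution of the log-det capacity of an i.i.d. M x M Rayleigh channel under equal power allocation and reads off an outage probability at an arbitrary rate threshold.

```python
import numpy as np

# Monte Carlo sketch: empirical distribution of the Shannon capacity of an
# M x M i.i.d. Rayleigh channel, assuming equal power per transmit antenna
# and total SNR `snr` (linear scale). Parameters are illustrative only.
rng = np.random.default_rng(0)
M, snr, trials = 4, 10.0, 20000

caps = np.empty(trials)
for t in range(trials):
    H = (rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))) / np.sqrt(2)
    # capacity = log2 det(I + (snr/M) * H H^H) in bits/s/Hz
    caps[t] = np.log2(np.linalg.det(np.eye(M) + (snr / M) * H @ H.conj().T).real)

outage_rate = 2.0  # bits/s/Hz, an arbitrary threshold for illustration
print("mean capacity:", caps.mean(), "bits/s/Hz")
print("outage prob at", outage_rate, "bits/s/Hz:", (caps < outage_rate).mean())
```

The histogram of `caps` is the quantity whose tail defines the outage probability discussed in the abstract; the specific receiver and transmitter processors studied in the paper would each induce their own such distribution.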
Hepatic granulomas--an experience over the last 8 years. Twenty cases of hepatic granulomas seen in the Department of Medicine over the period of 8 years from 1981-1988 were reviewed. Prolonged fever and jaundice were the commoner presentations. While the aetiology was varied, patients with tuberculosis and idiopathic causes formed the major groups. There were also 2 rare causes, one due to cytomegalovirus infection and the other a result of allopurinol hypersensitivity. The idiopathic group of cases fared well but those with tuberculosis did badly and 2 out of 6 died. The absence of pulmonary involvement and the high incidence of jaundice and liver dysfunction in the patients with tuberculosis were the other striking features.
File - A child soldier (C), known as "Kadogo," meaning "small one" in Swahili, stands at the front line at Kanyabayonga in eastern Congo. The Democratic Republic of Congo’s government has launched a training program for former child combatants and victims of sexual violence. Nearly 80 people crammed into a small village hall Friday in Nyiragongo territory in eastern Congo to witness the signing of an agreement to help young victims, and perpetrators, of recent violence rebuild their lives. The two most important signatures on the document were those of the DRC president’s personal representative in charge of the fight against sexual violence and the recruitment of children, Jeanine Mabunda, and that of the administrator of Congo’s National Institute for Professional Training, Maurice Tshikuyi. Speaking through a translator, Tshikuyi said he and his Institute for Professional Training colleagues came to the meeting from Kinshasa because they wanted the agreement to be witnessed by local people. “Do you want me to sign this document?” he asked, to a chorus of approval. Under the agreement, the government will pay the Institute for Professional Training $25,000 to give two months of skills training to 75 people, mostly young former combatants or victims of sexual violence. Officially, Mabunda’s office has a mandate to work for the reintegration of former child soldiers into civilian life. In a sense that mission is largely accomplished, as the Congolese army demobilized its own under-age fighters years ago. Mabunda told reporters that her office and the U.N. Children's Fund found there was not a single child soldier at any DRC army camp that they visited. There are still many under age combatants and former combatants from eastern Congo’s armed groups, however, and sexual violence is still a major problem, although its scale should not be exaggerated, said Mabunda. She cited statistics indicating sexual violence cases reported in eastern DRC fell by 33 percent between 2013 and last year, from about 15,000 to about 10,000. The U.N. reports the Congolese security forces’ involvement in sexual violence has fallen sharply. The 75 youngsters to be enrolled for training are likely just a first wave, said Mabunda. This training for Nyiragongo is a pilot program, she said, and it should be repeated in other territories of North Kivu and in other provinces. Thomas Ngumijimana, one of the ex-combatants who wants to enroll on an INPP course, said most of them want training in agriculture, livestock rearing or carpentry, because that is what most people do around here. Ngumijimana said that after some training he could make $150 with agriculture over three months. The young women said they hoped to enroll on courses in dressmaking, pastry making and soap making. The Institute for Professional Training already has partnerships with the French Development Agency and other donors and is a trusted provider of training. The government’s pledge to spend $25,000 on these training courses comes two days after the head of the United Nations Mission in Congo, Martin Kobler, raised questions about the government’s commitment to reintegrating former combatants. Kobler said the official program for reintegrating former combatants has not yet started, although MONUSCO has allocated $6 million to a World Bank trust fund for the process. Donors have been reluctant to contribute to the fund because the government’s contribution is not clear, he said. 
But he added the new DRC defense minister is very active in trying to launch the program.
Spectrum-Map-Empowered Opportunistic Routing for Cognitive Radio Ad Hoc Networks Cognitive radio (CR) has emerged as a key technology for enhancing spectrum efficiency by creating opportunistic transmission links. Supporting the routing function on top of opportunistic links is a must for transporting packets in a CR ad hoc network (CRAHN) consisting of cooperative relay multi-radio systems. However, there lacks a thorough understanding of these highly dynamic opportunistic links and a reliable end-to-end transportation mechanism over the network. Aspiring to meet this need, with innovative establishment of the spectrum map from local sensing information, we first provide a mathematical analysis to deal with transmission delay over such opportunistic links. Benefitting from the theoretical derivations, we then propose spectrum-map-empowered opportunistic routing protocols for regular and large-scale CRAHNs with wireless fading channels, employing a cooperative networking scheme to enable multipath transmissions. Simulations confirm that our solutions enjoy significant reduction of end-to-end delay and achieve dependable communications for CRAHNs, without commonly needed feedback information from nodes in a CRAHN to significantly save the communication overhead at the same time.
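The paper's delay analysis is not reproduced here, but the basic intuition that end-to-end delay over opportunistic links is driven by per-link availability can be illustrated with a toy Monte Carlo; the per-link availability probabilities below are invented placeholders for what a spectrum map would supply.

```python
import numpy as np

# Toy model (not the paper's): end-to-end delay over a chain of `hops`
# opportunistic links. Each slot, link i is usable with probability p[i];
# a packet waits at a relay until its outgoing link becomes available.
rng = np.random.default_rng(1)
p = [0.6, 0.3, 0.5]          # hypothetical per-link availability probabilities
trials = 10000

def one_trial():
    delay = 0
    for pi in p:
        # geometric waiting time: slots until the link is next available
        delay += rng.geometric(pi)
    return delay

delays = np.array([one_trial() for _ in range(trials)])
print("mean end-to-end delay (slots):", delays.mean())       # ~ sum of 1/p_i
print("analytic sum of 1/p_i        :", sum(1 / pi for pi in p))
```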
Modern metal pipes found buried in ancient China are neither pipes nor evidence of ancient alien visitation. by Brian Dunning Filed under Aliens & UFOs, Ancient Mysteries Skeptoid Podcast #181 November 24, 2009 Podcast transcript | Download | Subscribe Listen: http://skeptoid.com/audio/skeptoid-4181.mp3 The southern shore of Lake Toson (Photo credit: Google Earth) Should you happen to visit Tibet anytime soon, be sure to stop by the city of Delingha. It's a town of most extraordinary beauty, nestled on the edge of the Qaidam Basin below a range of Himalayan hills. There you'll find the local residents proudly displaying their most famous distinction. For a few yuan you can probably get someone to take you to see it. Only a short journey outside of town is said to be a cave, and in this cave are a series of ancient metal pipes. These pipes predate all known history, and are embedded into the rock itself. They are said to lead through the very mountain, and connect to a nearby salt lake. The explanation? Ruins of a construction project 150,000 years ago, by alien visitors. The Baigong Pipes are an example of what paranormal enthusiasts refer to as "out of place artifacts", modern objects discovered in ancient surroundings. The Baigong Pipes are described as a sophisticated system of metal pipes, buried in geology in such a way that precludes the possibility of having been installed in modern times. They are located on Mt. Baigong in the Qinghai province of China, about 40 kilometers southwest of Delingha. Most accounts describe a pyramid-shaped outcropping on the mountain, and the cave containing the pipes is on this pyramid. 80 meters from the mouth of this cave is a salt lake (the twin of an adjacent freshwater lake), and more pipes can be found poking up along the shore. Most of the information you can find online about the Baigong Pipes appears to be originally sourced from a 2002 article from the Xinhua News Agency, talking about preparations by a team of scientists about to embark to this remote area to study the pipes. "Nature is harsh here," said one. "There are no residents let alone modern industry in the area, only a few migrant herdsmen to the north of the mountain." The two lakes are broad, shallow sinks at the low point of the vast Qaidam Basin. Searching for Mt. Baigong is likely to be fruitless: First, the area is largely flat and the nearest mountains are 20 or 30 kilometers away; second, baigong is a local word for hill and could mean anything in this context. The southernmost of the two lakes, Toson Hu or Lake Toson, has some low bluffs here and there along its southern and western sides (Google Maps link), and it is in one of these bluffs (about 50 or 60 meters in height) that author Bai Yu once happened to find what he described as a small cave, according to his book Into the Qaidam. [Update: The cave is on a point along Lake Toson's northeastern shore. Here is a better map link and a photo of the actual cave - BD] Bai was traveling the area in 1996, and described a lifeless lake surrounded by cone-shaped hills. The cave appeared to have been artificially dug, and was triangular, about six meters deep. Nearby were two similar caves, but they had collapsed and could not be entered. But what struck Bai was the array of manufactured metal pipes protruding up through the floor of the cave and embedded within its walls, one 40 cm wide. 
Following their path outside, Bai discovered more pipes protruding from the surface of the conical hill, and even more of them 80 meters away from the cave along the shore of the lake. Excited, he removed a sample and sent it to the Ministry of Metallurgical Industry. The result was 92% common minerals and metals, and 8% of unknown composition. Bai proceeded about 70 kilometers to the Delhi branch of China's Purple Mountain Observatory, a high vantage point from where he knew he could get a birds-eye view of the whole region. He saw great expanses of flat, open terrain, and putting two and two together, he concluded that this would make for a fine alien landing site. Unknown minerals and plentiful landing space meant that the Baigong Pipes had to be of alien origin. Scientists from the China Seismological Bureau visited the lake in 2001 to examine the pipes. Samples brought back to the Beijing Institute of Geology were examined by thermoluminescence dating, a technique that can determine how long it's been since a crystalline mineral was either heated or exposed to sunlight. The result came back that if these were indeed iron pipes that had been smelted, they were made 140-150,000 years ago. Human history in the region only goes back some 30,000 years, and so the alien theory seemed to have been confirmed. The following year the Xinhua news story was published, and the Baigong Pipes entered pop culture as, supposedly, genuine, tangible evidence of alien visitation. If you visit the area today, you'll find a locally-built monument to the aliens off the main highway, replete with a mockup metallic satellite dish. Internet forums buzz with the absence of followup articles by Xinhua; the natural conclusion is that it turned out the alien explanation was the true one and the Chinese government is suppressing any further reporting. Cracked.com touts the Baigong Pipes as one of Six Insane Discoveries that Science Can't Explain. And although that's where most reporting of the Baigong Pipes stops, it's also where responsible inquiry should begin. When you settle on a paranormal explanation, it means you've decided there is no natural explanation. In fact, when you don't yet know the explanation, you don't yet know the explanation; so you can't reasonably decide that the time is right to stop investigating. But so many do. Skeptical hypotheses have already been put forward, seeking a natural explanation for the Baigong Pipes that doesn't require the introduction of a wild assumption like alien visitation. The first thing we turn to are geological processes that might explain them. The Chinese have put forth several such hypotheses, including one involving the seepage of iron-rich magma into existing fissures in the rock. A 2003 article in Xinmin Weekly described how this might work. Fractures caused by the uplift of the Qinghai-Tibet plateau could have left the ground riddled with such fissures, into which the highly pressurized magma driving the uplift would have been forced. Assuming this magma was of the right composition that, when combined with the chemical effects of subsequent geological processes, we might very likely expect to see such rusty iron structures in the local rock. But evidence of this has never surfaced, and the Chinese dismissed this theory. They also noted that the Qaidam oil field would not be able to exist if there were active volcanism in the area as recently as 150,000 years ago. 
It was their next theory that ultimately led to a satisfactory explanation, and this theory involved the same hypothesized fissures in the sandstone. But, instead of being filled with iron-rich magma, the fissures could have been washed full of iron-rich sediment during floods. Combined with water and the presence of hydrogen sulfide gas, the sediment could have eventually hardened into the rusty metallic pipelike structures of iron pyrite found today. This theory was not fantastic, in part because there was no logical reason why the sandstone might happen to be laced with pipe shaped fissures. But the idea of flooding did make sense, given the geological history of the Qaidam Basin. Three years before Bai Yu took his first peek into the cave at Lake Toson, researchers Mossa and Schumacher wrote in the Journal of Sedimentary Research about fossil tree casts in Louisiana. They found cylindrical structures in the soil, thermoluminescence dated from 75-95,000 years ago. The chemical composition of the cylinders varied depending on where and when they formed and in what type of soil. The authors found that these were the fossilized casts of tree roots, formed by pedogenesis (the process by which soil is created) and diagenesis (the lithification of soil into rock through compaction and cementation). The result of this process was to create metallic pipelike structures, which by comparing the descriptions offered by researchers, appear to be a perfect match for the Baigong Pipes. The Chinese scientists eventually did come to the same conclusion, according to the Xinmin Weekly article. They used atomic emission spectroscopy to conduct a detailed chemical analysis of the rusty pipe fragments, and found them to contain organic plant matter. Under the microscope they found tree rings, consistently throughout the samples. Once they established that the Baigong Pipes were simply fossilized tree casts, they set about to discover how they got there. The Qaidam basin was once a vast lake, which has disappeared as the Qinghai-Tibet plateau uplifted the basin to its current elevation of about 2800 meters. Over the millennia, various floods filled the sink with runoff, alluvium, and debris including such fossils. They can now be found wherever such ancient flows deposited them, and it seems that Bai Yu was lucky enough to discover just such a pocket. And so we end up with a complete story of how rusty iron pipes, tens of thousands of years older than any people who might have forged them, can end up embedded in solid sandstone in such a way as to baffle the average observer. Like many amateur researchers, Bai Yu stumbled upon an extraordinary discovery, but through his lack of applicable knowledge, misinterpreted what he saw. Those who underestimate the Earth's ability to produce fascinating effects are often left to grope for goofy explanations like alien construction projects. I find that the Baigong Pipes are one of the better examples of the folly of stopping at the paranormal explanation, compared to the rich rewards offered by following the scientific method to uncover what's really going on. By Brian Dunning Follow @BrianDunning
It's time to revise the nation's goals for education technology, according to the Department of Education, which has been seeking comments on suggested changes. Since 1996, four overriding principles have framed the Clinton administration's approach to school technology: All teachers will have the training and support they need to help students learn to use technology; all teachers and students will have modern multimedia computers in their classrooms; every classroom will be connected to the Internet; and effective software and online-learning resources will be integral parts of every school's curriculum. Those goals, known as the "four pillars," have influenced federal initiatives ranging from the creation of the E-rate program, which provides schools and libraries with discounts on telecommunications services, to grant programs such as the Technology Literacy Challenge Fund. The pillars have taken on currency outside Washington as well, in part by influencing state and local officials who have administered or applied for federal technology grants and by influencing experts who evaluated applications and the funded projects. But the landscape of technology in the schools has shifted over the past four years, and many experts believe the goals are due for a re-examination. "[The goals] helped focus attention on [technology-]access issues and things that are fairly easy to count," said Barbara Means, a researcher in educational technology at SRI International, a think tank in Menlo Park, Calif. "Now there's an interest in getting a deeper look at the experiences teachers and students are having." The Education Department commissioned background papers last fall on how the goals could be revised; the papers are now posted online, at www.air.org/forum. The public-comment period ended last week. "I would hope this revision would be the result of a continuing conversation we have had for seven years," Linda G. Roberts, the department's director of educational technology, said. "We've been trying very hard to be open to feedback." The department plans to release the revised goals in the fall. Some possible changes were discussed by a panel of experts the department assembled for two days in Washington in December. Most agreed that the department needed a new "pillar" that would focus on research into the best ways to use technology, said Ms. Means, who was on the panel. Another new issue was preparing children for the intellectual and moral demands of the digital age. "One thing that came through from the group of experts was this notion of focusing on technology literacy and cyber-citizenship and responsibility," Ms. Roberts said. "That never came up as a goal five years ago." Ms. Roberts, who did the initial research and drafting of the four pillars, said she was gratified that they had been widely accepted in policy discussions on education technology. But the work that people have done on the goals and the need to tackle new challenges make a review necessary, she said. "It would be mistake to feel like we're done," Ms. Roberts said. Some pillars may not change, she added. The number of multimedia computers and classrooms connected to the Internet has risen sharply since 1996, and policymakers have taken steps to bolster the technological skills of teachers and teachers-in-training. But less progress has been made in developing high-quality instructional software and online resources, some experts say. William L. 
Rukeyser, the president of Learning in the Real World, a nonprofit clearinghouse on education technology in Woodland, Calif., said he would welcome a greater emphasis by the federal government on research into the educational effectiveness and cost-effectiveness of technology. "It seems that in a lot of ways, the whole education technology effort has been driven by shopping-cart mentality," said Mr. Rukeyser, an outspoken critic of what he sees as many school systems' headlong rush into technology. "It's always easier to acquire something than to achieve something." "Poorer Schools Still Lagging Behind on Internet Access, Study Finds," Feb. 23, 2000. "Commission Begins Study of Online Educational Materials," Feb. 9, 2000. "U.S. Ed. Dept. Launches Grant Program for Technology Centers," Oct. 6, 1999. "Business Group Calls for More Technology Training," March 3, 1999. "Education Department Extends Its Reach on the Internet," Feb. 24, 1999. For background, previous stories, and Web links, read our Issues Pages on the Internet and the Digital Divide. Read our special report, Technology Counts '99: Building the Digital Curriculum, an in-depth look at digital content and how it's used in schools. Two reports from the U. S. Department of Education provide background on its technology priorities and initiatives: "An Educator's Guide To Evaluating the Use of Technology in Schools and Classrooms," December 1998, and "Getting America's Students Ready for the 21st Century: Meeting the Technology Literacy Challenge," June 29, 1996. "Computers and Classrooms: The Status of Technology in U.S. Schools," a policy report by the Educational Testing Service Network, includes statistics on schools' access to technology and a discussion of the effectiveness of educational technology. "The Role of Online Communications in Schools: A National Study," 1997, from the Center for Applied Special Technology, uses statistical measures to argue that students with online access perform better.
QCA & CQCA: Quad Countries Algorithm and Chaotic Quad Countries Algorithm This paper introduces an improved evolutionary algorithm based on the Imperialist Com- petitive Algorithm (ICA), called Quad Countries Algorithm (QCA) and with a little change called Chaotic Quad Countries Algorithm (CQCA). The Imperialist Competitive Algorithm is inspired by socio-political process of imperialistic competition in the real world and has shown its reliable performance in optimization problems. This algorithm converges quickly, but is easily stuck into a local optimum while solving high-dimensional optimization prob- lems. In the ICA, the countries are classified into two groups: Imperialists and Colonies which Imperialists absorb Colonies, while in the proposed algorithm two other kinds of countries, namely Independent and Seeking Independence countries, are added to the coun- tries collection which helps to more exploration. In the suggested algorithm, Seeking Inde- pendence countries move in a contrary direction to the Imperialists and Independent countries move arbitrarily that in this paper two different movements are considered for this group; random movement (QCA) and Chaotic movement (CQCA). On the other hand, in the ICA the Imperialists' positions are fixed, while in the proposed algorithm, Imperial- ists will move if they can reach a better position compared to the previous position. The proposed algorithm was tested by famous benchmarks and the compared results of the QCA and CQCA with results of ICA, Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Particle Swarm inspired Evolutionary Algorithm (PS-EA) and Artificial Bee Colony (ABC) show that the QCA has better performance than all mentioned algorithms. Between all cases, the QCA, ABC and PSO have better performance respectively about 50%, 41.66% and 8.33% of cases.
Efficacy of Lactobacillus reuteri in clinical practice This paper addresses the primary aspects of the development of gut microbiota and the immune system and the role of gut microbiocenosis in this process. Optimal gut colonization and an adequate immune response are critical factors in developing tolerance to commensal microbes and anti-infectious protection. The authors highlight various prenatal, neonatal, and postnatal factors which prevent normal colonization of the gastrointestinal mucosa. One of the methods to restore the qualitative and quantitative composition of gut microbiota is probiotics. Probiotics demonstrating antagonistic activity against pathogens compete with these pathogens for adhesion to the mucosal epithelium and for nutrients, thereby inhibiting the production of bacterial toxins, modulating the functions of the immune system, improving gut microbiocenosis, maintaining the integrity of the gastrointestinal mucosal barrier, and controlling gut inflammation. Probiotics are a heterogeneous group of living bacteria with species- and strain-specific properties. For example, Lactobacillus reuteri is an effective probiotic commonly prescribed for premature babies, children with acute gastroenteritis, and infants with colic. KEYWORDS: gut microbiota, microbiocenosis, premature, C-section, probiotics, bacteria, immune system, infantile colic, Lactobacillus reuteri. FOR CITATION: Komarova O.N. Efficacy of Lactobacillus reuteri in clinical practice. Russian Journal of Woman and Child Health. 2021;4(3):277–283 (in Russ.). DOI: 10.32364/2618-8430-2021-4-3-277-283.
God, Reason and Theistic Proofs Attempting to prove the existence of God is an ancient and venerable tradition within the discipline known as the philosophy of religion. But can we truly prove the existence of God using human reason alone? Just how do we prove the existence of God? Why try? Which, if any, of the various theistic proofs are persuasive? God, Reason, and Theistic Proofs tackles these fundamental questions head-on. Stephen T. Davis examines a cross-section of theistic proofs that have been offered by theologians and thinkers from Anselm to Paley, explaining in clear terms what theistic proofs are and what they try to accomplish. He then goes on to explore in depth the relationship between theistic proofs and religious realism, the ontological argument for the existence of God, the cosmological and teleological arguments, the position known as foundationalism, and the argument from religious experience. Wisely structured and clearly written, this volume will make an excellent resource for those looking for a comprehensive introduction to the debate surrounding the existence of God, or for those seeking intellectual validation for their faith.
Chemically robust and readily available quinoline-based PNN iron complexes: application in C-H borylation of arenes. Iron catalysts have been used for over a century to produce ammonia industrially. However, the use of iron catalysts generally remained quite limited until relatively recently, when the abundance and low toxicity of iron spurred the development of a variety of iron catalysts. Despite the fact that iron catalysts are being developed as alternatives to precious metal catalysts, their reactivities and stabilities are quite different because of their unique electronic structures. In this context, our group previously developed a new family of quinoline-based PNN pincer-type ligands for low- to mid-valent iron catalysts. These chemically robust PNN ligands provide air- and moisture-tolerant iron complexes, which exhibit excellent catalytic performances in the C-H borylation of arenes. This feature article summarises our recent work on PNN iron complexes, including their conception and design, as well as related reports on iron pincer complexes and iron-catalysed C-H borylation reactions.
High-intensity exercise training increases vascular transport capacity of rat hindquarters. The purpose of this study was to determine whether high-intensity exercise training increases the vascular flow capacity and capillary exchange capacity in isolated rat hindquarters. One group of 20 male Sprague-Dawley rats underwent six bouts of alternating running (2.5 min) and recovery (4.5 min), 5 days/wk at 60 m/min on a 15% grade for 6-10 wk (high-intensity exercise training), while a second group of 20 rats was cage confined (sedentary controls). Experiments were conducted in isolated, maximally dilated (papaverine) hindquarters perfused with an artificial plasma consisting of a Tyrode's solution containing 5 g/100 ml albumin. Vascular flow capacity was evaluated by measuring perfusate flow rate at four different perfusion pressures. Capillary exchange capacity was evaluated by measuring the capillary filtration coefficient. The efficacy of training was demonstrated by significant increases in succinate dehydrogenase activity in the white vastus lateralis and vastus intermedius muscles. Total hindquarter flow capacity was elevated 50-100% in the trained rats. This increased flow capacity was associated with an increase in the capillary filtration coefficient in the maximally vasodilated hindquarters, thus suggesting that the capillary exchange capacity was increased with high-speed exercise training. These results suggest that the vascular transport capacity in rat hindquarter muscles is significantly increased by high-intensity exercise training.
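For readers unfamiliar with the measurement, "flow capacity" in this kind of preparation is essentially the slope of the pressure-flow relationship in the maximally dilated bed; the snippet below shows that calculation on made-up numbers (the study's actual data and units are not reproduced here).

```python
import numpy as np

# Illustrative only: fitting a pressure-flow line from flows measured at
# several perfusion pressures. Values below are hypothetical.
pressure_mmHg = np.array([50.0, 75.0, 100.0, 125.0])    # four perfusion pressures
flow_ml_min_100g = np.array([4.1, 7.9, 12.2, 16.0])     # hypothetical perfusate flows

# Slope of the pressure-flow relationship ~ vascular conductance;
# a steeper slope indicates a higher flow capacity of the vascular bed.
slope, intercept = np.polyfit(pressure_mmHg, flow_ml_min_100g, 1)
print(f"conductance ~ {slope:.3f} ml/min/100 g per mmHg, intercept {intercept:.2f}")
```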
Black Matrilineage: The Case of Alice Walker and Zora Neale Hurston In their book on women writers and the nineteenth-century literary imagination, Sandra M. Gilbert and Susan Gubar revise Harold Bloom's psycholiterary model of poetic precedence to make it applicable to the female writer. In Bloom's Freudian model of poetic influence, the poet, like Oedipus, battles his precursor father at the intertextual crossroads and metaphorically kills him: he misreads and so swerves from, completes, or defines his discontinuity with his literary forebear. Bloom defines an author's inevitable dependence on tradition as necessarily anxious because writers deny obligation to precursors; they desire originality yet know it a fiction. The woman poet, however, finds no place in this paradigm of authorial interaction. On the one hand, she has few precursors who resemble herself, and on the other, she must come to terms with her difference from male writers who (metaphorically) beget the text upon the female muse. Gilbert and Gubar therefore posit that for the woman writer the "anxiety of influence" becomes a "primary 'anxiety of authorship."' Her alienation from the male canonical tradition appears in her text as marks, fissures, and traces of "inferiorization": rebellion masquerades as submission, poetic closure is ambivalent, structure and figurative language undercut stated thematic material.' The feminist
Fall '67: A Hectic Beginning News broadcasts in summer 1967 were filled with scenes of riots, burning cities, and National Guard troops. The peaceful civil rights movement had morphed into a demand for black power. Death totals were rising in Vietnam as the war intensified and became the central focus of the student movement. As the semester began, President Henry handed off Urbana campus management to Jack W. Peltason, the new chancellor. Millet announced looser women's dorm rules, Steve Schmidt announced the opening of the Red Herring coffee shop, and Berkey, Durrett, and Fein, the primary campus-movement leaders, announced that SFS was disbanded--just as the Draft Resisters Union formed.
KABUL, Afghanistan — For more than a decade, wads of American dollars packed into suitcases, backpacks and, on occasion, plastic shopping bags have been dropped off every month or so at the offices of Afghanistan’s president — courtesy of the Central Intelligence Agency. All told, tens of millions of dollars have flowed from the C.I.A. to the office of President Hamid Karzai, according to current and former advisers to the Afghan leader. The C.I.A., which declined to comment for this article, has long been known to support some relatives and close aides of Mr. Karzai. But the new accounts of off-the-books cash delivered directly to his office show payments on a vaster scale, and with a far greater impact on everyday governing. Moreover, there is little evidence that the payments bought the influence the C.I.A. sought. Instead, some American officials said, the cash has fueled corruption and empowered warlords, undermining Washington’s exit strategy from Afghanistan. The United States was not alone in delivering cash to the president. Mr. Karzai acknowledged a few years ago that Iran regularly gave bags of cash to one of his top aides. At the time, in 2010, American officials jumped on the payments as evidence of an aggressive Iranian campaign to buy influence and poison Afghanistan’s relations with the United States. What they did not say was that the C.I.A. was also plying the presidential palace with cash — and unlike the Iranians, it still is. American and Afghan officials familiar with the payments said the agency’s main goal in providing the cash has been to maintain access to Mr. Karzai and his inner circle and to guarantee the agency’s influence at the presidential palace, which wields tremendous power in Afghanistan’s highly centralized government. The officials spoke about the money only on the condition of anonymity. It is not clear that the United States is getting what it pays for. Mr. Karzai’s willingness to defy the United States — and the Iranians, for that matter — on an array of issues seems to have only grown as the cash has piled up. Instead of securing his good graces, the payments may well illustrate the opposite: Mr. Karzai is seemingly unable to be bought. Over Iran’s objections, he signed a strategic partnership deal with the United States last year, directly leading the Iranians to halt their payments, two senior Afghan officials said. Now, Mr. Karzai is seeking control over the Afghan militias raised by the C.I.A. to target operatives of Al Qaeda and insurgent commanders, potentially upending a critical part of the Obama administration’s plans for fighting militants as conventional military forces pull back this year. But the C.I.A. has continued to pay, believing it needs Mr. Karzai’s ear to run its clandestine war against Al Qaeda and its allies, according to American and Afghan officials. Like the Iranian cash, much of the C.I.A.’s money goes to paying off warlords and politicians, many of whom have ties to the drug trade and, in some cases, the Taliban. The result, American and Afghan officials said, is that the agency has greased the wheels of the same patronage networks that American diplomats and law enforcement agents have struggled unsuccessfully to dismantle, leaving the government in the grips of what are basically organized crime syndicates. The cash does not appear to be subject to the oversight and restrictions placed on official American aid to the country or even the C.I.A.’s formal assistance programs, like financing Afghan intelligence agencies. 
And while there is no evidence that Mr. Karzai has personally taken any of the money — Afghan officials say the cash is handled by his National Security Council — the payments do in some cases work directly at odds with the aims of other parts of the American government in Afghanistan, even if they do not appear to violate American law. Handing out cash has been standard procedure for the C.I.A. in Afghanistan since the start of the war. During the 2001 invasion, agency cash bought the services of numerous warlords, including Muhammad Qasim Fahim, the current first vice president. “We paid them to overthrow the Taliban,” the American official said. The C.I.A. then kept paying the Afghans to keep fighting. For instance, Mr. Karzai’s half brother, Ahmed Wali Karzai, was paid by the C.I.A. to run the Kandahar Strike Force, a militia used by the agency to combat militants, until his assassination in 2011. A number of senior officials on the Afghan National Security Council are also individually on the agency’s payroll, Afghan officials said. While intelligence agencies often pay foreign officials to provide information, dropping off bags of cash at a foreign leader’s office to curry favor is a more unusual arrangement. Afghan officials said the practice grew out of the unique circumstances in Afghanistan, where the United States built the government that Mr. Karzai runs. To accomplish that task, it had to bring to heel many of the warlords the C.I.A. had paid during and after the 2001 invasion. By late 2002, Mr. Karzai and his aides were pressing for the payments to be routed through the president’s office, allowing him to buy the warlords’ loyalty, a former adviser to Mr. Karzai said. Then, in December 2002, Iranians showed up at the palace in a sport utility vehicle packed with cash, the former adviser said. The C.I.A. began dropping off cash at the palace the following month, and the sums grew from there, Afghan officials said. Payments ordinarily range from hundreds of thousands to millions of dollars, the officials said, though none could provide exact figures. The money is used to cover a slew of off-the-books expenses, like paying off lawmakers or underwriting delicate diplomatic trips or informal negotiations. Much of it also still goes to keeping old warlords in line. One is Abdul Rashid Dostum, an ethnic Uzbek whose militia served as a C.I.A. proxy force in 2001. He receives nearly $100,000 a month from the palace, two Afghan officials said. Other officials said the amount was significantly lower. Mr. Dostum, who declined requests for comment, had previously said he was given $80,000 a month to serve as Mr. Karzai’s emissary in northern Afghanistan. “I asked for a year up front in cash so that I could build my dream house,” he was quoted as saying in a 2009 interview with Time magazine. Some of the cash also probably ends up in the pockets of the Karzai aides who handle it, Afghan and Western officials said, though they would not identify any by name. That is not a significant concern for the C.I.A., said American officials familiar with the agency’s operations. “They’ll work with criminals if they think they have to,” one American former official said. Interestingly, the cash from Tehran appears to have been handled with greater transparency than the dollars from the C.I.A., Afghan officials said. The Iranian payments were routed through Mr. Karzai’s chief of staff. 
Some of the money was deposited in an account in the president’s name at a state-run bank, and some was kept at the palace. The sum delivered would then be announced at the next cabinet meeting. The Iranians gave $3 million to well over $10 million a year, Afghan officials said. At the time, Mr. Karzai’s aides said he was referring to the billions in formal aid the United States gives. But the former adviser said in a recent interview that the president was in fact referring to the C.I.A.’s bags of cash. No one mentions the agency’s money at cabinet meetings. It is handled by a small clique at the National Security Council, including its administrative chief, Mohammed Zia Salehi, Afghan officials said. Mr. Salehi, though, is better known for being arrested in 2010 in connection with a sprawling, American-led investigation that tied together Afghan cash smuggling, Taliban finances and the opium trade. Mr. Karzai had him released within hours, and the C.I.A. then helped persuade the Obama administration to back off its anticorruption push, American officials said. An earlier version of this article misstated the job title that Khalil Roman held in Afghanistan from 2002 until 2005. He was President Hamid Karzai’s deputy chief of staff, not his chief of staff. Afghan Leader Confirms Cash Deliveries by C.I.A.
After Hammond's 315 mph crash he had a momentary lapse of memory. It was only short term, but it lasted long enough to give me something good to write about. During an interview with Jonathan Ross, he said that he completely forgot he was married to his wife Mindy, and instead thought he was married to a French girl. He said: "I didn't forget Mindy, I fancied her - thank God - I attempted to chat her up." According to Mindy, Hammond told her he shouldn't talk to her because he had to go back to his wife. We can all guess what she was thinking at this point - he's having an affair. But of course he wasn't. This is Hammond we're talking about, after all. After the crash he apparently had a 22-second memory and wasn't able to daydream. But obviously he made a full recovery and Mindy got her hamster back. In Hammond's autobiography, 'On the Edge: My Story', he writes about the moments leading up to his crash, until he can't remember any more, as he was put into a coma for a fortnight. At that point in the book, Mindy takes over and writes about what it was like when it felt like she could lose the love of her life and the father of her children. Unfortunately I am going to include a Loose Women YouTube video in this article (don't shoot me), just because, for those who haven't read the autobiography, the video sheds some light on how Mindy felt during the horrible ordeal. Thank goodness it was a happy ending, because I don't want to live in a world where the only power couple you hear about on the news is Kim and Kanye. No thank you very much! So why do we love Richard and Mindy Hammond so much? Well, they show us what a real power couple is.
There are several difficulties that impede medical support of space missions. We can assume that long-term space missions will be accompanied by functional changes in the organism. These changes will be a natural reaction to the factors of space flight. The complex action of these factors can lead to the development of both non-specific changes (general adaptation syndrome) and specific changes. We analyzed the physiological changes after long space flights and correlated these changes with previously identified proteins in the urine. It is thus possible to determine the mechanisms of adaptation of the human organism to the conditions of life on Earth after a long stay in Earth orbit.
The title of today’s Wallbuilders Live radio broadcast, brought to you courtesy of Religious Right “historian” David Barton, was “Why is Obama Trying to Remove God From the United States?” Barton, whose Christian-nation version of U.S. history is promoted by right-wingers including Glenn Beck and Rep. Michele Bachmann, has attacked Obama’s Christian faith before. Today, Barton and co-host Rick Green were joined by Rep. Randy Forbes to complain about the president’s insufficient godliness. Forbes has complained about a speech President Obama gave in Indonesia in November, in which Obama said, “But I believe that the history of both America and Indonesia should give us hope. It is a story written into our national mottos. In the United States, our motto is E pluribus unum – out of many one…our nations show that hundreds of millions who hold different beliefs can be united in freedom under one flag.” Forbes and his colleagues in the congressional prayer caucus saw that sentiment as threatening rather than inspiring. The caucus sent a letter to President Obama in December complaining that he had repeatedly referred to Americans having “inalienable rights” without mentioning God as their source; that he had told Indonesians that the American motto was E Pluribus Unum rather than the official “In God We Trust”; and that he had referred to our country as being united under one flag without mentioning that the Pledge of Allegiance includes the phrase “under God.” Forbes once gave a speech on the House floor attacking President Obama for supposedly saying in Turkey that the U.S. was not a Judeo-Christian nation. (In fact, Obama had said that one of America’s strengths is that “we do not consider ourselves a Christian nation or a Jewish nation or a Muslim nation. We consider ourselves a nation of citizens who are bound by ideals and a set of values.”) Barton sees conspiracy afoot, saying that this is no dumb mistake but a “deliberate intent” to leave God out of traditional acknowledgments in order to try to get Americans to forget that devotion to God is a defining characteristic of the U.S. “The President is trying to communicate a worldview that is devoid of God because that’s his worldview. That’s where he is.” Barton even suggested darkly that Obama is violating his oath to uphold the Constitution when he cites the Declaration of Independence without mentioning the Creator. Barton and Green lavished praise on Rep. Forbes and celebrated the fact that the new Congress will have more people like him. Said Barton, “that’s the cool thing about this last election. We sent a bunch of people to Congress who think like we do.”
If wearing an Apple Watch is meant to make you look cool, you should be able to charge your favorite wearable with the same level of suaveness. Luckily, now that Amber has launched its Kickstarter campaign, that may not be so difficult to do. Self described as a “watchcase power bank for Apple Watch,” Amber is a “chic watchcase” that allows you to charge your Apple Watch from anywhere without being chained to your desk, waiting for 100 percent. And better yet, it’ll charge your iPhone too. Relying on “world-class battery manufacturer, Desay,” Amber notes that it’s able to charge your Apple Watch, your iPhone, and itself at the same time whenever it’s plugged into an external power source. “By directing the incoming electric current flow outward, Amber effectively saves its battery cycle life,” says its makers. “And when Amber itself runs out of charge, it simply functions as a normal charger and powers devices immediately.” So now, if you find that any of your smart devices are running out of juice, simply plug them into the Amber (or place it inside the Amber) and let it charge while you continue about your day. Mimicking the colors currently offered by Apple devices, you can buy an Amber that matches your watch or your smartphone, and if you order now, you can snag one of these beauties for $55. Already, 944 backers have pledged $67,490, allowing the Amber team to exceed its $50,000 goal with 13 days left in the campaign. So get yours while you can, and never be caught without a charged Apple Watch (or iPhone) ever again.
As we make sense of Saturday night’s fight between Floyd Mayweather Jr and Conor McGregor, a fight that exceeded all sane expectations, let’s look at the historical wake it left, along with the legacies of both fighters. For some reason, boxing pundits and purists — like yours truly — who are gatekeepers of boxing ancestry, were supposed to be appalled by this fight, this farce, this veiled armed robbery with a remote to replace a gun. To not only watch but eagerly await this fight was an assault on our old-world sensibilities, a blight on boxing, its history and its champions. We disrespected Jack Dempsey just by punching the PPV button. Two sports joined forces for a night, a shotgun wedding that worked. It proved that UFC fighters can box, which can only lead to more prosperous times between boxing and MMA. Most cage fighters were either wrestlers, boxers or both before they took their talents to the octagon, so it’s not shocking that their gifts translate outside of it. And thus it keeps the door cracked open for future bouts. When pondering McGregor, the loquacious Irishman who swapped his familiar octagon for the squared circle, we can (and should) say ‘job well done.’ He kept the most refined fighter of the last 20 years at bay for four rounds, then stood tall for five more rounds, fighting with gifts and guts until the referee ended the fight and his night. Not bad for a fella collecting welfare checks in Dublin fewer than five years ago. McGregor not only represented himself with aplomb, he was a shining emblem of his sport (UFC) and his boss (Dana White). And he strolls out of the ring with at least $75 million. Surely some could do without his post-fight presser, a walnut-shaped bump under his left eye, preening with a bottle of his personal whiskey. But if anyone earned a drink, it was Conor McGregor, who may not have won the fight, but may have won something just as important — respect. Floyd Mayweather Jr. has not always represented the best side of celebrity or the ideals of sport. But he’s never disgraced boxing inside the ropes, and has painted a few masterpieces on a blue canvas over the years. If attention, gossip and gate receipts are the main metrics for any athletic endeavor, then Mayweather has pumped celebrity and economic life into a sport on life support. There’s no such thing as bad publicity, though McGregor and Mayweather stretched the bounds of civility in the weeks leading up to this event. But they proved that Saturday night was more than a PR platter, an amalgam of insults, profanities and indignities. It was, considering the subterranean expectations, one heck of a fight. This fight served two vital goals. It proved that Conor McGregor was more than a tattooed, traveling PR circus, more than a gifted street fighter who simply switched ring dimensions for a fat paycheck and a chance to run his gums a little longer. He is a high-end fighter in any form, in any forum, an athlete with artistic splendor to be taken seriously on any surface. And it proved that boxing, at its best, is still an essential sport. No sport steals the bold ink, or drains your adrenal gland, quite like fight night at its best. Two stars like Mayweather and McGregor were more than names beaming from a marquee. They are respected and respectable fighters, who fused two combat sports for nearly 10 rounds of skill, will and action. Between Showtime’s boxing devotion and Al Haymon’s PBC brainchild, there are ample infusions of talent and TV time. 
Mayweather-McGregor was perfectly placed on the sports calendar, a month before MLB’s pennant chase, and a couple weeks before the start of the NFL season. And placement matters now. If this is indeed Mayweather’s swan song — and he swears it is — then he went out on a high note with an epic paycheck. It’s a stretch to say he surpassed Rocky Marciano’s 49-0 mark against a boxing neophyte, but Mayweather didn’t need it for a place high in the archives. He is, at least, the best boxer of the last 20 years. Some wonder if this is McGregor’s first or last foray into boxing. Since he soared to stardom with UFC, it makes sense for him to return to his ancestral fighting home. Not many 29 year olds take up athletic careers, particularly one as perilous as boxing. No matter what he does, he left that ring a victor in ways he never could have left an octagon. Both men were unusually gracious after the bout, taking parallel high roads to riches. You couldn’t find two men, and two fighters, more different. But in some crucial ways, they were the same. Perhaps that’s why they were so jovial toward each other at the end. In a sense, each man was gazing at his professional reflection. We’re so cynical now, we demand that one man must gain at another’s expense. Sometimes life allows for more than one winner. Sometimes the powers get something right, even if it’s accidental. Mayweather-McGregor was likely cobbled together for the most selfish of reasons — age, wage and vanity. They don’t care about the public or the pundits. Yet it wound up serving and pleasing us all. Rather than question it, just raise a glass of Notorious whiskey to two men who made one night pretty special, against all odds, in a town that makes the odds.
Long-term effects of in vivo angioplasty in normal and vasospastic canine carotid arteries: pharmacological and morphological analyses. OBJECT A canine model of hemorrhagic vasospasm of the high cervical internal carotid artery (ICA) was used to study the long-term effects of transluminal balloon angioplasty (TBA) on the structure and function of the arterial wall. METHODS Forty dogs underwent surgical exposure of both distal cervical ICAs, followed by baseline angiographic studies on Day 0. Dogs in Group A (20 animals) underwent simple exposure of one ICA and placement of a silicone elastomer cuff around a segment of the opposite artery. These animals underwent repeated angiography on Day 7, and then TBA was performed on the uncuffed ICA; the cuff was removed from the opposite vessel. For dogs in Group B (20 animals), blood clot-filled cuffs were placed around both ICAs, and on Day 7 angiography was repeated and TBA was performed on one randomly selected ICA. Four animals were then killed from each group, and in the remaining animals the cuffs were removed from both ICAs. On Days 14, 21, 28, and 56, four animals from each group underwent repeated angiography and were then killed to permit pharmacological and morphological analyses of the ICAs. This protocol yielded five study categories: cuffed nonblood-coated arteries not subjected to TBA, blood-coated arteries not subjected to TBA, blood-coated arteries subjected to TBA, normal arteries subjected to TBA, and control arteries obtained from the proximal ICA in each animal. The contractile responses of isolated arterial rings obtained from each ICA were recorded after treatment with potassium chloride, noradrenaline, and serotonin, whereas relaxations in response to the calcium ionophore A23187 and papaverine were recorded after tonic contraction to noradrenaline had been established. Morphological analysis was performed using scanning electron microscopy. Arteries surrounded by an empty cuff exhibited no angiographic, pharmacological, or morphological differences compared with normal arteries on any study day. Arteries surrounded by blood developed angiographically confirmed vasospasm on Day 7, with characteristic pharmacological and morphological features; resolution of these symptoms occurred by Day 21. Vasospastic arteries subjected to TBA on Day 7 remained dilated on angiographic studies, exhibited impaired responses to pharmacological agents (except for papaverine), and showed altered morphological features until Day 28. Normal arteries subjected to TBA on Day 7 remained dilated on angiographic studies, exhibited impaired responses to pharmacological agents (except for papaverine), and displayed altered morphological features until Day 14. CONCLUSIONS These results indicate that the canine high cervical ICA model produces consistent and reproducible vasospasm that follows a similar time course to that seen in humans. When TBA is performed in vasospastic arteries, it results in an immediate functional impairment of vascular smooth muscle that lasts for 2 weeks, with resolution at 3 weeks; morphological changes are mostly resolved 3 weeks post-TBA. In normal vessels, TBA causes functional impairment and morphological alterations that are not as severe or as long-lasting as those seen in vasospastic arteries.
Mastering Iron: The Struggle to Modernize an American Industry, 1800–1868 by Anne Kelly Knowles (review) Anne Kelly Knowles's geographically based analysis of the antebellum American iron industry is not a detailed technological history nor an economic or labor history of the iron industry, but rather a richly nuanced relational understanding of iron manufacturing in terms of resources, markets, transportation, and workers. So, not only do manufactory type and size matter, but so too do ore quality and availability, topography and geographic distance, and skilled workforce makeup, all nicely charted through a series of GIS (geographic information systems)-based maps and further illustrated with a number of more traditional historical case studies.
Semidefinite Optimization Approaches for Satisfiability and Maximum-Satisfiability Problems Semidefinite optimization, commonly referred to as semidefinite programming, has been a remarkably active area of research in optimization during the last decade. For combinatorial problems in particular, semidefinite programming has had a truly significant impact. This paper surveys some of the results obtained in the application of semidefinite programming to satisfiability and maximum-satisfiability problems. The approaches presented in some detail include the ground-breaking approximation algorithm of Goemans and Williamson for MAX-2-SAT, the Gap relaxation of de Klerk, van Maaren and Warners, and strengthenings of the Gap relaxation based on the Lasserre hierarchy of semidefinite liftings for polynomial optimization problems. We include theoretical and computational comparisons of the aforementioned semidefinite relaxations for the special case of 3-SAT, and conclude with a review of the most recent results in the application of semidefinite programming to SAT and MAX-SAT. Introduction Semidefinite optimization, commonly referred to as semidefinite programming (SDP), has been a remarkably active area of research in optimization during the last decade. The rapidly growing interest is likely due to a convergence of several developments: the extension to SDP of efficient interior-point algorithms for linear programming; the richness of the underlying optimization theory; and the recognition that SDP problems arise in several areas of applications, such as linear control theory, signal processing, robust optimization, statistics, finance, polynomial programming, and combinatorial optimization. The handbook provides an excellent coverage of SDP as well as an extensive bibliography covering the literature up to the year 2000. The impact of SDP in combinatorial optimization is particularly significant, including such breakthroughs as the theta number of Lovsz for the maximum stable set problem, and the approximation algorithms of Goemans and Williamson for the maximum-cut and maximum-satisfiability problems. The focus of this paper is on recent progress in the application of SDP to satisfiability (SAT) and maximum-satisfiability (MAX-SAT) problems. This is one of the latest developments in the long history of interplay between optimization and logical inference, which dates back at least to the pioneering work of Williams, Jeroslow, and others (see e.g. ). The optimization perspective on propositional logic has mostly focussed on the formulation of SAT and MAX-SAT as 0-1 integer linear programming problems. This formulation can then be relaxed by allowing the 0-1 variables to take any real value between 0 and 1, thus yielding a linear programming relaxation, which is far easier to solve. For some types of problems, such as Horn formulas and their generalizations, close connections exist between the logic problem and its linear programming relaxation (see e.g. ). We refer the reader to the book of Chandru and Hooker for an excellent coverage of results at the interface of logical inference and optimization. This survey presents some of the approaches developed in recent years for obtaining SDP relaxations for SAT and MAX-SAT problems; the main theoretical properties of these relaxations; and their practical impact so far in the area of SAT. 
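As a concrete reference point for the Goemans-Williamson approach surveyed below, it helps to write out the usual ±1 encoding of a 2-clause; this is the standard textbook formulation rather than a detail specific to this survey. With variables encoded as $y_i \in \{-1, 1\}$, an extra homogenizing variable $y_0$, and the convention that $p_i$ is TRUE when $y_i = y_0$, a clause $C = p_i \vee p_j$ takes the value
$$\mathrm{val}(C) \;=\; \frac{3 + y_0 y_i + y_0 y_j - y_i y_j}{4} \;\in\; \{0, 1\},$$
with the sign of $y_i$ flipped wherever the corresponding literal is negated. MAX-2-SAT maximizes the weighted sum of these clause values over $y \in \{-1, 1\}^{n+1}$; the SDP relaxation replaces each product $y_i y_j$ by an inner product $v_i^{\top} v_j$ of unit vectors, which is exactly the setting in which the random hyperplane rounding discussed below operates.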
We discuss two types of contributions of SDP researchers to satisfiability, namely polynomial-time approximation algorithms for MAX-SAT problems; and polynomial-time computational proofs of unsatisfiability for SAT. Our focus in this paper is on the construction, analysis, and computational application of SDP relaxations for SAT and MAX-SAT, and thus we do not discuss algorithms for solving SDP problems in any detail. However, all the work presented is motivated by the fact that SDP problems can be solved efficiently using one of the algorithms that have been implemented and benchmarked by researchers in the area. We refer the reader to Monteiro for a survey of the state-of-the-art in SDP algorithms. The remainder of this paper is organized as follows. After concluding this Introduction with some preliminaries and notation, we provide in Section 2 a short introduction to SDP: the definition of an SDP problem and examples of SDP relaxations for the maximum-cut (max-cut) and MAX-SAT problems; some basic properties of positive semidefinite matrices; a few results on the geometry of SDP for max-cut; the basic concepts of duality in SDP; and a few comments on the computational complexity of SDP. Most of these results are referred to in later sections, and the remainder are included for completeness of the presentation. Section 2 may be skipped without loss of continuity by a reader familiar with SDP and its application to combinatorial optimization problems. Section 3 focuses on SDP-based approximation algorithms for the MAX-SAT problem. We begin by presenting the ground-breaking random hyperplane rounding algorithm of Goemans and Williamson for MAX-2-SAT. We then provide an overview of subsequent improvements on their approach, including the vector rotation technique of Feige and Goemans, and the biased hyperplane algorithm of Matuura and Matsui. For MAX-3-SAT, we present the algorithm of Karloff and Zwick whose performance ratio is optimal (unless P=NP). We also mention some of the proposed extensions of these ideas to general MAX-SAT. In Section 4, we introduce the Gap relaxation of de Klerk and others. This SDP relaxation is based on the concept of elliptic approximations for SAT. We show how this relaxation displays a deep connection between SAT and eigenvalue optimization. It also characterizes unsatisfiability for a class of covering problems, including mutilated chessboard and pigeonhole instances, in the sense that the Gap relaxation for this class is always infeasible. Since it is possible to compute a certificate of infeasibility for this type of SDP problem (see Section 2.3), this approach provides a proof of unsatisfiability in polynomialtime for these problems in a fully automated manner. We then introduce in Section 5 the concept of higher semidefinite liftings for polynomial optimization problems. This concept was first applied to the max-cut problem in, and more generally to 0-1 optimization problems in. The higher liftings approach generalizes the Gap relaxation, but the resulting SDP problems grow very rapidly in dimension. Hence, the concept of partial higher liftings for SAT was proposed and analyzed in. The objective here is the construction of SDP relaxations which are linearly-sized with respect to the size of the SAT instance, and are thus more amenable to practical computation than the entire higher liftings. 
The construction of such partial liftings for SAT is particularly interesting because the structure of the SAT instance directly specifies the structure of the SDP relaxation. The resulting SDP relaxations as well as some of their properties are presented. Finally, in Section 6, we compare the feasible sets of four of the aforementioned SDP relaxations for the special case of 3-SAT, and in Section 7, we review the most recent results in the application of SDP to SAT and MAX-SAT. Preliminaries and Notation We consider the satisfiability (SAT) problem for instances in conjunctive normal form (CNF). Such instances are specified by a set of proposition letters p 1,..., p n, and a propositional formula = m j=1 C j, with each clause C j having the form C j = k∈I j p k ∨ k∈ jp k where I j, j ⊆ {1,..., n}, I j ∩ j = ∅, andp i denotes the negation of p i. Given such an instance, we consider the following two questions: SAT Determine whether has a model, that is, whether there is a truth assignment to the variables p 1,..., p n such that evaluates to TRUE; MAX-SAT Find a truth assignment to the variables p 1,..., p n which maximizes the number of clauses in that are satisfied. This is the unweighted version of MAX-SAT. If each clause C j has a weight w j associated with it, then the weighted MAX-SAT problem seeks a truth assignment so that the total weight of the satisfied clauses is maximized. It is clear that MAX-SAT subsumes SAT, and indeed it is likely a much harder problem. For k ≥ 2, k-SAT and MAX-k-SAT refer to the instances of SAT and MAX-SAT respectively for which all the clauses have length at most k. The optimization problems we consider here are generally represented by a constraint set F and an objective (or cost) function f that maps the elements of the constraint set into real numbers. The set F represents all possible alternatives x, and for each such x, the value f (x) of the objective function is a scalar measure of the desirability of choosing alternative x. Thus, an optimal solution is an x * ∈ F such that f (x * ) ≥ f (x) for all x ∈ F. MAX-SAT problems are clearly optimization problems: given any truth assignment, the objective function counts the number of satisfied clauses, and the constraints describe the 2 n possible truth assignments. On the other hand, from an optimization perspective, the SAT problem can be approached in two different ways: 1. we may convert it to a MAX-SAT instance with w j = 1 for every clause, solve it, and determine that it is satisfiable (resp. unsatisfiable) if the optimal objective value is equal to (resp. strictly less than) the number of clauses; or 2. we may view it as a feasibility problem, that is, we look for a set of constraints F that must be satisfied by every model, but not by every truth assignment, and thus we reduce the SAT problem to the problem of determining whether F has a feasible solution (which will correspond to a model) or F = ∅ (which means the SAT instance is unsatisfiable). In this survey, we focus on the latter approach, but the former is briefly mentioned in Section 7. A Brief Introduction to Semidefinite Programming Semidefinite programming refers to the class of optimization problems where a linear function of a symmetric matrix variable X is optimized subject to linear constraints on the elements of X and the additional constraint that X must be positive semidefinite. This includes linear programming (LP) problems as a special case, namely when all the matrices involved are diagonal. 
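To make this definition concrete, the following short Python sketch (illustrative only, and not part of the original development) states a tiny SDP of exactly this form using the cvxpy modeling library, under the assumption that an SDP-capable conic solver is installed; the data C, A1, b1 are invented purely for illustration.

import numpy as np
import cvxpy as cp

# A tiny SDP: maximize the linear function <C, X> = Tr(C X) over symmetric X,
# subject to one linear equality constraint on the entries of X and X psd.
n = 3
C = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
A1, b1 = np.eye(n), 3.0                      # the constraint Tr(A1 X) = b1, i.e. Tr X = 3

X = cp.Variable((n, n), symmetric=True)
prob = cp.Problem(cp.Maximize(cp.trace(C @ X)),
                  [cp.trace(A1 @ X) == b1, X >> 0])
prob.solve()
print(prob.value)                            # optimal value of the SDP
print(np.linalg.eigvalsh(X.value))           # spectrum of the optimal X (nonnegative up to tolerance)

Restricting all data matrices to be diagonal reduces this sketch to an ordinary linear program, which is the special case mentioned above.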
A variety of polynomial-time interior-point algorithms for solving SDPs have been proposed in the literature, and several excellent solvers for SDP are now available. We refer the reader to the semidefinite programming webpage as well as the books for a thorough coverage of the theory and algorithms in this area, as well as a discussion of several application areas where semidefinite programming researchers have made significant contributions. In particular, SDP has been successfully applied in the development of approximation algorithms for several classes of hard combinatorial optimization problems beyond the results that we present in this paper. The survey articles provide an excellent overview of the results in this area. Like LP problems, SDP problems also come in pairs. One of the problems is referred to as the primal problem, and the second one is the dual problem. Either problem can be chosen as "primal", since the two problems are dual to each other. In this paper, we choose the standard formulation of SDP as follows: where (P) denotes the primal problem, and (D) the dual problem; the variables X and Z are in S n, the space of n n real symmetric matrices; X 0 denotes that the matrix X is positive semidefinite; the data matrices A i and C may be assumed to be symmetric without loss of generality; and b ∈ and y ∈ are column vectors. Furthermore, we use the scalar product between two matrices in S n defined as where Tr M denotes the trace of the square matrix M, which is the sum of the diagonal elements. In summary, SDP is the problem of optimizing a linear function subject to linear equality constraints and the requirement that the matrix variable be positive semidefinite. The set of positive semidefinite matrices has a surprisingly rich structure. In the next subsection, we present some of the properties of such matrices that will be relevant for the application of SDP to SAT and MAX-SAT. Before proceeding, we provide two examples that illustrate how SDP can be applied to combinatorial optimization problems. The first example is a derivation of the basic SDP relaxation for the max-cut problem which, following the ground-breaking work of Goemans and Williamson, has become one of the flagship problems for studying applications of semidefinite programming to combinatorial optimization. The max-cut SDP relaxation is of interest in the context of SAT because it is also a relaxation of the so-called cut polytope, an important and well-known structure in the area of integer programming which is closely related to the SAT relaxations in this survey. The reader is referred to for a wealth of results about the cut polytope. The second example below shows how the max-cut relaxation can be extended to a basic SDP relaxation for MAX-SAT. Example 1 (A basic SDP relaxation for max-cut). The max-cut problem is a combinatorial optimization problem on undirected graphs with weights on the edges. Given such a graph G = (V, E) with node set V and edge set E, the problem consists in finding a partition of V into two parts so as to maximize the sum of the weights on the edges that are cut by the partition. (An edge is said to be cut if it has exactly one end on each side of the partition.) We assume without loss of generality that G is a complete graph since non-edges can be added in with zero weight to complete the graph without changing the problem. Let the given graph G have node set {1,..., n} and let it be described by its weighted adjacency matrix W = (w ij ). 
Let the vector v ∈ {−1, 1} n represent any cut in the graph via the interpretation that the sets {i : v i = +1} and {i : v i = −1} specify a partition of the node set of the graph. Then max-cut may be formulated as: so that the term multiplying w ij in the sum equals one if the edge (i, j) is cut, and zero otherwise. Since w ij = w ji,, and the objective function can be expressed as: where L := Diag (W e)−W denotes the Laplacian matrix associated with the graph, e denotes the vector of all ones, and Diag denotes a diagonal matrix with its diagonal formed from the vector given as its argument. We can thus rewrite as: where Q := 1 4 L. To obtain an SDP relaxation, we now formulate max-cut in S n. Consider the change of variable X = vv T, v ∈ {−1, 1} n. Then X ∈ S n and v T Qv = Tr v T Qv = Tr QX (using the fact that Tr AB = Tr BA), and it can be shown (see Theorem 8 below) that max-cut is equivalent to max Tr QX s.t. X i,i = 1, i = 1,..., n rank (X) = 1 X 0, X ∈ S n Removing the rank constraint (which is not convex) gives the basic SDP relaxation: where diag denotes a vector containing the diagonal entries of the matrix argument. The dual SDP problem is Example 2 (A basic SDP relaxation for MAX-SAT). Given an instance of MAX-SAT, we represent each boolean variable p k by a ±1 variable v k, and each negation byp k by a ±1 variable v n+k, k = 1,..., n. Furthermore, introduce v 0 ∈ {−1, 1} with the convention that p k is TRUE if v k = −v 0, and FALSE if v k = v 0. Now the general MAX-SAT problem can be formulated as: where z j = 1 if and only if clause C j is satisfied. To relax this to an SDP, we replace the requirement that v i ∈ {−1, 1} by v i ∈ S n, where S n denotes the unit sphere in n+1. The matrix variable X for the SDP relaxation is now obtained by letting X s,t := v T s v t, s, t = 0, 1,..., 2n. This gives a (2n + 1) (2n + 1) matrix and the SDP is: (see Theorem 5 below), this problem can be rewritten as an SDP problem closely related to the basic max-cut relaxation: In both examples, the SDP problem is a relaxation of the original problem. Thus, the optimal value of the SDP problem provides in each case a global upper bound on the true optimal value of the combinatorial problem. Note also that these two examples illustrate two different, but equivalent, ways of deriving (and interpreting) SDP relaxations for combinatorial problems: 1. Embed the vector of binary variables into a rank-one matrix, formulate the combinatorial problem exactly using the entries of this matrix, and then remove the rank constraint to obtain an SDP; or 2. Replace the binary variables with real vectors of a suitably chosen length, and interpret their inner products as entries in a positive semidefinite matrix. Both of these will be used in the sequel. Positive Semidefinite Matrices We begin with the definition of positive semidefiniteness. Definition 1. A matrix A ∈ S n is said to be positive semidefinite (psd) if When the condition holds with strict positivity for all y = 0, A is said to be positive definite (pd). We use the notation A 0 for A positive semidefinite, and A 0 for A positive definite. We use S n + (resp. S n ++ ) to denote the set of psd (resp. pd) matrices. To prove this, observe that for all y ∈ n. The 3 3 identity matrix is also feasible, and it is easy to check that it is pd. 
It follows immediately from Definition 1 that: Every non-negative linear combination of psd matrices is psd: If j ≥ 0, j j > 0, X j 0 for all j, then the linear combination j j X j satisfies for all y = 0, and hence is pd. If X, Y ∈ S n +, then X + (1 − )Y ∈ S n + for all 0 ≤ ≤ 1. The same holds for S n ++, and thus both are convex sets. Note however that only S n + is closed. Consider the following definition: Definition 2. A subset K ⊂ S n is called a cone if it is closed under positive scalar multiplication, i.e. X ∈ K whenever X ∈ K and > 0. Note that the origin may or may not be included. Clearly, both S n + and S n ++ are convex cones. The cone S n + further possesses the following properties: It is proper if it has nonempty interior in S n and is closed, convex, and pointed. For a proper cone K, we define the dual cone K * as The self-duality of S n + follows from the following theorem: Hence, Theorem 1 above is only one of many properties of psd matrices. We present here a few more that will be of use in the sequel. First we have the following well-known theorem. Corollary 1. All the eigenvalues of A ∈ S n are real. This leads to a first characterization of psd matrices. Another useful characterization of psd matrices is formulated in terms of the principal minors of the given matrix. Hence, by considering I = {1, 3}, we can prove that the matrix in Example 4 is not psd. A property which will be useful for the application to MAX-SAT is the existence of a square root of a psd matrix, which follows immediately from Theorems 2 and 3: Note that the matrix W is not unique. Indeed, for any orthogonal matrix Q ∈ n, let W Q := Q W. Then W T Q W Q = W T Q T QW = W T W = A, since Q T Q = I by the orthogonality of Q. A specific choice of W which is very useful from a computational point of view is the Cholesky decomposition: where L is a lower triangular matrix. (If A is pd, then L is nonsingular with strictly positive entries on the diagonal.) The Cholesky decomposition can be computed efficiently (see for example ) and is useful in many practical algorithms for solving SDP problems. Finally, there are two more properties which we will make use of: Then all the eigenvalues of A are strictly positive, i.e. A is pd. The Geometry of SDP for Max-cut We present here some results about the feasible set of the basic SDP relaxation of max-cut. We refer the reader to the excellent book of Deza and Laurent which brings together the large body of results in this area, and focus here on results that will be useful in the sequel. Definition 5. The cut matrices in S n are the matrices of the form The cut matrices are real, symmetric, psd, and rank-1. Furthermore, all their entries are ±1, and in particular diag (X) = e. The following theorem, based on results in, gives two characterizations of these matrices involving positive semidefiniteness. Proof: The inclusions ⊆ 1 and ⊆ 2 are clear. It remains to prove the reverse inclusions. Let X ∈ 1. Then X is symmetric and rank-1, and thus it has the form X = yy T. Since diag (X) = e, y 2 i = 1 for every i, and we can choose y such that y 1 = 1 (replace y by −y if necessary), we deduce that X ∈. Let X ∈ 2 and partition it as 1 x T xX, where the vector x is the first column of X minus the initial 1. Then by Theorem 6,X − xx T 0. Since diag (xx T ) = e, it follows that diag (X − xx T ) = 0 and henceX − xx T = 0 (apply Theorem 4 to every 2 2 principal minor). Therefore, X = 1 Note that consists of 2 n−1 isolated points in S n. 
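As a quick numerical illustration of Definition 5 and of the characterizations above, the short NumPy sketch below (an illustration, not taken from the survey) builds the cut matrix associated with a random ±1 vector and checks that it has unit diagonal, rank one, and nonnegative eigenvalues.

import numpy as np

rng = np.random.default_rng(0)
n = 5
v = rng.choice([-1.0, 1.0], size=n)          # a cut: {i : v_i = +1} versus {i : v_i = -1}
X = np.outer(v, v)                           # the corresponding cut matrix X = v v^T

print(np.allclose(np.diag(X), 1.0))          # diag(X) = e
print(np.linalg.matrix_rank(X))              # rank(X) = 1
print(np.linalg.eigvalsh(X).min() >= -1e-9)  # all eigenvalues nonnegative, so X is psd

Replacing X by a general matrix with unit diagonal that is psd but of arbitrary rank gives a point of the feasible set of the basic max-cut relaxation, which is discussed in the next paragraphs.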
We now consider convex relaxations of, particularly some over which it is possible to optimize in polynomial-time. The smallest convex set containing is the cut polytope, defined as the convex hull of the cut matrices: However, optimizing a linear function over C n is equivalent to solving the max-cut problem, and is hence NP-hard. A fruitful approach is to approximate the cut polytope by a larger polytope containing it and over which we can optimize in polynomial time using LP. A well-known relaxation is the metric polytope M n defined as the set of all matrices satisfying the triangle inequalities: The triangle inequalities model the fact that for any assignment of ±1 to the entries of x, the entries X ij, X ik, X jk of the corresponding cut matrix must comprise an even number of negative ones. Alternatively, it is possible to approximate the cut polytope with non-polyhedral convex sets. For instance, if we relax 1 by removing the rank constraint, we obtain another convex relaxation: The set E n is precisely the feasible set of the basic SDP relaxation of max-cut. It is also known as the set of correlation matrices, and it has applications in several areas, including statistics, finance, and numerical analysis (see e.g. and the references therein). For 3 3 principal submatrices of elements of E n, the following Lemma from will be quite useful in the sequel. 3. If c 2 = 1 then a = c b. Proof: Using Theorems 4 and 6, we have: which proves the first claim. The other two claims follow by similar arguments. Duality in SDP For (P) and (D) in the primal-dual pair defined above, we have (as in LP): Theorem 9. (Weak Duality) IfX is feasible for (P) and,Z for (D), then C X ≤ b T. Proof: by Theorem 1. However, because of the nonlinear psd constraint, SDP duality has some cases that do not occur in LP. We illustrate this with the following two examples from . Positive Duality Gap In LP, if both (P) and (D) are feasible, then there is no duality gap. This may fail for SDP. For example, It is easy to see that (P) has optimal objective value −a, while (D) has 0. Weak Infeasibility Even if there is no duality gap at optimality, the optimal value may not be attained for (P) and (D). Consider the primal-dual pair and observe that the optimal objective value 0 is attained for (P), but not for (D). To avoid these difficulties, we require that the SDP pair satisfy a constraint qualification. This is a standard approach in nonlinear optimization. The purpose of a constraint qualification is to ensure the existence of Lagrange multipliers at optimality. These multipliers are an optimal solution to the dual problem, and thus the constraint qualification ensures that strong duality holds: it is possible to achieve primal and dual feasibility with no duality gap. For applications to combinatorial optimization problems such as SAT and MAX-SAT, Slater's constraint qualification is usually easy to verify: simply exhibit a feasible matrix which is pd (and not psd) for each of the SDP primal and dual problems. Example 5 (Slater's Constraint Qualification for max-cut). To illustrate Slater's constraint qualification, we show that it holds for the basic SDP relaxation of max-cut. Clearly, the n n identity matrix is pd and feasible for the primal SDP. For the dual, choose y with entries sufficiently large such that y i > n j=1 Q ij for each i = 1,..., n. Then the matrix Z = Diag (y) − Q will be pd (by Theorem 7) and feasible for the dual SDP. 
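For readers who wish to experiment, the following sketch (again illustrative, assuming the cvxpy modeling library and an SDP-capable solver) solves the basic max-cut relaxation of Example 1 for a small made-up graph; as argued in Example 5, the identity matrix is a strictly feasible point, so Slater's condition holds and strong duality applies.

import numpy as np
import cvxpy as cp

# A made-up 4-node weighted graph, given by its symmetric adjacency matrix W.
W = np.array([[0., 1., 2., 0.],
              [1., 0., 1., 3.],
              [2., 1., 0., 1.],
              [0., 3., 1., 0.]])
n = W.shape[0]
L = np.diag(W @ np.ones(n)) - W              # Laplacian L = Diag(W e) - W
Q = L / 4.0

X = cp.Variable((n, n), symmetric=True)
prob = cp.Problem(cp.Maximize(cp.trace(Q @ X)),
                  [cp.diag(X) == 1, X >> 0])  # diag(X) = e, X psd
prob.solve()
print(prob.value)                            # an upper bound on the max-cut value of the graph
# The identity matrix satisfies diag(I) = e and is positive definite, so the
# primal problem has a Slater point, exactly as in Example 5.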
Finally, we will use SDP to prove that certain instances of SAT are unsatisfiable. To prove that an SDP is infeasible, it suffices to compute a certificate of infeasibility, that is, a pair (y, Z) such that The existence of such a pair implies that the dual SDP problem is unbounded below, and by weak duality (Theorem 9) the primal problem must be infeasible (since any primal feasible solution implies a lower bound on the optimal solution of the dual). Computational Complexity and SDP Relaxations When considering optimization approaches, we must assess the computational complexity of these problems. It is well known that SAT was the first problem shown to be NP-complete, although several important special cases can be solved in polynomial time, and that the MAX-SAT and MAX-k-SAT problems are known to be NP-hard. The hardness of these problems motivates the study of optimization problems which are not exact formulations of the SAT and MAX-SAT problems, but rather relaxations that can be solved in polynomialtime. These relaxations then lead to either approximation algorithms with polynomial-time complexity and provable approximation guarantees, or branch-and-bound (enumerative) approaches which solve the problem exactly, but have no guaranteed polynomial-time complexity. For completeness, we summarize here (based on and ) a few facts about the complexity of SDP. The fact that SDP problems can be solved in polynomial-time to within a given accuracy follows from the complexity analysis of the ellipsoid algorithm (see ). More specifically for our purposes, consider the SDP problem (P) defined in with integer data, a given rational > 0, and a given integer R > 0 such that either (P) is infeasible or X ≤ R for some feasible X. Then it is possible to find in polynomial-time either a matrix X * at distance at most from the feasible set of (P) such that C X * − p * ≤, or a certificate that the feasible set of (P) does not contain a ball of radius. The complexity of the algorithm is polynomial in n,, log(R), log( 1 ), and the bit length of the input data. It is worth pointing out that, in contrast to LP, some peculiar situations may occur in SDP. First, there are SDP problems with no rational optimal solution. For instance, the pair of constraints (taken from ) 1 x x 2 0 and 2x 2 2 x 0 have x = √ 2 as the unique feasible solution (apply Corollary 2). Another situation that may occur in SDP is that all feasible solutions are doubly exponential. Consider the set of constraints (taken from ) Then any feasible solution must satisfy x i ≥ 2 2 i −1, i = 1,..., n, which means that every rational feasible solution has exponential bitlength. Approximation Algorithms for MAX-SAT A -approximation algorithm for MAX-SAT is a polynomial-time algorithm that computes a truth assignment such that at least a proportion of the clauses in the MAX-SAT instance are satisfied. The number is the approximation ratio or guarantee. Hstad proved that for any > 0, there is no ( 21 22 + )-approximation algorithm for MAX-2-SAT, and no ( 7 8 + )-approximation algorithm for MAX-SAT (unless P=NP). This section, based on the excellent presentation by Laurent and Rendl in , presents an overview of the approximation algorithms in the literature, and a detailed description of the groundbreaking algorithm of Goemans and Williamson. The first approximation algorithm for MAX-SAT is a 1 2 -approximation algorithm due to Johnson. 
Given n values i ∈, i = 1,..., n, the algorithm sets each variable p i to TRUE independently and randomly with probability i. Therefore, the probability that If we choose i = 1/2 for all i = 1,..., n, this probability is 1 − 2 −l(C j ), and thus the total expected weight of the satisfied clauses is This gives a randomized 1 2 -approximation algorithm for MAX-SAT. Improved approximation algorithms have since been obtained. The first 3 4 -approximation algorithm for MAX-SAT was proposed by Yannakakis and makes use of solutions to maximum flow problems, and subsequently Goemans and Williamson presented another 3 4 -approximation algorithm using LP. However, the best-known approximation guarantees for MAX-SAT problems make use of SDP relaxations and appropriate randomized rounding schemes. SDP-based Approximation Algorithms for MAX-2-SAT The breakthrough was achieved by Goemans and Williamson who proposed an SDPbased approximation algorithm for the MAX-2-SAT problem with a 0.87856 guarantee. Their algorithm uses an extension of the SDP relaxation in Example 2 above, obtained by adding for each 2-clause the appropriate inequality as follows: For clarity of presentation, we assume for the remainder of this section that j = ∅ for all clauses. It is straightforward to modify the relaxations and the analysis to account for negated variables. The Goemans-Williamson SDP relaxation for MAX-2-SAT is thus Furthermore, it follows from Lemma 1 that X k,n+k = −1 ⇒ X s,n+k = −X s,k for s = 0, 1,..., 2n, i.e. the columns k and n + k of X are the negative of each other, which is consistent with the definition of the vectors v i in Example 2. The algorithm of Goemans and Williamson proceeds as follows: Step 1 Solve the SDP relaxation, obtaining an optimal value * GW and a corresponding optimal solution X *. Step 2 Compute a Gram decomposition of X * using, for instance, the Cholesky decomposition (Theorem 5). Thus obtain a set of 2n Step 3 Randomly generate a vector ∈ S n and let H denote the hyperplane with normal. Step 4 For i = 1,..., n, let We now sketch the analysis of the algorithm's performance. Let ij := arccos(v T i v j ) denote the angle between v i and v j, p(i) denote the probability that the clause x i is satisfied, and p(i, j) denote the probability that the clause x i ∨ x j is satisfied. Our goal is to establish a relationship between * GW and the expected number of satisfied clauses. First, the probability that H separates v 0 and v i is equal to 0i. Therefore, p(i) = 0i, which implies that Thus, p(i) ≥ 0 z i for every clause of length 1. For clauses of length 2, p(i, j) is equal to the probability that at least one of the vectors v i, v j is separated from v 0 by H. Letp(i, j) denote the probability that v 0, v i and v j all lie on the same side of H, so that p(i, j) = 1 −p(i, j). A straightforward way to calculate this probability is to observe that it is equal to Hence, the expected total weight of the satisfied clauses is at least 0 times the optimal value of the SDP relaxation. Theorem 10. For an instance of MAX-2-SAT, the Goemans-Williamson algorithm as described above provides a truth assignment for which where * GW is the optimal value of, and 0 is as defined in. Since the optimal value of the MAX-2-SAT problem is at least the expected value of the randomized truth assignment, this proves that the algorithm is an 0 -approximation algorithm for MAX-2-SAT. The randomized hyperplane rounding procedures can be formally derandomized using the techniques in. 
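Steps 2 through 4 of the Goemans-Williamson algorithm are straightforward to implement once the SDP has been solved. The sketch below is illustrative only, and its input conventions are assumptions: it takes a matrix X whose row and column 0 correspond to v_0 and whose rows and columns 1,..., n correspond to the un-negated variables, computes a Gram decomposition, and applies random hyperplane rounding with the convention from Example 2 that p_i is TRUE exactly when v_i and v_0 are separated by the hyperplane.

import numpy as np

def gw_round(X, rng=None):
    # Step 2: Gram decomposition X = V^T V. X is psd but typically singular,
    # so an eigendecomposition is used instead of a plain Cholesky factorization.
    rng = np.random.default_rng() if rng is None else rng
    w, U = np.linalg.eigh(X)
    V = (U * np.sqrt(np.clip(w, 0.0, None))).T   # column i of V is the vector v_i
    # Step 3: a random hyperplane H through the origin with normal r.
    r = rng.standard_normal(V.shape[0])
    side = np.sign(r @ V)                        # which side of H each v_i lies on
    # Step 4: p_i is TRUE iff H separates v_i from v_0.
    return side[1:] != side[0]

Averaging the number of satisfied clauses over many independent draws of r approximates the expected performance analyzed above; the formal derandomization mentioned in the text replaces the random draw by a deterministic choice.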
This MAX-2-SAT algorithm led to an improved 0.7584-approximation algorithm for general MAX-SAT in. For MAX-2-SAT, a further significant improvement was achieved by Feige and Goemans who proposed a 0.931-approximation algorithm for MAX-2-SAT. They start with the SDP relaxation augmented by the n 3 triangle inequalities. From the optimal solution X * of this strengthened SDP relaxation, they obtain a set of n + 1 vectors v 0, v 1,..., v n as in step 2 of the algorithm of Goemans and Williamson above. However, instead of applying the random hyperplane rounding technique to these vectors directly, Feige and Goemans use them to generate a set of rotated vectors to which they then apply the hyperplane rounding. The general idea is as follows. We define a rotation function to be a continuous function r : → such that r = 0 and r( − ) = − r(). Given such a rotation function, and the vectors v 0, v 1,..., v n, then for i = 1,..., n, we define v r i, the rotated version of v i, to be the vector obtained by rotating v i in the plane spanned by v 0 and v i until the angle between v 0 and v i equals r( 0i ). As in the analysis above, it now follows that p r (i, j), the probability that at least one of the vectors v r i, v r j is separated from v 0 by H, is and therefore The goal now is to choose the rotation function r so that the performance ratio of the algorithm is as high as possible in the worst case, and knowing that the quantities cos 0i, cos 0j, and cos ij satisfy the triangle inqualities. Feige and Goemans consider a family of rotation functions of the form and claim a performance ratio of at least 0.93109 for the rotation function defined by = 0.806765. A precise performance ratio of 0.931091 for this algorithm was subsequently proved rigorously by Zwick. Matuura and Matsui obtained a higher performance ratio with an approximation algorithm which, like that of Feige and Goemans, uses the SDP relaxation with the triangle inequalities added. Matuura and Matsui fix v 0 = (1, 0,..., 0) T, and the remaining vectors v 1,..., v n are obtained from X * as usual. Note that the restriction on v 0 can be easily handled via an appropriate orthogonal matrix (see Theorem 5 and the discussion thereafter). Instead of rotating the set of vectors, Matuura and Matsui change the way of choosing the random hyperplane. Instead of using a uniform distribution, they select the random hyperplane using a distribution function on the sphere which is skewed towards v 0, and uniform in every direction orthogonal to v 0. A judicious choice of the distribution function yields a 0.935 performance ratio for their algorithm. Finally, Lewin, Livnat and Zwick proposed a combination of the Feige-Goemans and Matuura-Matsui approaches, namely the rotation of the set of vectors and the skewed hyperplane rounding, and obtained a 0.940-approximation algorithm for MAX-2-SAT. SDP-based Approximation Algorithms for MAX-3-SAT The MAX-2-SAT approach of Goemans and Williamson was extended to MAX-3-SAT by Karloff and Zwick. Karloff and Zwick use the following SDP relaxation: and extend the analysis of Goemans and Williamson to obtain an optimal (unless P=NP) 7 8 -approximation algorithm for MAX-3-SAT. We use the same notation as above, and further let * KZ denote the optimal value of the SDP relaxation, and p(i, j, k) denote the probability that the clause x i ∨ x j ∨ x k is satisfied. Our goal is to establish a relationship between * KZ and the expected number of satisfied clauses. 
Since 0 > 7 8, the analysis of Goemans and Williamson for clauses of lengths 1 and 2 gives the desired performance ratio. For clauses of length 3, however, a more careful analysis is required. Using the random hyperplane rounding technique, we still have that p(i, j, k) is equal to the probability that the random hyperplane H separates v 0 from at least one of the vectors v i, v j, v k. Therefore, p(i, j, k) = 1 −p(i, j, k), wherep(i, j, k) is the probability that v 0, v i, v j and v k all lie on the same side of H. By symmetry, it follows thatp(i, j, k) equals two times the probability that T v 0, T v i, T v j, and T v k are all non-negative. Denote this probability byq(i, j, k). The estimation ofq(i, j, k) is the main step of the performance analysis. Since we are only interested in the four inner products above, we can restrict ourselves to the 4-dimensional space spanned by v 0, v i, v j, and v k, with the vector replaced by its normalized projection onto this space, which is uniformly distributed on the unit sphere S 3. Hence, we may assume without loss of generality that we are working in 4 and that all five vectors of interest lie in S 3. If we define, where volume() denotes the 3-dimensional spherical volume. Since volume(S 3 ) = 2 2, it follows that It remains to estimate the spherical volume of T (0, i, j, k). When the vectors v 0, v i, v j, and v k are linearly independent, T (0, i, j, k) is a spherical tetrahedron. However, there is no known closed-form formula for this volume, and it is possible that none exists. Karloff and Zwick proved that if the instance of MAX-3-SAT is satisfiable, that is, if * KZ = 1, then p(i, j, k) ≥ 7 8. Zwick proved rigorously the performance ratio of 7 8 for general MAX-3-SAT. It is worth noting that both proofs were computer assisted: the first result involved one computation carried out using Mathematica with 50 digits of precision, and the second result was obtained using Zwick's RealSearch system, which makes use of interval arithmetic rather than floating point arithmetic, thus providing a rigorous proof. Further Extensions of the SDP-based Approach to MAX-SAT Karloff and Zwick also proposed a general construction of SDP relaxations for MAXk-SAT. For MAX-4-SAT specifically, Halperin and Zwick proposed an SDP relaxation, studied several rounding schemes, and obtained approximation algorithms that almost attain the theoretical upper bound of 7 8. Halperin and Zwick also consider strengthened SDP relaxations for MAX-k-SAT. Most recently, Asano and Williamson have combined ideas from several of the aforementioned approaches and obtained a 0.7846-approximation algorithm for general MAX-SAT. First Lifting: The Gap Relaxation We now turn to the application of SDP to SAT in terms of a feasibility problem. The initial work in this area is due to de Klerk, van Maaren, and Warners who introduced the Gap relaxation for SAT. This SDP relaxation was motivated by the concept of elliptic approximations for SAT instances. Elliptic approximations were first proposed in and were applied to obtain effective branching rules as well as to recognize certain polynomially solvable classes of SAT instances. We let TRUE be denoted by 1 and FALSE by −1, and for clause j and k ∈ I j ∪ j, define The SAT problem is now equivalent to the integer programming feasibility problem where l(C j ) = |I j ∪ j | denotes the number of literals in clause C j. 
If every x k = ±1, then the corresponding truth assignment satisfies C j precisely when This motivates the definition of the elliptic approximation of C j, denoted E j : This is called elliptic because the set of points in n contained in E j forms an ellipsoid. Using these approximations, we can reformulate SAT as the problem of finding a ±1 vector x in the intersection of the m ellipsoids. However, it is difficult to work directly with intersections of ellipsoids, but we can use these ellipsoids to obtain an SDP relaxation of this problem. At this point, there are two ways to use the concept of elliptic approximation for constructing an SDP relaxation for SAT, both of which lead to the Gap relaxation. The first derivation we present shows a deep connection between SAT and eigenvalue optimization, and justifies the name of the relaxation. The second derivation is more direct, and sets the stage for the subsequent development of tighter SDP relaxations. First Derivation of the Gap Relaxation This derivation is motivated by the transformation of the SAT problem into one involving the optimization of eigenvalues, which can be reformulated as an SDP problem. Our presentation is based on the derivation in. First, we consider the aggregation of this information into a single ellipsoid by taking the sum of the m ellipsoids. This again yields an ellipsoid. Furthermore, rather than giving each ellipsoid equal weight in the sum, we can associate a non-negative weight w j with each clause C j, and thus define the weighted elliptic approximation of the given SAT formula : It is straightforward to prove that Theorem 11. Let be a CNF formula with associated parameters s j,k and weighted elliptic approximation E(, w). If x ∈ {−1, 1} n is a satisfying truth assignment of, then x ∈ E(, w) for any choice of w ≥ 0. The contrapositive of Theorem 11 gives a sufficient condition for proving unsatisfiability. Since x ∈ {−1, 1} n implies x T x = n, another sufficient condition for unsatisfiability is: Next, we rewrite condition as is an m n matrix, and r is an m-vector with r j = l(C j )(l(C j ) − 2). Furthermore, we introduce an extra boolean variable x n+1 to obtain a homogeneous quadratic inequality: Note that Theorem 11 and Corollaries 3 and 4 still hold for this homogenized inequality. Withx := (x 1, x 2,..., x n, x n+1 ) T and the (n + 1) (n + 1) matrix and since x T x = n, we can rewrite asx T Q(w)x ≤ 0. Finally, we add a correcting vector u ∈ n to obtai and it is easy to check thatx TQ (w, u)x =x T Q(w)x. The upshot of this derivation is that the given formula is unsatisfiable if we can find a pair (w, u) with w ≥ 0 for whichx TQ (w, u)x > 0 for allx ∈ {−1, 1} n+1. Thus, we consider the problem max (n + 1) s.t.Q(w, u) I w ≥ 0 The optimal value of problem is the gap of formula. We have the following result: Corollary 5. If the formula has a positive gap, then it is unsatisfiable. Proof: If the optimal value of the problem is positive, then there exists * > 0 and a pair (w, u) with w ≥ 0 such thatQ(w, u) is pd, sinceQ(w, u) * I 0 holds. By Corollary 4, must be unsatisfiable. Note that problem is an SDP. To write down its dual, we rewrite the matrix Q(w, u) as follows: We make the following observations: 1. Since the objective function is constant, this is equivalent to a feasibility problem. Furthermore, Tr (s j s T j Y ) = Tr (s T j Y s j ) = s T j Y s j. 
Hence, the dual problem is equivalent to Second Derivation of the Gap Relaxation Recall the inequality defining the ellipsoid approximation E j of clause C j : Expanding it, we obtain: and since every term in the double sum with k = k equals 1 (for ±1 solutions), we have Letting Y = xx T and applying Theorem 8, we obtain the formulation where we refer to the first row (and column) of the matrix variable Y = 1 y T y Y as the 0 th row (and column), so that Y has rows and columns indexed by {0, 1,..., n}. Omitting the rank constraint, we obtain an SDP relaxation. It is straightforward to check that this is the same SDP as that obtained using the first derivation. Properties of the Gap Relaxation The Gap relaxation characterizes unsatisfiability for 2-SAT problems (see Theorem 12 below). More interestingly, it also characterizes satisfiability for a class of covering problems, such as mutilated chessboard and pigeonhole instances. Rounding schemes and approximation guarantees for the Gap relaxation, as well as its behaviour on so-called (2 + p)-SAT problems, are studied in. We present here some details about the first two properties of the Gap relaxation, and refer the reader to the cited papers for more details. First we note that for each clause of length 1 or 2, the elliptic approximation inequality can be set to equality. Therefore we henceforth consider the Gap relaxation in the following form: A first result about the Gap relaxation is: Therefore, the Gap relaxation is feasible if and only if is satisfiable. Furthermore, given a feasible solution Y for the relaxation, it is straightforward to extract a truth assignment satisfying the 2-SAT formula by the following algorithm: 1. For every k such that Y 0,k = 0, set x k = sign (Y 0,k ); 2. For every remaining constraint j that remains unsatisfied, we have the constraint's two variables k and k such that Y 0,k = 0 and Y 0,k = 0. Hence, Y k,k = −s j,k s j,k = ±1. Considering all these ±1 elements, the matrix can be completed to a rank-1 matrix that is feasible for. See for more details. We will make use of this result in the sequel. The next result about the Gap relaxation is concerned with a class of covering instances. Let {V 1,..., V q } and {V q+1,..., V q+t } be two partitions of the set of variables {p 1,..., p n } such that t < q. We consider CNF formulas of the form: We show that the SDP approach, using the Gap relaxation, can prove such formulas to be unsatisfiable in polynomial-time in a fully automated manner, i.e. without using any problem-specific information. For these instances, the Gap relaxation can be written as We prove the following theorem. Proof: If t ≥ q, then the formula is satisfiable, and clearly the SDP relaxation is feasible. Suppose now that t < q. Consider the following partition of Y feasible for : where is n n and diag ( ) = e. By Theorem 6 and the positive semidefiniteness of Y, for every vector s j. Hence, for each of the first q constraints, Summing these inequalities over j = 1,..., q, we obtain Now consider the remaining t constraints. For j = q + 1,..., t, adding up all the corresponding constraints yields and therefore Summing these inequalities over j = q + 1,..., t, we obtain Equations and together imply which is a contradiction since t < q. 
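Since the proofs of unsatisfiability obtained here ultimately rest on a certificate of infeasibility for the SDP (Section 2.3), it may help to see what checking such a certificate involves. The sketch below is only schematic and rests on the standard Farkas-type assumption that the certificate is a pair (y, Z) with Z = sum_i y_i A_i positive semidefinite and b^T y < 0, which yields an improving ray for the dual and hence, by weak duality (Theorem 9), proves the primal infeasible.

import numpy as np

def check_infeasibility_certificate(A, b, y, tol=1e-8):
    # A: list of symmetric data matrices A_i; b: right-hand-side vector; y: candidate multipliers.
    Z = sum(y_i * A_i for y_i, A_i in zip(y, A))   # Z = sum_i y_i A_i
    is_psd = np.linalg.eigvalsh(Z).min() >= -tol   # Z must be (numerically) psd
    improving = float(np.dot(b, y)) < -tol         # and b^T y must be strictly negative
    return is_psd and improving

Because both conditions can be verified in floating point to within a tolerance, such a certificate can be checked independently of the solver that produced it.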
Theorem 13 shows that for the class of instances, the SDP approach not only proves that infeasibility can be shown in polynomial-time, but also provides a certificate of infeasibility in polynomial-time without making explicit use of any additional knowledge about the instance. On the other hand, for any instance of SAT in which all the clauses have length three or more, the identity matrix is always feasible for the Gap relaxation. Therefore, this relaxation cannot prove unsatisfiability for any such instances. The work presented in the following section was partly motivated by the search for an SDP relaxation which can be used to prove that a given SAT formula is unsatisfiable independently of the lengths of the clauses in the instance. Strengthened SDP Relaxations via Higher Liftings The SDP relaxations presented in this Section are strengthenings of the Gap relaxation, and they inherit many of its properties. They are constructed using ideas from a "higher liftings" paradigm for constructing SDP relaxations of discrete optimization problems. The concept of lifting has been proposed by several researchers and has led to different general purpose hierarchical frameworks for solving 0-1 optimization problems. The idea of applying SDP relaxations to 0-1 optimization dates back at least to Lovsz's introduction of the so-called theta function as a bound for the stability number of a graph. Hierarchies based on LP relaxations include the lift-and-project method of Balas, Ceria and Cornujols, the reformulation-linearization technique of Sherali and Adams, and the matrixcuts approach of Lovsz and Schrijver. Researchers in the SAT community have studied the complexity of applying some of these techniques, and generalizations thereof, to specific classes of SAT problems (see the recent papers ). We note that semidefinite constraints may also be employed in the Lovsz-Schrijver matrix-cuts approach, but in a different manner from that of the lifting paradigm we consider in this section. The higher-liftings paradigm we consider is based on the Lasserre hierarchy of SDP relaxations of polynomial optimization problems. The idea behind this paradigm can be summarized as follows. Suppose that we have a discrete optimization problem on n binary variables. The SDP relaxation in the space of (n + 1) (n + 1) symmetric matrices is called a first lifting. (For SAT, this corresponds to the Gap relaxation.) Note that, except for the first row, the rows and columns of the matrix variable in this relaxation are indexed by the binary variables themselves. To generalize this operation, we allow the rows and columns of the SDP relaxations to be indexed by subsets of the discrete variables in the formulation. These larger matrices can be interpreted as higher liftings, in the spirit of the second lifting for max-cut proposed by Anjos and Wolkowicz, and its generalization for 0-1 optimization independently proposed by Lasserre. A detailed analysis of the connections between the Sherali-Adams, Lovsz-Schrijver, and Lasserre frameworks was done by Laurent. In particular, Laurent showed that the Lasserre framework is the tightest among the three. This fact motivates the study of its application to the SAT problem. For higher liftings of the max-cut problem, one of the theoretical questions that has been considered is to prove conditions on the rank of an optimal matrix for the SDP relaxation which ensure that the optimal value of the SDP is actually the optimal value of the underlying discrete problem. 
This question was settled for liftings of max-cut as follows. First, the rank-1 case is obvious since the optimal solution of the SDP is then a cut matrix (Theorem 8). For second liftings, a rank-2 guarantee of optimality was proved by Anjos and Wolkowicz. This result was extended to the whole of the Lasserre hierarchy by Laurent who showed that as the higher liftings for max-cut in the Lasserre hierarchy increase in dimension, correspondingly increasing rank values become sufficient for optimality. These rank-based conditions for optimality can be interpreted as a measure of the relative strength of the relaxations. Applying directly the Lasserre approach to SAT, we would use the SDP relaxations Q K−1 (as defined in ) for K = 1, 2,..., n where the matrix variable of Q K−1 has rows and columns indexed by all the subsets I with |I| ≤ K. (Hence for K = 1, we obtain the matrix variable of the Gap relaxation.) The results in imply that for K = n, the feasible set of the resulting SDP relaxation is precisely the cut polytope. However, this SDP has dimension exponential in n. Indeed, the SDPs that must be solved when using this approach quickly become far too large for practical computation. For instance, even for second liftings (corresponding to K = 2) of max-cut problems, only problems with up to 27 binary variables were successfully solved in. This limitation motivated the study of partial higher liftings, where we consider SDP relaxations which have a much smaller matrix variable, as well as fewer linear constraints. The objective of this approach is the construction of SDP relaxations which are linearly-sized with respect to the size of the SAT instance, and are thus more amenable to practical computation than the entire higher liftings. The construction of such partial liftings for SAT is particularly interesting because we can let the structure of the SAT instance specify exactly the structure of the SDP relaxation. We now outline the construction of these relaxations. Derivation of Two SDP Relaxations The strengthened SDP relaxations are based on the following proposition. where the coefficients s j,k are as defined in above. Proof: By construction of the coefficients s j,k, the clause is satisfied if and only if s j,k x k equals 1 for at least one k ∈ I j ∪ j, or equivalently, if k∈I j ∪ j (1 − s j,k x k ) = 0. Expanding the product, we have The result follows. Using Proposition 1, we formulate the satisfiability problem as follows:.., n The next step is to formulate the problem in symmetric matrix space. Let P denote the set of nonempty sets I ⊆ {1,..., n} such that the term k∈I x k appears in the above formulation. Also introduce new variables for each I ∈ P, and thus define the rank-one matrix.. whose |P| + 1 rows and columns are indexed by {∅} ∪ P. By construction of Y, we have that Y ∅,I = x I for all I ∈ P. Using these new variables, we can formulate the SAT problem as: Relaxing this formulation by omitting the rank constraint would give an SDP relaxation for SAT. However, in order to tighten the resulting SDP relaxation, we first add redundant constraints to this formulation. This approach of adding redundant constraints to the problem formulation so as to tighten the resulting SDP relaxation is discussed in detail for the max-cut problem in. 
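Returning to Proposition 1, the expansion of the product over (1 − s_j,k x_k) is what generates the monomials x_I indexed by subsets I, and hence the index set P used below. The following SymPy sketch is illustrative; the sign convention s_j,k = +1 for a positive literal and −1 for a negated one is inferred from the proof of Proposition 1, and the 3-clause p1 ∨ ¬p2 ∨ p3 is a hypothetical example.

import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
signs = [(x1, 1), (x2, -1), (x3, 1)]      # s_j,k for the clause p1 OR (NOT p2) OR p3
expr = sp.Integer(1)
for x, s in signs:
    expr *= (1 - s * x)                   # the clause is satisfied iff this product equals 0
poly = sp.expand(expr)
print(poly)
# One +/-1 monomial x_I appears for each subset I of {1, 2, 3}; evaluating at any
# +/-1 assignment gives 0 exactly when the clause is satisfied, as in Proposition 1.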
The constraint rank (Y ) = 1 implies that for every triple I 1, I 2, I 3 of subsets of indices in P such that the symmetric difference of any two equals the third, the following three equations hold: Hence we can add some or all of these redundant constraints to formulation without affecting its validity. We choose to add the equations of the form for all the triples {I 1, I 2, I 3 } ⊆ P satisfying the symmetric difference condition and such that (I 1 ∪ I 2 ∪ I 3 ) ⊆ (I j ∪ j ) for some clause j. Beyond the fact that they tighten the SDP relaxation, this particular subset of redundant constraints was chosen because it suffices for proving the main theoretical result (Theorem 14 below). We refer to the resulting SDP relaxation as the R 3 relaxation: and Y ∅,I 3 = Y I 1,I 2, ∀{I 1, I 2, I 3 } ⊆ P such that I 1 ∆I 2 = I 3 and (I 1 ∪ where I i ∆I j denotes the symmetric difference of I i and I j. The R 3 terminology relates to the fact that a rank-3 guarantee holds for this SDP relaxation (see Theorem 14). If we had chosen P to contain all the subsets I with |I| ≤ K, where K denotes the length of the longest clause in the SAT instance, and had added all the redundant constraints of the form Y I 1,I 2 = Y I 3,I 4, where {I 1, I 2, I 3, I 4 } ⊆ {∅} ∪ P and I 1 ∆I 2 = I 3 ∆I 4, then we would have obtained the Lasserre relaxation Q K−1 for this problem. However, as mentioned earlier, the resulting SDP has a matrix variable of dimension |P| + 1 = O(n K ), which is too large for practical computational purposes, even when K = 2. In contrast, the partial higher liftings approach yields an SDP relaxation with a much smaller matrix variable as well as fewer linear constraints corresponding to symmetric differences. The matrix variable of R 3 has dimension O(m * 2 K ) = O(m), since for practical SAT instances K is a very small constant. The number of constraints is also O(m), and although the SDP can have as many as ( 1 2 (2 K − 2)(2 K − 1) + 1)m linear constraints, the presence of common variables between different clauses means that it will typically have many fewer constraints. The computational performance of R 3 was studied in and it was observed that when used in a branching algorithm, the SDP relaxation is still impractical for solving SAT problems with more than about 100 clauses, unless the solution can be obtained without resorting to branching (see Sections 5.3 and 6.2 for more details on the computational performance of R 3 ). Therefore, a more compact semidefinite relaxation, denoted R 2, was proposed in. This relaxation is also a strengthening of the Gap relaxation, and is computationally superior to R 3 because of significant reductions in the dimension of the matrix variable and in the number of linear constraints. The matrix variable of the compact SDP relaxation is a principal submatrix of the matrix variable in R 3, and it was shown in that although the SDP relaxation R 2 does not retain the rank-3 guarantee, it has a rank-2 guarantee. Hence, it is a compromise relaxation between the Gap and R 3, and it completes a trio of linearly sized semidefinite relaxations with correspondingly stronger rank guarantees. To obtain R 2, we replace P by a smaller set of column indices, namely O := {I | I ⊆ (I j ∪ j ) for some j, |I| mod 2 = 1}. The set O consists of the sets of odd cardinality in P. It is clear that the sets in P of even cardinality corresponding to terms appearing in the above formulation are all generated as symmetric differences of the sets in O. 
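The construction of the index sets P and O from a CNF instance is entirely mechanical, as the following sketch illustrates (the input format, one set of variable indices per clause, is an assumption made for the example). It also prints the resulting matrix dimensions |P| + 1 and |O| + 1 of the R 3 and R 2 relaxations.

from itertools import combinations

def index_sets(clause_vars):
    # clause_vars: for each clause j, the set of variable indices occurring in it.
    # P collects every nonempty subset of some clause's variable set;
    # O keeps only those subsets of odd cardinality.
    P, O = set(), set()
    for vars_j in clause_vars:
        for r in range(1, len(vars_j) + 1):
            for I in combinations(sorted(vars_j), r):
                P.add(I)
                if r % 2 == 1:
                    O.add(I)
    return P, O

# A made-up 3-SAT instance on 5 variables with three clauses.
P, O = index_sets([{1, 2, 3}, {2, 3, 4}, {1, 4, 5}])
print(len(P) + 1, len(O) + 1)   # dimensions of the matrix variables of R3 and R2

For 3-SAT every clause contributes at most seven subsets to P and four to O, which is the source of the linear bound O(m) on the dimensions stated above.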
Having chosen our set of column indices, we introduce new variables for each I ∈ O, and define the rank-1 matrix whose |O| + 1 rows and columns are indexed by ∅ ∪ O. By construction, we have Y ∅,I = x I for all I ∈ O and Y {min(I)},I∆{min(I)} = x I for all I ∈ P\O. (Note that T ∆{min(T )} is an element of P\O when |T | is even.) This means that the new variables corresponding to subsets of logical variables of odd cardinality appear exactly once in the first row of Y, and the new variables corresponding to subsets of even cardinality have the "representative" matrix entries Y {min(T )},T ∆{min(T )}. By the same approach as for R 3 above, we obtain the R 2 relaxation: Note that for 2-SAT, R 2 is precisely the Gap relaxation. Theoretical Properties of the Strengthened SDP Relaxations It is clear that if the propositional formula is satisfiable, then using any model it is straightforward to construct a rank-one matrix Y feasible for every SDP relaxation. The contrapositive of this statement gives a sufficient condition for proving unsatisfiability using the SDP relaxations. Lemma 2. Given a propositional formula in CNF, if any one of the SDP relaxations is infeasible, then the CNF formula is unsatisfiable. Several results are known for conditions on the rank of a feasible matrix Y which guarantee that a model can be obtained from Y. Such rank conditions for an SDP relaxation are significant for two reasons. Firstly, from a theoretical point of view, the rank value can be viewed as a measure of the strength of the relaxation. For general instances of SAT, the Gap relaxation will prove satisfiability only if a feasible matrix of rank 1 is obtained, whereas for our relaxation any matrix with rank 1, 2, or 3 immediately proves satisfiability. Furthermore, the higher rank value also reflects the relaxation's greater ability to detect unsatisfiability, compared to the Gap relaxation. Secondly, from a practical point of view, the rank conditions are helpful because of the inevitable occurrence of SDP solutions with high rank when there are multiple optimal solutions to the original binary problem. This happens because interior-point algorithms typically converge to a matrix in the interior of the optimal face of S n +, and in the presence of multiple solutions this face contains matrices of rank higher than one. Therefore, the ability to detect optimality for as high a rank value as possible may allow an enumerative algorithm to avoid further branching steps and potentially yield a significant reduction in computational time. The results on the SDP relaxations are summarized in the following theorem: Theorem 14. Given any propositional formula in CNF, consider the SDP relaxations presented. Then If at least one of the Gap, R 2, or R 3 relaxations is infeasible, then the formula is unsatisfiable. If Y is feasible for the Gap and rank Y = 1, then a truth assignment satisfying the formula can be obtained from Y. If Y is feasible for R 2 and rank Y ≤ 2, then a truth assignment satisfying the formula can be obtained from Y. If Y is feasible for R 3 and rank Y ≤ 3, then a truth assignment satisfying the formula can be obtained from Y. For any instance of SAT, the SDP relaxations Gap, R 2, and R 3 form a trio of linearly sized SDP relaxations for SAT with correspondingly stronger rank guarantees. 
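In practice the rank conditions of Theorem 14 have to be checked on the approximately feasible matrix returned by an interior-point solver, so the rank is computed numerically with a tolerance, for instance as in the sketch below (an illustration; the tolerance is a tuning choice, not something prescribed by the survey).

import numpy as np

def numerical_rank(Y, tol=1e-6):
    # Count the eigenvalues of the (symmetric psd) solution matrix Y that are
    # significant relative to the largest one; smaller eigenvalues are treated as zero.
    w = np.linalg.eigvalsh(Y)
    return int(np.sum(w > tol * max(w[-1], 1.0)))

For example, a matrix Y feasible for R 2 with numerical rank at most 2 certifies, by Theorem 14, that the underlying formula is satisfiable, and a truth assignment can then be extracted as described in the proof sketch that follows.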
If we use R 1 to refer to the Gap relaxation, then the names of the relaxations reflect their increasing strength in the following sense: For k = 1, 2, 3, any feasible solution to the relaxation R k with rank at most k proves satisfiability of the corresponding 3-SAT instance. Furthermore, the increasing values of k also reflect an improving ability to detect unsatisfiability, and an increasing computational time for solving the relaxation. Nonetheless, the dimensions of the relaxations grow only linearly with n and m. If we consider the Lasserre relaxations Q K−1 for K ≥ 3, results such as Theorem 14 (and stronger ones) would follow from . The important point here is that Theorem 14 holds for the significantly smaller relaxations obtained as partial liftings. To conclude this Section, we sketch the proof that if Y is feasible for the R 2 relaxation and rank Y ≤ 2, then Y yields a truth assignment proving that the SAT instance is satisfiable. The result follows from the following lemmata. Lemma 4. If Y is feasible for the R 2 relaxation and rank Y ≤ 2, then each column indexed by a subset ∈ O of cardinality || ≥ 3 is a ±1 multiple of a column corresponding to a singleton in O. Proof: For every subset with || ≥ 3, let i 1, i 2, and i 3 denote any three variables in, and let = \{i 1, i 2, i 3 }} be the set of all the other variables in. (Note that may be empty.) Let Y denote the principal submatrix of the matrix variable Y corresponding to the rows and columns of Y indexed by {∅, {i 1 } ∪, {i 2 } ∪, {i 3 } ∪, }. The principal submatrix Y has the form and by Lemma 1, it follows that Y I,\{i 1,i 2 } = Y I,. Hence, the entire column equals times the column \{i 1, i 2 }. If \{i 1, i 2 } is a singleton, we are done. Otherwise, repeat the argument using \{i 1, i 2 } in place of. Eventually a singleton will be reached, and then the column corresponding to the singleton will be a ±1 multiple of the column. Let us now consider the implications of Lemma 4 for the constraints enforcing satisfiability. For each term of the form Y (T ) such that |T | ≥ 3, Lemma 4 implies that Therefore, in the constraints enforcing satisfiability, all the terms corresponding to subsets of cardinality greater than 2 can be replaced either by ±1 or by a term corresponding to a subset of cardinality at most 2. (The case of 3-SAT is discussed in detail in.) Now, consider a reduced SDP obtained as follows: For every constraint, make all the substitutions for terms corresponding to subsets of cardinality greater than 2 using the observation above. Simplifying the result, each constraint becomes either a tautology or a constraint of the form corresponding to a clause of length 2. Take every constraint arising from the latter case, and let the matrix be indexed by the set of singletons {i k } such that Y ({i k }) appears in at least one of these constraints. The SDP defined by the matrix and the constraints arising from the second case above, plus the constraint 0, is precisely of the form of the Gap relaxation for an instance of 2-SAT. This means that Lemma 4 allows us to reduce the problem to an instance of 2-SAT. Finally, recall that we have a matrix Y feasible for the original R 2 relaxation, and observe that its principal submatrix indexed by the set of singletons used to define is feasible for this reduced SDP. By Theorem 12, this implies that the instance of 2-SAT is satisfiable. Consider a model for the 2-SAT instance obtained using the algorithm following Theorem 12. 
For each of the variables in the original instance of SAT that are not present in this truth assignment, either they should be set equal to (or to the negative of) one of the variables in the assignment (according to the analysis above), or they are "free" and can be assigned the value +1, say. We thus obtain a model for the original SAT instance, and have proved the claim.

Computational Proofs of Infeasibility for a Class of Hard Instances

As mentioned at the beginning of this section, an important objective in the study of partial higher liftings for SAT is the construction of SDP relaxations which are more amenable to practical computation than the entire higher liftings. The potential of this computational approach has been shown in results previously obtained by the author. The results in those papers show that the R_3 relaxation yielded proofs of unsatisfiability for some hard instances with up to 260 variables and over 400 clauses. In particular, R_3 is able to prove the unsatisfiability of the smallest unsatisfiable instance that remained unsolved during the SAT Competitions in 2003 and 2004.

Researchers in SDP have developed a variety of excellent solvers, most of which are freely available, and extensive listings of solvers are available online. For application to SAT, it is important to use a solver which, when given an infeasible SDP, provides us with a certificate of infeasibility, because that certificate is for us a proof of unsatisfiability for the SAT instance.

The results we present are for randomly generated SAT instances obtained using the generator hgen8. A set of 12 instances generated using hgen8 was submitted for the SAT Competition 2003. The source code to generate these instances, which includes an explanation of their structure, is publicly available. The results in Table 1 were obtained using the solver SDPT3 (version 3.0) with its default settings, running on a 2.4 GHz Pentium IV with 1.5 Gb of RAM. (The SDP relaxations for the remaining instances in the set were too large for the computing resources available.) In particular, the R_3 relaxation was able to prove the unsatisfiability of the instance n260-01 in Table 1, which was the smallest unsatisfiable instance that remained unsolved during the 2003 Competition. (An instance remained unsolved if none of the top five solvers was able to solve it in two hours, running on an Athlon 1800+ with 1 Gb of RAM.) Indeed, the SDP relaxation appears to be quite effective on the type of instances generated by hgen8: we randomly generated several more instances of varying size, and all the corresponding SDP relaxations were infeasible. The additional results are presented in Table 2.

It is important to point out that the effectiveness of the SDP approach for these instances is due to the fact that no branching is needed to obtain a certificate of infeasibility for the SDP relaxation. Computational results on instances of 3-SAT from the Uniform Random-3-SAT benchmarks are reported in Section 6, and it is observed there that if branching is required, then the computational effort required by the SDP-based approach is still too large in comparison to other SAT algorithms in the literature. Nonetheless, the SDP approach already complements existing SAT solvers in the sense that these difficult hgen8 instances can be solved in reasonable time. (Incidentally, the n260-01 instance remained unsolved in the SAT Competition 2004.)
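To illustrate the kind of feasibility question posed to the solver, here is a minimal sketch using the cvxpy modeling package (an assumption of this example; the experiments described above used SDPT3). It sets up a tiny, deliberately infeasible SDP feasibility problem and inspects the solver status; conic solvers detect infeasibility by producing an improving ray for the dual problem, which plays the role of the certificate discussed above.

```python
import cvxpy as cp

# Generic feasibility template: find Y PSD with linear equality constraints.
# Tiny deliberately infeasible example: Y[0,0] = 0 together with Y[0,1] = 1
# contradicts positive semidefiniteness (|Y01| <= sqrt(Y00 * Y11)).
n = 2
Y = cp.Variable((n, n), symmetric=True)
constraints = [Y >> 0, Y[0, 0] == 0, Y[0, 1] == 1]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)   # expected: 'infeasible' (up to solver tolerances)
if prob.status in (cp.INFEASIBLE, cp.INFEASIBLE_INACCURATE):
    print("the solver's dual ray certifies that no feasible Y exists")
```

For a SAT relaxation, the same pattern applies with the matrix variable and equality constraints of R_3 in place of this toy example.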
These results motivate our current research considering this and other classes of hard instances for which the partial higher liftings approach may be effective with little or no branching. The most recent results in this direction are reported in Section 7.

Theoretical Comparison

For the case of 3-SAT, we can make some specific comparisons between the feasible sets of the Karloff-Zwick relaxation and the three SDP relaxations in Theorem 14. It has been observed that for 3-SAT the Gap relaxation has an interesting connection to the relaxation of Karloff and Zwick: for each clause of length 3, the linear constraint in the Gap relaxation is precisely equivalent to the sum of the three linear constraints used in the Karloff-Zwick relaxation. Using notation that accounts for negated variables, the three Karloff-Zwick constraints can be written down explicitly for each 3-clause with variables {p_{i_1}, p_{i_2}, p_{i_3}}.

Now consider any matrix Y feasible for the R_2 or R_3 relaxation. The principal submatrix of Y corresponding to the rows and columns indexed by {∅} ∪ {x_k : k = 1, ..., n} is a positive semidefinite matrix with the same structure as the matrix variable in the Gap relaxation. As for the constraints, it is clear that for clauses of length 1 or 2, the linear constraints expressing satisfiability are equivalent to those in either the Karloff-Zwick or Gap relaxations. For clauses of length 3, the linear constraint in R_2 and R_3 can be rewritten so that the positive semidefiniteness of Y implies that the right-hand side is always nonnegative. Relaxing the resulting equation to an inequality yields the first of the three Karloff-Zwick constraints for the clause. The other two constraints can be shown to hold by similar arguments. Hence, the SDP relaxation R_2, and therefore also R_3, for 3-SAT is at least as tight as the Karloff-Zwick relaxation. In summary, we have the following theorem:

Theorem 15. For instances of 3-SAT, the feasible sets of the Gap, Karloff-Zwick (KZ), R_2, and R_3 relaxations satisfy the corresponding containments: the feasible sets of R_2 and R_3 are contained in that of KZ (after projection onto the common variables), which in turn is contained in that of the Gap relaxation.

Computational Comparison

The results of a computational study of the performance of R_1, R_2, and R_3 were reported by the author. A branching algorithm was implemented in Matlab, running on a 2.4 GHz Pentium IV with 1.5 Gb of RAM. The test suite consisted of both satisfiable and unsatisfiable instances of 3-SAT from the Uniform Random-3-SAT benchmarks. Two sets of problems were considered, with n = 50 and n = 75 variables, and m = 218 and m = 325 clauses respectively. A total of 40 instances of 3-SAT from each set were used, half of them satisfiable and the other half unsatisfiable. For all the relaxations, the algorithm was stopped after 2 hours on the instances with 50 variables, and after 3 hours on the instances with 75 variables. The results are for small problems, but they clearly illustrate the tradeoffs involved in the choice of SDP relaxation as well as the advantage of the R_2 relaxation over the other two relaxations when applied together with a branching algorithm. The main conclusions were the following. First, the Gap relaxation is solved most quickly of all three, but the branching algorithm reaches a large depth in the search tree before it stops; as a result, its total time is higher than that of the algorithm using R_2. Second, the opposite happens with the algorithm using R_3: each SDP relaxation takes much longer to solve, but the depth reached in the search tree is less than for the other two relaxations.
Therefore, in comparison with the other two relaxations, the R_2 relaxation is the most effective, and it can be routinely used to prove both satisfiability and unsatisfiability for instances with a few hundred clauses. However, we observe that the computational time for the branching algorithm using R_2 still increases quite rapidly. This was illustrated by allowing the branching algorithm using R_2 to run to completion on 40 instances of 3-SAT with n = 100 and m = 430 from the same set of benchmarks, evenly divided between satisfiable and unsatisfiable instances. The results show that proofs of satisfiability require over one hour on average, and proofs of unsatisfiability over six hours. Although the Karloff-Zwick relaxation was not considered, it is clear that the computational time for any SDP-based algorithm is dominated by the effort required to solve the SDPs, and that regardless of the choice of SDP relaxation, the computational effort is still too high to be competitive with other SAT solvers whenever branching is actually used. Therefore, even though branching is not always necessary (as observed in Section 5.3), the competitiveness of the SDP approach depends on the development of novel algorithms for solving SDPs by taking advantage of their structure. Important advances have been made recently on algorithms for SDP that exploit certain types of structure, and current research is considering how to apply them for solving the SAT relaxations.

Recent Research Results

The application of SDP to SAT problems continues to be an active area of research. We close this survey with a summary of two recent working papers on the application of SDP to MAX-SAT and SAT.

Solving MAX-SAT Using SDP Formulations of Sums of Squares

In their working paper, van Maaren and van Norden consider the application of Hilbert's Positivstellensatz to MAX-SAT. The idea is to formulate MAX-SAT as a global polynomial optimization problem, akin to the approaches we have seen, but in such a way that it can then be relaxed to a sum of squares (SOS) problem, and the latter can be solved using SDP (under certain assumptions). The starting point for this approach is the observation that for each clause C_j = (∨_{k∈I_j} x_k) ∨ (∨_{k∈Ī_j} ¬x_k), where I_j and Ī_j index the unnegated and negated variables of the clause, and for each assignment of values x_k ∈ {−1, 1}, k ∈ I_j ∪ Ī_j, one can define a polynomial F_{C_j}(x), where x = (x_1, ..., x_n)^T and the parameters s_{j,k} are determined by the literals of C_j, such that F_{C_j}(x) = 0 if C_j is satisfied by the truth assignment represented by x, and F_{C_j}(x) = 1 otherwise. With a given CNF formula, we thus associate two aggregate polynomials, F(x) and F_B(x). Clearly, F(x) is a non-negative polynomial on R^n, F_B(x) is non-negative on {−1, 1}^n, and for x ∈ {−1, 1}^n both polynomials give the number of unsatisfied clauses. Hence, MAX-SAT is equivalent to the minimization of either of these polynomials over {−1, 1}^n.

A first SDP relaxation is obtained as follows. Suppose we are given a column vector β of monomials in the variables x_1, ..., x_n and a polynomial p(x). Then p(x) can be written as an SOS in terms of the elements of β if and only if there exists a matrix S ⪰ 0 such that β^T S β = p. Note that by Theorem 5 such a matrix S yields an explicit decomposition of p as an SOS. Therefore, if g* denotes the largest value of g for which F(x) − g can be written as an SOS in terms of β, then m − g* is an upper bound for MAX-SAT. Note that since β is fixed, the equation F(x) − g = β^T S β is linear in S and g, and hence the problem of computing g* is indeed an SDP. Also, g* ≥ 0, since F(x) is an SOS by definition.
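To see the counting property of the clause polynomials concretely, here is a small numpy sketch (using one possible ±1 encoding of truth values and literal signs; the exact s_{j,k} convention of the working paper may differ). It evaluates a clause polynomial that is 0 on satisfying assignments and 1 otherwise, and checks that the aggregate F(x) counts unsatisfied clauses over all assignments.

```python
import itertools
import numpy as np

# Convention assumed here: x_k = +1 means "true".  For a clause, the product
# over positive literals of (1 - x_k)/2 times the product over negated
# literals of (1 + x_k)/2 equals 1 exactly when every literal is falsified.
def F_clause(clause, x):                    # clause: DIMACS-style literals
    val = 1.0
    for lit in clause:
        xk = x[abs(lit) - 1]
        val *= (1 - xk) / 2 if lit > 0 else (1 + xk) / 2
    return val

def F(clauses, x):                          # aggregate polynomial
    return sum(F_clause(c, x) for c in clauses)

clauses = [(1, 2), (-1, 3), (-2, -3)]
for x in itertools.product([-1, 1], repeat=3):
    unsat = sum(1 for c in clauses
                if not any((l > 0) == (x[abs(l) - 1] == 1) for l in c))
    assert F(clauses, np.array(x)) == unsat  # F counts unsatisfied clauses
print("F agrees with the count of unsatisfied clauses on all assignments")
```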
Another SDP relaxation is obtained by considering F_B(x), but here we have to work modulo I_B, the ideal generated by the polynomials x_k^2 − 1, k = 1, ..., n. (The fact that each polynomial that is non-negative on {−1, 1}^n can be expressed as an SOS modulo I_B follows from the work of Putinar.) The resulting SDP problem computes the largest value g*_B for which F_B(x) − g_B is an SOS modulo I_B, and thus m − g*_B is also an upper bound for MAX-SAT.

For the remainder of this section, we focus on this SOS-based SDP approach. The key point in the SOS approach is the choice of the monomial basis β, and van Maaren and van Norden consider the following possibilities:

β_GW is the basis containing 1, x_1, ..., x_n;
β_p is the basis containing 1, x_1, ..., x_n, plus the monomial x_{k_1} x_{k_2} for each pair of variables that appear together in a clause;
β_ap is the basis containing 1, x_1, ..., x_n, plus the monomials x_{k_1} x_{k_2} for all pairs of variables;
β_t is the basis containing 1, x_1, ..., x_n, plus the monomial x_{k_1} x_{k_2} x_{k_3} for each triple of variables that appear together in a clause;
β_pt is the basis containing 1, x_1, ..., x_n, plus the monomial x_{k_1} x_{k_2} for each pair of variables that appear together in a clause, plus x_{k_1} x_{k_2} x_{k_3} for each triple of variables that appear together in a clause.

The corresponding SDP relaxations are referred to as SOS_GW, SOS_p, SOS_ap, SOS_t, and SOS_pt, respectively. Starting with MAX-2-SAT, van Maaren and van Norden prove that SOS_GW is precisely the dual of the Goemans-Williamson SDP relaxation. Furthermore, they show that for each triple x_{k_1} x_{k_2} x_{k_3}, adding the monomials x_{k_1} x_{k_2}, x_{k_1} x_{k_3}, and x_{k_2} x_{k_3} gives an SDP relaxation at least as tight as that obtained by adding the corresponding triangle inequality to the Goemans-Williamson relaxation. A comparison of the relaxations SOS_pt and Karloff-Zwick for MAX-3-SAT is also provided. The results are summarized in the following theorem:

Theorem 16. For every instance of MAX-2-SAT:
1. The SDP relaxation SOS_GW gives the same upper bound as the relaxation of Goemans and Williamson.
2. The SDP relaxation SOS_ap is at least as tight as the Feige-Goemans relaxation, consisting of the Goemans-Williamson relaxation plus all the triangle inequalities.
For every instance of MAX-3-SAT, the SDP relaxation SOS_pt provides a bound at least as tight as the Karloff-Zwick relaxation.

From the computational point of view, van Maaren and van Norden provide computational results comparing several of these relaxations on instances of varying sizes and varying ratios of number of clauses to number of variables. They also propose rounding schemes for MAX-2-SAT and MAX-3-SAT based on SOS_p and SOS_t respectively, and present preliminary results comparing their performance with the rounding schemes presented in Section 3 above. The SOS approach can also be applied to obtain proofs of unsatisfiability. It is straightforward to prove the following:

Proposition 2. Given a formula φ in CNF, if there exists a monomial basis β and an ε > 0 such that F_B(x) − ε is an SOS modulo I_B, then φ is unsatisfiable.

van Maaren and van Norden compare the performance of the R_3 relaxation with the SOS approach using either SOS_t or SOS_pt. Their preliminary results suggest that SOS_pt offers the best performance.

An Explicit Semidefinite Characterization of SAT for Tseitin Instances

The results in this section apply to a specific class of SAT instances that has been studied for over 30 years and is known to be hard for many proof systems: the Tseitin propositional formulas, first introduced by Tseitin.
These instances are constructed using graphs whose vertices are points on the plane with integer coordinates, and whose edges are segments of unit length along the coordinate axes. Consider a p × q toroidal grid graph, and label the rows {0, 1, ..., p − 1} and the columns {0, 1, ..., q − 1}. Identify each vertex by a pair (i, j) with i ∈ {0, 1, ..., p − 1} and j ∈ {0, 1, ..., q − 1}. Each vertex (i, j) has degree four, and its four incident edges are denoted by {(i − 1, j), (i, j)}, {(i + 1, j), (i, j)}, {(i, j − 1), (i, j)}, and {(i, j), (i, j + 1)}, where (here and in the sequel) the subtractions and sums are taken mod p for the first index and mod q for the second index. For each vertex (i, j), fix the parameter t(i, j) ∈ {0, 1}, and associate with each edge a boolean variable: v_r(i, j) is the variable corresponding to the edge {(i, j), (i, j + 1)}, and v_d(i, j) is the variable corresponding to the edge {(i + 1, j), (i, j)}. Thus, there are 2pq boolean variables in the SAT instance. For notational purposes, since vertex (i, j) has two other edges incident to it, we further define v_u(i, j) := v_d(i − 1, j) and v_l(i, j) := v_r(i, j − 1). Furthermore, each vertex (i, j) contributes eight clauses to the SAT instance, and the structure of the clauses is determined by the value of t(i, j): 1. if t(i, j) = 0 then all clauses of length four on v_l(i, j), v_r(i, j), v_u(i, j), v_d(i, j) with an odd number of negated variables are added; and 2. if t(i, j) = 1 then all clauses of length four on v_l(i, j), v_r(i, j), v_u(i, j), v_d(i, j) with an even number of negated variables are added. We denote the eight clauses thus obtained by C_ℓ(i, j), ℓ = 1, ..., 8. Hence there are 8pq clauses in the SAT instance. It is well known that the SAT instance is unsatisfiable if and only if the sum of the t(i, j) is odd.

In a recent working paper, the author proposes an SDP problem which characterizes the satisfiability of these instances and is of dimension linear in the size of the instance. This is a result in the same vein as Theorems 12 and 13 for the Gap relaxation. These are not the first proofs that Tseitin or pigeonhole instances can be solved in polynomial time, but again we stress that the SDP approach not only establishes satisfiability or unsatisfiability in polynomial time, but also computes an explicit proof of it in polynomial time. Using an appropriately defined matrix variable, the resulting SDP relaxation is a feasibility problem: find Z ∈ S^{14pq} satisfying the associated linear equality constraints together with Z ⪰ 0. The construction of the SDP relaxation follows the paradigm of partial higher liftings introduced in Section 5. Since the matrix variable has dimension 14pq, and since there are 23pq − 1 linear equality constraints, the SDP problem is linearly-sized with respect to 2pq, the number of boolean variables in the SAT instance. Furthermore, the structure of the SDP relaxation is directly related to the structure of the SAT instance. Note also that there are many more valid linear constraints that could be added to the SDP problem. Such constraints equate elements of the matrix variable that would be equal if the matrix were restricted to have rank equal to one.
The motivation for our particular choice of additional constraints is that, although they are relatively few, they are sufficient for proving the characterization of unsatisfiability stated in Theorem 17. Finally, we point out that even though Theorem 17 as stated is specific to the instances defined here, the SDP formulation and the proof of exactness can in principle be extended to other graph-based instances whose satisfiability is determined using quantities akin to the t(i, j) parameters. We refer the reader to the working paper for more details.
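For concreteness, the following Python sketch (with a variable-numbering convention of its own; it is not taken from the working paper) generates the 2pq variables and 8pq clauses of the Tseitin instance described above and, for a tiny 2 × 2 grid, brute-forces the classical criterion that the instance is unsatisfiable exactly when the sum of the t(i, j) is odd.

```python
import itertools

def tseitin_instance(p, q, t):
    """Clauses (DIMACS-style literals) of the Tseitin formula on the p x q torus."""
    def var(kind, i, j):                     # kind 'r' (to (i, j+1)) or 'd' (to (i+1, j))
        return 1 + 2 * ((i % p) * q + (j % q)) + (0 if kind == 'r' else 1)
    clauses = []
    for i in range(p):
        for j in range(q):
            incident = [var('r', i, j), var('d', i, j),
                        var('r', i, j - 1), var('d', i - 1, j)]   # right, down, left, up
            for signs in itertools.product([1, -1], repeat=4):
                n_neg = signs.count(-1)
                # t = 0: clauses with an odd number of negations;
                # t = 1: clauses with an even number of negations.
                if n_neg % 2 != t[i][j]:
                    clauses.append(tuple(s * v for s, v in zip(signs, incident)))
    return clauses

def satisfiable(clauses, n_vars):            # brute force, only for tiny grids
    for x in itertools.product([True, False], repeat=n_vars):
        if all(any(x[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

p = q = 2
for t in ([[0, 0], [0, 0]], [[1, 0], [0, 0]]):        # even vs. odd sum of t(i, j)
    cls = tseitin_instance(p, q, t)
    print("sum(t) mod 2 =", sum(map(sum, t)) % 2,
          "| clauses:", len(cls), "| satisfiable:", satisfiable(cls, 2 * p * q))
```

Each vertex contributes exactly 8 of the 16 possible sign patterns, so the clause count is 8pq, matching the description above.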
Once upon a time, Congress used to write detailed tariff bills that were stuffed full of giveaways to special interests, with destructive effects on both the economy and American diplomacy. So in the 1930s FDR established a new system in which the executive branch negotiates trade deals with other countries, and Congress simply votes these deals up or down. The U.S. system then became the template for global negotiations that culminated in the creation of the World Trade Organization.
ANALYSIS OF RISK MANAGEMENT IN COTS COMPONENTS In recent years, commercial-off-the-shelf (COTS) components have been in great demand in the IT industry due to their various advantages. They are defined as components which can be integrated into an existing system without the need for customization. COTS components include hardware or software products readily available for sale by third-party organizations, from which the user or developer can purchase them. Many quality models have been developed in order to evaluate the quality of COTS components. Various risks are involved if the quality of a COTS component is not verified. Risk management is defined as the set of actions that help the project manager plan for and deal with uncertain occurrences. Coding, debugging, unit testing and code inspections are reduced, while design and testing are modified; thus, various risk management methods have been proposed which serve as a guide to both developers and end users. This paper deals with various risk management methods for COTS components. The objective of this work is to enable the extension of research in the field of COTS components.
Stretch-Induced Shish-Kebabs in Rubbery Poly(L-Lactide) The oriented crystallization in stretched rubbery poly(L-lactide) has been studied with the aid of in-situ rheo-Fourier transform infrared spectroscopy (FTIR) measurements and morphological observations. The oriented segments that survived after stretching are first transformed into shish structure composed of helical sequences via intra-chain conformational ordering and propagation, followed by the transverse growth of kebabs from the coiled chains in the surrounding matrix. Moreover, the formation of shish structure and kebabs shows different dependences on the stretching temperature as a result of different controlling molecular processes.
There is a perceived need for food containers which have the ability to protect ingredients packaged therein against loss of essential oils and/or flavorings, such as fruit or citrus juices, beverages, and the like. Paperboard coated with polyethylene has been tried for this purpose, but it falls short of providing an acceptable container because polyethylene absorbs, or permits the migration of, an appreciable amount of the essential oils and/or flavorings. The loss of these oils and/or flavorings results in loss of taste and aroma of the juice, such as orange juice. U.S. Pat. No. 3,120,333, for example, discloses the well-known gable-top milk carton prepared from a laminate of paperboard extrusion-coated on both sides with polyethylene; the polyethylene is employed as a moisture barrier and is in contact with the milk. U.S. Pat. No. 3,464,546 discloses a multi-layer container for latex and oil based paints wherein the inner layer is an oxygen barrier resin, such as a vinylidene chloride polymer. U.S. Pat. No. 3,560,227 discloses high barrier coated papers which include, inter alia, the use of a vinylidene chloride polymer sandwiched between two layers of polyethylene which are adhered to a paper base, said to be useful as a barrier for oxygen and water vapor. The use of polyethylene film as a packaging material is well known, including packages which employ a multi-layer construction wherein the polyethylene is the layer which is in direct contact with packaged ingredients. It has been generally believed that a barrier layer (such as a vinylidene chloride polymer) in a multi-layer construction is effective as a barrier, even though it is not the layer in contact with the packaged ingredients; this belief is well-founded when considering only the barrier properties for oxygen and water vapor. We have found, however, that when a barrier for the essential oils and/or flavorings in juices or beverages is needed, then the halopolymer, such as a polymer of vinylidene chloride, needs to be the layer in direct contact with the juices and beverages in order to be efficient. If there is a layer of polyethylene between the halopolymer and the packaged juices and beverages, then the polyethylene absorbs a significant amount of the essential oils and/or flavorings and having the halopolymer behind the polyethylene does not prevent the absorption into the polyethylene. The loss of essential oils and/or flavorings results in loss of flavor and aroma and the storage time (shelf-life) is considerably shortened if polyethylene is the layer in contact with the juices and/or beverages. Of importance among the esssential oils and/or flavorings are terpenes. Limonene is a cyclic terpene which can be dextro or laevo; d-limonene is an essential oil found in citrus fruits; it provides at least a large percent of the distinctive flavor and aroma of citrus fruits. Other aroma/flavoring ingredients found in nature's products are included within the meaning of the expression "essential oils and/or flavorings". For instance, vanillin (the aroma and flavor constituent of vanilla bean extract), eugenol (the chief constituent of oil of cloves) and isoeugenol (in nutmeg oil) are among the flavorings added to food products such as baked goods and dry cereals and the like. A technique has been developed (Journal of The A.O.A.C., Vol. 49, No. 3, 1966, p. 628) which measures the approximate concentration of d-limonene in orange juice as a measure of the essential oils and flavorings. 
A loss of d-limonene upon storage causes a perceptible change in taste and aroma; this loss of d-limonene and change in taste and aroma is undesirable and should be avoided. It is an object of this invention to provide a laminate material for making containers for foods, juices and beverages which contain essential oils and/or flavorings. A further object is to provide in such containers an inner-wall barrier layer which substantially prevents migration of essential oils and/or flavorings from foods, juices or beverages stored therein.
CEELO Green shocked everyone when he arrived at the Grammys dressed like the love child of C3PO and a Ferrero Rocher. Here’s why he did it. We've had a little time to reflect on all the craziness that was the 2017 Grammys, and now we look at all the fallout from this year's major music event. In case you missed it, Adele humbly rejected her Album of the Year award win (although she kept it in the end), but did she break off a piece for fellow nominee Beyonce? Also, Justin Bieber apparently is STILL not happy with the fact that Selena Gomez and the Weeknd are an item, as he takes yet ANOTHER shot at the Starboy. Heather and Stephanie recap these stories and more in today's Daily Rewind. IT was perhaps the most bizarre attention-seeking stunt in a night filled with them: CeeLo Green rocking the Grammys red carpet dressed like the love child of C3PO and a Ferrero Rocher. It now appears that Green’s bizarre get-up on Grammy night — complete with gold body paint and facial prosthetics — was the public debut of a brand new artistic alter ego. Like Ziggy Stardust, Sasha Fierce and Chris Gaines before him, CeeLo’s undergoing artistic reinvention. Ladies and gentlemen, meet ‘Gnarly Davidson’. More than 20 years into his career, Green has invented Davidson as an alter ego to release his new, ‘edgy’ musical output. “He got the idea to have another personality for a specific kind of music he’s doing, so that’s why he transformed himself. [His new music] is pop music, a lot of it, edgy pop music like his song, F — k me, I’m Famous, Motörhead, a lot of edgy stuff coming out,” an insider told the New York Post. The Post also quotes an insider as saying that Green planted the seeds for his new persona back when he released a faked video last year showing a cell phone exploding in his hand. “He was hatching Gnarly Davidson and turning into a superpower. Gnarly is a prankster, a grown up Dennis the Menace, who doesn’t talk,” the insider said of the video, which Green was later forced to apologise for after uproar from fans who thought he’d been seriously injured. We’re not sure how successful this much-hyped rebranding has been yet: so far, the official Gnarly Davidson Twitter account has around 1300 followers, or 0.06% of CeeLo’s 1.83m Twitter followers.
Andrew Soper convicted of sexually abusing pupils at St Benedict’s school in Ealing during 1970s and 80s A former abbot who fled to Kosovo to escape justice has been convicted of abusing 10 boys at a Catholic-run school in London during the 1970s and 80s. Andrew Soper, 74, formerly known as Father Laurence Soper, was found guilty of 19 charges of rape and other sexual offences after a lengthy trial at the Old Bailey. Soper sexually abused pupils while he was master in charge of discipline at St Benedict’s school in Ealing, west London. He would assault them after subjecting them to corporal punishment using a cane. The first victim contacted police in 2004 after Soper left his role as abbot of Ealing Abbey and moved to the Benedictine order’s headquarters in Rome. The former pupil was initially told by officers there was insufficient evidence. Soper was later interviewed at Heathrow police station in 2010 and subsequently fled to Kosovo while on police bail the following year. He was arrested at Luton airport in August 2016 after being deported by the Kosovan authorities and returned to the UK. Tetteh Turkson, a senior Crown Prosecution Service lawyer involved in the case, said: “Soper used his position as a teacher and as a priest to abuse children for his own sexual gratification. “He compounded this by trying to evade justice and fleeing to Kosovo in order to go into hiding. The victims’ bravery in coming forward and giving evidence has seen him convicted of these serious offences.” A statement on behalf of the fee-paying independent school was issued by Alex Carlile QC after the conviction. He said: “St Benedict’s school is deeply concerned for, and distressed by, the ordeals faced by the victims of Laurence Soper, who have lived with the pain of his activities for so long. “The school apologises unreservedly for the serious wrongs of the past. The school regrets that Soper did not have the courage to plead guilty. “The result has been that innocent victims, whom he abused when they were boys in the school, were compelled to give evidence. They were subjected to cross-examination about matters in relation to which they were both helpless and innocent. “The fact that these matters took place many years ago does not mitigate the pain and injustice endured by them.” The statement said the school was now “a completely different institution”. Lord Carlile added: “The tough lessons of the past have been learned, and the errors and crimes of the past are in the daily consciousness and conscience of the school management … St Benedict’s cannot and will never forget Soper’s crimes. Nevertheless they are proud of the school as it now is, and as confident as ever they can be that everything is being done to ensure that such events cannot recur.” The school, which charges fees of about £5,000 a term, counts the former Conservative chair Lord Patten and entertainer Julian Clary among its alumni. Gillian Etherton QC, who led the prosecution, told the court victims were subjected to sadistic beatings by Soper for “fake reasons”. They included kicking a football “in the wrong direction”, “failing to use double margins”, and “using the wrong staircase”, leading to a caning and a sexual assault, she said.
“It is the prosecution case that ‘punishments’ as described by the complainants in this case were carried out by Soper in entirely inappropriate ways and circumstances and, on many occasions, with what can only have been sexual motive,” Etherton added. Many of his victims have experienced flashbacks and nightmares. During the trial Soper denied using the cane as a ruse to abuse boys. The judge, Anthony Bate, remanded Soper in custody to be sentenced on 19 December. He was convicted of two counts of buggery, two counts of indecency with a child and 15 counts of indecent assault. Soper was found guilty of buggery, contrary to section 12(1) of the Sexual Offences Act 1956, since the offence took place when that act was in force. The offence was changed from buggery to rape by the Criminal Justice and Public Order Act 1994.
Entanglement between static and flying qubits in an Aharonov-Bohm double electrometer We consider the phase-coherent transport of electrons passing through an Aharonov-Bohm ring while interacting with a tunnel charge in a double quantum dot (representing a charge qubit) which couples symmetrically to both arms of the ring. For Aharonov-Bohm flux Phi_AB=h/2e we find that electrons can only be transmitted when they flip the charge qubit's pseudospin parity an odd number of times. The perfect correlations of the dynamics of the pseudospin and individual electronic transmission and reflection events can be used to entangle the charge qubit with an individual passing electron. INTRODUCTION A scalable solid-state quantum computer has to rely on a hybrid architecture which combines static and flying qubits. Recent works propose the passage of electrons through mesoscopic scatterers for the generation of entanglement between flying qubits, and charge measurements by an electrometer have been proposed and realized to entangle or manipulate static spin and charge qubits. Hence, it seems natural to exploit mesoscopic interference and scattering to entangle flying qubits to static qubits. In this paper we demonstrate that prompt and perfect entanglement of a flying and a static charge qubit can be realized when a double quantum dot occupied by a tunnel charge is electrostatically coupled to the symmetric arms of an Aharonov-Bohm interferometer . For an Aharonov-Bohm flux AB = h/2e (half a flux quantum), each electron passing through the ring signals that the tunnel charge has changed its quantum state, while each electron which is reflected signals that the tunnel charge has maintained its state. This can be used to produce perfect entanglement between the static charge qubit represented by the tunnel charge in the double dot, and the flying qubit represented by the charge of the conduction electron in the exit leads. Since this entanglement mechanism does not require any energy fine tuning (for the implications of energy constraints see Ref. ), the entanglement can be produced quickly by the passage of a single electronic wave packet through the system. Our proposal draws from both mesoscopic effects mentioned above -in essence, it consists of two electrometers both coupling to the same charge qubit, and pinched together to form an Aharonov-Bohm ring, which then represents a mesoscopic scatterer. Since we require that the coupling of the tunnel charge to the arms of the ring is symmetric, our proposal falls into the class of parity meters which have been discussed for the entanglement and detection of spin qubits and charge qubits. In the present paper we are concerned with charge degrees of freedom only, and the resulting entanglement should be detectable by current-charge cross-correlation experiments. SCATTERING THEORY In order to describe the properties of the Aharonov-Bohm double electrometer we start with the scattering region which is depicted in Fig. 1(a). It consists of two quantum wire segments labelled 1 and 2, which are symmetrically arranged around the double quantum dot. The tunnel charge in the double dot is described by a pseudospin, associated to states | ↑ when the charge is in the upper dot and | ↓ when the charge is in the lower dot. The ground state is given by the symmetric combination |+ = 2 −1/2 (| ↑ + | ↓ ), and the excited state is |− = 2 −1/2 (| ↑ − | ↓ ). Both states are separated by a tunnel splitting energy ∆. 
The orbital degree of freedom of the passing electron is described by basis states |1 and |2. Via its electrostatic repulsion potential V (x), the tunnel charge impedes the current in wire 1 when it occupies the upper dot, while it impedes the current in wire 2 when it occupies the lower dot. Outside of the range of the potential V (x) one finds plane-wave scattering states where k = 1 h 2m(E + ∆) for states of total energy E. The symbol = ± describes the propagation direction along the wire, = ± describes the orbital parity of the passing electron with respect to the wire segments 1 and 2, and = ± describes the pseudospin parity of the state of the double dot. For general scattering potential V (x), the incoming and outgoing components are then related by an extended scattering matrix which takes the initial and final state of the tunnelling charge into account:  Analogously, the complete Aharonov-Bohm double electrometer in Fig. 1(b) is described by an extended scattering matrix The amplitudes r ±,± (r ±,± ) are associated to reflection from the left (right) lead, and the amplitudes t ±,± (t ±,± ) are associated to transmission from left to right (right to left), while the first (second) subscript denotes the state of the tunnelling charge after (before) the passage of the electron through the ring. The matrix S can be related to the internal scattering matrixS by adopting the standard model for an Aharonov-Bohm ring developed in Ref.. The contacts to the left (s = L) and right (s = R) lead are characterized by reflection amplitudes s and transmission amplitudes s = 1 − 2 s which describe the coupling into the symmetric orbital parity state 2 −1/2 (|1 + |2 ). The Aharonov-Bohm flux AB mixes the symmetric and antisymmetric orbital parities in the passage from one contact to the other contact by a mixing angle = AB e/h. The lengths of the ballistic regions between the scattering region and the left and right contact are denoted by d L and d R, respectively. The total scattering matrix is then of the form Stationary concurrence The scattering amplitudes of the extended scattering matrix can now be used to assess how the tunnel charge on the double dot becomes entangled with the itinerant electron during its passage through the Aharonov-Bohm double electrometer. In particular, they describe by which lead the passing electron exits and how this is correlated to the final state of the double dot. We assume that the double dot is initially in the symmetric state |+ and hence uncorrelated to an arriving electron that enters the ring from the left lead. The degree of entanglement between the final state of the double dot and the exit lead of the electron can then be quantified by the concurrence which provides a monotonous measure of entanglement (C = 0 for unentangled states and C = 1 for maximal entanglement). For the case of a vanishing flux AB = 0, the mixing angle is = 0. It then follows from Eq. that the scattering matrix is of the form In this case the electron can only leave the system when the pseudospin of the scatterer has flipped an even number of times, hence the final state of the scatterer is identical to its initial state. There is no entanglement, and consequently the concurrence vanishes, C = 0. 
For AB = h/2e, the mixing angle is = /2, and the scattering matrix is of the form Now the electron becomes entangled with the double dot: the electron is reflected when the double dot is finally in its symmetric state (the pseudospin of the double dot has then flipped an even number of times), while the electron is transmitted when the double dot is finally in its antisymmetric state (it then has flipped an odd number of times). The concurrence C = 2|r ++ t −+ | signals perfect entanglement, C = 1, when the probabilities of reflection and transmission are identical, |r ++ | 2 = |t −+ | 2 = 1/2. The condition for perfect entanglement corresponds to the case of maximal shot noise. The entanglement in the Aharonov-Bohm double electrometer results because the total parity is conserved in the scattering from the double quantum dot. This conservation law yields the separate Eqs., which only couple wave components of the same total parity. Since the scattering matrix S for positive and negative total parity is identical, this realizes a parity meter which entangles the orbital parity of the passing electron to the pseudospin parity of the double dot. The role of the Aharonov-Bohm ring with flux = h/2e is to convert the orbital parity into charge separation in the exit leads. When the electron enters the ring at the left contact, this prepares it locally in the orbitally symmetric state. Neglecting the scattering from the double dot, the electron cannot leave at the right contact since it will end up there in the antisymmetric orbital state. Transmission is therefore only possible if the orbital parity is flipped by interaction with the tunnel charge. But since the total parity is preserved, this scattering event is necessarily accompanied by a parity change of the double dot. Consequently, the final state of the tunnel charge is perfectly correlated to the lead by which the passing electron leaves the Aharonov-Bohm double electrometer. Time-dependent concurrence The energy-dependent scattering matrix and the expression for the concurrence describe the stationary transport through the system. A pressing issue for many proposals involving flying qubits is that they require an energy fine tuning which entails a degrading of the entanglement in the time domain. As we will now show, the entanglement can in fact be increased above the value of the stationary concurrence when a single electronic wave packet is passed through the Aharonov-Bohm electrometer with AB = h/2e. In this non-stationary situation, the entanglement is quantified by the time-dependent concurrence where + and − are the orbital wavefunctions obtained by projection onto the symmetric and antisymmetric state of the double dot, respectively. The potential for entanglement enhancement follows from the lower bound of the time-dependent concurrence, which can be derived directly from the form of the scattering matrix when one assumes that the wave packet has completely entered the ring. Here P L (t) is the weight of the reflected wave packet, P R (t) is the weight of the transmitted wave packet, and P 0 (t) = 1 − P L (t) − P R (t) is the weight of the wave packet in the ring. For large times P 0 (t = ∞) = 0, and the weights P L (t = ∞) = R and P R (t = ∞) = T give the reflection and transmission probabilities of the wave packet, which can be obtained by an energy average over the wave packet components. 
For the case that this energy average yields equal reflection and transmission probabilities R = T = 1/2, the final value of the concurrence after passage of the electron is C(t = ∞) = 1. In this case, maximal entanglement results during the passage of the electron through the system, independent of the precise dependence of the stationary concurrence in the energetic range of the wave packet. This entanglement enhancement is only possible because the scattering matrix S retains its sparse structure for all energies. To give a specific example, we consider the case that the tunnelling charge induces a localized scattering potential V = (ħ²/2m) g δ(x) in the quantum wire. This problem can be solved exactly for arbitrary values of the tunnel splitting Δ, the scattering strength g, and the ring parameters (arm lengths and contact couplings), using the technique of the cited reference. Here we concentrate on the case of a vanishing tunnel splitting Δ = 0, strong scattering g → ∞, transparent contacts (vanishing contact reflection), and equal distance d_L = d_R ≡ d of the double dot to the contacts. The extended scattering matrix then takes a simple form in terms of cos kd and sin kd. At fixed energy, the stationary concurrence is given by C = |sin θ sin 2kd|, where θ denotes the Aharonov-Bohm mixing angle, and the maximal stationary concurrence C = |sin 2kd| is attained at θ = π/2, corresponding to Φ_AB = h/2e. The factor |sin 2kd| arises from the energy dependence of the reflection probability R = cos²(kd). For a narrow wave packet, the reflection and transmission probabilities average to 1/2, and the time-dependent concurrence approaches the maximal value C(t = ∞) = 1 for large times. In order to assess the time scale on which the entangled state is formed, we have performed numerical simulations which are presented in Fig. 2. The wave packet was propagated by a second-order Crank-Nicholson scheme, and the Aharonov-Bohm ring and the leads were formed by tight-binding chains at wavelengths much larger than the lattice constant. For a narrow wave packet, the simulations confirm that the final state is perfectly entangled. For broader wave packets, the final entanglement is not necessarily perfect and depends on the average wave number, but is always attained on a time scale comparable to the propagation time of the wave packet between the contacts in the ring. In Fig. 3 we assess how the degree of entanglement for a narrow wave packet depends on the scattering strength g. Perfect entanglement is obtained for g ≳ 0.1, hence already for rather weak coupling. For smaller g, the entanglement produced by a single passage of the wave packet is reduced. As required, the entanglement vanishes altogether in the limit g → 0. For small but finite g, one should expect that the entanglement can be enhanced by multiple passages of the wave packet through the ring, which can be achieved by isolating the system for a finite duration from the external electrodes (e.g., by pinching off the external wires via the voltage on some split gates). This expectation is confirmed in Fig. 4, which shows the time-dependent concurrence for a narrow wave packet which passes through the ring and is reflected at hard-wall boundaries in the external wires, at a distance of 4000 a to either side of the ring.
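As a quick numerical check of the closed-form expressions just quoted (an illustration added here, not part of the original simulations), the stationary concurrence in the ideal limit can be evaluated directly; it equals 2√(R(1−R)) and is maximal exactly when the reflection and transmission probabilities are both 1/2.

```python
import numpy as np

# Stationary concurrence for the ideal case discussed above
# (Delta = 0, g -> infinity, transparent contacts, d_L = d_R = d):
#   R(k) = cos^2(kd),  T = 1 - R,  C = |sin(theta) * sin(2kd)|
theta = np.pi / 2                     # mixing angle at Phi_AB = h/2e
kd = np.linspace(0, np.pi, 201)
R = np.cos(kd) ** 2
C = np.abs(np.sin(theta) * np.sin(2 * kd))

assert np.allclose(C, 2 * np.sqrt(R * (1 - R)))     # C = 2*sqrt(R*T)
print("max C =", C.max(), "at kd =", kd[C.argmax()])  # maximal at kd = pi/4 (R = T = 1/2)
```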
Sensitivity to decoherence and asymmetries The time-dependent scattering analysis of the entanglement mechanism reveals an important and rather unique feature of the proposed device: As the entanglement can be generated in a single quasi-instantaneous scattering event, the proposed mechanism is rather robust against decoherence from time-dependent fluctuations of the environment (decoherence, however, will always become important in the subsequent dynamics of the system ). The main source of entanglement degradation hence comes from imperfections in the fabrication of the device itself, and here especially from imperfections which break the parity symmetry of the set-up. Arguably the most critical part of the set-up is the requirement of symmetric coupling to the double dot, as other asymmetries (in the arm length and contacts to the external wires) are just of the same character as in the conventional Aharanov-Bohm effect and can be partially compensated, e.g., via off-setting the magnetic flux. The sensitivity to asymmetries in this coupling is explored in Fig. 5, where the coupling strength to one arm is fixed to g = 0.1 while in the arm the strength is reduced by a factor. Astonishingly, a rather large degree of entanglement remains even for much reduced, which indicates that the proposed mechanism is rather more robust to asymmetries than it could have been expected. DISCUSSION AND CONCLUSIONS In conclusion, we have demonstrated that a tunnel charge on a double quantum dot can be entangled to an individual electron which passes through an Aharonov-Bohm ring with flux AB = h/2e. Ideal operation requires symmetric electrostatic coupling of the tunnel charge to the two arms of the ring. The entangled state is produced quickly, on time scales comparable to the passage of the electron through the ring. Since the entanglement in principle is generated in a single quasiinstantaneous scattering event, the proposed mechanism is robust against decoherence. The mesoscopic components for the proposed entanglement circuit -double quantum dots, ring geometries of quantum wires, and charge electrometers based on the electrostatic coupling of quantum dots to quantum wires -have been realized and combined in numerous experiments over the past decade, and recently the dynamics Fig. 2, but for g = 0.001 and a set-up in which the wires are terminated by hard walls at a distance 4000 a to either side of the ring. The concurrence is enhanced due to the multiple passages of the wave packet through the ring. The dashed line shows the probability P0(t) of the propagating electron to reside in the ring, which is revived in the multiple passages through the ring. of static qubits in double quantum dots has been monitored successfully. An initial experiment would target the finite conductance of the mesoscopic ring at half a flux quantum, AB = h/2e, where the conventional Aharonov-Bohm effect would yield total destructive interference and hence no current. The shot noise provides an indirect measurement of the underlying correlated dynamics of the tunnel charge and the mobile electron. A direct experimental investigation of the ensuing entanglement requires to measure the correlations between the individual transmission and reflection events and the pseudospin dynamics of the double quantum dot. The cross-correlation measurements could be carried out with a stationary stream of electrons, at a small bias V < t D /eh (hence an attempt frequency less than the typical dwell time t D of an electron in the ring). 
As in other proposals involving flying qubits, the incoming stream of electrons can be further diluted by a tunnel barrier in the lead from the electronic source. In a more sophisticated setting, a single charge would be driven into the system via an electronic turnstile. It also may be desirable to lead the reflected or transmitted charges into dedicated channels via electronic beam splitters or chiral edge states. We thank C. W. J. Beenakker and T. Brandes for helpful discussions. This work was supported by the European Commission, Marie Curie Excellence Grant MEXT-CT-2005-023778 (Nanoelectrophotonics).
Analysis of Employability Skills of Undergraduate Engineering Students in View of Employers' Perspectives The question of employability is among the most pressing in today's world. On one side, employers are demanding the right know-how to meet the ever-changing requirements of today's global economy, and this has become something of a battleground. On the other side is the training and skills sector, which is working hard to help develop a well-skilled workforce. The aim of this research was to develop a clearer understanding of the skills employers expect from young people entering the workforce. Unemployment among 18-24 year olds remains a key concern. The analysis therefore examines employers' expectations of young people entering the workforce, and the researcher has studied the employer demand for a well-skilled workforce.
Congress Targets Fusion, Favors NIH Congress delivered a double punch to the U.S. fusion program last week when House and Senate panels voted separately to chop its budget well below the amount researchers agree is necessary to keep even a modest effort on track. Biomedical research fared much better: The House approved a 6.9% increase for the National Institutes of Health (NIH); the Senate is likely to follow suit with a smaller boost. These actions were part of a flurry of budget activity as Congress raced to clear as much legislation as possible before the August recess and the fall election campaigns.
Recent developments in ionic liquid pretreatment of lignocellulosic biomass for enhanced bioconversion Lignocellulosic biomass has been used as starting materials in the process of producing biofuels and chemicals. A pretreatment step is necessary to mitigate the inherent recalcitrance of lignocellulosic biomass during biochemical conversion. Certain ionic liquids (ILs) have been proven to be effective solvents to deconstruct recalcitrant lignocellulosic biomass for high sugar yield for over a decade. Dialkylimidazolium-based, choline-based, and protic acidic ILs are frequently used in biomass pretreatment. Extensive research in this field has been done with technological hurdles revealed during the process. In this review paper, advances in the biocompatibility of ILs and process integration, optimization and scale-up of IL pretreatment processes, IL recovery and reuse, and economic and sustainability analysis were presented. ILs continue to be actively used in biomass pretreatment due to their structural variability and functional tunability.
Brief Review: Electrochemical Performance of LSCF Composite Cathodes - Influence of Ceria-Electrolyte and Metals Element Solid oxide fuel cells (SOFC) are an efficient and clean power generation devices. Low-temperature SOFC (LTSOFC) has been developed since high-temperature SOFC (HTSOFC) are not feasible to be commercialized because high in cost. Lowering the operation temperature has caused substantial performance decline resulting from cathode polarization resistance and overpotential of cathode. The development of composite cathodes regarding mixed ionic-electronic conductor (MIEC) and ceria based materials for LTSOFC significantly minimize the problems and leading to the increasing in electrocatalytic activity for the oxygen reduction reaction (ORR) to occur. Lanthanum-based materials such as lanthanum strontium cobalt ferrite (La0.6Sr0.4Co0.2Fe0.8O3-) recently have been discovered to offer great compatibility with ceria-based electrolytes to be applied as composite cathode materials for LTSOFC. Cell performance at lower operating temperature can be maintained and further improved by enhancing the ORR. This paper reviews recent development of various ceria-based composite cathodes especially related to the ceria-carbonate composite electrolytes for LTSOFC. The influence of the addition of metallic elements such as silver (Ag), platinum (Pt) and palladium (Pd) towards the electrochemical properties and performance of LSCF composite cathodes are briefly discussed.
Characterization of the volatile composition of essential oils of some lamiaceae spices and the antimicrobial and antioxidant activities of the entire oils. The essential oils of Ocimum basilicum L., Origanum vulgare L., and Thymus vulgaris L. were analyzed by means of gas chromatography-mass spectrometry and assayed for their antioxidant and antimicrobial activities. The antioxidant activity was evaluated as a free radical scavenging capacity (RSC), together with effects on lipid peroxidation (LP). RSC was assessed measuring the scavenging activity of the essential oils on 2,2-diphenyl-1-picrylhydrazil (DPPH(*)) and OH(*) radicals. Effects on LP were evaluated following the activities of essential oils in Fe(2+)/ascorbate and Fe(2+)/HO systems of induction. Essential oils exhibited very strong RSCs, reducing the DPPH radical formation (IC) in the range from 0.17 (oregano) to 0.39 microg/mL (basil). The essential oil of T. vulgaris exhibited the highest OH radical scavenging activity, although none of the examined essential oils reached 50% of neutralization (IC). All of the tested essential oils strongly inhibited LP, induced either by Fe(2+)/ascorbate or by Fe(2+)/HO. The antimicrobial activity was tested against 13 bacterial strains and six fungi. The most effective antibacterial activity was expressed by the essential oil of oregano, even on multiresistant strains of Pseudomonas aeruginosa and Escherichia coli. A significant rate of antifungal activity of all of the examined essential oils was also exhibited.
The $H_0$ tension in light of vacuum dynamics in the Universe Despite the outstanding achievements of modern cosmology, the classical dispute on the precise value of $H_0$, which is the first ever parameter of modern cosmology and one of the prime parameters in the field, still goes on and on after over half a century of measurements. Recently the dispute came to the spotlight with renewed strength owing to the significant tension (at $>3\sigma$ c.l.) between the latest Planck determination obtained from the CMB anisotropies and the local (distance ladder) measurement from the Hubble Space Telescope (HST), based on Cepheids. In this work, we investigate the impact of the running vacuum model (RVM) and related models on such a controversy. For the RVM, the vacuum energy density $\rho_{\Lambda}$ carries a mild dependence on the cosmic expansion rate, i.e. $\rho_{\Lambda}(H)$, which allows to ameliorate the fit quality to the overall $SNIa+BAO+H(z)+LSS+CMB$ cosmological observations as compared to the concordance $\Lambda$CDM model. By letting the RVM to deviate from the vacuum option, the equation of state $w=-1$ continues to be favored by the overall fit. Vacuum dynamics also predicts the following: i) the CMB range of values for $H_0$ is more favored than the local ones, and ii) smaller values for $\sigma_8$. As a result, a better account for the LSS structure formation data is achieved as compared to the $\Lambda$CDM, which is based on a rigid (i.e. non-dynamical) $\Lambda$ term. Introduction The most celebrated fact of modern observational cosmology is that the universe is in accelerated expansion. At the same time, the most paradoxical reality check is that we do not honestly understand the primary cause for such an acceleration. The simplest picture is to assume that it is caused by a strict cosmological term,, in Einstein's equations, but its fundamental origin is unknown. Together with the assumption of the existence of dark matter (DM) and the spatial flatness of the Friedmann-Lematre-Robertson-Walker (FLRW) metric (viz. the metric that expresses the homogeneity and isotropy inherent to the cosmological principle), we are led to the "concordance" CDM model, i.e. the standard model of cosmology. The model is consistent with a large body of observations, and in particular with the high precision data from the cosmic microwave background (CMB) anisotropies. Many alternative explanations of the cosmic acceleration beyond a -term are possible (including quintessence and the like, see e.g. the review ) and are called dark energy (DE). The current situation with cosmology is reminiscent of the prediction by the famous astronomer A. Sandage in the sixties, who asserted that the main task of future observational cosmology would be the search for two parameters: the Hubble constant H 0 and the deceleration parameter q 0. The first of them is the most important distance (and time) scale in cosmology prior to any other cosmological quantity. Sandage's last published value with Tammann (in 2010) is 62.3 km/s/Mpc -slightly revised in Ref. as H 0 = 64.1 ± 2.4 km/s/Mpc. There is currently a significant tension between CMB measurements of H 0 -not far away from this value -and local determinations emphasizing a higher range above 70 km/s/Mpc. As for q 0, its measurement is tantamount to determining in the context of the concordance model. 
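The tensions quoted above can be reproduced with a one-line calculation, assuming independent Gaussian uncertainties on the local and CMB determinations of H_0 (a standard simplification; the values below are those quoted in the text).

```python
import math

# Tension between two independent Gaussian measurements, in units of sigma.
def tension(h1, s1, h2, s2):
    return abs(h1 - h2) / math.hypot(s1, s2)

print(round(tension(73.24, 1.74, 67.51, 0.64), 1))  # HST vs Planck TT,TE,EE+lowP+lensing -> 3.1
print(round(tension(73.24, 1.74, 66.93, 0.62), 1))  # HST vs Planck TT,TE,EE+SIMlow       -> 3.4
```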
On fundamental grounds, however, understanding the value of is not just a matter of observation; in truth and in fact, it embodies one of the most important and unsolved conundrums of theoretical physics and cosmology: the cosmological constant problem, see e.g.. The problem is connected to the fact that the -term is usually associated with the vacuum energy density, = /(8G), with G Newton's coupling. The prediction for in quantum field theory (QFT) overshoots the measured value ∼ 10 −47 GeV 4 (in natural units c = = 1) by many orders of magnitude. Concerning the prime parameter H 0, the tension among the different measurements is inherent to its long and tortuous history. Let us only recall that after Baade's revision (by a factor of one half ) of the exceedingly large value ∼ 500 km/s/Mpc originally estimated by Hubble (which implied a universe of barely two billion years old only), the Hubble parameter was subsequently lowered to 75 km/s/Mpc and finally to H 0 = 55 ± 5 km/s/Mpc, where it remained for 20 years (until 1995), mainly under the influence of Sandage's devoted observations. Shortly after that period the first measurements of the nonvanishing, positive, value of appeared and the typical range for H 0 moved upwards to ∼ 65 km/s/Mpc. In the meantime, many different observational values of H 0 have piled up in the literature using different methods (see e.g. the median statistical analysis of > 550 measurements considered in ). As mentioned above, two kinds of precision (few percent level) measurements of H 0 have generated considerable perplexity Table 1: Best-fit values for the CDM, XCDM, the three dynamical vacuum models (DVMs) and the three dynamical quasi-vacuum models (wDVMs), including their statistical significance ( 2 -test and Akaike and Bayesian information criteria AIC and BIC). For detailed description of the data and a full list of references, see and. The quoted number of degrees of freedom (dof ) is equal to the number of data points minus the number of independent fitting parameters (4 for the CDM, 5 for the XCDM and the DVMs, and 6 for the wDVMs). For the CMB data we have used the marginalized mean values and covariance matrix for the parameters of the compressed likelihood for Planck 2015 TT,TE,EE + lowP+ lensing data from. Each best-fit value and the associated uncertainties have been obtained by marginalizing over the remaining parameters. in the recent literature, specifically between the latest Planck values (H Planck 0 ) obtained from the CMB anisotropies, and the local HST measurement (based on distance ladder estimates from Cepheids). The latter, obtained by Riess et al., is H 0 = 73.24 ± 1.74 km/s/Mpc and will be denoted H Riess 0. It can be compared with the CMB value H 0 = 67.51±0.64 km/s/Mpc, as extracted from Planck 2015 TT,TE,EE+lowP+lensing data, or with H 0 = 66.93 ± 0.62 km/s/Mpc, based on Planck 2015 TT,TE,EE+SIMlow data. In both cases there is a tension above 3 c.l. (viz. 3.1 and 3.4, respectively) with respect to the local measurement. This situation, and in general a certain level of tension with some independent observations in intermediate cosmological scales, has stimulated a number of discussions and possible solutions in the literature, see e.g.. We wish to reexamine here the H Riess 0 − H Planck 0 tension, but not as an isolated conflict between two particular sources of observations, but rather in light of the overall fit to the cosmological data SNIa+BAO+H(z)+LSS+CMB. 
Recently, it has been demonstrated that by letting the cosmological vacuum energy density slowly evolve with the expansion rate, $\rho_{\Lambda} = \rho_{\Lambda}(H)$, the global fit can be improved with respect to the $\Lambda$CDM at a confidence level of $3-4\sigma$. We devote this work to showing that the dynamical vacuum models (DVMs) can still give a better fit to the overall data, even if the local HST measurement of the Hubble parameter is taken into account. However, we find that our best-fit values of $H_0$ are much closer to the value extracted from CMB measurements. Our analysis also corroborates that the large scale structure formation data (LSS) are crucial in distinguishing the rigid vacuum option from the dynamical one.

Dynamical vacuum models and beyond

Let us consider a generic cosmological framework described by the spatially flat FLRW metric, in which matter is exchanging energy with a dynamical DE medium with a phenomenological equation of state (EoS) $p_{\Lambda} = w\,\rho_{\Lambda}$, where $w = -1 + \epsilon$ (with $|\epsilon| \ll 1$). Such a medium is therefore of quasi-vacuum type, and for $w = -1$ (i.e. $\epsilon = 0$) we precisely recover the genuine vacuum case. Owing, however, to the exchange of energy with matter, $\rho_{\Lambda} = \rho_{\Lambda}(\zeta)$ is in all cases a dynamical function that depends on a cosmic variable $\zeta = \zeta(t)$. We will identify the nature of $\zeta(t)$ later on, but its presence clearly indicates that $\rho_{\Lambda}$ is no longer associated with a strictly rigid cosmological constant as in the $\Lambda$CDM. The Friedmann and acceleration equations read, however, formally identical to the standard case:
$$3H^2 = 8\pi G\,\big(\rho_m + \rho_r + \rho_{\Lambda}(\zeta)\big)\,, \qquad 3H^2 + 2\dot{H} = -8\pi G\,\big(p_r + w\,\rho_{\Lambda}(\zeta)\big)\,.$$
Here $H = \dot{a}/a$ is the Hubble function, $a(t)$ the scale factor as a function of the cosmic time, $\rho_r$ is the energy density of the radiation component (with pressure $p_r = \rho_r/3$), and $\rho_m = \rho_b + \rho_{dm}$ involves the contributions from baryons and cold DM. The local conservation law associated with the above equations reads
$$\dot{\rho}_m + 3H\rho_m = Q\,, \qquad \dot{\rho}_{\Lambda} + 3H(1+w)\,\rho_{\Lambda} = -Q\,,$$
where $Q$ is the interaction source. For $w = -1$ the last equation boils down to just $Q = -\dot{\rho}_{\Lambda}$, which is nonvanishing on account of $\rho_{\Lambda}(t) = \rho_{\Lambda}(\zeta(t))$. The simplest case is, of course, that of the concordance model, in which $\rho_{\Lambda} = \rho_{\Lambda 0} =$ const and $w = -1$, so that $Q = 0$ trivially. However, for $w \neq -1$ we can also have $Q = 0$ in a nontrivial situation, which follows from solving the above conservation equation. It corresponds to the XCDM parametrization, in which the DE density is dynamical and self-conserved. It is easily found in terms of the scale factor:
$$\rho_{\Lambda}(a) = \rho_{\Lambda 0}\, a^{-3(1+w)}\,,$$
where $\rho_{\Lambda 0}$ is the current value. From it then follows that the total matter component is also conserved. After equality it leads to separate conservation of cold matter and radiation. In general, $Q$ can be a nonvanishing interaction source allowing energy exchange between matter and the quasi-vacuum medium under consideration; $Q$ can either be given by hand (e.g. through an ad hoc ansatz), or can be suggested by some specific theoretical framework. In any case the interaction source must satisfy $0 < |Q| \ll \dot{\rho}_m$, since we do not wish to depart too much from the concordance model. Despite the fact that matter is exchanging energy with the vacuum or quasi-vacuum medium, we shall assume that radiation and baryons are separately self-conserved, i.e. $\dot{\rho}_r + 4H\rho_r = 0$ and $\dot{\rho}_b + 3H\rho_b = 0$, so that their energy densities evolve in the standard way: $\rho_r(a) = \rho_{r0}\, a^{-4}$ and $\rho_b(a) = \rho_{b0}\, a^{-3}$.

(Figure caption fragment: "... Table 1. The XCDM curve is also shown. The values of $\sigma_8$ that we obtain for the models are also indicated. Right: Zoomed window of the plot on the left, which allows to better distinguish the various models.")
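To make the XCDM background explicit, here is a minimal sketch of the normalized expansion rate that follows from the self-conserved DE density $\rho_{\Lambda}(a) = \rho_{\Lambda 0}\, a^{-3(1+w)}$ together with the standard dilution laws quoted above. The density parameters, the chosen $H_0$, and the neglect of massive neutrinos are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def E2_xcdm(a, Om=0.30, Or=9.0e-5, w=-1.0):
    """Normalized Hubble rate squared, E^2(a) = (H/H0)^2, for the flat
    XCDM parametrization: self-conserved DE with constant EoS w.
    Om and Or are illustrative present-day density parameters; flatness
    fixes the DE fraction to 1 - Om - Or."""
    Ode = 1.0 - Om - Or
    return Om * a**-3 + Or * a**-4 + Ode * a**(-3.0 * (1.0 + w))

# Example: Hubble rate at z = 0.5 for a quasi-vacuum EoS w = -0.95
z = 0.5
a = 1.0 / (1.0 + z)
H0 = 67.5  # km/s/Mpc, illustrative
print(f"H(z=0.5) ~ {H0 * np.sqrt(E2_xcdm(a, w=-0.95)):.1f} km/s/Mpc")
```

Setting $w = -1$ recovers the $\Lambda$CDM expansion history, which is the limit against which all the models of the text are compared.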
The dynamics of can therefore be associated to the exchange of energy exclusively with the DM (through the nonvanishing source Q) and/or with the possibility that the DE medium is not exactly the vacuum, w = −1, but close to it | | 1. Under these conditions, the coupled system of conservation equations - reduces t In the following we shall for definiteness focus our study of the dynamical vacuum (and quasivacuum) models to the three interactive sources: Here i =, dm, are small dimensionless constants, | i | 1, which are determined from the overall fit to the data, see e.g. Tables 1 and 2. The ordinal number names I, II and III will be used for short, but the three model names are preceded by w to recall that, in the general case, the equation of state (EoS) is near the vacuum one (that is, w = −1 + ). These dynamical quasi-vacuum models are also denoted as wDVMs. In the particular case w = −1 (i.e. = 0) we recover the dynamical vacuum models (DVMs), which were previously studied in detail in, and in this case the names of the models will not be preceded by w. In all of the above (w)DVMs, the cosmic variable can be taken to be the scale factor, = a(t), since they are all analytically solvable in terms of it, as we shall see in a moment. Model I with w = −1 is the running vacuum model (RVM), see. It is special in that the interaction source indicated in is not ad hoc but follows from an expression for the dynamical vacuum energy density, (), in which is not just the scale factor but the full Hubble rate: = H(a). The explicit RVM form reads The additive constant Combining the Friedmann and acceleration equations -, we find = −(4G/3) (3 m + 4 r + 3 ), and upon differentiating with respect to the cosmic time we are led t = − H (3 m + 4 r + 3 ). Thus, for = 0 (vacuum case) we indeed find = −Q for Q as in. However, for the quasi-vacuum case (0 < | | 1) Eq. does not hold if (H) adopts the form. This RVM form is in fact specific to the pure vacuum EoS (w = −1), and it can be motivated in QFT in curved spacetime through a renormalization group equation for (H), what explains the RVM name. In it, plays the role of the -function coefficient for the running of with the Hubble rate. Thus, we naturally expect || 1 in QFT, see. Interestingly, the RVM form can actually be extended with higher powers of H n (typically n = 4) to provide an effective description of the cosmic evolution from the inflationary universe up to our days. Models II and III are purely phenomenological models instead, in which the interaction source Q is introduced by hand, see e.g. Refs. [26, and references therein. The energy densities for the wDVMs can be computed straightforwardly. For simplicity, we shall quote here the leading parts only. The exact formulas containing the radiation terms are more cumbersome. In the numerical analysis we have included the full expressions. Details will be shown elsewhere. For the matter densities, we find: and for the quasi-vacuum energy densities: Two specific dimensionless parameters enter each formula, i = (, dm, ) and w = −1 +. They are part of the fitting vector of free parameters for each model, as explained in detail in the caption of Table 1. For i → 0 the models become noninteractive and they all reduce to the XCDM model case. For w = −1 we recover the DVMs results previously studied in. Let us also note that for i > 0 the vacuum decays into DM (which is thermodynamically favorable ) whereas for i < 0 is the other way around. 
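For orientation, the following sketch implements the leading-order RVM background solution that is commonly quoted in the running-vacuum literature (matter-dominated era, radiation neglected); as stated above, the exact expressions used in the paper include the radiation terms and are more cumbersome. The parameter values are illustrative, not best-fit results.

```python
import numpy as np

def rvm_densities(a, Om=0.30, nu=1e-3):
    """Leading-order background densities for the running vacuum model
    (RVM), radiation neglected, in units of today's critical density.
    nu is the small running parameter; nu -> 0 gives the LCDM limit."""
    Ode = 1.0 - Om
    rho_m = Om * a**(-3.0 * (1.0 - nu))
    rho_L = Ode + (nu * Om / (1.0 - nu)) * (a**(-3.0 * (1.0 - nu)) - 1.0)
    return rho_m, rho_L

def E2_rvm(a, Om=0.30, nu=1e-3):
    """Normalized Hubble rate squared, E^2(a) = (H/H0)^2, for the RVM."""
    rho_m, rho_L = rvm_densities(a, Om, nu)
    return rho_m + rho_L

# nu -> 0 recovers the LCDM expansion history at the same scale factor:
for nu in (0.0, 1e-3):
    print(f"nu = {nu}: E^2(a=0.5) = {E2_rvm(0.5, nu=nu):.4f}")
```

Note how a tiny positive $\nu$ slightly slows the dilution of matter (vacuum decaying into DM), which is the qualitative behavior described in the text.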
Furthermore, when w enters the fit, the effective behavior of the wDVMs is quintessence-like for w > −1 (i.e. > 0) and phantom-like for w < −1 ( < 0). Table 3: Best-fit values for the CDM and models RVM, Q, wRVM and wQ by making use of the CMB+BAO data only. In contrast to Tables 1-2, we now fully dispense with the LSS data (see ) to test its effect. The starred/non-starred cases correspond respectively to adding or not the local value H Riess 0 from as data point in the fit. The AIC and BIC differences of the starred models are computed with respect to the CDM*. We can see that under these conditions models tend to have ∆AIC, ∆BIC< 0, including the last two starred scenarios, which are capable of significantly approaching H Riess 0. Given the energy densities and, the Hubble function immediately follows. For example, for Model I: Similar formulas can be obtained for Models II and III. For w = −1 they all reduce to the DVM forms previously found in. And of course they all ultimately boil down to the CDM form in the limit (w, i ) → (−1, 0). Structure formation: the role of the LSS data The analysis of structure formation plays a crucial role in comparing the various models. For the CDM and XCDM we use the standard perturbations equation m + 2H m − 4G m m = 0, with, however, the Hubble function corresponding to each one of these models. For the wDVMs, a step further is needed: the perturbations equation not only involves the modified Hubble function but the equation itself becomes modified. Trading the cosmic time for the scale factor and extending the analysis of for the case w = −1 ( = 0), we find where the prime denotes differentiation with respect to the scale factor, and the functions A(a) and B(a) are found to be as follows: Here r ≡ / m and ≡ − / m. For i = 0 we have = 3Hr, and after a straightforward calculation one can show that can be brought to the standard form Eq.. is better (cf. Table 3). The price, however, is that the concordance with the LSS data is now spoiled. Case IIIb is our theoretical prediction for the scenario proposed in, aimed at optimally relaxing the tension with H Riess 0. Unfortunately, the last three scenarios lead to phantom-like DE and are in serious disagreement with the LSS data. To solve the above perturbations equations we have to fix the initial conditions on m and m for each model at high redshift, namely when non-relativistic matter dominates over radiation and DE, see. Functions and s III = 1. We can check that for w = −1 all of the above equations - render the DVM results previously found in. The generalization that we have made to w = −1 ( = 0) has introduced several nontrivial extra terms in equations -. The analysis of the linear LSS regime is usually implemented with the help of the weighted linear growth f (z) 8 (z), where f (z) = d ln m /d ln a is the growth factor and 8 (z) is the rms mass fluctuation on R 8 = 8 h −1 Mpc scales. It is computed as follows (see e.g. ): where W is a top-hat smoothing function and T (p, k) the transfer function. The fitting parameters for each model are contained in p. Following the mentioned references, we have defined as fiducial model the CDM at fixed parameter values from the Planck 2015 TT,TE,EE+lowP+lensing data. These fiducial values are collected in p. In Figs. 1-2 we display f (z) 8 (z) for the various is included in the fit (cf. Table 2), and the plot on the right is for when that local value is not included (cf. Table 1). 
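As a concrete illustration of how the weighted linear growth $f(z)\sigma_8(z)$ is obtained, the sketch below solves the standard perturbation equation quoted above for the $\Lambda$CDM/XCDM case (the wDVMs require the modified equation with the functions $A(a)$ and $B(a)$ given in the text) and rescales by a reference $\sigma_8$ value. All numbers are illustrative, and $\sigma_8(z)$ is approximated by simple growth rescaling rather than by the full power-spectrum integral with the top-hat window and transfer function used in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def fsigma8_lcdm(z_values, Om=0.30, sigma8_0=0.80):
    """f(z)*sigma8(z) for a flat LCDM background (radiation neglected),
    from the standard growth equation written in the scale factor:
        d'' + (3/a + E'/E) d' - (3 Om / (2 a^5 E^2)) d = 0.
    sigma8(z) is approximated here by sigma8_0 * d(a)/d(1)."""
    E2 = lambda a: Om * a**-3 + (1.0 - Om)
    dlnE_da = lambda a: -1.5 * Om * a**-4 / E2(a)   # E'/E

    def rhs(a, y):
        d, dp = y
        return [dp,
                -(3.0 / a + dlnE_da(a)) * dp
                + 1.5 * Om / (a**5 * E2(a)) * d]

    a_ini = 1e-3                                # deep in matter domination: d ~ a
    a_eval = np.unique(np.append(1.0 / (1.0 + np.asarray(z_values)), 1.0))
    sol = solve_ivp(rhs, (a_ini, 1.0), [a_ini, 1.0], t_eval=a_eval, rtol=1e-8)
    d, dp = sol.y
    fs8 = sigma8_0 * sol.t * dp / d[-1]         # f*sigma8 = sigma8_0 * a d'(a) / d(1)
    return {round(1.0 / a - 1.0, 3): round(v, 3) for a, v in zip(sol.t, fs8)}

print(fsigma8_lcdm([0.0, 0.5, 1.0]))  # roughly 0.4 at z = 0 for these inputs
```

The point stressed in the text is that the dynamical vacuum models shift this curve downward (smaller effective $\sigma_8$), which is what brings the prediction closer to the LSS data.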
Any attempt at reaching the H Riess 0 neighborhood enforces to pick too small values 0 m < 0.27 through extended contours that go beyond 5 c.l. We also observe that the two (w)RVMs are much more compatible (already at 1) with the H Planck 0 range than the CDM. The latter, instead, requires some of the most external contours to reach the H Planck 0 1 region whether H Riess 0 is included or not in the fit. Thus, remarkably, in both cases when the full data string SNIa+BAO+H(z)+LSS+CMB enters the fit the CDM has difficulties to overlap also with the H Planck 0 range at 1, in contrast to the RVM and wRVM. models using the fitted values of Tables 1-3. We remark that our BAO and LSS data include the bispectrum data points from Ref. -see for a full-fledged explanation of our data sets. In the next section, we discuss our results for the various models and assess their ability to improve the CDM fit as well as their impact on the H 0 tension. Discussion Following the statistical analysis of the various models is performed in terms of a joint likelihood function, which is the product of the likelihoods for each data source and includes the corresponding covariance matrices. As indicated in the caption of Table 1, the CDM has 4 parameters, whereas the XCDM and the DVMs have 5, and finally any of the wDVMs has 6. Thus, for a fairer comparison of the various nonstandard models with the concordance CDM we have to invoke efficient criteria in which the presence of extra parameters in a given model is conveniently penalized so as to achieve a balanced comparison with the model having less parameters. The Akaike information criterion (AIC) and the Bayesian information criterion (BIC) are known to be extremely valuable tools for a fair statistical analysis of this kind. They can be thought of as a modern quantitative formulation of Occam's razor. They read : where n is the number of independent fitting parameters and N the number of data points. The bigger are the (positive) differences ∆AIC and ∆BIC with respect to the model having smaller values of AIC and BIC the higher is the evidence against the model with larger AIC and BIC. Take, for instance, Tables 1 and 2, where in all cases the less favored model is the CDM (thus with larger AIC and BIC). For ∆AIC and ∆BIC in the range 6 − 10 one speaks of "strong evidence" against the CDM, and hence in favor of the nonstandard models being considered. This is typically the situation for the RVM and Q dm vacuum models in Table 2 and for the three wDVMs in Table 1. Neither the XCDM nor the Q vacuum model attain the "strong evidence" threshold in Tables 1 or 2. The XCDM parametrization, which is used as a baseline for comparison of the dynamical DE models, is nevertheless capable of detecting significant signs of dynamical DE, mainly in Table 1 (in which H Riess 0 is excluded), but not so in Table 2 (where H Riess 0 is included). In contrast, model Q does not change much from Table 1 to Table 2. In actual fact, the vacuum model III (Q ) tends to remain always fairly close to the CDM. Its dynamics is weaker than that of the main DVMs (RVM and Q dm ). Being | i | 1 for all the DVMs, the evolution of its vacuum energy density is approximately logarithmic: III ∼ 0 (1 − 3 ln a), as it follows from with = 0. Thus, it is significantly milder in comparison to that of the main DVMs, for which I,II. The performance of Q can only be slightly better than that of CDM, a fact that may have not been noted in previous studies -see [21,26, and references therein. 
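The AIC and BIC referred to above can be evaluated as in the sketch below, using the small-sample-corrected definitions that are standard in this kind of cosmological model comparison and that are consistent with the thresholds quoted in the discussion. The $\chi^2$ values are made up for illustration and are not the paper's fit results.

```python
import math

def aic(chi2_min, n_params, n_data):
    """Akaike information criterion with the small-sample correction:
    AIC = chi2_min + 2 n N / (N - n - 1)."""
    return chi2_min + 2.0 * n_params * n_data / (n_data - n_params - 1.0)

def bic(chi2_min, n_params, n_data):
    """Bayesian information criterion: BIC = chi2_min + n ln N."""
    return chi2_min + n_params * math.log(n_data)

# Illustrative comparison only: a 4-parameter LCDM-like fit versus a
# 5-parameter DVM-like fit to the same N data points.
N = 90
chi2_lcdm, chi2_dvm = 85.0, 70.0
d_aic = aic(chi2_lcdm, 4, N) - aic(chi2_dvm, 5, N)
d_bic = bic(chi2_lcdm, 4, N) - bic(chi2_dvm, 5, N)
print(f"dAIC = {d_aic:.1f}, dBIC = {d_bic:.1f}")
# Differences above 10 would count as 'very strong evidence' against the
# model with the larger AIC/BIC, following the jargon quoted in the text.
```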
According to the same jargon, when the differences ∆AIC and ∆BIC are both above 10 one speaks of "very strong evidence" against the unfavored model (the CDM, in this case), wherefore in favor of the dynamical vacuum and quasi-vacuum models. It is certainly the case of the RVM and Q dm models in Table 1, which are singled out as being much better than the CDM in their ability to describe the overall observations. From Table 1 we can see that the best-fit values of i for these models are secured at a confidence level of ∼ 3.8. These two models are indeed the most conspicuous ones in our entire analysis, and remain strongly favored even if H Riess 0 is included (cf. Table 2). In the last case, the best-fit values of i for the two models are still supported at a fairly large c.l. (∼ 3.2). This shows that the overall fit to the data in terms of dynamical vacuum is a real option since the fit quality is not exceedingly perturbed in the presence of the data point H Riess 0. However, the optimal situation is really attained in the absence of that point, not only because the fit quality is then higher but also because that point remains out of the fit range whenever the large scale structure formation data (LSS) are included. For this reason we tend to treat that input as an outlier -see also for an alternative support to this possibility, which we comment later on. In the following, we will argue that a truly consistent picture with all the data is only possible for H 0 in the vicinity of H Planck 0 rather than in that of H Riess 0. The conclusion is that the H Riess 0 -H Planck 0 tension cannot be relaxed without unduly forcing the overall fit, which is highly sensitive to the LSS data. It goes without saying that one cannot have a prediction that matches both H 0 regions at the same time, so at some point new observations (or the discovery of some systematic in one of the experiments) will help to consolidate one of the two ranges of values and exclude definitely the other. At present no favorable fit can be obtained from the CDM that is compatible with any of the two H 0 ranges. This is transparent from Figs. 3 and 4, in which the CDM remains always in between the two regions. However, our work shows that a solution (with minimum cost) is possible in terms of vacuum dynamics. Such solution, is included in the fit (cf. Table 2), whereas in the right plot the case when that local value is not included (cf. Table 1). Again, any attempt at reaching the H Riess 0 neighborhood enforces to extend the contours beyond the 5 c.l., which would lead to a too low value of 0 m in both cases (cf. Fig. 3) and, in addition, would result in a too large value of 8 for the RVM. Notice that H 0 and 8 are positively correlated in the RVM (i.e. H 0 decreases when 8 decreases), whilst they are anticorrelated in the CDM (H 0 increases when 8 decreases, and vice versa). It is precisely this opposite correlation feature with respect to the CDM what allows the RVM to improve the LSS fit in the region where both H 0 and 8 are smaller than the respective CDM values (cf. Fig. 1). This explains why the Planck range for H 0 is clearly preferred by the RVM, as it allows a much better description of the LSS data. which inevitably puts aside the H Riess 0 range, is however compatible with all the remaining data and tends to favor the Planck range of H 0 values. 
The DVMs can indeed provide an excellent fit to the overall cosmological observations and be fully compatible with both the H Planck 0 value and at the same time with the needed low values of the 8 observable, these low values of 8 being crucial to fit the structure formation data. Such strategy is only possible in the presence of vacuum dynamics, whilst it is impossible with a rigid -term, i.e. is not available to the CDM. In Fig. 1 we confront the various models with the LSS data when the local measurement H Riess 0 is not included in our fit. The differences can be better appraised in the plot on the right, where we observe that the RVM and Q dm curves stay significantly lower than the CDM one (hence matching better the data than the CDM), whereas those of XCDM and Q remain in between. Concerning the wDVMs, namely the quasi-vacuum models in which an extra parameter is at play (the EoS parameter w), we observe a significant difference as compared to the DVMs (with vacuum EoS w = −1): they all provide a similarly good fit quality, clearly superior to that of the CDM (cf. Tables 1 and 2) but in all cases below that of the main DVMs (RVM and Q dm ), whose performance is outstanding. In Table 3, in an attempt to draw our fit nearer and nearer to H Riess 0, we test the effect of ignoring the LSS structure formation data, thus granting more freedom to the fit parameter space. We perform this test using the CDM and models (w)RVM and (w)Q (i.e. models I and III and testing both the vacuum and quasi-vacuum options), and we fit them to the CMB+BAO data alone. We can see that the fit values for H 0 increase in all starred scenarios (i.e. those involving the H Riess 0 data point in the fit), and specially for the cases Ia and IIIa in Table 3. Nonetheless, these lead to i < 0 and w < −1 (and hence imply phantom-like DE); and, what is worse, the agreement with the LSS data is ruined (cf. Fig. 2) since the corresponding curves are shifted too high (beyond the CDM one). In the same figure we superimpose one more scenario, called IIIb, corresponding to a rather acute phantom behavior (w = −1.184 ± 0.064). The latter was recently explored in so as to maximally relax the H 0 tension -see also. Unfortunately, we find (see Fig. 2) that the associated LSS curve is completely strayed since it fails to minimally describe the f 8 data (LSS). In Fig. 3 we demonstrate in a very visual way that, in the context of the overall observations (i.e. SNIa+BAO+H(z)+LSS+CMB), whether including or not including the data point H Riess 0 (cf. Tables 1 and 2), it becomes impossible to getting closer to the local measurement H Riess 0 unless we go beyond the 5 contours and end up with a too low value 0 m < 0.27. These results are aligned with those of, in which the authors are also unable to accommodate the H Riess 0 value when a string of SNIa+BAO+H(z)+LSS+CMB data (similar but not equal to the one used by us) is taken into account. Moreover, we observe in Fig. 3 not only that both the RVM and wRVM remain much closer to H Planck 0 than to H Riess 0, but also that they are overlapping with the H Planck 0 range much better than the CDM does. The latter is seen to have serious difficulties in reaching the Planck range unless we use the most external regions of the elongated contours shown in Fig. 3. Many other works in the literature have studied the existing H 0 tension. For instance, in the authors find H 0 = 69.13 ± 2.34 km/s/Mpc assuming the CDM model. 
Such result almost coincides with the central values of H 0 that we obtain in Tables 1 and 2 for the CDM. This fact, added to the larger uncertainties of the result, seems to relax the tension. Let us, however, notice that the value of has been obtained using BAO data only, what explains the larger uncertainty that they find. In our case, we have considered a much more complete data set, which includes CMB and LSS data as well. This is what has allowed us to better constrain H 0 with smaller errors and conclude that when a larger data set (SNIa+BAO+H(z)+LSS+CMB) is used, the fitted value of the Hubble parameter for the CDM is incompatible with the Planck best-fit value at about 4 c.l. Thus, the CDM model seems to be in conflict not only with the local HST estimation of H 0, but also with the Planck one! Finally, in Figs. 4 and 5 we consider the contour plots (up to 4 and 3, respectively) in the (H 0, 8 )-plane for different situations. Specifically, in the case of Fig. 4 the plots on the left and on the right are in exact correspondence with the situations previously presented in the left and right plots of, which has no significant differences with that of. purple, respectively, together with the isolated point (in black) extracted from the analysis of Ref., which we call IIIb. The cases Ia, IIIa and IIIb correspond to special scenarios with w = −1 for Models I and III in which the value H Riess 0 is included as a data point and then a suitable strategy is followed to optimize the fit agreement with such value. The strategy consists to exploit the freedom in w and remove the LSS data from the fit analysis. The plot clearly shows that some agreement is indeed possible, but only if w takes on values in the phantom region (w < −1) (see text) and at the expense of an anomalous (too large) value of the parameter 8, what seriously spoils the concordance with the LSS data, as can be seen in Fig. 2. data points on structure formation (cf. Fig. 1). On the other hand, in the case of Fig. 5 the contour lines correspond to the fitting sets Ia, IIIa of Table 3 (in which BAO and CMB data, but no LSS formation data, are involved). As can be seen, the contour lines in Fig. 5 can attain the Riess 2016 region for H 0, but they are centered at rather high values (∼ 0.9) of the parameter 8. These are clearly higher than the needed values 8 0.73 − 0.74. This fact demonstrates once more that such option leads to a bad description of the structure formation data. The isolated point in Fig. 5 is even worst: it corresponds to the aforementioned theoretical prediction for the scenario IIIb proposed in, in which the H Riess 0 region can be clearly attained but at the price of a serious disagreement with the LSS data. Here we can see, with pristine clarity, that such isolated point, despite it comfortably reaches the H Riess 0 region, it attains a value of 8 near 1, thence completely strayed from the observations. This is, of course, the reason why the upper curve in Fig. 2 fails to describe essentially all points of the f (z) 8 (z) observable. So, as it turns, it is impossible to reach the H Riess 0 region without paying a high price, no matter what strategy is concocted to approach it in parameter space. As indicated, we must still remain open to the possibility that the H Planck 0 and/or H Riess 0 measurements are affected by some kind of (unknown) systematic errors, although some of these possibilities may be on the way of being ruled out by recent works. 
For instance, in the authors study the systematic errors in Planck's data by comparing them with the South Pole Telescope data. Their conclusion is that there is no evidence of systematic errors in Planck's results. If confirmed, the class of the (w)RVMs studied here would offer a viable solution to both the H 0 and 8 existing tensions in the data, which are both unaccountable within the CDM. Another interesting result is the "blinded" determination of H 0 from, based on a reanalysis of the SNIa and Cepheid variables data from the older work by Riess et al.. These authors find H 0 = 72.5 ± 3.2 km/s/Mpc, which should be compared with H 0 = 73.8 ± 2.4 km/s/Mpc. Obviously, the tension with H Planck 0 diminished since the central value decreased and in addition the uncertainty has grown by ∼ 33%. We should now wait for a similar reanalysis to be made on the original sample used in, i.e. the one supporting the value H Riess 0, as planned in. In they show that by combining the latest BAO results with WMAP, Atacama Cosmology Telescope (ACT), or South Pole Telescope (SPT) CMB data produces values of H 0 that are 2.4 − 3.1 lower than the distance ladder, independent of Planck. These authors conclude from their analysis that it is not possible to explain the H 0 disagreement solely with a systematic error specific to the Planck data. Let us mention other works, see e.g., in which a value closer to H Riess 0 is found and the tension is not so severely loosened; or the work, which excludes systematic bias or uncertainty in the Cepheid calibration step of the distance ladder measurement by. Finally, we recall the aforementioned recent study, where the authors run a new (dis)cordance test to compare the constraints on H 0 from different methods and conclude that the local measurement is an outlier compared to the others, what would favor a systematics-based explanation. Quite obviously, the search for a final solution to the H 0 tension is still work in progress. Conclusions The present updated analysis of the cosmological data SNIa+BAO+H(z)+LSS+CMB disfavors the hypothesis =const. as compared to the dynamical vacuum models (DVMs). This is consistent with our most recent studies. Our results suggest a dynamical DE effect near 3 within the standard XCDM parametrization and near 4 for the best DVMs. Here we have extended these studies in order to encompass the class of quasi-vacuum models (wDVMs), where the equation of state parameter w is near (but not exactly equal) to −1. The new degree of freedom w can then be used to try to further improve the overall fit to the data. But it can also be used to check if values of w different from −1 can relax the existing tension between the two sets of measurement of the H 0 parameter, namely those based: i) on the CMB measurements by the Planck collaboration, and ii) on the local measurement (distance ladder method) using Cepheid variables. Our study shows that the RVM with w = −1 remains as the preferred DVM for the optimal fit of the data. At the same time it favors the CMB measurements of H 0 over the local measurement. Remarkably, we find that not only the CMB and BAO data, but also the LSS formation data (i.e. the known data on f (z) 8 (z) at different redshifts), are essential to support the CMB measurements of H 0 over the local one. We have checked that if the LSS data are not considered (while the BAO and CMB are kept), then there is a unique chance to try to accommodate the local measurement of H 0, but only at the expense of a phantom-like behavior (i.e. 
for w < −1). In this region of the parameter space, however, we find that the agreement with the LSS formation data is manifestly lost, what suggests that the w < −1 option is ruled out. There is no other window in the parameter space where to accommodate the local H 0 value in our fit. In contrast, when the LSS formation data are restored, the fit quality to the overall SNIa+BAO+H(z)+LSS+CMB observations improves dramatically and definitely favors the Planck range for H 0 as well as smaller values for 8 as compared to the CDM. In short, our work suggests that signs of dynamical vacuum energy are encoded in the current cosmological observations. They appear to be more in accordance with the lower values of H 0 obtained from the Planck (CMB) measurements than with the higher range of H 0 values obtained from the present local (distance ladder) measurements, and provide smaller values of 8 that are in better agreement with structure formation data as compared to the CDM. We hope that with new and more accurate observations, as well as with more detailed analyses, it will be possible to assess the final impact of vacuum dynamics on the possible solution of the current tensions in the CDM. Note Added: Since the first version of this work appeared in preprint form, arXiv:1705.06723, new analyses of the cosmological data have appeared, in particular the one-year results by the DES collaboration (DES Y1 for short). They do not find evidence for dynamical DE, and the Bayes factor indicates that the DES Y1 and Planck data sets are consistent with each other in the context of CDM. However, in our previous works -see in particular -we explained why the Planck results did not report evidence on dynamical DE. For instance, in they did not use LSS (RSD) data, and in they only used a limited set of BAO and LSS points. In the mentioned works we have shown that under the same conditions we recover their results, but when we use the full data string, which involves not only CMB but also the rich BAO+LSS data set, we do obtain instead positive indications of dynamical DE. A similar situation occurs with DES Y1; they do not use direct f (z) 8 (z) data on LSS structure formation despite they recognize that smaller values of 8 than those predicted by the CDM are necessary to solve the tension existing between the concordance model and the LSS observations. In contrast, let us finally mention that our positive result on dynamical DE is consistent with the recent analysis by Gong-Bo Zhao et al., who report on a signal of dynamical DE at 3.5 c.l using similar data ingredients as in our analysis.
For many of us, our treasured cast-iron skillets were passed down from an earlier generation. I remember how I came into my first piece, but it wasn’t a family affair. Long ago, a friend had to leave New York City, where we were both living, to return to his home to Louisiana to settle a personal matter. He asked me to look after his apartment, belongings and beloved stereo set. Three months later, he returned, grateful for the favor. His aunt had passed the skillet to him for his bachelor pad. He stared, cringed and gulped, but it was too late. The offer was on the table, and so was the skillet. I still treasure that black beauty as the centerpiece of some half-dozen pieces in my collection of cast-iron cookware. Cast-iron cooking was and is a way of life. Since the days of the earliest settlers, traditional cooking and cast-iron cookware have been part of American culture. As different sections of the country were settled by people with unique customs and cultures, regional cuisines developed that were primarily cooked in cast iron. • In New England, the Pilgrims and the immigrants who followed gave us the New England boiled dinner, Yankee pot roast, Boston baked beans and Maine clam chowder. The one-pot meals that literally cook themselves were the order of those early days and are with us still. • In the early days of Appalachia, the wood stove accommodated cast-iron skillets for frying and a “Dutch” cast-iron oven for roasting meats and baking bread. Country and Southern recipes still include baked and fried country ham with red-eye gravy, Southern fried chicken and fried green tomatoes, country cured hams, homemade jams and stone-ground grits and cornmeal dishes. • The Charleston area of South Carolina is known for its game as well as seafoods, including oysters, shrimp and crabs. Rice pilaf is prominent in the cuisine, and okra, grits and corn bread are also defining dishes made in heavy black skillets. • The cowboy impact on the West and Southwest was fueled in cast-iron cookware in the chuck wagons. They cooked their game, chilies, beans and biscuits over hardwood fires and created a prairie cuisine that is still a part of Southwestern culture. • The Louisiana Delta has a treasure of unique food ways, all adapted to cast-iron cookware. The French explorers began arriving in the 1700s, and we can thank them for etouffee, cassoulets and bouillabaisse, the forerunner of Louisiana gumbo. The West African slaves brought with them knowledge of the sugar cane and rice plantations and also introduced okra, a key ingredient in gumbo. When French-speaking Canadians, now known as Cajuns, fled Nova Scotia, they used their creative ways with lobsters to cook crawfish. The Spanish immigrants couldn’t reproduce their paella, so they substituted ingredients at hand and came up with jambalaya. Louisiana sausages and meats, such as andouille, tasso and boudin, are all evidence of the German influence. One of the most important things about cast iron is its durability. Because cast iron heats evenly and retains heat better than today’s aluminum and stainless steel, food can be prepared at lower temperatures. Seasoned properly, cast iron creates its own nonstick coating. Seasoning is a process where the pores in cast iron absorb oil to create that satiny finish. Aged pieces already have the smooth black patina that gives cast iron its unique cooking surface, but most new cast-iron utensils don’t come that way. 
Wash, rinse and thoroughly dry the new skillet (or other piece) to remove the protective wax coating. Put a tablespoon of solid vegetable shortening in the utensil, but do not use a salted fat such as butter or margarine. Warm the pan to melt the shortening, then use a cloth or paper towel to coat with oil the entire surface of the pan, inside and out, corners, edges and lids. Allow the cookware to heat upside down in the oven for one hour at 350 degrees. It’s a good idea to place aluminum foil on the bottom of the oven to catch any drippings. Turning the piece upside down prevents the oil from building up inside the pan. Using oven mitts, remove the utensil from the oven and wipe it with a paper towel. Store in a dry place. A third way is to buy a new pan already seasoned. Just this year, Lodge Manufacturing Co. in South Pittsburg, Tenn., which has been in operation since 1896, began marketing a line of pre-seasoned pans. Lodge also makes unseasoned skillets, Dutch ovens, chicken fryers, griddles, bake ware and serving ware. For more information, call 423/837-7181 or go the Web site: www.lodgemfg.com. Jambalaya is perhaps the best-known rice dish in America. When the early Spanish settlers came to New Orleans, they brought with them the recipe for paella. They quickly learned to adapt a version using local ingredients. Oysters and crawfish replaced clams and mussels. Andouille sausage took the place of jambon, or ham. Since the main ingredient in the dish was rice, the dish was named “jambon a la yaya.” Yaya is a word for rice in some African languages. The rest is history. Season chicken with salt and black pepper. Heat oil in large pan or cast-iron Dutch oven over high heat until hot. Add chicken, stirring until brown on all sides. Add sausage and cook 2 to 3 minutes. Remove chicken and sausage from pan and set aside. To same pan, add chopped onion, green pepper, celery and garlic. Cook and stir over medium-high heat until crisp-tender. Stir in rice, red pepper, broth and reserved chicken and sausage. Bring to a boil. Reduce heat to low, cover and simmer 30 minutes. Stir in scallions and tomato. Garnish with celery leaves. Serve immediately. Makes 8 servings.
September 29, 2012 • Damian Lewis is a newly minted Emmy winner for his role as Nicholas Brody on Showtime's tense spy thriller Homeland. September 28, 2012 • Showtime's Homeland, which swept this year's Emmy Awards, returns this weekend — as does another Showtime drama, Dexter. Critic David Bianculli says there's a rich bounty of returning series — and Homeland is the "most topical and meaningful drama on television." September 27, 2012 • There have been a lot of different Sherlock Holmes adaptations over the years, but Lucy Liu is probably the first woman to play Dr. Watson. In the CBS drama Elementary, Holmes and Watson solve crimes in modern-day New York City. Liu says a female Watson is a "wonderful concept." September 25, 2012 • A recap of the music in season three's opening episode, featuring Glen David Andrews and "Frogman" Henry. September 25, 2012 • Andrew Wallenstein, a TV editor for Variety, has seen pilots for all of the new TV shows starting this fall. His favorite is Last Resort, although he doesn't think it will stay on TV for long. September 25, 2012 • The actress played Kelly Kapoor on The Office, a role she also wrote and produced. Now she runs a new Fox comedy, The Mindy Project, in which she stars as an obstetrician whose personal life is a mess. Kaling tells Fresh Air that her late mother inspired her character's career. September 24, 2012 • The big winners at the 2012 Emmy Awards included Homeland and the ABC comedy, Modern Family. Albert Ching points to the dominance of Modern Family at the Emmys as proof that the award show plays it too safe when it comes to comedy by repeatedly rewarding the same programs. September 24, 2012 • The new Showtime drama Homeland, and the ABC comedy Modern Family were the big winners at the Emmy awards last night. Guest host Celeste Headlee discusses the winners and losers with Sheila Marikar. Marikar is an entertainment reporter and producer for ABCNews.com and she was backstage at last night's big event. September 24, 2012 • A night of mostly expected winners and lackluster production made for a boring Emmys telecast. September 24, 2012 • Homeland, which puts the battle against terrorism on American soil, was honored Sunday night as best drama series at the Emmy Awards. The show's stars Claire Danes and Damian Lewis also won trophies. Modern Family was named best comedy. September 24, 2012 • Indian-American Mindy Kaling has her own show on Fox called The Mindy Project. Chinese-American Lucy Liu is Dr. Watson in a modern Sherlock Holmes story on CBS called Elementary. TV critic Eric Deggans says neither show gives any indication there might be anything culturally Indian or Chinese about their characters. September 23, 2012 • On Sunday's Weekend Edition, we'll talk to Linda Wertheimer about Emmy night, antiheroes, and family shows. September 21, 2012 • On this week's show, we jump into fall television and make some doomed predictions, and then we roll around in the agony of great breakup songs and scenes. September 20, 2012 • The Scottish actress plays Margaret Thompson, a young Irish widow who marries a corrupt politician on HBO's Boardwalk Empire. Macdonald, who got her start in Trainspotting, tells Fresh Air that she enjoys playing a "strong character" for a change.
Effect of an Essential Oil Mixture on Skin Reactions in Women Undergoing Radiotherapy for Breast Cancer Purpose: This pilot study compared the effects of an essential oil mixture versus standard care on skin reactions in breast cancer patients receiving radiation. Method: Using an experimental design, 24 patients were randomized to standard care (i.e., RadiaPlexRx™ ointment) or an essential oil mixture. Products were applied topically three times a day until 1 month postradiation. Weekly skin assessments were recorded and women completed patient satisfaction and quality of life (QOL) instruments at 3-, 6-, and 10-week intervals. Results: No significant differences were found for skin, QOL, or patient satisfaction at interim or follow-up time points. Effect sizes were as follows: skin = .01 to .07 (small to medium effect); QOL = .01 to .04 (small effect); patient satisfaction = .02 (small effect). Conclusion: The essential oil mixture did not provide a better skin protectant effect than standard care. These findings suggest the essential oil mixture is equivalent to RadiaPlexRx, a common product used as standard care, since it has been shown to be effective in protecting skin from radiation. Thus, this pilot provides evidence to support botanical or nonpharmaceutical options for women during radiotherapy for breast cancer.
He writes the songs that make people run. Some Rite Aid stores in California have taken to blasting Barry Manilow tunes as part of a plan to make loiterers scramble. Employees told the Wall Street Journal that the drugstore chain has been testing the tactic of playing songs by the 75-year-old crooner outside their stores — over and over, loudly — to deter panhandlers and vagrants. But the plan has also left neighbors mystified. “I thought some older man had died and left a Barry’s Most Depressing Hits CD on repeat,” said Lisa Masters, a professional drummer in Long Beach who couldn’t open her windows without hearing “Mandy” blaring. “I felt trapped in an episode of ‘The Twilight Zone,’” she said. When she called Rite Aid, she said an employee explained that the Manilow technique had worked at other locations and was now working in Long Beach. “His attitude was, ‘Would we rather have panhandlers or Manilow?’” Masters said. A Rite Aid spokeswoman told the paper that customers were finding it difficult to enter some of their stores because of loiterers, so the chain started employing a couple of deterrents — including Barry Manilow. “We are in the early stages of exploring this approach and have not made any decision about the potential rollout of this to additional stores,” she said. The Manilow technique has also been used at a Rite Aid in San Diego, according to WCPO. Over in Hollywood, neighbors have also been subjected to the Manilow music when strolling outside Rite Aid. John Fields, a record producer for the Jonas Brothers and Miley Cyrus, posted a video on YouTube showing him walking outside the drugstore as “Somewhere Down the Road” plays. “Another night with Barry Manilow,” he says. For some, the music is soothing. “It’s one of those beautiful moments,” LA comedian Debra DiGiovanni said of hearing “I Write the Songs” on repeat. Manilow, a Brooklyn-born star whose career has spanned more than 50 years, hasn’t heard about the tactic, his publicist said.
Azidothymidine (AZT) Inhibits Proliferation of Human Ovarian Cancer Cells by Regulating Cell Cycle Progression Background/Aim: Drug resistance is a significant cause of high mortality in ovarian cancer (OC) patients. The reverse transcriptase inhibitor azidothymidine (AZT) has been utilized as a treatment for tumors, but its role in OC treatment has not been revealed. The aim of the present in vitro study was to examine the influence of AZT on the growth of human OC cells and the involved proteins. Materials and Methods: The proliferation, cell cycle distribution, extent of apoptosis, mitotic index, and terminal restriction fragment length were examined in three OC cell lines, CaOV3, TOV112D, and TOV21G, treated with AZT. Results: AZT inhibited growth of the TOV21G and CaOV3 cell lines by regulating cell cycle distribution. Specifically, AZT caused G2/M phase arrest on TOV21G cells and S phase arrest on CaOV3 cells. In addition, AZT treatment induced up-regulation of p21 and p16 in the TOV21G and CaOV3 cell line, respectively. Conclusion: AZT inhibited cell proliferation in serous and clear cell OC via the regulation of cell cycle distribution.
A South African study on antecedents of intention to quit amongst employees in bed and breakfast establishments in the Free State province Bed and breakfasts (B&Bs) in South Africa are small accommodation establishments that strive to grow, whilst competing with large accommodation establishments (Hikido 2017). They are faced with several challenges such as low-paid employees who are not motivated to work because the work environment is stressful and demanding (Hlanyan & Acheampong 2017). Small accommodation businesses, especially B&Bs in South Africa, are also faced with the challenges of low income during off-seasons (Hsieh & Lin 2010), as well as competition from large accommodation establishments (Van ). It is, therefore, highly likely that B&B employees have high intentions to quit. Introduction Bed and breakfasts (B&Bs) in South Africa are small accommodation establishments that strive to grow, whilst competing with large accommodation establishments (Hikido 2017). They are faced with several challenges such as low-paid employees who are not motivated to work because the work environment is stressful and demanding (Hlanyan & Acheampong 2017). Small accommodation businesses, especially B&Bs in South Africa, are also faced with the challenges of low income during off-seasons (Hsieh & Lin 2010), as well as competition from large accommodation establishments (Van ). It is, therefore, highly likely that B&B employees have high intentions to quit. It, therefore, goes without saying that appreciating the antecedents to employees' intention to quit and the identification of strategies to retain such employees cannot be over-emphasised. Researchers (;Halawi 2014;Treuren & Frankish 2014) have pointed to a set of occupational, environmental and personal antecedents to employees' intention to quit. Job satisfaction and organisational commitment have been identified as the two common occupational antecedents to employees' intention to leave (Steil, Floriani & Bello 2019), and with reference to environmental antecedents, these include but are not limited to the existence of employment alternatives (), the economic development level, social security policy, employment policy, labour demand and supply conditions (Li & Lu 2014). According to Steil et al., age, schooling, sex, marital status, professional experience and family or kinship responsibility are the six personal antecedents that have been researched the most. Whilst considerable research has been conducted on antecedents to employees' intention to leave, there is a dearth of research on the antecedents of employees' intention to quit amongst small businesses, especially B&B establishments in the hospitality sector in South Africa. Whilst evidence is available on how organisation-related antecedents such as low compensation and those that are individual-related, for example, job satisfaction have been linked to intention to quit in large hospitality organisations in developing economies (), the same cannot be said of such a relationship amongst B&Bs found in either urban or rural contexts in developing economies. With such paucity of research, the objective of this study was, therefore, to determine the influence of selected individual (job satisfaction, organisational commitment and job stress) and organisational factors (human resource practices, quality of work environment and organisational structure) on intention to quit amongst employees in B&Bs in a particular district in the Free State province of South Africa. 
The choice of these factors is premised on empirical evidence showing that they have been mostly linked to several behavioural outcomes in the hospitality sector (Chen, Ayoun & Eyoun 2018;Grobelna, Sidorkiewicz & Tokarz-Kocik 2016). It has to be noted that employee turnover is costly to both parties in the employment relationship as the establishment has to invest more effort and time in acquiring, developing and retaining its employees (Grobler & De Bruyn 2011) and to employees, as it directly interferes with their career development. High turnover amongst employees leads to direct financial costs, and also results in below average organisational performance manifested by low quality, lessened efficiency, decreases in morale and even service disruptions (Jang & George 2012). The fact that employees have a high propensity to leave and look for alternative employment because of low salaries () gave impetus to the current inquiry. As such, the study sought to determine the impact of selected individual and organisational factors on intention to quit amongst employees in B &Bs from a selected district in the Free State province. Literature review Job satisfaction versus intention to quit Job satisfaction is a general expression of workers' positive attitudes towards their jobs (Thomas 2015). It is pivotal in achieving positive organisational outcomes such as increased productivity, but can also lead to detrimental outcomes such as intention to quit () if employees are not satisfied. Ibrar reported that job satisfaction does not only increase productivity but also decreases intention to quit amongst employees. This finding is corroborated by a study performed within the hospitality industry, which established that dissatisfied employees considered looking for alternative employment when they were not happy with working conditions, pay, relations with the supervisor and the job itself (Akgunduz & Sanli 2017). Similarly, Santerosanchez et al. posited that 24/7 working hours and high levels of job pressure in large established hotels are associated with job dissatisfaction, and ultimately intention to quit. Contrastingly, other researchers found out that job satisfaction had no direct relationship with intention to quit but mediates the relationship between creativity and intention to quit (Zhen, Mansorzd & Chong 2019). Notwithstanding this contradiction, there is also overwhelming evidence arguing that satisfaction levels differ by employee position, and, therefore, intentions to quit are not the same across all employees in any organisation (). Based on these findings, one can, therefore, conclude that job satisfaction has an impact on intention to quit amongst employees, even amongst B&B employees. The above evidence leads to the following hypothesis: H1: Job satisfaction has a significant positive effect on intention to quit. Organisational commitment versus intention to quit Organisational commitment is defined as: strong belief in and acceptance of the organisation's goals and values, a willingness to exert considerable effort on behalf of the organisation and a definite desire to maintain organisational membership. (Watson 2010, p.18) The concept is conceived as associated with intention to quit in the sense that the two concepts are directly opposite to each other -one concerns the notion of being attached to the organisation and the other being detached from an organisation (Kim, Song & Lee 2016). 
Organisational commitment can assume three dimensions: normative commitment, affective commitment and continuance commitment (Meyer & Allen 1997). Normative commitment reflects an employee's sense of duty to remain in employment, whilst affective commitment refers to the strength of an employee's identification with and involvement in an organisation (). Continuance commitment deals with the cognitive attachment that exists between an employee and the organisation as the costs of quitting outweigh the benefits (). Employees who do not have much organisational commitment may have a high propensity to leave the organisation (Gatling, Kang & Kim 2016). In contrast, employees with stronger organisational commitment are less likely to develop intentions to quit and leave the organisation (Wijnmaalen, Heyse & Voordijk 2016). Committed employees who feel much attached to an organisation and are extremely dedicated are less likely to show intent to quit because of their high level of commitment and willingness to invest more into the organisation. According to Kim, Im and Hwang, service industry jobs are associated with high-stress levels, -a factor that works against employee organisational commitment. This is supported by Jung and Yoon who contended that the service industries' dependence on human resources makes employee commitment critical to maintain, failure of which has negative outcomes detrimental to the organisation such as intention to quit might ensue. Related to this finding is a study which showed that whilst hospitality organisations can recruit talented and highly motivated employees, they seem to have difficulty in arousing their organisational commitment and, therefore, retain them (Chiang & Liu 2017). Because retaining employees is determined by how committed employees are (Jang & Kandampull 2018), hospitality organisations might need to ensure that there is commitment amongst their employees. As such, it is hypothesised that: H2: Organisational commitment has a significant negative effect on intention to quit. Job stress versus intention to quit Job stress has been defined as 'the pattern of emotional states and psychological reactions occurring in response to inability to cope with stressors from within or outside an organization' (Ekienabor 2017:124). The reactions, which lead to high job stress, are likely to result in low morale and high turnover amongst employees (Bowness 2017). Lo and Lamm established that working in the hospitality industry can be stressful because the industry is highly labour-intensive and has increasingly harsh environmental demands imposed upon it. Similarly, Namra and Tahira found that the nature of work within hotels and B&Bs, for example, includes frequent deadlines, unexpected interactions with guests, long working hours, night and evening work, repetitive work, high emotional demands, low influence (control), shift work, extensive work space and problems with coordination of work. Consequently, such demands on employees lead to work-related stress. Furthermore, evidence from a study by Mansor and Mohanna revealed that unpredictable and irregular working hours were stressful factors amongst hospitality organisations that led to increased intention to quit amongst employees. This was reaffirmed by Newnham who established that hospitality industry employees suffer from dissonance because they are required to display certain emotions all the time, which might not represent their actual feelings at any given time. 
The fact that these employees are continuously suppressing their actual emotions might lead to outcomes such as intentions to quit. The above review of literature led to the following hypothesis: H3: Job stress has a significant positive effect on intention to quit. Human resource practices versus intention to quit Human resource practices refer to those organisational activities which are directed at managing the pool of human resources and ensuring that the resources are employed towards fulfilment of organisational goals (Russo, Mascia & Morandi 2018). Some studies indicated that human resource activities linked to turnover intentions include the manner in which performance management is done (Nankervis & Debrah 2015) and how employee compensation is managed (Khaleefa & Al-Abdalaat 2017). Aside from pay, promotional opportunities, effective and supportive leadership, satisfactory compensation and work-group cohesion have the potential to influence turnover (). Akgunduz, Gok and Alkan also concluded that good monetary incentives offered to employees often reduced employees intentions to quit. A related study by Santhanam et al. amongst hotel frontline employees established that selection, training and compensation practices had an influence on turnover intentions. Interestingly, the study concluded that psychological contract breaches enhanced turnover intentions by employees, notwithstanding effective implementation of human resource management (HRM) practices. According to Lo and Lamm although employees in the hospitality industry are vulnerable in terms of poor working conditions and low wages, good compensation management practices might reduce their intentions to quit. However, Altarawmneh and Al-Kilan found out that despite the robust investment levels in HRM practices in Jordanian hotels, HRM practices themselves did not directly influence intentions to quit as employees' http://www.sajesbm.co.za Open Access intentions to quit may depend on other factors. Based on the discussions above, it is hypothesised that: H4: Human resource practices have a significant positive effect on intention to quit. Agbozo et al. confirmed that the QWE was a critical factor that influenced several outcomes, such as the level of satisfaction, intention to quit and motivation of the employees. The QWE is characterised by a good physical environment (e.g. in terms of heat or noise), a conducive psychological and social work environment (). Quality of work environment versus intention to quit Bednarska posited that the hospitality environment is one of the most complicated places to work in, with pressures on meeting expectations of customers and working long hours remaining foci pressure points for possible reactions by employees in the form of counterproductive work behaviours. Robinson et al. also acknowledged that shift work and the number of working hours put pressure on hospitality employees, significantly affecting their psychological, physical and emotional well-being, leading to turnover intentions. A small survey of employees in New Zealand by Markey, Ravenswood and Webber found that a majority of employees intending to quit viewed the QWE as poor. The results also showed that employees were likely to quit if they were stressed, if they were not a parent, if they experienced reduced job satisfaction and did not receive adequate important information, although the impact of the aforesaid factors was greater in workplaces with a good QWE. 
In view of this, it is hypothesised that:

H5: The QWE has a significant positive effect on intention to quit.

Organisational structure versus intention to quit

Organisational structure is defined as the formal system of authority relationships and tasks that control and coordinate employee actions and behaviour to achieve organisational goals (Valentina & Cvelbar 2018). Organisational structure is related to employee attitudes and behaviour in organisations (Valentina & Cvelbar 2018). A study by Hamzat et al. amongst library and information science professionals in Nigeria's Osun State private universities revealed that the ownership structure of the organisation has a direct influence on the professionals' turnover intention. A related study by Lensen established that although a flat or non-hierarchical organisation (characterised by few lines of command and few layers) leads to greater expectations regarding job satisfaction and is considered more attractive, it does not necessarily lead to lower turnover intentions. Regarding the functional structure, Cummings and Worley highlighted that its emphasis on central power discourages growth, diminishes self-confidence amongst employees and discourages them from becoming innovative and involved in business activities. Such a structure tends to work best in small-to-medium-sized enterprises like the B&B establishments under study (Sinha 2017). Another organisational structure, the customer-centric structure, which focuses on sub-units for the creation of solutions and the satisfaction of key customers, is considered appropriate for hotels as it aims at meeting customer needs (Cummings & Worley 2015). Previous researchers (Palacios-Marques, Guijarro & Carrilero 2016) have attested that this approach can deliver significant benefits to an organisation. The benefits include improved customer experience, reduced intentions to quit amongst employees, consistent engagement with customers and increased sales compared with other structural types such as the functional structure. Based on the above discussion, the following hypothesis has been proffered:

H6: Organisational structure has a significant positive effect on intention to quit.

Organisational and individual factors - More empirical evidence

Human resource practices and job satisfaction

Human resource (HR) practices and job satisfaction have long been studied and are assumed to be closely associated (Ahmed, Zaman & Khattak 2017; Cortini 2016; Kampkotter 2017), because it is believed that sound HR practices result in better levels of job satisfaction, which ultimately improves organisational performance. The aforesaid relationship has also been established within the hospitality sector in Portugal, Thailand (Ashton 2017) and Nigeria (Onyebu & Omotayo 2017). It is evident from these studies that the use of specific HR practices in organisations is associated with greater levels of job satisfaction. This evidence led to the following hypothesis:

H7: HR practices have a significant positive effect on job satisfaction.

Working environment versus organisational commitment

Another area of interest is how the working environment affects organisational commitment. However, this area has not enjoyed much empirical attention in the literature (Holston-Okae 2017). Yet, workplace environmental factors are essential elements that determine the level of employees' commitment, their concentration and performance, and the sustainability of a business (Funminiyi 2018).
Funminiyi argued that employees are always content when they feel that their immediate environment, both physical sensations and emotional states, is in line with their obligations. McCoy and Evans opined that the way employees connect with their organisation's immediate workplace environment influences to a great extent their commitment, efficiency, innovativeness, collaboration with other employees, absenteeism and, ultimately, their retention. In view of the above, it can be hypothesised that:

H8: The QWE has a significant positive effect on organisational commitment.

Organisational structure versus job stress

The way an organisation is structured and how it is run can have a significant positive effect on job stress amongst employees. A study by Namra and Tahira found that job stress amongst hospitality employees was associated with changes in management and a lack of participation in decision-making, which ultimately caused intention to quit because of employees' inability to be involved in the decision-making processes. Despite the different context, a related study by Daoli and Mohsenvand found that employees who were subjected to a centralised organisational structure experienced job stress, which then caused higher absenteeism, lower productivity, workplace aggression and increased intentions to quit. Highly centralised organisations often have low levels of flexibility, which often affect employees' stress levels as workers have limited autonomy and control over their work (Daoli & Mohsenvand 2017). Most small businesses usually follow a centralised structure (Vitez 2017); hence, one can conclude that a high level of stress develops amongst B&B employees, which might lead to increased intention to quit. It is, therefore, hypothesised that:

H9: Organisational structure has a significant positive effect on job stress.

Conceptual framework

The conceptual framework shown in Figure 1 depicts the researchers' understanding of how the variables in the current study connect with each other. Figure 1 summarises the hypothesised relationships discussed in the literature review section. It is hypothesised that employees' intention to quit might be influenced by two broad categories of factors: individual and organisational. No moderation influences were investigated. The framework further assumes that organisational factors have an impact on employees' individual factors.

Research design

A research design refers to the plan for the collection, measurement and analysis of data (Blumberg, Cooper & Schindler 2014). The study used the correlational ex post facto design. A correlational design attempts to describe relationships, rather than explain them (Gravetter & Forzano 2009). Although correlational research does not imply causality, it allows for predictions to be made even though one may not know why a relationship exists. The ex post facto design is pre-experimental, implying that it does not meet the scientific standards of experimental designs, nor does it involve a control group (De Vos, Strydom, Fouché & Delport 2011). The ex post facto design provides a means by which researchers may examine the degree to which an independent variable could affect the dependent variable(s) of interest. Kothari pointed out that there are two basic approaches to research: quantitative and qualitative. Researchers generally show a preference for either type of method, reflecting their research's philosophical point of view (Kothari 2004).
Proponents of the positivistic school of thought commonly use quantitative methods, whereas qualitative methods tend to be chosen by researchers with an interpretivist attitude (Moon & Moon 2004). The current study adopted the quantitative approach. Nykiel posited that quantitative research methods seek to establish facts, make predictions and test previously stated hypotheses, using a deductive approach to establish objective knowledge.

Data collection

Data were collected using a structured questionnaire. The questionnaire had an introductory section which summarised the research objectives and addressed ethical issues. Section A contained items on demographic information. Section B had three sub-sections, namely B1, B2 and B3, covering the selected individual factors (job satisfaction, organisational commitment and job stress). Respondents were required to indicate their level of agreement with given statements on a Likert scale ranging from 1 (Very Dissatisfied) to 5 (Very Satisfied) for the job satisfaction items, and from 1 (Strongly Disagree) to 5 (Strongly Agree) for the items measuring organisational commitment and job stress. Section C comprised items measuring the organisational factors (human resource practices, QWE and organisational structure), using a five-point Likert scale ranging from 1 (Strongly Disagree) to 5 (Strongly Agree). Lastly, the intention to quit items were measured using a three-point scale ranging from 1 (Always) to 2 (Mostly) and 3 (Never).

Data analysis

Data were captured using the Statistical Package for Social Sciences (SPSS) version 25. Descriptive statistics were used to analyse the demographic characteristics of the sample. Thereafter, the data were subjected to structural equation modelling (SEM) using AMOS to ascertain model fit.

Ethical considerations

Ethical approval to conduct the study was obtained from the Faculty Research and Innovation Committee (FRIC), Central University of Technology, Free State (reference number: FMSEC05/16). One of the researchers visited each business establishment in the province and requested employees to complete consent forms before their voluntary participation. The respondents were also informed of their right to withdraw from the study at any stage should they feel any harm or threat. In order to maintain confidentiality, data were stored in an aggregate form.

Exploratory factor analysis

An exploratory factor analysis (EFA) was performed to test the structure of the three main constructs, that is, organisational factors, individual factors and intention to quit. Such an analysis allowed for the empirical assessment of the validity of the scales used. The Kaiser-Meyer-Olkin (KMO) measure was calculated to ensure that the sample was adequate for factor analysis. Glen noted that the KMO test is a measure of how suited data are for factor analysis; it measures sampling adequacy for each variable in the model and for the complete model (Glen 2016). The suitability of the data was supported because the KMO value (0.871) exceeded the threshold of 0.6, and Bartlett's Test of Sphericity was significant (p < 0.001) (Pallant 2010). The KMO measure and Bartlett's Test of Sphericity therefore confirmed that the data were suitable for factor analysis. After ascertaining the suitability of the data, a principal component analysis with VARIMAX rotation was used to extract factors with eigenvalues above 1. However, the initial analysis showed poor results, as the items did not load well onto the factors.
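To make the factor-analysis steps just described concrete, the short Python sketch below runs the same sequence (KMO measure, Bartlett's Test of Sphericity, principal component extraction with VARIMAX rotation and the eigenvalue-above-1 rule) using the factor_analyzer package. It is illustrative only: the study itself used SPSS, and the file name responses.csv and the assumption that each column holds one Likert item are placeholders rather than the actual data.

import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

# Hypothetical input: one numeric column per Likert item (not the study's data).
items = pd.read_csv("responses.csv")

# Sampling adequacy: KMO above 0.6 and a significant Bartlett's test,
# the thresholds cited in the text.
kmo_per_item, kmo_overall = calculate_kmo(items)
bartlett_chi2, bartlett_p = calculate_bartlett_sphericity(items)
print(f"KMO = {kmo_overall:.3f}; Bartlett chi2 = {bartlett_chi2:.1f}, p = {bartlett_p:.4f}")

# Kaiser criterion: retain factors with eigenvalues above 1.
fa_unrotated = FactorAnalyzer(rotation=None)
fa_unrotated.fit(items)
eigenvalues, _ = fa_unrotated.get_eigenvalues()
n_factors = int((eigenvalues > 1).sum())

# Principal component extraction with VARIMAX rotation, as described in the text.
fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax", method="principal")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings.round(2))  # items loading below 0.5 are candidates for removal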
After presenting the eigenvalues, as indicated in the scree plot, the models (initial and refined) were examined as indicated in the next section.

Model fit indices

Before examining the model fit indices of the final measurement model, a univariate normality test was conducted to confirm whether the model could be estimated using the maximum likelihood method (Li & Malik 2018). The results are indicated below.

Normality test

A normality test is a statistical process used to determine if a sample or any group of data fits a standard normal distribution (Das & Imon 2016). A normality test can be performed mathematically or graphically. Generally, values for skewness and kurtosis between -2 and +2 are considered acceptable to demonstrate normal univariate distribution (Cain, Zhang & Yuan 2017). The results will not be affected by non-normality of the data for the items used to measure the constructs, as most of their skewness and kurtosis coefficients fall within the interval [-2; +2]. Because normality is supported, the maximum likelihood method was confidently used to assess the model fit of the initial model as indicated in Figure 2. The model presented in Figure 2 is the measurement model before refinement. Its Chi-square (χ²) was equal to 568.798, its p value was < 0.001 (significant) and its degrees of freedom (df) = 249. Although this initial model suggests a significant Chi-square, there was a need to further examine the model fit indices before concluding on the model, given that the Chi-square is very sensitive to sample size, which is why the Chi-square of large samples is often significant. A close diagnosis of the modification indices and the standardised residual covariance matrix retrieved from the IBM AMOS outputs suggested that some items should be deleted to improve the model fit indices. Items whose factor loadings were below 0.5 were also deleted. Two inter-item correlations pertaining to the same construct were added to improve the model fit using the IBM AMOS 'modification indices' function. The addition of the inter-item correlations reduced the measurement error and improved the internal consistency of the items (Ford, MacCallum & Tait 1986), which in turn enhanced the model fit. Following the aforementioned amendments, a final measurement model was designed (see Figure 3). The final measurement model in Figure 3 shows a significant and lower Chi-square (χ² = 242.110; p < 0.001; df = 130), which implies that this later version of the model is an improvement on the initial measurement model shown in Figure 2. The model fit indices of the final model are provided in Online Appendix 1.

Reliability analysis, convergent and discriminant validity assessment

The final measurement model illustrated in Figure 3 was considered as graphical evidence of convergent and discriminant validity. All the factor loadings were above 0.5, suggesting convergent validity of all the items. However, the inter-construct correlations above 0.8 suggest a discriminant validity concern across all three latent variables. Further robust statistical evidence is provided in Online Appendix 1, Table 2-A1 to establish the validity of all the research instruments used in the study. The table indicates good reliability for all the scales used in this study, as Cronbach alphas and composite reliability coefficients were both above 0.7 (Vaske, Beaman & Sponarski 2017).
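To make the reliability and validity thresholds used here concrete, the short Python sketch below computes composite reliability (CR), average variance extracted (AVE) and the Fornell-Larcker comparison (square root of the AVE against the highest inter-construct correlation). The loadings and the correlation value are hypothetical placeholders, not the estimates reported in Online Appendix 1.

import numpy as np

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
    loadings = np.asarray(loadings, dtype=float)
    error_variances = 1.0 - loadings ** 2
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + error_variances.sum())

def average_variance_extracted(loadings):
    # AVE = mean of the squared standardised loadings
    loadings = np.asarray(loadings, dtype=float)
    return float(np.mean(loadings ** 2))

# Hypothetical standardised loadings for one construct (placeholders only).
construct_loadings = [0.72, 0.68, 0.81, 0.77]
cr = composite_reliability(construct_loadings)          # should exceed 0.7
ave = average_variance_extracted(construct_loadings)    # should exceed 0.5

# Fornell-Larcker check: sqrt(AVE) should exceed the construct's highest
# correlation with any other construct.
highest_inter_construct_corr = 0.83                     # hypothetical value
discriminant_ok = np.sqrt(ave) > highest_inter_construct_corr

print(f"CR = {cr:.2f}, AVE = {ave:.2f}, sqrt(AVE) = {np.sqrt(ave):.2f}, "
      f"discriminant validity satisfied: {discriminant_ok}")

With these placeholder values the Fornell-Larcker condition fails (sqrt(AVE) is about 0.75 against a correlation of 0.83), which mirrors the kind of discriminant validity concern noted above when inter-construct correlations exceed 0.8.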
Online Appendix 1, Table 2-A1 also shows that the factor loadings of all constructs were above the recommended threshold of 0.5. Similarly, the average variances extracted (AVEs) of all constructs were above the required cut-off of 0.5 (Ahmad, Zulkurnain & Khairushalimi 2016). In addition, the overall result indicates good reliability for all the scales involved in this study (see Online Appendix 1), as Cronbach alphas and composite reliability coefficients were both above 0.7 (Bagozzi & Yi 1988). All the estimates are provided in Online Appendix 1, Table 3-A1. Discriminant validity is assessed through a comparison between the square root of the AVE estimates and the highest inter-construct correlation of the specific construct (Malhotra, Nunan & Birks 2017). The square root of the AVE is expected to be above all the inter-construct correlation values.

Structural equation modelling

The structural model was tested using the maximum likelihood method in AMOS 25. The structural model for the study is presented in Figure 4.

Hypotheses testing

The study tested hypotheses in relation to whether selected individual and organisational factors have an influence in determining intention to quit amongst B&B employees. Regression analysis was used to determine the impact of the independent variables of the study on the dependent variables. The results are presented in Online Appendix 1, Table 6-A1.

Regression weights

The model explains up to 92.2% of the variance in intention to quit and 89.4% of the variance in the individual factors. The details of the impact of each independent variable are provided in Online Appendix 1, Table 4-A1. The regression weights shown in Online Appendix 1, Table 4-A1 indicate that the employee individual factors did not have a significant effect on intention to quit (β = 0.495, p > 0.05). This means that improving the employee individual factors does not translate into changes in intention to quit. The regression results also showed that organisational factors had a positive and significant effect on intention to quit (β = 0.814, p < 0.01). This means that when the organisational factors go up by 1 standard deviation (SD), intention to quit is expected to go up by 0.814 of its own SD, the effect being significant at the 1% level. The specific hypotheses results relating to specific factors are presented below.

Discussion of findings

Job satisfaction versus intention to quit

Online Appendix 1, Table 4-A1 revealed that job satisfaction (one of the individual factors) did not have an influence on intention to quit. We, therefore, accept the null hypothesis and reject the alternative hypothesis (H1), which states that job satisfaction has a significant positive effect on intention to quit. Most studies that investigated the impact of job satisfaction on intention to quit have been conducted amongst large hospitality organisations, such as hotels. For instance, past studies (Albattat, Som & Helalat 2013; Sangaran & Jeetesh 2015) found that intention to quit occurs when there is dissatisfaction in the job. These findings are contrary to those of the current study, possibly because, given the high unemployment rate in South Africa (Wang 2019), employees could be left with little choice but to stay. However, the current study's results corroborate those of Holston-Okae, who found that job satisfaction within the hospitality industry correlates inversely with employee intention to quit to a statistically significant degree.
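As an illustration of how the measurement and structural model described above (organisational factors influencing individual factors, and both influencing intention to quit) could be specified outside AMOS, the sketch below uses the open-source semopy package with a lavaan-style model string. The construct and item names (hr1, qwe1, sat1, quit1 and so on) are invented placeholders, and the snippet is a sketch under those assumptions rather than a reproduction of the authors' AMOS analysis.

import pandas as pd
import semopy

# Lavaan-style description; construct and item names are hypothetical.
MODEL_DESC = """
org_factors =~ hr1 + hr2 + qwe1 + qwe2 + struct1 + struct2
ind_factors =~ sat1 + sat2 + commit1 + commit2 + stress1 + stress2
quit_intent =~ quit1 + quit2 + quit3
ind_factors ~ org_factors
quit_intent ~ org_factors + ind_factors
"""

def fit_structural_model(data: pd.DataFrame) -> pd.DataFrame:
    # Fit the model with semopy's default (maximum-likelihood-based) objective
    # and report fit statistics such as chi-square, df, CFI and RMSEA.
    model = semopy.Model(MODEL_DESC)
    model.fit(data)
    print(semopy.calc_stats(model).T)
    # Standardised estimates play the role of the regression weights reported above.
    return model.inspect(std_est=True)

# estimates = fit_structural_model(pd.read_csv("responses.csv"))

The returned table of standardised estimates is the open-source counterpart of the AMOS regression-weight output interpreted in the preceding paragraphs; AMOS remains the tool actually used in the study.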
Organisational commitment versus intention to quit

Results in Online Appendix 1, Table 4-A1 show that all aspects of another individual factor, organisational commitment (affective, continuance and normative), had no significant positive influence on intention to quit. We, therefore, accept the alternative hypothesis (H2), which states that organisational commitment has a significant negative effect on intention to quit. This is contrary to a study conducted by Ghosh and Gurunathan that established organisational commitment as one of the major factors that influence intention to quit amongst employees in the hospitality industry. Although Johansson revealed that in most American small businesses a lack of organisational commitment is one of the factors that causes intention to quit, no such effect was evident in the current study.

Job stress versus intention to quit

Results in Online Appendix 1, Table 4-A1 revealed that job stress (one of the individual factors) had no significant association with B&B employees' intention to quit. This led us to accept the null hypothesis and reject the alternative hypothesis (H3), which states that job stress has a significant positive influence on intention to quit. These findings are contrary to a study by Hang-yue, Foley and Loi, who found that job stress exerts a significant positive effect on intention to quit. The current study's results are also contrary to previous studies (Namra & Tahira 2017), which revealed that hospitality work can be stressful, leading to increased intention to quit amongst employees.

Human resource practices versus intention to quit

This study established that selected human resource practices had a significant positive influence on intention to quit amongst B&B employees. This led us to accept the alternative hypothesis (H4), which states that human resource practices have a significant positive effect on intention to quit. A study by Santhanam et al. showed that human resource practices such as compensation, performance management and recruitment can affect employee outcomes such as intention to quit. Amongst other findings, satisfactory compensation was found to reduce intention to quit and lead to employee retention (Pohler & Schmidt 2015). Afsar, Shahjehan and Shah observed that employees who perceive their organisation to be in poor financial condition and do not receive their desired compensation may anticipate future layoffs and may pre-emptively leave. These findings underscore the importance of human resource practices in small businesses.

Quality of work environment versus intention to quit

Regarding the QWE and employees' intention to quit, a statistically significant positive relationship was found between the two. Therefore, the alternative hypothesis (H5), which states that the QWE has a significant positive effect on intention to quit, was accepted. These findings support the conclusions previously reported by Robinson et al. that a dynamic and interactive environment may lead to higher job satisfaction of employees and, ultimately, employee retention, amongst other organisational benefits. Similarly, Kang, Busser and Choi noted that unfavourable perceptions of work environments lead to negative workplace outcomes, including turnover, whilst meaningful working environments deter intention to quit in a variety of job contexts.

Organisational structure versus intention to quit

Organisational structure was found to have had an impact on intention to quit. This led to the acceptance of the hypothesis (H6), which states that organisational structure has a significant positive effect on intention to quit.
According to Katsikea, Theodosiou and Morgan, organisations that focus on a team-based approach rather than the typical hierarchical structure are in a better position to retain their employees. Afsar et al. called such organisations high performance organisations, that is, organisations that try to bring out the best in individuals and create an exceptional capability to deliver high-end results. Cummings and Worley stated that high performance organisations include organisations with divisional and process structures. A study by Katsikea et al. found that hospitality organisations that follow divisional and process structures have lower rates of intention to quit than those with a functional structure. Work in the hospitality sector, especially in hotels and B&Bs, follows the divisional structure approach, with, say, a frontline section, a kitchen section and so on.

Association between organisational and individual factors

Online Appendix 1, Table 5-A1 also indicates that organisational factors had a positive and significant effect on employee individual factors (β = 0.946, p < 0.05). This means that when the organisational factors go up by 1 SD, the corresponding hypothesised individual factors are expected to go up by 0.946 of their own SD, the effect being significant at the 5% level. The specific hypotheses results with corresponding factors are presented below.

HR practices versus job satisfaction

Selected human resource practices (compensation management, performance management and recruitment) were found to be associated with job satisfaction. This led to the acceptance of the alternative hypothesis (H7), which states that HR practices have a significant positive effect on job satisfaction. Haider et al. maintained that total compensation planning can help improve job satisfaction and increase employee retention. Hospitality organisations, such as B&Bs, might have to ensure that their compensation policies are satisfactory to employees. In addition, one other HR practice, performance management, plays an important role in employee motivation and satisfaction (Ushus & Johney 2015). According to Ushus and Johney, when employees receive a high-quality appraisal experience, they tend to feel satisfied with and motivated about their jobs and the tasks given to them. Likewise, recruitment has been found to play a positive role in ensuring worker performance and positive job satisfaction outcomes (Agoi 2016). It is often claimed that recruitment of workers occurs not just to replace departing employees or add to a workforce but, rather, aims to put in place workers who can perform at a high level and demonstrate commitment, thereby leading to high levels of job satisfaction (Ballantyne 2014).

Quality of work environment versus organisational commitment

Quality of work environment was found to have an influence on organisational commitment amongst B&B employees. The alternative hypothesis (H8), which states that the QWE has a significant positive effect on organisational commitment, was accepted. The results are in tandem with Hanaysha, who maintained that internal work events and elements of the work environment shape employees' commitment to an organisation. The present findings also support the empirical evidence that the work environment has the most significant influence on organisational commitment amongst hospitality employees (Ebrahim 2014).
This is corroborated by Lee, Back and Chan, who reported that in the hospitality industry, employees' poor commitment to an organisation is closely linked to excess work, work pressure and difficult customers, which are all aspects of the work environment. It is, therefore, evident that the QWE can lead to organisational commitment amongst B&B employees.

Organisational structure versus job stress

The present study showed that organisational structure has a significant positive effect on job stress, and the alternative hypothesis (H9) was therefore accepted. These results support the conclusions by Kanten, Kanten and Gurlek that employees in high performance organisations (those that follow divisional or decentralised structures) possess greater levels of organisational trust than employees in traditional hierarchical organisations, thereby leading to reduced stress levels. Conversely, Cummings and Worley acknowledged that the goals that traditional organisations (those following formalised and functional structures) tend to focus on are primarily how well the company is doing (business goals), whereas high performance organisations' goals tend to be more related to customer satisfaction, improving employee skills and adapting to change within the workplace. The former type of structure is likely to result in increased stress, whilst the latter is likely to result in reduced stress and intention to quit amongst employees. Considering that B&B workspaces are more divisional and decentralised in structure, the current results are not surprising.

Conclusions and recommendations

This study investigated the impact of selected individual and organisational factors on intention to quit amongst employees in one of the districts in South Africa. Specifically, the study measured job satisfaction, organisational commitment and job stress (individual factors), as well as HR practices, QWE and organisational structure (organisational factors), and their impact on intention to quit. The results showed that organisational factors have a significant positive effect on intention to quit amongst B&B employees. In terms of individual factors, the results showed that there were no significant relationships between the selected factors (job satisfaction, organisational commitment and job stress) and intention to quit amongst B&B employees. These results contradicted the research hypotheses and the literature reviewed, which found that the selected individual factors and intention to quit were significantly correlated. It is, therefore, recommended that B&B owners/managers put more focus on improving organisational factors, so as to allow effective implementation of retention strategies. For example, continuously creating and maintaining a conducive work environment within B&B establishments might result not only in minimising intentions to quit but also in improved business reputation and performance. Effective HR practices, such as satisfactory compensation, could also play an important role in assisting managers to retain their skilled employees, as it is evident that satisfactory compensation packages and benefits influence employee motivation and loyalty and result in low levels of intentions to quit amongst employees (Khaleefa & Al-Abdalaat 2017). It is also recommended that owners or managers of B&Bs conduct needs analyses within their organisations to identify areas for improvement. This could prevent high rates of intention to quit within these establishments.
Future studies might consider conducting similar research in other districts or provinces in South Africa with larger sample sizes for more robust findings. Further studies could also consider demographic factors, such as age, which are likely to influence intention to quit amongst employees in B&B establishments. It could also be interesting to contrast intentions to quit amongst employees in B&B establishments with those of employees in large companies.

Limitations of the study

The study was based on B&B establishments in a selected district in the Free State province of South Africa, which means that the results might have limited applicability to other, similar establishments. The other limitation lies in the fact that there was a discriminant validity concern in the final measurement model for all constructs (organisational factors, intention to quit and individual factors). It is, however, encouraging to note that despite the discriminant validity concern, convergent validity was met; therefore, the results of the present study might serve as a basis upon which similar studies can be established.
The plastic spinal cord: functional and structural plasticity in the transition from acute to chronic pain Abstract Chronic pain is a major health problem and a challenge to clinical practice and basic science. Various avenues in the somatosensory nociceptive pathway undergo extensive plasticity in pathological disease states. Disease-induced plasticity spans various levels of complexity, ranging from individual molecules, synapses, cellular function and network activity, and is characterized not only by functional changes, but also by structural reorganisation. Functional plasticity has been well-studied at the first synapse in the pain pathway in the spinal dorsal horn, and recent studies have also uncovered mechanisms underlying structural remodeling of spinal synaptic spines. This review will focus on plasticity phenomena in the spinal cord observed in chronic pain models and discuss their molecular determinants, functional relevance and potential towards contributing to existing as well as novel therapeutic concepts.
As shown in FIG. 1A, the light-emitting path of conventional light-emitting elements, such as the light-emitting diode (LED) 15, is perpendicular to the light-emitting surface of the light-emitting elements, and the energy distribution thereof is of the Lambertian type. The conventional light-emitting elements are usually applied to traffic signals, illuminators or other guide signs. However, as shown in FIG. 1B, when multiple LEDs 15 and 15′ are arranged together to mix optical properties, such as light intensity or light color, the mixing effect can only be obtained at a given distance away from the light-emitting surface. An invalid distance D1 is the distance within which the mixing effect does not appear. If the light distribution pattern of the emitted light can be flattened, the invalid distance D1 will be shortened considerably. As shown in FIG. 2, U.S. Pat. No. 6,598,998 B2, entitled “Side Emitting Light Emitting Device” and issued to Lumileds Co., discloses a conventional side-emitting light-emitting diode having a special packaging lens. This kind of side-emitting light-emitting diode has multiple refractive surfaces 14, which are oblique with respect to the optical axis L of the packaging lens. Although most of the light energy passes through the refractive surfaces 14, which form the side surface of the packaging lens, and is emitted therefrom, a small portion of the light energy (less than 10%) is still emitted from the top of the packaging lens (i.e. along the direction of the optical axis L). Therefore, a light-shield sheet is stuck onto the top of the packaging lens of this kind of side-emitting light-emitting diode to reflect back the light proceeding upward. Referring to FIG. 3, U.S. Pat. No. 6,679,621 B2, entitled “Side Emitting LED and lens” and issued to the abovementioned Lumileds Co., discloses another packaging lens. The packaging lens comprises an incident surface 10, a reflective surface 11, a first refractive surface 12 and a second refractive surface 13. After light from a light source enters the packaging structure through the incident surface 10, it proceeds primarily along two paths P1 and P2 and is emitted thereby. Along the path P1, the light enters the packaging structure through the incident surface 10, is reflected by the reflective surface 11 via total internal reflection and then penetrates the first refractive surface 12. Along the path P2, the light enters the packaging structure through the incident surface 10 and penetrates the second refractive surface 13 directly. However, the abovementioned conventional technology has the following problems. First, when the light reflected by the reflective surface 11 is not incident on the first refractive surface 12 at a right angle, energy loss results; further, when the incident angle is too great, total internal reflection occurs. Second, a portion of the light reflected by the first refractive surface 12 penetrates the reflective surface 11 and forms light spots or light halos above the LED. This results in light energy loss and an undesired light distribution pattern, and requires a shielding sheet or a diffusive sheet. Third, the intersection formed by the reflective surface 11 and the first refractive surface 12 is an acute angle, which results in a fragile structure of the packaging lens.
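The lens behaviour discussed above hinges on two elementary relations: Snell's law at the refractive surfaces and the critical angle that triggers total internal reflection at the reflective surface. The short Python sketch below evaluates both, assuming a refractive index of 1.5 for the packaging material; that value is a typical figure for epoxy or PMMA encapsulants, not one taken from the cited patents.

import numpy as np

N_LENS = 1.5   # assumed refractive index of the packaging material
N_AIR = 1.0

def critical_angle_deg(n_inside=N_LENS, n_outside=N_AIR):
    # Incidence angles above this value undergo total internal reflection.
    return float(np.degrees(np.arcsin(n_outside / n_inside)))

def exit_angle_deg(incidence_deg, n_inside=N_LENS, n_outside=N_AIR):
    # Snell's law: n_inside * sin(theta_i) = n_outside * sin(theta_t).
    s = n_inside / n_outside * np.sin(np.radians(incidence_deg))
    return None if s > 1.0 else float(np.degrees(np.arcsin(s)))

print(f"critical angle: {critical_angle_deg():.1f} degrees")
for theta in (20.0, 40.0, 45.0, 60.0):
    print(theta, exit_angle_deg(theta))   # None means the ray is totally reflected

With n = 1.5 the critical angle is roughly 41.8 degrees, so rays striking an internal surface beyond that angle stay inside the lens; this is the mechanism the reflective surface 11 exploits, and it is also why rays meeting a refractive surface at too steep an angle are lost to total internal reflection, as noted in the first problem above.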
Public safety personnel, such as police officers, firefighters, paramedics and the like, as well as business-critical users such as manufacturing, hospital, and public service workers, typically utilize multiple communication devices. While some of these devices may operate on government or enterprise networks with emergency features, others may not. Public safety communication devices include, for example, mobile radios such as handheld radios and/or vehicular radios, along with remote accessories such as remote microphones, speakers, earpieces, headsets and the like. This type of equipment (considered primary mission critical devices) and the infrastructure to support its operation are typically provided utilizing a Private Network governed by a public safety agency. Primary devices for use in mission critical applications often include a user interface having an emergency button for transmitting an emergency alert notification, as well as push-to-talk (PTT) capability which allows a worker to request additional back-up resources in high stress situations. The additional, non public safety types of devices often utilized by public safety personnel, such as cell phones, personal digital assistants and electronic notepads which operate over a Public Carrier network, are considered non-mission critical devices. These secondary, non-mission critical devices do not provide a user interface suited to high stress emergency environments. Public safety personnel often call upon back-up resources in a dangerous situation. However, when PTT voice requests for back-up resources are communicated over a radio system or cellular network, requests may not be heard by dispatchers or work partners due to coverage holes, network congestion, talk-over, or work partner task focus. The public safety worker who presses the emergency button or push-to-talk button to request back-up may not be able to readily repeat the request when operating in a high stress situation. Additionally, the public safety worker may not be aware that the request has failed, further jeopardizing safety. Accordingly, there is a need for an improved communication system which will enhance emergency and back-up requests in public safety applications.
IT is 40 years since a Conwy Valley engineer began the excavation of a motoring masterpiece. Owen Wyn Owen, who lives with wife Carol in Capel Curig, dug up the remains of the Higham Special – known affectionately as Babs – on April 1, 1969, after a three-day excavation. Babs broke the land speed record at 171mph in April 1926, on the beach at Pendine in South Wales, and it’s there its wreckage was buried the following year after the death of driver John Godfrey Parry-Thomas. The Wrexham flyer took Babs to her limit and was killed when the car overturned at around 170mph while trying to win the record back from Malcolm Campbell and Bluebird. Mr Owen, who suffers from Parkinson’s Disease, is now planning to publish a book charting the history of the car and Parry-Thomas, and hopes the 40th anniversary of its rebirth will make Welsh people sit up and take notice of both man and machine. “There should be a statue of him at Pendine and in Wrexham, we should make more of what he achieved but he never gets a mention,” said the 83-year-old. He added: “I’m writing a book about Babs and reclaiming her for Wales. I’ve written around 10,000 words and it’s the whole story about Pendine and the restoration.” A member of the British Sports Car Council committee, Mr Owen has asked them to create an award in honour of Parry-Thomas. He said it seems like only yesterday that he dug up the vehicle, shrouded by press activity and an intrigued public. It took three years to restore it. “I remember the day I dug her up like it was yesterday, then I brought the car back to Capel Curig and brought it back to life,” he added. “It was hard work but I’ve always loved cars, especially vintage cars, and this was a real labour of love.” Mr Owen’s home is a shrine to vintage cars, with sepia-soaked walls of photographs, models and even a wheel chain from the 1927 wreckage. In the garage are three classic cars, most notably a sleek blue 1955 Delage. He still loves to tinker with old motors but finds it difficult given his condition. “I have Parkinson’s Disease now so it gets frustrating and very hard, I still like to fix old cars up but it’s tougher,” said Mr Owen. “I’ve had a pacemaker fitted as well so it can be hard but I’ve still got so much passion for motors.” The white and blue speedster is now looked after by Owen’s son Geraint, who keeps it in his Hereford garage. However, it is taken to Pendine Sands every summer and put on display. Its history began in 1923, when Count Louis Vorrow Zborowski and his engineer Clive Gallop – who designed Chitty Chitty Bang Bang – built the original Babs with a war surplus American Liberty V12 aero engine. Following Zborowski’s death in the 1924 Italian Grand Prix, the almost unused Higham Special was bought for £125 by Parry-Thomas. After the accident, Babs was buried in the Pendine sands for the next 42 years. It was then that Mr Owen received permission to excavate what was left of the damaged car.
Q: What is the word for a phrase repeated over and over? I remember learning such a word in my studies of drama and poetry. I am referring to the following example, I want to deconstruct the IBM commercial directed by Jim Henson. http://www.wired.com/wiredenterprise/2014/03/tech-time-warp-henson-ibm/ The phrase "do the paperwork" is used again and again. I leapt to "mantra" but that suggests a repetition for the purpose of summarising a belief and training one to believe it. I also thought of "slogan" but that usually has an overt sometimes political message. This is a word that would describe the fact that this phrase has been repeated frequently without suggesting much at all about the purpose of the phrase or its repetition. Perhaps a technical or domain word from textual analysis or literary criticism. Something highly specific I believe is what may describe the word I am trying to find. Thank you. A: Was it 'motif', 'theme', or perhaps 'refrain'? 'Leitmotif'? ... 2. (Literary & Literary Critical Terms) an often repeated word, phrase, image, or theme in a literary work [Collins]
Sufficient conditions for the global stability of nonautonomous higher order difference equations We present some explicit sufficient conditions for the global stability of the zero solution in nonautonomous higher order difference equations. The linear case is discussed in detail. We illustrate our main results with some examples. In particular, the stability properties of the equilibrium in a nonlinear model in macroeconomics is addressed.
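As a purely numerical illustration of the kind of stability question the abstract refers to, the Python sketch below iterates a nonautonomous second-order linear difference equation x(n+1) = a(n) x(n) + b(n) x(n-1) and checks that solutions decay to zero. The coefficients are chosen so that |a(n)| + |b(n)| stays below 1 for all n, a classical sufficient condition for the zero solution to be globally asymptotically stable; this is an example of the general setting, not the specific conditions derived in the paper.

import numpy as np

def simulate(a, b, x0=1.0, x1=0.5, steps=200):
    # Iterate x(n+1) = a(n) * x(n) + b(n) * x(n-1).
    x = np.empty(steps)
    x[0], x[1] = x0, x1
    for n in range(1, steps - 1):
        x[n + 1] = a(n) * x[n] + b(n) * x[n - 1]
    return x

# Nonautonomous coefficients chosen so that |a(n)| + |b(n)| <= 0.9 < 1 for all n.
a = lambda n: 0.5 + 0.2 * np.sin(n)
b = lambda n: 0.2 * np.cos(n)

trajectory = simulate(a, b)
print("max |x(n)| over the last 50 steps:", np.abs(trajectory[-50:]).max())

Under this contraction-type condition the printed maximum is effectively zero, consistent with global asymptotic stability of the zero solution; the paper's explicit sufficient conditions cover more general higher order, nonautonomous and nonlinear cases.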