Palmetto Ridge High School wrestler Cullen Guerrero has been voted the Naples Daily News Athlete of the Week sponsored by Babcock Ranch for Jan. 21-26. Guerrero won two matches at 106 pounds last week during the state duals tournament, helping Palmetto Ridge advance to the regional finals. The junior received 6,457 of the 13,725 votes (47.1 percent) in the Daily News' online poll. Naples High girls basketball player Sydney Zerbe was second in the poll with 4,087 votes (29.8 percent), followed by Immokalee High boys basketball player Izayel Louis Pierre with 2,639 votes (19.2 percent).
The Novel Use of Resuscitative Endovascular Balloon Occlusion of the Aorta to Explore a Retroperitoneal Hematoma in a Hemodynamically Unstable Patient Balloon occlusion of the aorta was first described by C.W. Hughes in 1954, when it was used as a tamponade device for three soldiers wounded during the Korean War who were suffering from intra-abdominal hemorrhage. Currently, the device is indicated in trauma patients as a surrogate for resuscitative thoracotomy. Brenner et al. reported a case series describing the use of resuscitative endovascular balloon occlusion of the aorta (REBOA) in advanced hemorrhagic shock. Their conclusion was that it is a feasible method for proximal aortic control. We describe the novel use of REBOA before retroperitoneal hematoma exploration in a hemodynamically unstable patient. We report a 19-year-old blunt trauma victim in whom REBOA was successfully deployed as a means of proximal arterial control before a Zone 1 retroperitoneal hematoma exploration. The source of the patient's hemorrhagic shock was multifactorial: grade V hepatic injury, retrohepatic inferior vena cava laceration, and right renal vein avulsion with Zone 1 retroperitoneal hematoma. There was an immediate return of perfusion pressure, with systolic pressure increasing from 50 to 150 mm Hg. Hemodynamic improvements were accompanied by decreased transfusion and vasopressor requirements. In addition, the surgeons were able to enter the retroperitoneal hematoma under controlled conditions. REBOA is an attractive new tool to gain proximal aortic control in select patients with hemorrhagic shock. It is less morbid, possibly more efficient, and appears to be more effective than resuscitative thoracotomy. REBOA is certainly feasible for proximal aortic control before retroperitoneal exploration, and should be considered in select patients.
43. The psychopath in history From "The Mask of Sanity" by Hervey M. Cleckley Over a period of many decades psychiatrists, and sometimes other writers, have made attempts to classify prominent historical figures (rulers, military leaders, famous artists and writers) as cases of psychiatric disorder or as people showing some of the manifestations associated with various psychiatric disorders. Many professional and lay observers in recent years have commented on the sadistic and paranoid conduct and attitudes reported in Adolf Hitler and in some of the other wartime leaders in Nazi Germany. Walter Langer, the author of a fairly recent psychiatric study, arrives at the conclusion that Hitler was "probably a neurotic psychopath bordering on schizophrenia," that "he was not insane but was emotionally sick and lacked normal inhibitions against antisocial behavior."177 A reviewer of this study in Time feels that Hitler is presented as "a desperately unhappy man … beset by fears, doubts, loneliness and guilt [who] spent his whole life in an unsuccessful attempt to compensate for feelings of helplessness and inferiority."281 Though the term psychopath is used for Hitler in this quotation it seems to be used in a broader sense than in this volume. Hitler, despite all the unusual, unpleasant, and abnormal features reported to be characteristic of him, could not, in my opinion, be identified with the picture I am trying to present. Many people whose conduct has been permanently recorded in history are described as extremely abnormal in various ways. Good examples familiar to all include Nero and Heliogabalus, Gilles de Rais, the Countess Elizabeth Báthory and, of course, the Marquis de Sade. I cannot find in these characters a truly convincing resemblance that identifies them with the picture that emerges from the actual patients I have studied and regarded as true psychopaths.203 In the lives of many painters, sculptors, poets and other writers who have gained a place in history we find reports of inconsistency and irresponsibility that sometimes do suggest the typical psychopath. Benvenuto Cellini, whose story has been recorded in such detail by his own hand, seems in more respects, perhaps, than any other creative artist who gained lasting renown to have followed a pattern similar to that of my patients. Nevertheless he worked consistently enough to produce masterpieces that centuries later are still cherished.41,120 Let us turn now to a much earlier historical figure, a military leader and statesman who is not likely to be forgotten while civilization as we know it remains on earth. I first encountered him during a course in ancient history when I was in high school. I had not at that time heard of a psychopath. The teacher did not try to classify him medically or explain his paradoxical career in psychological terms. I felt, however, that this gifted teacher shared my interest and some of my bewilderment as the brilliant, charming, capricious, and irresponsible figure of Alcibiades unfolded in the classroom against the background of Periclean Athens. None of my immature concepts of classification (good man, bad man, wise man, foolish man) seemed to define Alcibiades adequately, or even to afford a reliable clue to his enigmatic image. The more I read about him and wondered about him, the more he arrested my attention and challenged my imagination.
All reports agreed that he was one of the chief military and political leaders of Athens in her period of supreme greatness and classic splendor during the fifth century B.C. This man led me to ponder at a very early age on many questions for which I have not yet found satisfactory answers. According to my high school history book,26 He belonged to one of the noblest families of Athens, and was a near kinsman of Pericles. Though still young, he was influential because of his high birth and his fascinating personality. His talents were brilliant in all directions; but he was lawless and violent, and followed no motive but self-interest and self-indulgence. Through his influence Athens allied herself with Argos, Elis, and Mantinea against the Lacedaemonians and their allies. [p. 224] The result of this alliance led Athens into defeat and disaster, but Alcibiades on many occasions showed outstanding talent and succeeded brilliantly in many important affairs. Apparently he had great personal charm and easily aroused strong feelings of admiration and affection in others. Though usually able to accomplish with ease any aim he might choose, he seemed capriciously to court disaster and, perhaps at the behest of some trivial impulse, to go out of his way to bring down defeat upon his own projects. Plutarch refers to him thus:242 It has been said not untruly that the friendship which Socrates felt for him has much contributed to his fame, and certain it is, that, though we have no account from any writer concerning the mother of Nicias or Demosthenes, of Lamachus or Phormion, of Thrasybulus or Theramenes, notwithstanding these were all illustrious men of the same period, yet we know even the nurse of Alcibiades, that her country was Lacedaemon, and her name Amycla; and that Zopyrus was his teacher and attendant; the one being recorded by Antisthenes, and the other by Plato. (p. 149) In the Symposium,241 one of his most celebrated dialogues, Plato introduces Alcibiades by having him appear with a group of intoxicated revelers and burst in upon those at the banquet who are engaged in philosophical discussion. Alcibiades, as presented here by Plato, appears at times to advocate as well as symbolize external beauty and ephemeral satisfactions as opposed to the eternal verities. Nevertheless, Plato gives Alcibiades the role of recognizing and expounding upon the inner virtue and spiritual worth of Socrates and of acclaiming this as far surpassing the readily discerned attainments of more obviously attractive and superficially impressive men. Plato devotes almost all of the last quarter of the Symposium to Alcibiades and his conversation with Socrates. His great charm and physical beauty are emphasized repeatedly here. The personal attractiveness of Alcibiades is also dwelt upon by Plutarch:242 It is not, perhaps, material to say anything of the beauty of Alcibiades, only that it bloomed with him at all stages of his life, in his infancy, in his youth, and in his manhood; and, in the peculiar character belonging to each of these periods, gave him in every one of them, a grace and charm. What Euripides says: "of all fair things the autumn, too, is fair" … is by no means universally true. But it happened so with Alcibiades amongst few others. … [pp. 149-150] Early in his career he played a crucial role in gaining important victories for Athens.
Later, after fighting against his native city and contributing substantially to her final disaster, he returned to favor, won important victories again for her and was honored with her highest offices. In the Encyclopaedia Britannica (1949) I read: Alcibiades possessed great charm and brilliant abilities but was absolutely unprincipled. His advice whether to Athens or to Sparta, oligarchs or democrats, was dictated by selfish motives, and the Athenians could never trust him sufficiently to take advantage of his talents. And Thucydides says:280 They feared the extremes to which he carried his lawless self-indulgence, and … though his talents as a military commander were unrivalled, they entrusted the administration of the war to others; and so they speedily shipwrecked the state. Plutarch repeatedly emphasizes the positive and impressive qualities of Alcibiades:242 It was manifest that the many wellborn persons who were continually seeking his company, and making their court to him, were attracted and captivated by his brilliant and extraordinary beauty only. But the affection which Socrates entertained for him is a great evidence of the natural noble qualities and good disposition of the boy, which Socrates, indeed, detected both in and under his personal beauty; and, fearing that his wealth and station, and the great number both of strangers and Athenians who flattered and caressed him, might at last corrupt him, resolved, if possible, to interpose, and preserve so hopeful a plant from perishing in the flower, before its fruit came to perfection. [p. 151] The same writer also cites many examples of unattractive behavior, in which Alcibiades is shown responding with unprovoked and arbitrary insolence to those who sought to do him honor. Let us note one of these incidents:242 As in particular to Anytus, the son of Anthemion, who was very fond of him and invited him to an entertainment which he had prepared for some strangers. Alcibiades refused the invitation, but having drunk to excess in his own house with some of his companions, went thither with them to play some frolic, and standing at the door of the room where the guests were enjoying themselves and seeing the tables covered with gold and silver cups, he commanded his servants to take away the one-half of them and carry them to his own house. And, then, disdaining so much as to enter into the room himself, as soon as he had done this, went away. The company was indignant, and exclaimed at this rude and insulting conduct; Anytus, however, said, on the contrary, that Alcibiades had shown great consideration and tenderness in taking only a part when he might have taken all. [p. 152] Despite his talents and many attractive features some incidents appear even in his very early life that suggest instability, a disregard for accepted rules or commitments and a reckless tendency to seize arbitrarily what may appeal to him at the moment. Plutarch tells us:242 Once being hard pressed in wrestling, and fearing to be thrown, he got the hand of his antagonist to his mouth, and bit it with all his force; when the other loosed his hold presently, and said, "You bite, Alcibiades, like a woman." "No," replied he, "like a lion." [p. 150] On another occasion it is reported that Alcibiades with other boys was playing with dice in the street. A loaded cart which had been approaching drew near just as it was his turn to throw.
To quote again from Plutarch:242 At first he called to the driver to stop, because he was to throw in the way over which the cart was to pass; but the man giving him no attention and driving on, when the rest of the boys divided and gave way, Alcibiades threw himself on his face before the cart and, stretching himself out, bade the carter pass on now if he would; which so startled the man, that he put back his horses, while all that saw it were terrified, and, crying out, ran to assist Alcibiades. [p. 150] Alcibiades, one of the most prominent figures in Athens, an extremely influential leader with important successes to his credit, became the chief advocate for the memorable expedition against Sicily. He entered enthusiastically into this venture, urging it upon the Athenians partly from policy, it seems, and partly from his private ambition. Though this expedition resulted in catastrophe and played a major role in the end of Athenian power and glory, many have felt that if Alcibiades had been left in Sicily in his position of command he might have led the great armada to victory. If so, this might well have insured for Athens indefinitely the supreme power of the ancient world. The brilliant ability often demonstrated by Alcibiades lends credence to such an opinion. On the other hand, his inconsistency and capriciousness make it difficult, indeed, to feel confident that his presence would necessarily have brought success to the Athenian cause. The magnitude of its failure has recently drawn this comment from Peter Green in Armada From Athens:100 It was more than a defeat; it was a defilement. There, mindless, brutish, and terrified, dying like animals, without dignity or pride, were Pericles' countrymen, citizens of the greatest imperial power Greece had ever known. In that ... destruction ... Athens lost her imperial pride forever. The shell of splendid self-confidence was shattered: something more than an army died in Sicily. [p. 336] Athens' imperial pride had been destroyed and her easy self-assertion with it. Aegospotami merely confirmed the ineluctable sentence imposed on the banks of the Assinarus. Pindar's violet-crowned city had been cut down to size and an ugly tarnish now dulled the bright Periclean charisma. The great experiment in democratic imperialism, that strangest of all paradoxes, was finally discredited. [p. 353] If Athens had succeeded in the expedition against Syracuse the history of Greece and perhaps even the history of all Europe might have been substantially different. Shortly before the great Athenian fleet and army sailed on the Sicilian expedition an incident occurred that has never been satisfactorily explained. Now when Athens was staking her future on a monumental and dangerous venture there was imperative need for solidarity of opinion and for confidence in the three leaders to whom so much had been entrusted. At this tense and exquisitely inopportune time the sacred statues of Hermes throughout the city were mutilated in a wholesale desecration. This unprovoked act of folly and outrage disturbed the entire populace and aroused superstitious qualms and fears that support of the gods would be withdrawn at a time of crucial need. Alcibiades was strongly suspected of the senseless sacrilege.
Though proof was not established that he had committed this deed which demoralized the Athenians, the possibility that Alcibiades, their brilliant leader, might be guilty of such an idle and irresponsible outrage shook profoundly the confidence of the expeditionary force and of the government. Many who knew him apparently felt that such an act might have been carried out by Alcibiades impulsively and without any adequate reason but merely as an idle gesture of bravado, a prank that might demonstrate what he could get away with if it should suit his fancy. Definite evidence emerged at this time to show that he had been profaning the Eleusinian mysteries by imitating them or caricaturing them for the amusement of his friends. This no doubt strengthened suspicion against him as having played a part in mutilating the sacred statues. On a number of other occasions his bad judgment and his self-centered whims played a major role in bringing disasters upon Athens and upon himself. Though this brilliant leader often appeared as a zealous and incorruptible patriot, numerous incidents strongly indicate that at other times he put self-interest first and that sometimes even the feeble lure of some minor objective or the mere prompting of caprice caused him to ignore the welfare and safety of his native land and to abandon lightly all standards of loyalty and honor. No substantial evidence has ever emerged to indicate that Alcibiades was guilty of the sacrilegious mutilation of the statues. He asked for an immediate trial, but it was decided not to delay the sailing of the fleet for this. After he reached Syracuse, Alcibiades was summoned to return to Athens to face these charges. On the way back he deserted the Athenian cause, escaped to Sparta, and joined the enemy to fight against his native city. It has been argued that Alcibiades could not have been guilty of the mutilation since, as a leader of the expedition and its chief advocate, he would have so much to lose by a senseless and impious act that might jeopardize its success. On the other hand his career shows many incidents of unprovoked and, potentially, self-damaging folly carried out more or less as a whim, perhaps in defiance of authority, or as an arrogant gesture to show his immunity to ordinary rules or restrictions. It sometimes looked as though the very danger of a useless and uninviting deed might, in itself, tempt him to flaunt a cavalier defiance of rules that bind other men. If Alcibiades did play a part in this piece of egregious folly it greatly augments his resemblance to the patients described in this book. Indeed it is difficult to see how anyone but a psychopath might, in his position, participate in such an act. In Sparta Alcibiades made many changes to identify himself with the ways and styles of the enemy. In Athens he had been notable for his fine raiment and for worldly splendor and extravagance. On these characteristics Plutarch comments thus:242 But with all these words and deeds and with all this sagacity and eloquence, he mingled the exorbitant luxury and wantonness in his eating and drinking and dissolute living; wore long, purple robes like a woman, which dragged after him as he went through the marketplace, caused the planks of his galley to be cut away, that he might lie the softer, his bed not being placed on the boards but hanging upon girths. His shield, again, which was richly gilded, had not the usual ensigns of the Athenians, but a Cupid holding a thunderbolt in his hand, was painted upon it.
The sight of all this made the people of good repute in the city feel disgust and abhorrence and apprehension also, at his free living and his contempt of law as things monstrous in themselves and indicating designs of usurpation. [pp. 161-162] In contrast to his appearance and his habits in the old environment we find this comment by Plutarch on Alcibiades after he had deserted the Athenian cause and come to live in Sparta and throw all his brilliant talents into the war against his native land: 242 The renown which he earned by these public services, not to Athens, but to Sparta, was equaled by the admiration he attracted to his private life. He captivated and won over everybody by his conformity to Spartan habits. People who saw him wearing his hair cut close and bathing in cold water, eating coarse meal and dining on black broth, doubted, or rather could not believe that he had ever had a cook in his house or had ever seen a perfumer or had ever worn a mantle of Milesian purple. For he had, as it was observed, this peculiar talent and artifice of gaining men's affection, that he could at once comply with and really embrace and enter into the habits and ways of life, and change faster than the chameleon; one color, indeed, they say, the chameleon cannot assume; he cannot himself appear white. But, Alcibiades, whether with good men or with bad, could adapt himself to his company and equally wear the appearances of virtue or vice. At Sparta, he was devoted to athletic exercises, was frugal and reserved: in Ionia, luxurious, gay and indolent; in Thrace, always drinking; in Thessaly, ever on horseback; and when he lived with Tisaphernes, the king of Persia's satrap, he exceeded the Persians themselves in magnificence and pomp. Not that his natural disposition changed so easily, nor that his real character was so variable, but whenever he was sensible that by pursuing his own inclinations he might give offense to those with whom he had occasion to converse, he transformed himself into any shape and adopted any fashion that he observed to be agreeable to them. [pp. 169-170] At Sparta Alcibiades seemed to strive in every way to help the enemy defeat and destroy Athens. He induced them to send military aid promptly to the Syracusans and also aroused them to renew the war directly against Athens. He made them aware of the great importance of fortifying Decelea, a place very near Athens, from which she was extremely vulnerable to attack. The Spartans followed his counsel in these matters and, by taking the steps he advised, wrought serious damage to the Athenian cause. The vindictive and persistent efforts of this brilliant traitor may have played a substantial part in the eventual downfall of Athens. Even before he left Sicily for Sparta Alcibiades had begun to work against his native land in taking steps to prevent Messina from falling into the hands of the Athenians. Eventually a good many of the Spartans began to distrust Alcibiades. Among this group was the king, Agis. According to Plutarch:242 ... While Agis was absent and abroad with the army, [Alcibiades] corrupted his wife, Timaea, and had a child born by her. Nor did she even deny it, but when she was brought to bed of a son, called him in public, Leotychides, but amongst her confidants and attendants, would whisper that his name was Alcibiades, to such a degree was she transported by her passion for him.
He, on the other side, would say in his valiant way, he had not done this thing out of mere wantonness of insult, nor to gratify a passion, but that his race might one day be kings over the Lacedaemonians. [p. 170] It became increasingly unpleasant for Alcibiades in Sparta despite his great successes and the admiration he still evoked in many. Plutarch says:242 But Agis was his enemy, hating him for having dishonored his wife, but also impatient of his glory, as almost every success was ascribed to Alcibiades. Others, also, of the more powerful and ambitious among the Spartans were possessed with jealousy of him and prevailed with the magistrates in the city to send orders ... that he should be killed. [p. 171] Alcibiades, however, learned of this, and fled to Asia Minor for security with the satrap of the king of Persia, Tisaphernes. Here he found security and again displayed his great abilities and his extraordinary charm. According to Plutarch:242 [He] immediately became the most influential person about him; for this barbarian [Tisaphernes], not being himself sincere, but a lover of guile and wickedness, admired his address and wonderful subtlety. And, indeed, the charm of daily intercourse with him was more than any character could resist or any disposition escape. Even those who feared and envied him, could not but take delight and have a sort of kindness for him when they saw him and were in his company, so that Tisaphernes, otherwise a cruel character, and above all other Persians, a hater of the Greeks, was yet so won by the flatteries of Alcibiades that he set himself even to exceed him in responding to them. The most beautiful of his parks containing salubrious streams and meadows where he had built pavilions and places of retirement, royally and exquisitely adorned, received by his direction the name of Alcibiades and was always so called and so spoken of. Thus, Alcibiades, quitting the interest of the Spartans, whom he could no longer trust because he stood in fear of Agis, the king, endeavored to do them ill offices and render them odious to Tisaphernes, who, by his means, was hindered from assisting them vigorously and from finally ruining the Athenians. For his advice was to furnish them but sparingly with money and so wear them out, and consume them insensibly; when they had wasted their strength upon one another, they would both become ready to submit to the king. [p. 171] It is not remarkable to learn that Alcibiades left the service of the Persians. It does seem to me remarkable, however, after his long exile from Athens, his allegiance to her enemies and the grievous damage he had done her, that he was enthusiastically welcomed back to Athens, that he again led Athenian forces to brilliant victories, and that he was, indeed, given supreme command of the Athenian military and naval forces. His welcome back to Athens was enthusiastic. According to Plutarch, 242 "The people crowned him with crowns of gold, and created him general, both by land and by sea." He is described as "coming home from so long an exile, and such variety of misfortune, in the style of revelers breaking up from a drinking party." Despite this, many of the Athenians did not fully trust him, and apparently without due cause, this time, he was dismissed from his high position of command. He later retired to Asia Minor where he was murdered at 46 years of age, according to some reports for "having debauched a young lady of a noble house."
Despite the widespread admiration that Alcibiades could so easily arouse, skeptical comments were made about him even before his chief failures occurred. According to Plutarch, "It was not said amiss by Archestratus, that Greece could not support a second Alcibiades." Plutarch also quotes Timon as saying, "Go on boldly, my son, and increase in credit with the people, for thou wilt one day bring them calamities enough." Of the Athenians' attitude toward Alcibiades, Aristophanes wrote: "They love and hate and cannot do without him."242 The character of Alcibiades looms in the early dawn of history as an enigmatic paradox. He undoubtedly disconcerted and puzzled his contemporaries, and his conduct seems to have brought upon him widely differing judgments. During the many centuries since his death historians have seemed fascinated by his career but never quite able to interpret his personality. Brilliant and persuasive, he was able to succeed in anything he wished to accomplish. After spectacular achievement he often seemed, carelessly or almost deliberately, to throw away all that he had gained, through foolish decisions or unworthy conduct for which adequate motivation cannot be demonstrated and, indeed, can scarcely be imagined. Senseless pranks or mere nose-thumbing gestures of derision seemed at times to draw him from serious responsibilities and cause him to abandon major goals as well as the commitments of loyalty and honor. Apparently his brilliance, charm, and promise captivated Socrates, generally held to be the greatest teacher and the wisest man of antiquity. Though Alcibiades is reported to have been the favorite disciple and most cherished friend of the master it can hardly be said that Socrates succeeded in teaching him to apply even ordinary wisdom consistently in the conduct of his life or to avoid follies that would have been shunned even by the stupid. According to the Encyclopaedia Britannica (1949), "He was an admirer of Socrates, who saved his life at Potidaea (432), a service which Alcibiades repaid at Delium; but he could not practice his master's virtues, and there is no doubt that the example of Alcibiades strengthened the charges brought against Socrates of corrupting the youth." When we look back upon what has been recorded of Alcibiades we are led to suspect that he had the gift of every talent except that of using them consistently to achieve any sensible aim or in behalf of any discernible cause. Though it would hardly be convincing to claim that we can establish a medical diagnosis, or a full psychiatric explanation, of this public figure who lived almost two and a half thousand years ago, there are many points in the incomplete records of his life available to us that strongly suggest Alcibiades may have been a spectacular example of what during recent decades we have, in bewilderment and amazement, come to designate as the psychopath. During this brief period Greece, and Athens especially, produced architecture, sculpture, drama, and poetry that have seldom if ever been surpassed. Perhaps Greece also produced in Alcibiades the most impressive and brilliant, the most truly classic example of this still inexplicable pattern of human life. Fifth Edition Copyright 1988 Emily S. Cleckley Previous edition copyrighted 1941, 1950, 1955, 1964, 1976 by the C.V. Mosby Co.
Cleckley, Hervey Milton, 1903-1984. The Mask of Sanity. ISBN 0-9621519-0-4. Scanned facsimile produced for non-profit educational use by Quantum Future School.
The accuracy of magnetic resonance imaging diagnosis of non-osseous knee injury at Steve Biko Academic Hospital Background Preoperative magnetic resonance imaging (MRI) has internationally been proven to reduce unnecessary knee arthroscopies and assist with surgical planning. This has the advantage of avoiding unnecessary surgery and the associated anaesthetic risk, as well as reducing costs. No data were found in the recently published literature assessing the accuracy of MRI interpretation of knee ligament injury in the public sector locally. Objectives This pilot study aimed to determine the accuracy of MRI in detecting non-osseous knee injury in a resource-limited tertiary-level academic hospital in Pretoria, South Africa, compared to the gold standard arthroscopy findings. Method This was an exploratory retrospective analysis of 39 patients who had MRI and arthroscopy at Steve Biko Academic Hospital (SBAH). True positive, true negative, false positive and false negative results were extrapolated from findings in both modalities and translated into sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) for each structure. Results Negative predictive values were recorded as 97%, 81%, 90% and 100% (anterior cruciate ligament, medial meniscus, lateral meniscus and posterior cruciate ligament, respectively), which were comparable to recently published international literature. The PPV results, at 55%, 58%, 55% and not applicable, were lower than those previously evaluated. The sensitivities and specificities of the ligaments were 83%, 58%, 83% and not applicable; and 87%, 81%, 70% and not applicable, respectively. Conclusion Magnetic resonance imaging was found to be sensitive and specific, with a high NPV noted in all structures evaluated. Negative results can therefore be used to avoid unnecessary surgery to the benefit of the patient and state. The study reiterates that high accuracy can be obtained from MRI on a 1.5-tesla non-dedicated scanner, with interpretation by generalist radiologists. Introduction The knee joint is primarily a hinge joint; however, its flexibility allows for a wide range of further movements, but at the expense of stability. 1 This accounts for a large presenting patient population with traumatic and non-traumatic ligament injuries, including sports injuries. 2 Internationally, the high prevalence of knee ligament derangement is widely accepted, with significant associated costs for diagnosis and treatment. 3,4 The American College of Radiology Appropriateness Criteria suggest magnetic resonance imaging (MRI) as the primary radiological investigation for suspected non-osseous knee injury, furthermore suggesting that primary clinical examination after injury has shown a low diagnostic yield. 5,6 The yield from the first clinical assessment alone, direct to arthroscopy, is low (between 35% and 70%). 7 Higher accuracy from clinical assessment has been described (75%-96%) by Rayan et al., reaffirming referral to MRI following a specialist review. 8 Magnetic resonance imaging is the gold standard imaging investigation. The modality is unparalleled in the evaluation of non-osseous knee structure derangement, being well suited for high-resolution assessment of the musculoskeletal (MSK) system, including muscle, tendon, ligament and occult bone injuries. 9,10,11 Arthroscopy is considered the gold standard in terms of definitive diagnosis of internal derangements of the knee.
It is both sensitive and specific, furthermore being both diagnostic and therapeutic. 12 If one considers that arthroscopy is an invasive procedure (with a 4.7% reported complication rate) that may yield a negative result, a major disadvantage, in addition to possible surgical and anaesthetic complications for the patient, is the expense of theatre cost and inpatient stay. 13 A 1997 study suggested a $680 (USD) saving when MRI was performed before arthroscopy, and that 42% of patients could potentially have avoided surgery altogether based on the MRI results. 4 Arthroscopy is also user dependent and subject to its own unique set of errors, limiting its accuracy. 14 The effective use of preoperative MRI has been proven to reduce unnecessary surgical arthroscopies and assist with preoperative planning. 4,5,6 This includes preoperative planning in situations including arthrofibrosis, or specific pathologies including ramp lesions or meniscal root attachment tears. These are important review areas for the radiologist and may prove to be 'blind spots' for the orthopaedic surgeon, depending on the arthroscopic approach used. Recent studies comparing MRI and clinical examination to arthroscopy have shown up to 100% negative predictive value (NPV) for an anterior cruciate ligament (ACL) injury and 96% NPV for meniscal injury. 12 A negative MRI would reduce unnecessary arthroscopies and has the added benefit of avoiding the associated costs, including theatre use, hospital stay and post-operative management. To the benefit of the patient, both surgical and anaesthetic complications may altogether be avoided. 4 International articles have previously examined the accuracy of MRI by using arthroscopy as the gold standard. Crawford et al., in a review, examined previous literature on MRI against arthroscopy by making use of a 'modified Coleman' methodology to identify scientifically credible and reproducible articles on the topic. Sixty-three articles were divided into two groups; their extrapolated findings demonstrated the MRI accuracy in detecting medial meniscus (MM), ACL and posterior cruciate ligament (PCL) injuries. Sensitivities were 91%, 76% and 86%, respectively; specificities were 81%, 93% and 95%, respectively; positive predictive value (PPV) was 83%, 80% and 82%, respectively; and NPV was 90%, 91% and 96%, respectively. 7 A study by Singla and Kansal compared ACL, PCL, MM and lateral meniscus (LM) MRI findings with those of arthroscopy. The ranges for results for the respective structures were: sensitivities of 76%-89.1%, specificities of 71.4%-94.3%, PPV of 66.7%-85.2% and NPV of 76.9%-97.1%. 15 A prospective study by Madhusudhan et al. compared clinical examination, MRI and arthroscopy. When compared to arthroscopy, ACL imaging was 91.8% specific with a 94% NPV, whilst the corresponding results for meniscal imaging were 50% and 31%. 16 Meniscus tears, however, showed a higher PPV of 75%. 16 These results conflict with other literature, in which it was found that a definitive MRI diagnosis of a meniscal tear was made 95% of the time. 17 The South African healthcare system is vastly different in the private and public sectors, with limitations of resources and large patient loads in the latter sector. In the public sector, this may translate into long delays between clinical diagnosis and elective special investigations given the large population it serves.
In Steve Biko Academic Hospital (SBAH), MSK imaging is performed, amongst other examinations, on a non-dedicated low-field 1.5-tesla (T) MRI scanner, and findings are reported by general radiologists with interest and experience; however, there is no accredited subspeciality training in MSK imaging. Much international literature exists outlining the accuracy of MRI in the diagnosis of non-osseous knee structure disruption, and the benefits to patients and hospitals alike. No local data are present, given the resource limitations at most South African tertiary state hospitals. The aim of this pilot study was to determine the accuracy (sensitivity, specificity, PPV and NPV) of MRI in detecting non-osseous knee structure injury (ACL, PCL, MM and LM) in a resource-limited tertiary-level academic hospital in Pretoria, South Africa, as compared to the arthroscopy findings. Methods This was a retrospective analysis, comparing the MRI knee reports documenting non-osseous internal derangements of the knee with the corresponding knee arthroscopy report findings at SBAH, a tertiary care hospital. Adult patients (18 years and older) who had received a knee arthroscopy (left or right knee) preceded by an MRI of the corresponding knee for the period of 01 January 2013 to 01 March 2018 were included in the study. Sampling method All patients at SBAH had an MRI preceding the arthroscopy as outlined by the arthroplasty department, eliminating bias of severity of injury. A total of 39 patients, dictated by the patient records found that met both the inclusion and exclusion criteria in the given 5-year period, were included. Four structures were assessed, namely ACL, PCL, MM and LM. Thereafter, the results were classified into true positive (TP), true negative (TN), false positive (FP) and false negative (FN) results. The total findings were extrapolated into sensitivity, specificity, PPV and NPV. Data analysis The descriptive statistics mean, median, standard deviation and inter-quartile range were used to describe any continuous variables. Frequencies and proportions were used to describe the categorical variables, such as the presence of a tear on MRI or arthroscopy. Sensitivity and specificity, along with positive and negative predictive values, were calculated by using arthroscopy as the gold standard. Cohen's kappa statistics were calculated to test the agreement between MRI and arthroscopy. Tests were evaluated at the 5% level of significance. Four parameters, namely sensitivity, specificity, PPV and NPV, were calculated to assess the reliability of the MRI results. Ethical consideration Ethics approval was granted by the University of Pretoria Faculty of Health Sciences Research Ethics Committee. The ethics committee was asked for a waiver of patient consent as this was a retrospective study. Consent was obtained from the Chief Executive Officer of SBAH to use the reports from the hospital picture archiving and communication system (PACS) (MRI findings), as well as from the patient records (arthroscopy findings). The ethics protocol number is 442/2018. Results A total of 39 patients who had arthroscopy at SBAH were evaluated. Twenty-six patients were female and 13 were male (F:M = 2:1). The ages of the patients ranged from 18 to 69 years, with a mean age of 36. Sixty-nine per cent of the patients were 40 years old or younger.
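Before turning to the per-structure results, the four parameters described under 'Data analysis' can be made concrete. The short Python sketch below is illustrative only (it is not the study's code, and the example counts are hypothetical); it shows how sensitivity, specificity, PPV and NPV follow from a single structure's 2x2 table when arthroscopy is taken as the gold standard.

```python
# Illustrative only: how sensitivity, specificity, PPV and NPV follow from a
# single structure's 2x2 table, with arthroscopy taken as the gold standard.

def diagnostic_metrics(tp, fp, fn, tn):
    """Return the four accuracy parameters for one knee structure.

    A zero denominator (e.g. no tears seen at arthroscopy) returns None,
    mirroring the 'not applicable' entries reported for the PCL.
    """
    def ratio(numerator, denominator):
        return round(numerator / denominator, 2) if denominator else None

    return {
        "sensitivity": ratio(tp, tp + fn),  # true positive rate
        "specificity": ratio(tn, tn + fp),  # true negative rate
        "ppv": ratio(tp, tp + fp),          # MRI-positive cases confirmed at arthroscopy
        "npv": ratio(tn, tn + fn),          # MRI-negative cases confirmed at arthroscopy
    }

# Hypothetical counts for one structure, for illustration only.
print(diagnostic_metrics(tp=10, fp=8, fn=2, tn=19))
# {'sensitivity': 0.83, 'specificity': 0.7, 'ppv': 0.56, 'npv': 0.9}
```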
Considering arthroscopy as the gold standard, MRI findings for derangements of four structures (ACL, PCL, MM and LM) were compared to their corresponding arthroscopic theatre reports. A total of 76.9% (30 of 39) of the patients had a positive result (some structural tear) noted between the two examinations, with nine of the 39 patients having all four structures intact on both studies. True positive, TN, FP and FN results were recorded for each ligament per patient, and respective sensitivity, specificity, PPV and NPV were calculated (see Table 1). With regard to the anterior cruciate ligament, medial meniscus, lateral meniscus and posterior cruciate ligament, the findings at MRI and arthroscopy are presented in Figure 1. The resulting true positive, true negative, false positive and false negative results are presented in Figure 2. This equated to an 83%, 58%, 83%, not applicable sensitivity and 87%, 81%, 70%, not applicable specificity, respectively. A PPV of 0.55, 0.58, 0.55, not applicable and NPV of 0.97, 0.81, 0.90, 1.0 were extrapolated. Kappa testing, which accounts for the possibility of chance agreement as a more robust means to measure inter-observer agreement, was calculated for each of the ligaments. A kappa value of 0.59, 0.39 and 0.47 was found for the ACL, MM and LM, respectively. Landis and Koch regard values in the range of 0.21-0.40 as fair agreement and values in the range of 0.41-0.60 as moderate agreement. 18 The PCL, however, which had two tears noted on MRI and none on arthroscopy, had a 0.00 Cohen kappa value (Figure 3). Discussion of results In our study, 39 patients were managed at SBAH; the results for NPV were recorded as 97%, 81%, 90% and 100% for ACL, MM, LM and PCL, respectively. The NPV demonstrates the proportion of negative MRIs which were TN. In our study, we found that if a patient had no tear noted on MRI, they were highly unlikely to have a tear seen on arthroscopy. This was in keeping with previous studies that demonstrated the high NPV of MRI in knee ligament derangement. 7,12,15 In one study, NPVs were recorded as 90% (MM), 91% (ACL) and 96% (PCL) 7 (Figure 4). The PPV is the proportion of tears reported on MRI that were true tears confirmed on arthroscopy. The PPV results were 55%, 58%, 55% and not applicable (ACL, MM, LM and PCL, respectively), which were lower than comparative previous literature. In a comparative study, the PPV for the four structures was between 66% and 85%. 15 The sensitivities and specificities of the ligaments were 83%, 58%, 83% and not applicable and 87%, 81%, 70% and not applicable (ACL, MM, LM and PCL, respectively). Sensitivity (TP rate) is the measure of how well MRI correctly identifies a tear as such. Specificity (TN rate) is the ability to exclude injury, by using the arthroscopy results as the gold standard. The sensitivities and specificities of the ligaments were comparable to previous evidence. One study, which used a 'modified Coleman method' to extrapolate the results of 63 similar credible international studies comparing MRI against arthroscopy, found the sensitivities to be 76%, 91% and 86% and the specificities to be 93%, 81% and 95% for MM, ACL and PCL, respectively. 7 Figure 5 demonstrates the sensitivity and specificity of the ACL and MM against figures from a comparative study. Our results corresponded to previous evidence.
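The kappa values quoted earlier in this section can likewise be reproduced from the 2x2 counts. The minimal Python sketch below (again illustrative, not the study's code) computes Cohen's kappa for MRI against arthroscopy; fed the PCL counts reported above (no true positives, two false positives, 37 true negatives), it returns the paper's value of 0.00 even though raw agreement is 37 of 39, because the two modalities never agree on a positive call.

```python
# Illustrative only: Cohen's kappa for MRI vs. arthroscopy agreement on one
# structure, computed from its 2x2 table.

def cohen_kappa_2x2(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    observed = (tp + tn) / n                      # raw agreement
    # Chance agreement from each modality's marginal proportions.
    mri_pos, mri_neg = (tp + fp) / n, (fn + tn) / n
    art_pos, art_neg = (tp + fn) / n, (fp + tn) / n
    expected = mri_pos * art_pos + mri_neg * art_neg
    return (observed - expected) / (1 - expected)

# PCL counts from the results above: 2 MRI 'tears', none confirmed at arthroscopy.
print(round(cohen_kappa_2x2(tp=0, fp=2, fn=0, tn=37), 2))  # 0.0
```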
The largest discrepancy was noted to be the low sensitivity of the MM (58%). This stems from a high FN rate. In our study, there were three FN results (7.69%). On retrospective review of these three MRI studies, none of the classic radiological findings (high signal in contact with the superior or inferior aspect of the meniscus, or distortion of the normal meniscus shape) were present to suggest a radiologically missed tear 17,19 (see Figures 6 and 7). Given that the image acquisition quality was acceptable for the other structures, as supported by their higher yield correlation (Figure 8), a postulation may be that there were errors in diagnosis at arthroscopy itself, as suggested by a previous article. 20 Note (Figures 6 and 7): Proton density weighted fat saturated coronal and sagittal sequences demonstrating high signal (white arrow) in the hypointense posterior horn of the medial meniscus (MM) in a patient who had a true positive for MM tear (Figure 6). This is in comparison to the normal morphology and normal signal intensity of the MM in one of the FN patients (Figure 7), where MRI had no retrospective findings to support an MM tear. Unlike the MR images and reports, which are stored on the picture archiving and communication system (PACS) and can be retrospectively interrogated, a reviewer is purely reliant on a single hard copy of the surgical report with intraoperative pictures, which is kept in the patient's file. The report is influenced by the skills of the performing surgeon and cannot thereafter be assessed. Another notable difference in our study, compared to previous literature, was the time interval between MRI and arthroscopy. As the interval lengthens, there is an increased chance of discrepancy between the examinations, including healing, worsening of injury or even an interim new injury occurring. A similar situation has been previously described, where healing was postulated during the long interval between the two investigations, leading to likely inconsistency in results between the examinations. 20 In our study, the difference in time between the two investigations ranged from 2 to 1527 days (mean of 237 days) (Figure 9), which reflects the high patient load and burden on resources in our setting. This was a much larger interval than in a previously described study (5.8 weeks or approximately 41 days). 16 In two of our patients with the longest interval between MRI and arthroscopy (1523 days and 853 days), one study was completely congruent; however, the other had an FP LM tear (seen on MRI and not on arthroscopy). Healing may well have taken place between examinations, resulting in an inaccurate comparison. The FN totals were low in comparison to the FPs (1, 0, 3, 2 for ACL, PCL, MM and LM, respectively). This demonstrates a tendency of the reporting radiologists to 'overcall' abnormalities, rather than tears being erroneously missed completely. In two instances, an FP was secondary to an ungraded degenerative tear (PCL and MM). The use of the word tear, rather than degeneration, reflects misinterpretation of normal aging as pathology. The lack of uniform nomenclature used within the reporting of both modalities means that inter-observer interpretation of the report is discrepant, and readers may be unsure of the tear severity grade, for example, whether a degenerative tear is a normal aging finding or is pathological and requires further management. Grading of the tear (grade 1, 2 or 3) is a well-known classification system and may be a more informative way to express the degree of injury. 21
Note: Proton density weighted fat saturated sagittal and coronal sequences demonstrating good diagnostic quality image acquisition in one of the TP ACL tears. In these sequences, high signal and discontinuity of the ACL (yellow arrow) are present, in keeping with a complete ACL tear. Further post-traumatic changes, amongst others, include a suprapatellar effusion (blue arrow) and high signal/bone oedema of the lateral condyle of the distal femur (red arrow). Note: The figure demonstrates the days between MRI and arthroscopy against the mean. A large difference in time between the examinations is noted between the individual patients (blue) and average days (red) against a previous study (green). The MRIs had all been performed on a single 1.5-T Philips Achieva MRI scanner, which was installed at the institution on 27 February 2006. This negated any variability of hardware and acquisition quality. Previous evidence suggests no advantage in using a 3-T rather than a 1.5-T MRI scanner, nor with dedicated high field extremity MRI. 22,23,10,24 All MRI studies were reported by generalist radiologists, with no subspecialisation in MSK, although a previous article had suggested high accuracy in similar circumstances. 25 Conclusion The pilot study performed at SBAH found the MRI accuracy in determining non-osseous knee structural derangements to be comparable to previous international literature, using arthroscopy as the gold standard. The study reiterates that high accuracy can be obtained from MRI on a 1.5-T non-dedicated scanner, with interpretation by generalist radiologists, particularly for identifying cruciate ligament injuries. Magnetic resonance imaging was found to be sensitive and specific, particularly for the ACL. A high NPV was also noted in all four structures evaluated, in keeping with previous literature. This means negative results may be used to avoid an unnecessary surgical procedure to the benefit of the patient and state, and reinforces the role of MRI in excluding injury in the setting of equivocal clinical findings.
Interview of Bernadette Segol: The European Union is not an exercise in applied economics On 29 February, the European Trade Union Confederation (ETUC) is organizing a day of action to express its opposition to the treaty on strengthening budgetary discipline in the EU. In what way does this treaty represent a danger for workers? This treaty provides no solutions. We know that enshrining the famous golden rule in the national constitutions or legislations is not going to get us out of the crisis. There is absolutely no certainty that it will be ratified by all the Member States. We are opposed to this treaty for two essential reasons: it was not democratically drawn up and it offers no prospects for employment. Of course we are in favour of balanced public finances, but balance has to be restored gradually over a much longer period and on the basis of policies that will allow the economy to get properly back on its feet. Our first demand is for the economy to be given a boost by means of sustainable investment. We are in favour of European bonds, taxes on financial transactions, measures to eliminate tax fraud and better distribution of taxation. And there is a need also for determined action by the European Central Bank in the search for a solution.
While the details of Martin's and Duggan's deaths and lives varied considerably, both cases highlight the lack of urgency with which law enforcement officials investigate incidents of fatal crime against Blacks and how, on either side of the Atlantic, Black men suspected of crimes are shot first, with questions about culpability left to be determined after the casket closes, if at all. On the night of Aug. 4, 2011, 29-year-old Mark Duggan was riding in a cab in Tottenham, an area of London, when the vehicle was stopped by officers from Scotland Yard's Operation Trident, a special unit initially tasked with "dealing with gun crime among Black communities, in particular drug-related shootings" and armed with submachine guns. Reports say police believed that Duggan was planning to attack a man in retaliation for the stabbing of his cousin, but no crime had yet been committed when he was stopped; the alleged plot has never been confirmed. Details about exactly what happened after the stop are still disputed. But, in the end, three shots were fired and Duggan was killed in the alleged exchange. Just two days later, violence erupted outside of the Tottenham police station as an angry crowd demanded justice for Duggan. The initial anger over Duggan's death boiled over into a full-scale riot as many Britons expressed their resentment of increased police stops of Black residents. Now, nearly one year later, another blow to the Black community comes as the Independent Police Complaints Commission (IPCC), the office tasked with investigating all allegations against a police officer, has announced that the investigation into Duggan's death will be stalled until January 2013 because sensitive material related to police decision-making may have to be withheld from the coroner. In light of the news, Duggan's family has accused the IPCC of intentionally stalling the investigation. "We believe the IPCC are withholding information from us which is delay tactics. Maybe they think we will go away, come to terms with what has happened, but we are a grieving family and we will always grieve for Mark," said Duggan's aunt Carole, according to the London Evening Standard. While there has been debate over whether Duggan was involved in drug dealing (his family denies the claim and the police name him as a prominent gang member), the fact remains that he was the father of four children, a brother and a son. Duggan's family and community deserve a complete and open investigation into exactly what happened at the time of his death. Howe's words resonate in both the riots that shook London in the wake of Duggan's death and in the non-violent protests currently rolling across the U.S. in support of justice for Trayvon Martin. In both the U.S. and England, it is evident that the stalking of Black men must cease. The singular message that has emerged from these two tragedies and countless others is simple: Black communities will no longer tolerate being terrorized by people and institutions allegedly working to keep us safe.
The development and validation of the Clinicians' Awareness Towards Cognitive Errors (CATChES) in clinical decision making questionnaire tool Background Despite their impact on diagnostic accuracy, there is a paucity of literature on questionnaire tools to assess clinicians' awareness toward cognitive errors. A validation study was conducted to develop a questionnaire tool to evaluate the Clinicians' Awareness Towards Cognitive Errors (CATChES) in clinical decision making. Methods This questionnaire is divided into two parts. Part A is to evaluate the clinicians' awareness towards cognitive errors in clinical decision making, while Part B is to evaluate their perception towards specific cognitive errors. Content validation for both parts was first determined, followed by construct validation for Part A. Construct validation for Part B was not determined as the responses were set in a dichotomous format. Results For content validation, all items in both Part A and Part B were rated as excellent in terms of their relevance in clinical settings. For construct validation using exploratory factor analysis (EFA) for Part A, a two-factor model with total variance extraction of 60% was determined. Two items were deleted. Then, the EFA was repeated, showing that all factor loadings were above the cut-off value of 0.5. The Cronbach's alpha values for both factors are above 0.6. Conclusion The CATChES questionnaire is a valid tool aimed at evaluating the awareness among clinicians toward cognitive errors in clinical decision making. Background According to the Institute of Medicine's report titled "Improving Diagnosis in Health Care", diagnostic error is defined as "the failure to (a) establish an accurate and timely explanation of the patient's health problem(s) or (b) communicate that explanation to the patient". Three broad categories of diagnostic errors have been identified by Graber et al., viz., no-fault errors, system-related errors and cognitive errors. The category of no-fault errors is defined as errors caused by external factors outside the control of the clinician or the health care system. These include atypical disease presentation or misleading information provided by the patients. The second category, i.e., system-related errors, comprises errors due to technical or organizational barriers such as weaknesses in communication and care coordination, inefficient processes and faulty equipment. The third category, i.e., cognitive errors (also known as cognitive biases), comprises errors due to poor critical thinking skills of the clinicians. Cognitive errors are deviations from rationality and may derail clinicians into making diagnostic errors if left unchecked. Although they may believe otherwise, studies have shown that clinicians are, in fact, just as prone to committing cognitive errors as anyone else. Campbell et al. have classified the common clinically important cognitive errors into six categories.
These categories are "errors due to over-attachment to a particular diagnosis (examples of cognitive biases in this class include anchoring and confirmation bias)", "errors due to failure to consider alternative diagnoses (for example, search satisficing)", "errors due to inheriting someone else's thinking (for example, diagnostic momentum and framing effect)", "errors in prevalence perception or estimation (for example, availability bias, gambler's fallacy and posterior probability error)", "errors involving patient characteristics or presentation context (for example, fundamental attribution error and gender bias)", and "errors that are associated with the doctor's affect or personality (for example, visceral bias and sunk cost fallacy)". In a survey by MacDonald et al. involving 6400 clinicians on diagnostic errors, the top three reasons cited for diagnostic errors all relate to cognitive errors. A total of 75% of these clinicians cited atypical patient presentation (resulting in the doctors being misled to consider other diagnoses), 50% cited failure to consider other diagnoses, while 40% cited failure to gather adequate history from patients. Nonetheless, as important as these cognitive errors are, it is not known how many clinicians are aware of them. As pointed out by Prochaska et al. in their Transtheoretical Model of Change, the first step towards behavioral change is known as contemplation. In the context of cognitive errors in clinical decision making, this is the stage where a clinician becomes acutely aware of the negative impact of cognitive errors on diagnostic accuracy as well as the factors that increase a clinician's vulnerability to committing such biases in clinical decision making. Once a clinician is in the contemplation stage, he or she would likely see the necessity to initiate steps towards the intended behavioral change (in this case, minimizing the risk of committing cognitive errors when making clinical decisions). This step is known as the preparation stage. On the other hand, a person who is unaware of the problem sees no reason to take any action to change. This prior stage is known as the pre-contemplation stage. A tool is therefore necessary to facilitate the transition from the stage of pre-contemplation to the stage of contemplation. Despite the impact of cognitive errors on diagnostic accuracy, there is a paucity of literature on questionnaire tools aimed at assessing clinicians' awareness of cognitive errors. This paper describes the development and validation of a questionnaire intended to evaluate the Clinician's Awareness Towards Cognitive Errors (CATChES) in clinical decision making. The purpose of this tool is to help create awareness among clinicians who are in the pre-contemplation stage, with the hope of moving them from this stage to the stage of contemplation in the Transtheoretical Model of Change. This tool can also be used as pre-intervention material to supplement educational resources in teaching cognitive errors in clinical medicine (such as this resource in MedEdPORTAL). Participants For content validation, based on the recommendation by Lynn, ten experts consisting of emergency physicians from Universiti Sains Malaysia were invited to determine the content validation, and nine of them consented. For construct validation, emergency physicians and emergency residents with a minimum of four years' working experience in Hospital Universiti Sains Malaysia were identified as the participants.
Using the rule of thumb of a minimum of five participants per item, a minimum of 30 participants was needed. Clinicians who were not residents pursuing a postgraduate degree in emergency medicine or clinicians with less than four years of working experience in the emergency department were excluded. The authors invited 35 of these emergency residents to participate in the construct validation and 31 of them responded. All nine emergency physicians who participated in the content validation also participated in the construct validation process. Hence, a total of 40 participants were recruited for construct validation. Materials The questionnaire tool in this study is divided into two parts. The first part (Part A) aimed to evaluate the clinicians' awareness of cognitive errors in clinical decision making, while the second part (Part B) aimed to evaluate the clinician's perception of specific categories of cognitive errors in the clinical setting. A preliminary version of the questionnaire was first developed by two authors (KS and AH) and checked by the third author (YC). For the development of Part A of this questionnaire, the Transtheoretical Model of Change was used as the theoretical framework. Six items were generated in this preliminary version. The theoretical basis for each of the items is given in Table 1. For Part B, the classification of cognitive errors used by Campbell et al. was used to generate the preliminary list of categories of cognitive errors. Each category of cognitive errors is defined as an item. A total of six items were generated. Procedure This was a cross-sectional study conducted among clinicians (emergency physicians for content validation; emergency physicians and emergency residents for construct validation) from Hospital Universiti Sains Malaysia (HUSM). Convenience sampling was applied in recruiting the participants. Human research ethics approval was obtained from the Human Research Ethics Committee of Universiti Sains Malaysia before the study commenced. The content validity of the questionnaire (both Part A and Part B) was first determined by a panel of experts consisting of the emergency physicians in HUSM. These experts were briefed by one of the authors (AH) on how to rate the relevance of the items on a four-point Likert scale, ranging from "1 = not relevant at all" to "4 = highly relevant". The experts were told to respond anonymously and that they were free to withdraw from the study at any time. The response sheets were handed to the experts to complete on their own and were collected back by the author (AH) the following day. A document containing a glossary of terms was handed out and read out to the participants before they started the questionnaire. After the content validation process, the construct validation of the questionnaire was determined. For the construct validation of Part A, participants were first briefed on how to respond to the items on a five-point Likert scale, ranging from "1 = strongly disagree" to "5 = strongly agree". Participants were told to respond anonymously and that they were free to opt out at any time. A separate document containing the glossary of terms was handed out and read to the participants before they started the questionnaire. All participants responded individually in one sitting.
Since the purpose of this Part B is to identify the clinician's perception towards the specific categories of cognitive errors in the clinical setting, it was set in a dichotomous format (i.e., whether they are relevant or not relevant) and not in an ordinal format. As such, construct validation for this part was not determined. The sequence of content and construct validation is illustrated in Fig. 1. The theoretical basis of the Part A items (Table 1, excerpt) is as follows. Item no. 2 ("Being aware of cognitive errors help me to be more careful in my clinical decisions"): to evaluate whether the clinician believes that simply being aware of these cognitive errors would improve the quality of his or her clinical decisions. Item no. 3 ("Authority gradient discourages critical thinking and thus increases the vulnerability to commit cognitive errors"): authority gradient is defined as the gradient that exists between two individuals of different professional status, experience, or expertise that contributes to difficulty in exchanging information (Cosby and Croskerry, 2004); this item assesses whether the clinician believes that authority gradient discourages critical thinking about cognitive errors in clinical decision making. Item no. 4 ("Something, rather than nothing, can be done to minimize the risk of falling into these errors"): to assess the motivation of the clinician towards change by minimizing the impact of cognitive errors in clinical decision making. Item no. 5 ("The understanding of cognitive errors and its impact on clinical decision making and patient safety should be made a component in emergency medicine curriculum in postgraduate training"): to assess the motivation of the clinician towards change by minimizing the impact of cognitive errors in clinical decision making. Item no. 6 ("The understanding of cognitive errors and its impact on clinical decision making and patient safety should be taught at undergraduate level"): to assess the motivation of the clinician towards change by minimizing the impact of cognitive errors in clinical decision making. Fig. 1 The sequence of content validation followed by construct validation. Statistical analyses Exploratory factor analysis (EFA) was used to determine the construct validity of Part A of the questionnaire. Principal axis factoring was chosen as the extraction method. The initial run of the factor analysis was performed to determine the number of factors to be extracted. An eigenvalue of more than 1 was chosen as the cut-off value for fixing the number of factors. Scree plotting was also performed to further verify the number of factors for extraction. Repeated runs of the factor analysis were then performed to determine the factor loadings of the items as well as to identify problematic items that might need to be removed. A cut-off point of 0.5 was used as the factor-loading criterion for deciding whether an item should be removed. For communality (extraction), a value of >0.25 was set as the cut-off value to determine the need for item removal. Promax oblique rotation was used. The internal consistency reliability of the items was determined by analyzing the Cronbach's alpha coefficients. Cronbach's alpha refers to the degree to which participants' responses are consistent across the items within this questionnaire construct. A cut-off point of Cronbach's alpha >0.6 was set for this study as the criterion for a good degree of internal consistency. The software SPSS version 22.0 for Mac was used for data analysis.
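As a concrete illustration of the workflow described in the statistical analyses paragraph above (principal axis factoring with promax rotation, eigenvalue and scree inspection, the loading and communality cut-offs, and Cronbach's alpha), the following is a minimal sketch assuming the Python factor_analyzer and pandas packages and a hypothetical responses.csv file of Part A item scores; it is only an illustrative re-creation, not the authors' SPSS analysis.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Hypothetical file: one row per participant, one column per Part A item (1-5 Likert scores)
items = pd.read_csv("responses.csv")

# Sampling adequacy and sphericity checks of the kind reported in the paper
chi_square, p_value = calculate_bartlett_sphericity(items)
_, kmo_total = calculate_kmo(items)
print(f"Bartlett chi-square = {chi_square:.2f}, p = {p_value:.4f}, KMO = {kmo_total:.2f}")

# Initial run: inspect eigenvalues to decide how many factors to retain (eigenvalue > 1 rule)
fa0 = FactorAnalyzer(rotation=None, method="principal")
fa0.fit(items)
eigenvalues, _ = fa0.get_eigenvalues()
n_factors = int((eigenvalues > 1).sum())

# Re-run with the fixed number of factors and promax (oblique) rotation
fa = FactorAnalyzer(n_factors=n_factors, rotation="promax", method="principal")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
communalities = pd.Series(fa.get_communalities(), index=items.columns)

# Flag items for removal using the paper's cut-offs (loading > 0.5, communality > 0.25)
weak = communalities[(loadings.abs().max(axis=1) <= 0.5) | (communalities <= 0.25)]
print("Candidate items for removal:", list(weak.index))

def cronbach_alpha(df: pd.DataFrame) -> float:
    """Cronbach's alpha: internal consistency of the items in df (one column per item)."""
    k = df.shape[1]
    item_var = df.var(axis=0, ddof=1).sum()
    total_var = df.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

# e.g., cronbach_alpha(items[factor_1_columns]) can then be checked against the 0.6 cut-off
```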
To evaluate the content validity of item relevance, the content validity index (CVI) and the modified kappa (κ) were used. The item-level CVI (I-CVI) for relevance is defined as the proportion of judges who rate the item with a score of 3 or 4 on a four-point Likert scale (with 1 = not relevant at all, 2 = somewhat relevant, 3 = quite relevant, and 4 = highly relevant). A CVI value of 0.85 or above is considered valid. The modified kappa (κ) was computed in order to account for the possibility of chance agreement in the CVI. Results With regard to content validity, the CVI values for all items were rated highly as valid in terms of their relevance in clinical settings. In terms of their modified kappa (κ) values, all items were rated as "excellent" with respect to the validity of their relevance in clinical settings. The CVI results for Parts A and B are given in Tables 2 and 3, respectively. With regard to the construct validation using EFA on Part A, the Kaiser-Meyer-Olkin measure of sampling adequacy was found to be 0.74, which demonstrates a moderate degree of common variance shared among the items. The Bartlett's test of sphericity was statistically significant (chi-square statistic = 43.93, p < 0.05). This shows that there are correlations among the items based on the correlation matrix. Initial eigenvalues indicate that the first two factors (each with an eigenvalue >1) explain 60% of the total variance (42% and 18%, respectively). Furthermore, two factors were shown to be above the point of inflexion of the eigenvalues on the scree plot (Fig. 2). The number of factors was therefore fixed at 2 for the re-run of the analysis. After two rounds of re-running the analysis, two of the six items were removed as they did not meet the minimum cut-off points of factor loading >0.5 and communality >0.25. In particular, item no. 3 ("Authority gradient discourages critical thinking and thus increases the vulnerability to commit cognitive errors") was recognized as problematic, with factor loadings of only 0.14 and 0.20 on the two factors and a communality (extraction) value of only 0.084. Item no. 4 ("Something, rather than nothing, can be done to minimize the risk of falling into these errors") was also identified as problematic, with a factor loading of 0.481 on Factor 2 and a communality (extraction) value of 0.145. After removal of the two items, the re-run of the principal axis analysis of the remaining four items shows that they explain 75% of the variance with two factors extracted. All items in this analysis had factor loadings of >0.5 and communalities (extraction) of >0.25. The pattern matrix of the factor loadings is presented in Table 4. Item no. 1 ("Cognitive errors in general have important impact towards clinical decision making in emergency medicine") and item no. 2 ("Being aware of cognitive errors help me to be more careful in my clinical decisions") load on Factor 2, whereas item no. 5 ("The understanding of cognitive errors and its impact on clinical decision making and patient safety should be made a component in emergency medicine curriculum in postgraduate training") and item no. 6 ("The understanding of cognitive errors and its impact on clinical decision making and patient safety should be taught at undergraduate level") load on Factor 1. Hence, we labeled Factor 1 as "educational interventions to reduce the risk of cognitive errors", whereas Factor 2 is labeled as the "impact of cognitive errors in clinical decision making".
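To make the content-validity arithmetic above concrete, here is a small sketch of the I-CVI and modified kappa calculation following the widely used Polit-Beck formulation, which the paper appears to apply; the expert ratings in the example are hypothetical.

```python
from math import comb

def item_cvi_and_kappa(ratings: list[int]) -> tuple[float, float]:
    """I-CVI and modified kappa for one item.

    ratings: each expert's relevance score on the 4-point scale (1-4).
    I-CVI is the proportion of experts scoring 3 or 4; modified kappa
    adjusts the I-CVI for the probability of chance agreement.
    """
    n = len(ratings)                       # number of experts
    a = sum(1 for r in ratings if r >= 3)  # experts rating the item relevant
    i_cvi = a / n
    p_chance = comb(n, a) * 0.5 ** n       # probability of chance agreement
    kappa = (i_cvi - p_chance) / (1 - p_chance)
    return i_cvi, kappa

# Hypothetical panel of nine experts, eight of whom rate the item 3 or 4
print(item_cvi_and_kappa([4, 4, 3, 4, 3, 4, 4, 2, 4]))  # -> (0.889..., 0.887...)
```

With nine experts, an item rated relevant by eight of them yields an I-CVI of about 0.89 and a modified kappa of about 0.89, which falls in the "excellent" range.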
Both factors in Part A yield Cronbach's alpha values of 0.676 and 0.635, respectively, and no further improvement in the Cronbach's alpha values could be achieved by deleting any of the items. Cronbach's alpha for Part B is 0.657. Similarly, no further improvement in the Cronbach's alpha value could be achieved by deleting any of the items in this part. In the finalized version of the CATChES questionnaire (Table 5), the sequence of the factors is logically reversed, with Factor 2 placed before Factor 1 for Part A of the questionnaire. Discussion Based on the content validity evaluation of Parts A and B of the questionnaire, all items were retained as they were shown to have excellent content validity in terms of their relevance in the clinical setting. From the EFA, Part A of the questionnaire is constructed with two factors, i.e., the "impact of cognitive errors in clinical decision making" and "educational interventions to reduce the risk of cognitive errors". Each of these two factors has two items. Referring back to the Transtheoretical Model of Change by Prochaska et al., Factor 2 ("impact of cognitive biases in clinical decision making") reflects the contemplation stage of the model, whereas Factor 1 ("educational interventions to reduce the risk of cognitive biases") reflects the preparation stage of the model. The EFA also showed that two items had to be removed. For item no. 4 ("Something, rather than nothing, can be done to minimize the risk of falling into these biases"), the phrase 'something, rather than nothing' is rather ambiguous, and this may have resulted in its rejection by the participants as a valid item. Re-phrasing it as a more direct statement may bring greater clarity. For item no. 3 ("Authority gradient discourages critical thinking and thus increases the vulnerability to commit cognitive errors"), its rejection could be due to the fact that the statement is overly generalized, particularly in an Asian culture. Authority gradient is defined as the gradient that may exist between two individuals' professional status, experience, or expertise that contributes to a gap in exchanging information or communicating concerns. In our study, perhaps our participants did not think that authority gradient is always bad. Nurtured in an environment where a healthy level of authority gradient is respected, a senior, experienced clinician can train a junior clinician to develop better clinical decision-making skills. In terms of the internal consistency analysis of Part A, a moderate degree of internal consistency was noted, with Cronbach's alpha values of more than 0.6 for both factors. The internal consistency could be improved by adding more items within the factors. Therefore, future research should consider including more items that are relevant to Factor 1 (educational interventions to reduce the risk of cognitive biases) and Factor 2 (impact of cognitive biases in clinical decision making), or items that generate more factors, to move up to the next stage of "taking action" along the ladder of the Transtheoretical Model of Change. There are a number of limitations in this validation study. First, for Part A, confirmatory factor analysis (CFA) was not performed on another set of samples to confirm the constructs developed based on the EFA result. Second, face validation was not performed to determine the questionnaire's comprehensibility and readability. For example, as mentioned, the phrase 'something, rather than nothing' in item no.
4 is rather vague. Third, more items should be included to improve the internal consistency of the constructs. The future development of this project would include rewording and rephrasing the items as well as adding more relevant items based on the Transtheoretical Model of Change. More samples should be included to replicate the present EFA results, and a CFA should be added to the analysis to devise the second edition of this questionnaire. Conclusion Despite its limitations, the construct and content validation suggest that the CATChES questionnaire tool is useful in evaluating the awareness among clinicians of cognitive errors in clinical decision making. Such awareness may, in turn, motivate them to take measures to minimize the risk of committing these errors.
Exeter-based, full-service agency with clients nationwide and an annual turnover of more than £6,000,000. Who are we looking for? We're looking for an experienced and ambitious Digital Marketing professional to join our digital team. Working alongside the Head of Digital, this role encompasses all aspects of the digital marketing mix – from campaign strategy and planning, delivering class-leading Search, Social, and Programmatic campaigns, to providing reports and insight to clients. The ideal candidate will be an enthusiastic digital marketer with an eye for detail and a passion for driving results through digital media. They will be able to demonstrate how they keep pace with the evolving digital media marketplace and combine best practices with a continual drive for innovation. As well as leading on strategic and analytical insight, they will also be skilled in the management of digital media buying and optimisation and will be able to demonstrate a high level of competency with core digital advertising channels such as Google Ads, Facebook, other ad vendors/DSPs, and reporting platforms. They will be confident and articulate in communicating complex media strategies and interpreting digital media performance for clients, both in person and in written formats. + Valid Google certifications are highly desirable. + Excellent time management, problem-solving, and written and verbal communication skills. + 2+ years working within digital marketing and/or advertising, with experience working within an agency environment highly desirable.
Telematics devices facilitate connecting a vehicle with a communications network. A telematics control unit ("TCU") installed in a vehicle typically comprises a global positioning satellite ("GPS") circuit, or module, and wireless communication circuitry, including long-range wireless (e.g., cellular telephony and data services) and short-range wireless ("Bluetooth") capabilities. A TCU typically includes at least one processor that controls, operates, and manages the circuitry and the software running thereon, and also facilitates interfacing with a vehicle data bus. For example, a TCU installed by a vehicle's original equipment manufacturer ("OEM"), such as Ford, Toyota, BMW, Mercedes Benz, etc., typically couples directly to the corresponding vehicle's data bus, such as, for example, a controller area network ("CAN") bus, an international standards organization ("ISO") bus, a Society of Automotive Engineers ("SAE") bus, etc. The TCU can process and communicate information retrieved from the bus, via links of the wireless communication networks, to a user's mobile device local to the vehicle or to a computer device remote from the vehicle. An OEM typically guards access by third-party software to a vehicle's bus through a TCU cautiously because of the potential for computer virus infection, other malware, and software and data that, although innocuous, may nonetheless interfere with operation of the vehicle, which could expose the OEM to warranty liability and other liability. Aftermarket vendors have begun packaging some components of a TCU in a module that plugs into a diagnostic connection of a vehicle, such as, for example, an OBD or OBD-II port. Those of ordinary skill in the art typically refer to such a self-contained device as a 'dongle.' In addition, aftermarket vendors have also begun marketing a rear-view mirror that includes some components of a TCU. A dongle typically receives power from the battery voltage power pin of the diagnostic port. The aftermarket rear-view mirrors typically receive power through a wire, and trained technicians typically install the TCU mirrors.
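As a rough sketch of the kind of data such a dongle can read from the diagnostic port, the snippet below requests engine RPM (OBD-II mode 01, PID 0C) over a hypothetical ELM327-compatible serial adapter using pyserial; the port name, baud rate, and the omission of the adapter's usual AT initialization commands are simplifying assumptions, not details from the text above.

```python
import serial  # pyserial; assumes an ELM327-style adapter exposed as a serial port

PORT = "/dev/ttyUSB0"  # hypothetical device path; varies by platform and adapter

def read_rpm(port: str = PORT) -> float:
    """Request engine RPM (OBD-II mode 01, PID 0C) and decode the reply."""
    with serial.Serial(port, baudrate=38400, timeout=1) as link:
        link.write(b"010C\r")                  # mode 01, PID 0C = engine RPM
        raw = link.read(64).decode("ascii", errors="ignore")
        # A typical reply looks like "41 0C 1A F8"; the last two bytes encode RPM
        parts = [p for p in raw.replace(">", " ").split() if len(p) == 2]
        a, b = int(parts[-2], 16), int(parts[-1], 16)
        return ((a * 256) + b) / 4.0           # standard scaling for PID 0C

if __name__ == "__main__":
    print(f"Engine speed: {read_rpm():.0f} rpm")
```

A production dongle would additionally initialize the adapter (echo off, protocol auto-detect), validate the response, and handle timeouts and malformed replies; the sketch above only illustrates the request/response shape.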
Comparison of the Carbon and Water Fluxes of Some Aggressive Invasive Species in Baltic Grassland and Shrub Habitats: Biological systems are shaped by environmental pressures. These processes are implemented through the organisms exploiting their adaptation abilities and, thus, improving their spread. Photosynthesis, transpiration, and water use efficiency are major physiological parameters that vary among organisms and respond to abiotic conditions. Invasive species exhibit special physiological performance in the invaded habitat. Photosynthesis and transpiration intensity of Fallopia japonica, Heracleum sosnowskyi, and Rumex confertus, of northern and trans-Asian origin, were measured in temperate extensive seminatural grassland or natural forest ecotones. The observed photosynthetically active radiation (PAR) ranged from 36.0 to 1083.7 µmol m−2 s−1 throughout the growing season, depending on the meteorological conditions and habitat type. F. japonica and H. sosnowskyi settled in naturally formed shadowy shrub habitats characterized by the lowest mean PAR rates of 58.3 and 124.7 µmol m−2 s−1, respectively. R. confertus was located in open seminatural grassland habitats where the mean PAR was 529.35 µmol m−2 s−1. Correlating with the available sunlight radiation (r = 0.9), the highest average photoassimilation rate was observed for R. confertus (p = 0.000). The lowest average photosynthesis rates were exhibited by F. japonica and H. sosnowskyi in the shadowy shrub habitats. Transpiration and water use effectivity at the leaf level depended on many environmental factors. Positive quantitative responses of photosynthesis and transpiration to soil and meteorological conditions confirmed positive tolerance strategies of the invasive species, achieved by environmental adaptation to new habitats during their growing period and sustained across a range of environments. Introduction Biodiversity is a prominent concern for ecosystems in Europe and worldwide. Vegetation, as part of biodiversity, performs a crucial function in ecosystem services, i.e., carbon flux exchange and the hydrological cycle between terrestrial ecosystems and the atmosphere through photosynthesis and transpiration. However, invasive alien species represent a key pressure on biodiversity as a result of enlarged international trade, transportation, the tourism industry, and climate change. The regulation of alien species should be applied for the preservation of phytodiversity, thus guaranteeing the structure and function of ecosystems with their positive ecosystem services. Biogeographic and climatic conditions make natural barriers to the spread of alien species. However, adaptation to a new environment guarantees their spread outside their natural ranges. The assessment of physiological adaptation, namely, photosynthesis and transpiration activity, should allow an explanation of the reasons for, or limitations of, the spread of alien species. Solar radiation is mainly absorbed as energy for CO2 assimilation into free photosynthetic energy in the leaf, which is used for the transpiration process; this sets up an essential integrated functional system in plants. One author showed that only approximately 55% of solar radiation wavelengths can be employed in photoassimilation of CO2, which reduces the light efficiency to about 18%. Green plants convert solar energy to sugars that are transmitted from the green leaves to support the highly susceptible processes of growth, development, and ripening.
Therefore, plant growth and development are significantly dependent on photosynthesis effectivity. Moreover, photosynthesis provides the energy required for plants' acclimation, making them resistant to changing environmental conditions in line with the optimization hypotheses, which explain the forces acting on biological systems from the scale of cells to communities and ecosystems. From the ecological perspective, photosynthesis research has mainly focused on the gain of biochemical energy created from light energy, indicating the photosynthetic efficiency related to consumed water, which is mainly lost in transpiration. Evaporation and transpiration realize the freshwater exchange between ecosystems and the atmosphere. Transpiration makes up 60-80% of the whole terrestrial evaporation and returns about half of the mainland rainfall back into the atmosphere. Hence, evaluations of photosynthesis and transpiration rates are essential indices for the characterization of species vitality and for understanding vegetation's role in climate change, which depends on carbon and water cycling. Successful alien species follow optimal physiological trajectories formed by environmental pressures, forcing them to maximize their acclimation and reproductive success. The optimization theories particularly clarify the forms and role of terrestrial vegetation from eco-hydrological and carbon-economy viewpoints across spatial and temporal scales. Their purposes are generally constrained by the identification of attributes of a complex system of interacting elements between environment and organism that contribute to species being fit for survival. Therefore, optimization theories are based on the postulate that plants target maximum carbon uptake and growth (subject to constraints) over a specified period. Sufficient rates of photosynthesis and transpiration might therefore, in principle, indicate an adaptation of invasive plants to new terrestrial ecosystems when water is not the limiting factor, which modulates the gas exchange (water vapor and also the rate of CO2 fixation in leaf mesophyll tissue) between plant and environment. However, photosynthesis and transpiration constitute a complex and respond to numerous abiotic factors (light intensity, vapor pressure deficit, CO2 content, etc.). The impact of water content on transpiration has been widely documented empirically (data-based) or validated by means of mechanistic (process-based) and economic (optimization-based) modeling for different plant species. Transpiration effectiveness is evaluated by means of water use efficiency (WUE), which is defined as photosynthetic carbon gain per unit of evaporated water. The WUE parameter indicates responses to negative aspects of global climate change, such as drought or increased temperature. At the leaf level, WUE values increase with increasing temperature. When the optimal temperature for plant growth is exceeded (i.e., heat stress), the WUE begins to decrease. When comparing different ecosystems, WUE was found to correlate with precipitation, gross primary productivity, and growing period length. Some studies analyzed the impacts of environmental changes, where WUE, together with physiological parameters, was used for historic observations of different crops' responses to temperature and CO2. They found that WUE increased until temperatures exceeded the normal temperature by 1.5 °C, and then started to decline.
An increase in WUE values might possibly indicate species with higher resistance to drought conditions. Thus, the important potential benefit of WUE should be used to identify invasive species' response and adaptation to a new environment. Nonetheless, the net effect of transpiration and photosynthesis data of invasive plant species in new territories remains to a large extent unknown. Extensive gaps in research on the physiological acclimation of invasive plant species, faced by global decision-making bodies, have significance for the scientific management of their invasions. Consistent with these issues, the assessment of the eco-physiological parameters of photosynthesis and transpiration was selected to specify the adaptation of invasive species to environmental conditions in different invaded seminatural or natural habitats. The present study was undertaken to compare the eco-physiological characteristics, i.e., photosynthesis and transpiration rates, of one cosmopolite and three alien plant species, which are marked by their prolific and vigorous growth and intensive spread. The following hypotheses were tested: the invasive species achieve high photosynthetic capacity that contributes to their adaptation and spread in the newly invaded environment; species variations in transpiration rates depend on natural light conditions and precipitation changes during the growing period. The assessment of the efficiency and rates of photosynthesis and transpiration may contribute to the explanation of the vitality and acclimation of invasive species to the temperate environments of central Lithuania. Species and Location Setup Lithuania is situated in the cold temperate zone (5-6 Hardiness Index) with moderately warm summers and medium cold winters. The average temperature in midsummer, i.e., July, is approximately 17 °C, and in winter, it is approximately −5 °C. Physiological acclimation of three invasive species listed on the National List of Invasive Species, namely Fallopia japonica (Hout.) Ronse Decr. (Polygonaceae), F jap, native of northern Japan (Hokkaido, Honshu) and N-E Russia (Sakhalin, Kurile Islands), Heracleum sosnowskyi Manden., H sosn (Apiaceae), from Trans-Asia, and Rumex confertus Willd., R conf (Polygonaceae), from Asia, was assessed in the temperate climate of Lithuania. Cosmopolite Taraxacum officinale L., T offi, served as a control species (Table 1). F jap and H sosn were tested in shrubland, whereas R conf and T offi were tested in extensive grassland habitats. Both habitats were situated close to the international highway Via Baltica, Kaunas district, central Lithuania, with intensive traffic. Grasslands were dominated by Festuca pratensis, Poa pratensis and Lolium perenne; shrubland was dominated by Salix sp. Each habitat was of sufficient size to accommodate four representative plots of 1 m². Assessment of Physiological Parameters A plant photosynthesis system (ADC BioScientific, Hoddesdon, UK) was used to assess photosynthesis (A, µmol m−2 s−1), transpiration (TE, mmol m−2 s−1), stomatal conductance (gs, mol H2O m−2 s−1) and photosynthetically active radiation (PAR, µmol m−2 s−1) in situ for the invasive plant species. Physiological parameters of fully developed apical leaves of six randomly selected plants were measured in 10 replications every month in each habitat (n = 6 × 10). Measurements were made at a saturating photosynthetic photon flux density (PPFD, 1500 µmol m−2 s−1) and ambient temperature, humidity, and CO2 concentration.
Using the measured A and TE values, the water use efficiency (WUE = A/TE) was calculated. Estimation of Abiotic Environment Parameters Climatological data (temperature and precipitation) were taken from the Kaunas meteorology station. Physical soil parameters (temperature (T), moisture, and electrical conductivity (el. conductivity)) were evaluated using the integrated analyzer HH-2 (Delta-T Devices Ltd., Cambridge, UK) in the invaded habitats. Mean temperature and precipitation were compared with multi-annual averages throughout the growing period (April-September). The fluctuations and differences in weather conditions could affect not only abiotic ecosystem parameters but also plant photosynthesis and respiration. Mean temperatures of May-August exceeded the multi-annual averages by 0.3-3.43 °C, while they were equal to the multi-annual averages in April and September. Precipitation, however, exceeded the multi-annual averages with the exception of May. As a result, the growing season was rather favorable for plant growth compared to normally warm conditions, with higher than usual humidity. Soil temperature, moisture, and electrical conductivity (Figure 1) varied in concomitance with the meteorological conditions in the habitats of the assessed species. The soil parameters revealed that F jap and H sosn favored similar environmental parameters; however, R conf differed from the former species in its preference for habitats of high moisture, with warm and ion-rich soil. Statistical Evaluation The level of statistical confidence and the stochastic interactions between the assessed A, TE, and WUE data and the plant species, measurement time, and environmental conditions were calculated by analysis of variance and regression using the statistical package R (StatSoft for Windows standards). A Fisher test and a Kruskal-Wallis H nonparametric test were used for means separation. Abiotic Conditions PAR intensity (Figure 2) was related to habitat type, soil, and meteorological conditions (Figure 1). The highest PAR rates, up to 1056.0-1083.7 µmol m−2 s−1, were seen for R conf and T offi, which colonize open grassland habitats where full sunlight is accessible. The lowest light access, with means of 58.3 and 124.7 µmol m−2 s−1, was available for invasive F jap and H sosn, respectively, established in shaded shrubland. Meteorological conditions shifted PAR values and changed sunlight access to plants during the growing season (Figure 1). Precipitation usually coincided with lower temperature and light (PAR). The determined correlations between PAR and temperature (r = 0.8) and precipitation (r = 0.4) confirmed their impact on light conditions in the habitats. Comparing PAR during the growing season, the mean values differed significantly between treatments, possibly due to cloudy weather that conditioned wide light dispersion during measurements (Figure 2b). PAR exhibited the highest variation in September, when the alternation of sunny and shadowy periods was most frequent. The significantly lowest PAR values (p = 0.000), with narrow dispersion, were recorded in the mostly shaded shrub habitat invaded by F jap (Figure 2). Changing cloud conditions had no impact on PAR dispersion here. Better average light access was noted for H sosn in the shrub habitat. The strongest light was available for R. confertus and T. officinale. The difference in light conditions between the open habitats and the shrub habitat invaded by H. sosnowskyi was insignificant. Nonetheless, the widest light dispersion was in the open habitats of T. officinale.
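To make the leaf-level WUE calculation and the correlation and Kruskal-Wallis comparisons described above concrete, here is a minimal sketch assuming pandas and SciPy and a hypothetical gas_exchange.csv file with per-measurement A (µmol m−2 s−1), TE (mmol m−2 s−1), PAR, and species columns; it only illustrates the arithmetic, not the authors' actual analysis workflow.

```python
import pandas as pd
from scipy.stats import pearsonr, kruskal

# Hypothetical table: one row per leaf measurement
df = pd.read_csv("gas_exchange.csv")  # columns: species, A, TE, PAR

# Leaf-level water use efficiency, WUE = A / TE (µmol CO2 per mmol H2O)
df["WUE"] = df["A"] / df["TE"]

# Correlation of photosynthesis with available light, per species
for species, grp in df.groupby("species"):
    r, p = pearsonr(grp["PAR"], grp["A"])
    print(f"{species}: r(A, PAR) = {r:.2f} (p = {p:.3f})")

# Nonparametric comparison of photosynthesis rates between species
groups = [grp["A"].values for _, grp in df.groupby("species")]
h_stat, p_value = kruskal(*groups)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")
```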
The widest light variation was also recorded in these open habitats due to the changing impact of cloudy conditions. Some studies revealed that different meteorological conditions may facilitate the invasibility of alien species in ecosystems. Moreover, light availability (PAR) determines many functional responses, e.g., changes in leaf area and thickness, the chlorophyll content, and the root:shoot ratio of biomass allocation, and thus the invasive capacity of alien plant species. Consequently, light conditions remain a determinant component of the invasiveness of alien species in newly invaded territories. Intensity of Photosynthesis and Transpiration The lowest mean photosynthetic activity (A, µmol m−2 s−1), determined for F jap and H sosn established in the shadowy habitats, differed from the highest activity of R conf and T offi in the open habitats (Figure 3). The widest and significant differences (p = 0.000) in the photosynthesis data were documented between F jap (median value 3.02 µmol m−2 s−1) and R conf (median value 11.60 µmol m−2 s−1). The differences between the photosynthesis values of the remaining species, specifically H sosn, R conf, and T offi, were insignificant (p = 0.000) in their habitats (Figure 4). The strong coefficients (r = 0.6-1.0) of the linear regression determined between A and PAR indicated good adaptation of the species' photosynthesis activity to the light conditions in their habitats. The photosynthesis values did not exhibit a significant correspondence to the measurement date, possibly due to the substantial variation of PAR values caused by cloudy conditions in the habitats (Figure 3). Meteorological conditions, i.e., temperature (T, °C) and precipitation (P, mm), revealed different impacts on species' photosynthesis (A) capacity, reliant on the conditions' accessibility in different habitats. T offi and R conf had the highest CO2 exchange rates and exhibited a strong positive correlation between A and temperature (r = 0.5) due to good light accessibility in their open habitat. Rainy weather negatively affected photosynthesis due to lower PAR; thus, a negative mean correlation (r = −0.1) between A and P was determined for the assessed species in their habitats of moderate water content, where precipitation had a weak impact on the hydrological regime. Among the soil physical conditions (Figure 1), soil temperature exhibited the strongest correlation with the photosynthesis activity of the assessed species. However, soil temperature negatively impacted the photosynthesis of F jap (r = −0.9) and H sosn (r = −0.7) in the wet canopy of the shrub habitats. A strong negative correlation was observed between A and soil moisture (r = −0.5 to −0.7). Soil electrical conductivity, which indicated soluble ion content sufficient for plant nutrient supply, showed a strong impact on the species' photosynthetic capacity (r = 0.6). Transpiration values exhibited a wide range, i.e., from 0.07 mmol m−2 s−1 for F jap in April to 1.70 mmol m−2 s−1 for R conf in September (Figure 5). The highest mean TE rates of 0.58 mmol m−2 s−1 and 0.90 mmol m−2 s−1 were recorded for invasive H sosn and F jap, respectively, in the canopy of the shrub habitats. These species grow in wet habitats, where the unlimited water conditions support the stomatal aperture and, thus, the entropy production of TE. The median transpiration rates ranged between 0.285 mmol m−2 s−1 for F jap and 0.825 mmol m−2 s−1 for H sosn in the natural environment of the invaded habitats (Figure 5).
Although PAR energy is mainly consumed for photosynthesis, the linear correlation between PAR and TE was determined to be strong (r = 0.6). The correlation between TE and photosynthesis (A) activity was weaker (r = 0.4). In general, TE values were altered in a trend analogous to the A values, showing a tendency to increase from spring to autumn. The TE differences between the assessed species were statistically insignificant (p = 0.0005). The correlation between the mean TE and air temperature of the assessed species was weak (r = 0.1-0.3). However, precipitation exhibited a stronger impact on TE (mean r = 0.4) than temperature did. Soil physical parameters also impacted the transpiration activity insignificantly. This trend was confirmed by the weak positive correlation between TE and soil temperature (r = 0.3) and the negative correlations with moisture (r = −0.2) and el. conductivity (r = −0.1), possibly due to their different impacts on root functioning and soil water access to the plant. Similar to the A data, the scattered transpiration rates differed insignificantly due to the cloudy atmosphere and sufficient water access in the researched habitats during the growing season (Figure 5). The highest mean TE values were observed in July and August for F jap, possibly due to the unlimited water content after extremely abundant precipitation. TE rates varied insignificantly between measurements during the growing season. Water Use Effectivity Here, the water use efficiency (WUE) is expressed as the ratio between plant productivity, or photosynthesis gain, and transpiration (Figure 5). WUE is defined as the amount of assimilated carbon per unit of water used by the assessed species. Different exposures to solar radiation (PAR) impacted the species' WUE rates in the different habitats (Figure 5). R conf and T offi in the grass habitats with unlimited light access had the most evident gas exchange rates and lower transpiration rates, and thus they revealed WUE values (28.31 and 29.96 µmol mmol−1) nearly two times higher than those of the remaining species in the constant canopy of the shrub habitat. A strong correlation between WUE and A (r = 0.6) and TE (r = −0.6) confirmed a similar impact, but in opposite directions, of these parameters on water use efficiency, while temperature (r = −0.3) and precipitation (r = 0.1) had a weak impact on WUE due to their different effects on A and TE. Photosynthesis and Transpiration Adaptation to Abiotic Conditions In this study, we found that photosynthesis and transpiration rates of invasive species at the leaf level could be used as novel parameters for the documentation of their adaptation to a new abiotic environment. In agreement with numerous studies, our results also revealed that the eco-physiological parameters varied subject to the environmental conditions in different habitats during invasive plant distribution. Moreover, some researchers concluded that such light differences accounted for the regulation of photosynthesis more than transpiration. Thus, photosynthesis and transpiration are presented as a function of environmental conditions for the season of invasive plant growth. We found that soil physical conditions, characterized by temperature, moisture, and electrical conductivity, might be helpful in improving the photosynthesis and transpiration values of plants. Nonetheless, some simulation models in previous studies revealed that soil temperature impacted water loss through evaporation and photosynthesis more than through transpiration.
The data of this research revealed that soil characteristics impacted the rates of photosynthesis more than the transpiration activities of the assessed plants. Some authors recognize that water deficits cause water stress and have a dominant role in controlling stomatal function and gas exchange between plant and atmosphere, and thus impact photosynthesis and transpiration activity. We found that, among the soil parameters, soil temperature had the strongest impact on photosynthesis rates due to the activation of root formation and functioning. It has already been confirmed that soil temperature might increase the water supply through root activation and thus support stomatal conductance. The soil moisture regime exhibited a stronger correlation with photosynthesis activity in the open grassland habitats than in the canopy of the shrub habitats, possibly due to different water cycling. This finding has been widely documented by previous publications, which generalized that adequate soil moisture maintains efficient light utilization and high photosynthetic rates, and thus probably contributed to the success and geographical distribution of some invasive species. Drought stress might limit the distribution and spread of invasive species that are responsive to water limitation and react by dropping their leaves during drought stress. CO2 gas exchange through stomatal conduction responds to a complex of many abiotic parameters (light intensity, water vapor pressure deficit, CO2 concentration, etc.). Light remains an essential environmental resource for green plant survival, growth, development, and spread. Light limits photosynthesis, which integrates two processes of inverse vectors, i.e., the exchange of CO2 and water between the plant canopies and the atmosphere. Plants simultaneously absorb atmospheric CO2 through the leaves' stomata and lose water that diffuses to the atmosphere. This leads to the supplementary proposition that photosynthetic rates in natural ecosystems are indirectly limited by sunlight; however, they are also related to the CO2 transfer between the atmosphere and the canopy. Our results confirmed previous findings and revealed that photosynthesis strongly depended (r = 0.6-1.0) on light conditions; the highest values were documented for species in the open grassland habitats, where PAR availability depends on cloudy weather. These data are consistent with the previous conclusion that photosynthesis can be effective in capturing photons and using them for the generation of free chemical energy when PAR is available and the plant has the capacity for efficient light utilization. This means that photosynthetic rates are not directly restricted only by sunlight or PAR in natural ecosystems, but also by the transfer of CO2 between the atmosphere and the plant. For invasive species, geographic latitude is an important variable that causes large changes in light and temperature in new habitats. Differently from PAR, the ambient temperature exhibited a minor impact (r = 0.3) on the invasive species' photosynthesis in the canopy of the shrub habitats compared to the open habitats. The species' responses to temperature differed in the resulting changes of photosynthesis effectivity, which reflected the species' ability to adapt to the invaded habitat environment.
Plants that inhabit cold regions often need a low optimal temperature to achieve maximal photosynthesis activity, which is limited by Rubisco (ribulose-1,5-bisphosphate carboxylase/oxygenase) or RuBP (ribulose 1,5-bisphosphate) regeneration activities. Precipitation in temperate climates negatively impacted A rates (r = −0.6) in all habitats due to decreased PAR access under cloudy conditions. Therefore, photosynthesis variation never remains systematic and predictable in the natural environment. We found that the photosynthesis capacity of the assessed invasive species in this study was similar to that of the cosmopolitan species T offi. Since photosynthesis is an essential physiological process for plant acclimation, making plants resistant to changing environmental conditions, the recorded photosynthesis data revealed that the assessed species physiologically adapted to the light and temperature conditions in the investigated habitats of a new environment, namely Lithuania, which has a temperate climate. The next principal question is how invasive plants will respond to the new abiotic environment of the invaded climate zone at different latitudes, with different levels of light, temperature, and precipitation, which affect not only their photosynthesis but also transpiration and WUE. Since photosynthesis is closely linked to transpiration through gas exchange, we found that the transpiration values followed a trend analogous to that of A. In line with the previous conclusion that high leaf transpiration has always been found in habitats distinguished by high soil water content, we found that the transpiration rate was minimal in the open grass habitat and increased in the canopy of the shrub habitat due to the higher moisture content. Some authors explained that abscisic acid modifies stomatal behavior and thus changes the transpiration rates. The recorded TE rates, subject to the habitat water environments, are an indication of the invasive species' adaptation to their invaded habitats. Water Use Efficiency Water use efficiency (WUE) is among the basic characteristics of ecosystem functioning that reflects the balanced connection between carbon gain and water loss. We found that invasive R conf and cosmopolite T offi exhibited higher rates of WUE in the open grass habitats, due to more intensive photosynthesis activity, than the invasive species in the shrub habitats. We found a strong correlation between WUE and both photosynthesis (r = 0.6) and transpiration rates (r = −0.6). The ambient temperature (r = −0.3) and precipitation (r = 0.1) are also important environmental factors affecting the WUE rates under different habitat conditions. This corresponds to previous findings showing that the WUE response is directly related to the physiological processes controlling the gradients of carbon dioxide and water vapor between the foliage and the surrounding atmosphere. Additionally, some authors concluded that WUE exceptionally depended on the carboxylation intensity caused by different stomatal conductance and responses to environmental conditions. Moreover, recent studies on the environmental impact on WUE have analyzed historical observations and discovered that WUE increased when temperatures rose by up to 1.5 °C above the usual temperature and then began to decrease. Thus, the water loss gradient can indicate the potential response of plants to the environment and climate change. Chen et al.
revealed that the higher rates of net photosynthesis and WUE of alien species, compared with those of native plants, contributed to the successful invasion of the alien species. Although the assessed invasive species represented invaders from southern latitudes, they found a favorable moisture level and a sufficient light environment, ensuring high photosynthesis and transpiration rates, which indicated the species' physiological adaptiveness in a new habitat, namely Lithuania, which has a temperate climate. Conclusions In this study, we evaluated invasive species of different geographical origins for their physiological tolerance to a colder climate. The species exhibited sufficient photosynthetic rates, which were maintained by effective water absorption and transport to the leaves under conditions of unlimited water supply, thus supporting their spread in the temperate climate. Photosynthesis activity in relation to water loss during transpiration, and water use effectivity at the leaf level, depended differently on many abiotic environmental factors. The photosynthesis capacity of the assessed invasive species was similar to that of the cosmopolitan T. officinale. The assessed species physiologically adapted to the light, precipitation, and temperature conditions in the investigated habitats in Lithuania, which has a temperate climate. Soil temperature revealed a stronger impact than moisture and electrical conductance on photosynthesis rates due to its role in root formation and functioning. The soil moisture regime exhibited a stronger correlation with photosynthesis activity in the open grassland habitats than in the canopy of the shrub habitats, possibly due to the different water cycling there. The photosynthesis and transpiration capabilities of the invasive species allow them to access sufficient levels of light energy and water, keeping the plants acclimated to the new temperate climate. Although the data from photosynthesis and transpiration profiling may be acceptable for finding relevant measures to explain the further spread of alien species consistent with abiotic environment parameters, understanding and interpreting the constraints on the adaptation of invasive species might be subject to more detailed complex analyses in future. In addition, the large-scale geographic variations in terrestrial photosynthesis are fairly well explained; however, the invasiveness problem can still be more fully interpreted through physiological and eco-physical exchange processes. Acknowledgments: The authors would like to thank the colleagues of the Institute of Ecology and Environment and the Laboratory of Climate Change Impacts on Forest Ecosystems, Agricultural Academy, Vytautas Magnus University for lab space, lab equipment, technical support, and assistance. Conflicts of Interest: The authors declare no conflict of interest.
T Cells Complementing Tolerance Kemper et al. investigated the factors governing lymphocyte differentiation into T-regulatory 1 (Tr1) cells and uncovered a role for CD46, a protein involved in complement regulation that also acts as a receptor for several human pathogens, in T cell-mediated immunological tolerance. To respond appropriately to innocuous and harmful stimuli, cells in the immune system must generate defensive responses to antigens associated with pathogens but not to self antigens. One mechanism whereby lymphocytes avoid potentially destructive autoimmune reactions involves the active suppression of T helper cells by Tr1 cells, a class of CD4+ lymphocytes that produce interleukin 10 (IL-10). Kemper et al. stimulated mixed populations of cultured CD4+ human peripheral blood lymphocytes with antibodies to cell surface proteins and discovered that the combination of CD3 and CD46 promoted IL-10 production. When CD4+ lymphocytes were sorted by cell-surface phenotype into naïve cells (CD45RA+CD45RO-), memory cells (CD45RA-CD45RO+), and CD45RA+CD45RO+ cells, antibodies to CD3 and CD46 given in combination with interleukin-2 (IL-2) elicited IL-10 production in naïve and CD45RA+CD45RO+ cells. Cells initially stimulated with antibodies to CD3 and CD46 acquired a memory-cell phenotype and subsequently produced IL-10 after stimulation with CD3 alone. Although CD3 and CD46 antibodies stimulated T cell proliferation, medium from these cells inhibited the proliferation of fresh CD4+ T cells, consistent with a Tr1 phenotype. This inhibition was blocked with neutralizing antibodies to IL-10. These data implicate CD46 in Tr1 cell development and suggest that the complement system may play a role in T cell-mediated tolerance. C. Kemper, A. C. Chan, J. M. Green, K. A. Brett, K. M. Murphy, J. P. Atkinson, Activation of human CD4+ cells with CD3 and CD46 induces a T-regulatory cell 1 phenotype. Nature 421, 388-392.
Simultaneous T1 and T2 Brain Relaxometry in Asymptomatic Volunteers Using Magnetic Resonance Fingerprinting Magnetic resonance fingerprinting (MRF) is an imaging tool that produces multiple magnetic resonance imaging parametric maps from a single scan. Herein we describe the normal range and progression of MRF-derived relaxometry values with age in healthy individuals. In total, 56 normal volunteers (24 men and 32 women) aged 11-71 years were scanned. Regions of interest were drawn on T1 and T2 maps in 38 areas, including lobar and deep white matter (WM), deep gray nuclei, thalami, and posterior fossa structures. Relaxometry differences were assessed using a forward stepwise selection of a baseline model that included either sex, age, or both, where variables were included if they contributed significantly (P < .05). In addition, differences in regional anatomy, including comparisons between hemispheres and between anatomical subcomponents, were assessed by paired t tests. MRF-derived T1 and T2 in frontal WM regions increased with age, whereas occipital and temporal regions remained relatively stable. Deep gray nuclei, such as the substantia nigra, were found to have age-related decreases in relaxometry. Sex differences were observed in T1 and T2 of the temporal regions, the cerebellum, and the pons. Men were found to have more rapid age-related changes in frontal and parietal WM. Regional differences were identified between hemispheres, between the genu and splenium of the corpus callosum, and between the posteromedial and anterolateral thalami. In conclusion, MRF quantification measures relaxometry trends in healthy individuals that are in agreement with the current understanding of neurobiology and has the ability to uncover additional patterns that have not yet been explored. Magnetic resonance fingerprinting (MRF) is a recently introduced method that simultaneously and rapidly measures multiple tissue properties, with initial application in measuring T1 and T2. This technique is based on the premise that acquisition parameters can be varied in a pseudorandom manner such that each combination of tissue properties will have a unique signal evolution. Using the Bloch equations, a dictionary of all possible signal evolutions can be created that includes all known acquisition parameters and all possible ranges of values and combinations of the properties of interest. The actual signal evolution in each voxel can then be compared to the dictionary entries, and the best dictionary match yields the property values for that voxel. With MRF there is now the possibility of observing small changes in multiple tissue relaxation properties simultaneously. However, to date, no study to our knowledge has been performed to describe the normal range and progression of MRF-derived relaxometry values in healthy individuals. In this study, we present simultaneous quantification of regional brain T1 and T2 relaxation times in healthy volunteers using MRF and assess differences in tissue properties resulting from age, sex, and laterality of hemispheres. We further compare different best-fit options for regression analysis of age and brain relaxometry and assess how age-sex interactions affect these findings in the context of the known literature on relaxometry measurements with aging. METHODOLOGY Participant Recruitment Informed written consent was obtained from all participants according to the protocol approved by the local institutional review board.
Multislice MRF data were acquired in 56 healthy volunteers aged 11-71 years. There were 24 men (aged 11-71 years) and 32 women (aged 18-63 years), with an overall median age of 39 years (Figure 1). Of these participants, 53 were right-handed. One participant had a remote history of craniotomy for excision of a meningioma; another had a remote history of surgical correction of a Chiari 1 malformation. No other participant had a history of structural neurological disease or known psychiatric disease. None of the participants showed any overt parenchymal abnormalities in the analyzed regions on clinical T2-weighted images.

MRF Acquisition MRF scans were obtained on 3.0-T Verio and Skyra scanners (Siemens Healthcare, Erlangen, Germany) using standard 20-channel head coils. The acquisition technique has been previously described in detail. The parameters in MRF are continuously changed throughout the acquisition to create the desired spatial and temporal incoherence. The flip angle, phase, repetition time (TR), echo time (TE), and sampling patterns are all varied in a pseudorandom fashion. The parameters used for MRF acquisition were as follows: field of view, 300 × 300 mm²; matrix size, 256 × 256; slice thickness, 5 mm; flip angle, 0-60°; TR, 8.7-11.6 ms; and radiofrequency pulse, a sinc pulse with a duration of 800 μs and a time-bandwidth product of 2. In a total acquisition time of 30.8 s, 3000 images were acquired for each slice. The TE was half the TR and varied with each TR. The MRF acquisition was planned on whole-brain clinical standard T2-weighted images acquired as follows: TR, 5650 ms; TE, 94 ms; FOV, 230 mm; slice thickness, 4 mm; and flip angle, 150°. Approximately 4-5 2D MRF slices were acquired through the whole brain for each individual, depending on head position. The entire study for each volunteer, including positioning time, lasted approximately 10 min.

Data Processing Using simulation, a dictionary of signal evolutions that could arise from all possible combinations of material or system-related properties was generated. A total of 287,709 signal time courses, each with 3000 time points and a different set of T1, T2, and off-resonance parameters, were simulated for the dictionary. The ranges of T1 and T2 were chosen according to the typical physiological ranges of brain tissues: T1 values between 100 and 3000 ms and T2 values between 10 and 500 ms were included in the dictionary, and off-resonance values covered the range between -400 and 400 Hz. The total simulation time was 5.3 min. The vector dot product between the measured signal and each dictionary entry was calculated, and the entry yielding the highest dot product was selected as the closest match to the acquired signal. The final output consisted of quantitative T1, T2, off-resonance, and proton-density maps (Figure 2). MRF-based proton-density values are affected by the type of acquisition as well as the sensitivity of the receiver coil and thus are not purely tissue specific; therefore, only the T1 and T2 maps were used for further anatomical analysis.
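The dictionary-matching step just described can be sketched in a few lines. The following Python/NumPy fragment illustrates only the matching principle (a normalized inner product against a precomputed dictionary, taking the entry with the largest magnitude); the array names, dictionary size, and toy data are assumptions for the example and are not taken from the study's actual reconstruction code.

```python
import numpy as np

def match_fingerprints(signals, dictionary, params):
    """Match measured signal evolutions to dictionary entries.

    signals    : (n_voxels, n_timepoints) complex array of measured evolutions
    dictionary : (n_entries, n_timepoints) complex array of simulated evolutions
    params     : (n_entries, 3) array of (T1, T2, off-resonance) per entry
    Returns the best-matching (T1, T2, off-resonance) triple for each voxel.
    """
    # Normalize so the inner product acts as a correlation measure.
    d_norm = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    s_norm = signals / np.linalg.norm(signals, axis=1, keepdims=True)

    # Magnitude of the complex inner product of every voxel with every entry.
    correlation = np.abs(s_norm @ d_norm.conj().T)   # (n_voxels, n_entries)

    best = np.argmax(correlation, axis=1)            # index of best match per voxel
    return params[best]                              # matched properties per voxel

# Toy usage with random data standing in for simulated/measured evolutions.
rng = np.random.default_rng(0)
dictionary = rng.standard_normal((1000, 3000)) + 1j * rng.standard_normal((1000, 3000))
params = np.column_stack([rng.uniform(100, 3000, 1000),   # T1 (ms)
                          rng.uniform(10, 500, 1000),     # T2 (ms)
                          rng.uniform(-400, 400, 1000)])  # off-resonance (Hz)
signals = dictionary[rng.integers(0, 1000, 50)]           # pretend 50 measured voxels
matched = match_fingerprints(signals, dictionary, params)
```

In the study itself this comparison was performed for every voxel against the full 287,709-entry dictionary; implementation details beyond the dot-product matching are not specified in the text and are therefore omitted here.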
Data Analysis All data processing and analysis were performed using MATLAB version R2013b (MathWorks, Natick, MA) and SAS version 9.4 (SAS Institute Inc, Cary, NC). A region-of-interest (ROI)-based analysis was performed on the relaxometry maps as follows. For every subject, a fellowship-trained neuroradiologist manually drew the ROIs from which mean T1 and T2 measures were extracted. A total of 38 ROIs (17 per hemisphere plus 4 midline) were drawn for each subject (Figure 3). The selected regions comprised important WM regions, deep gray nuclei, and posterior fossa structures. Cortical gray matter was not studied, to avoid partial volume effects from cerebrospinal fluid (CSF) and WM. T1 and T2 maps with narrow window settings and magnified views were used to clearly identify each anatomical region and draw the ROIs. The ROI size depended on the region analyzed and ranged from 4 to 10 mm². Care was taken to place each ROI in the center of the sampled region, with careful separation from adjacent structures to avoid partial volume effects. Regions with grossly visible artifacts or distortion were excluded from measurement.

For this study, age and sex effects were first examined using forward stepwise selection to choose a baseline model that included age, sex, or both, where variables were included at each step if they were significant at a P value < .05. For regions where the baseline model included age, we then tested whether adding a quadratic term significantly (P < .05) improved the fit. In addition, for regions with significant linear age effects, effects in men and women were compared using a test of equality between slopes to assess for an age-sex interaction. Based on the slopes and intercepts, the age-sex interplay was categorized as either an age + sex effect or an age × sex effect: the age + sex effect included regions where men and women had similar slopes with respect to age but different intercepts, and the age × sex effect included regions where each sex had significantly different slopes and intercepts with respect to age. Thus, for each brain region, we evaluated changes of MRF-based T1 and T2 with age using linear and quadratic models, as well as differences between sexes and age-sex interactions.

To test for differences between the right and left hemispheres, regional relaxometry data from only the right-handed participants (n = 53) were used. In this subanalysis, a paired t test was performed to compare relaxometry measures for each region across hemispheres. A paired t test was also used to compare different components within a region, specifically the medial versus lateral thalami and the genu versus splenium of the corpus callosum (CC); for this subgroup analysis, pooled data from right- and left-handed subjects were analyzed. For statistical analysis, all comparisons with a P value < .05 before correction for multiple comparisons were considered significant and are discussed, with the intention of describing all identifiable trends that may have physiological implications. However, correction for multiple comparison testing using the Bonferroni method was also applied, and outcomes that remained statistically significant overall are identified.

RESULTS All regions with field inhomogeneity and susceptibility artifacts were excluded from analysis. The largest number of field inhomogeneity and banding artifacts was seen in the genu region of the CC (n = 15). T2 maps were more susceptible to field inhomogeneity artifacts than T1 maps. Although we did our best to include all ROIs in the collected slices, slight variations in slice placement during imaging resulted in the omission of some regions, most commonly the splenium of the CC (n = 8).

Aging Progression When examining T1, positive linear correlations with age were observed in 3 frontal WM regions and the genu of the CC, whereas a negative linear correlation was seen in the left substantia nigra (SN) (Table 1 and Figure 4A).
Quadratic trends were observed in 3 fronto-parietal WM regions and the right SN, with the latter showing an overall decline in T1 with age (Table 2 and Figure 4B). When examining T2, positive linear correlations were seen in left frontal WM and the medial left thalamus, whereas negative linear correlations with age were detected in the bilateral SN (Table 1 and Figure 4A). Quadratic relationships with age were observed in right frontal WM and the left dentate nucleus, with an additional sex effect in right frontal WM described further in the following section (Table 2 and Figure 4B).

Differences Between Sexes Some differences in MRF-derived relaxometry between sexes were observed in the absence of a significant correlation with age. Of the 38 regions examined in the T1 analysis, left temporal WM, the bilateral cerebellar hemispheres, and the pons showed differences between sexes, with higher T1 in men than in women and no significant change with age. In the T2 analysis, a significant difference between sexes was detected in the right lentiform nucleus. Differences between sexes combined with age effects were categorized as either an age + sex effect (men and women had similar slopes with respect to age but different intercepts) or an age × sex effect (men and women had significantly different slopes and intercepts); recall that age effects could be fit with a linear or quadratic model. In the T1 analysis, left superior frontal and right parietal WM showed a linear age + sex effect (Figure 5A). In the T2 analysis, age-sex effects were seen in bilateral superior frontal and parietal WM and the centrum semiovale: a linear age-sex effect was observed in right superior frontal WM, and a quadratic age-sex effect was observed in right frontal WM and the right dentate nucleus (Figure 5B). Of all the sex differences measured, after adjusting for multiple comparison testing, only the T1 variation in right parietal WM (P < .0001, R² = 0.30) and the T2 difference in right superior frontal WM (P < .0001, R² = 0.30) remained statistically significant.

Regional Differences Only right-handed individuals (n = 53) were included in this analysis, and 34 paired regions were studied. Several regions with T1 and T2 differences between the right and left hemispheres were identified (Table 3). In the within-region analysis, the splenium of the CC had a significantly higher T1 but lower T2 than the genu, and the medial components of the bilateral thalami showed higher T1 and T2 values than the lateral components.

Table 2 footnotes: (a) For right frontal white matter, the quadratic model also included a term for sex, which was statistically significant (P = 0.023), indicating a difference in intercepts between men and women; results are displayed as separate regressions for men and women having different intercepts but the same linear and quadratic terms for age. (b) Statistically significant after correcting for multiple comparison testing using the Bonferroni correction technique.

Figure 5. Regions with significant age and sex effects. (A) Regions with significant linear age + sex effects; in these models, the slopes of the linear regression on age for men and women are similar but the intercepts differ significantly. (B) Regions with a significant age × sex effect on T2 relaxometry; in these models, the slopes of the linear regression on age differ significantly between men and women.
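As a concrete illustration of the model comparisons described in the Data Analysis section (baseline linear age model, a quadratic age term, and an age × sex interaction), the following Python sketch fits the three models with statsmodels on simulated values. The variable names, simulated data, and the use of Python rather than the study's SAS code are assumptions for illustration only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 56
age = rng.uniform(11, 71, n)
sex = rng.integers(0, 2, n)                              # 0 = women, 1 = men (placeholder coding)
t1 = 800 + 1.2 * age + 30 * sex + rng.normal(0, 40, n)   # simulated regional T1 values (ms)

# Baseline linear model: T1 ~ age + sex.
X_lin = sm.add_constant(np.column_stack([age, sex]))
fit_lin = sm.OLS(t1, X_lin).fit()

# Does a quadratic age term significantly improve the fit?
X_quad = sm.add_constant(np.column_stack([age, age**2, sex]))
fit_quad = sm.OLS(t1, X_quad).fit()
p_quad = fit_quad.pvalues[2]                             # coefficient on age**2

# Age x sex interaction: do men and women have different slopes with age?
X_int = sm.add_constant(np.column_stack([age, sex, age * sex]))
fit_int = sm.OLS(t1, X_int).fit()
p_interaction = fit_int.pvalues[3]                       # coefficient on age*sex

print(f"linear R2 = {fit_lin.rsquared:.2f}, "
      f"P(quadratic term) = {p_quad:.3f}, P(age x sex) = {p_interaction:.3f}")
```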
DISCUSSION To our knowledge, this is the first in vivo use of MRF at 3.0 T for measuring tissue properties of multiple brain regions in healthy human subjects across different age groups. At a microstructural level, brain aging is characterized by the loss of myelinated fibers, myelin pallor, ballooning, and redundant myelination; at a macroscopic level, there is a loss of grey and WM volume and expansion of CSF spaces. An increase in free water and decrease in water bound to macromolecules (such as myelin) are reflected by a lower magnetization transfer ratio in older age groups. The increase in gliosis, free water content, loss of myelination, and other aging changes also result in longer T 1 and T 2 relaxation times in WM. Although the published literature varies in the types of statistical modeling employed and regional predilection of findings, all studies to our knowledge agree that there is an overall increase in T 1 (1/R 1 ) and/or T 2 (1/R 2 ) in various WM regions/tracts with increasing age. A recent study measured the R 1 of various WM tracts over age and found that it increased from childhood up to the age of approximately 40 years and then decreased to 8-year-old levels between the ages of 70 and 80 years. In this study, comparable trends are seen in the T 1 of bilateral frontal and left parietal WM, with a dip in T 1 values between 30 and 50 years followed by an increase in later decades ( Figure 4B and Table 2). Various volumetry and DTI studies have consistently demonstrated a frontal predilection for age-related changes. DeCarli et al. also showed that the volumes of bilateral temporal lobes stayed stable across the human lifespan. These findings support the results shown herein that demonstrate that age effects on WM relaxometry are significant in frontal and parietal regions, whereas occipital and temporal relaxometry values stay relatively stable. In addition, the fact that the quadratic age model is a significantly better fit for certain frontal and parietal white regions over a linear age model alludes to a dynamic state of tissue turnover in these regions throughout the adult life. WM in the genu of the CC also demonstrated increased T 1 with age in this study. Previous DTI and relaxometry studies that explored the effects of aging on CC microstructure have found that the anterior portions of the CC (including the genu) are more susceptible to age-dependent changes compared with the splenium. More specifically, DTI studies showed greater decreases in fractional anisotropy in the genu that were explained by increases in free water content and demyelination in the CC with age. Such microstructural changes would also cause an increase in T 1 relaxometry (Table 2). With age, deep gray nuclei show drops in T 2 and less frequently T 1 values secondary to increasing mineralization and iron deposition. We identified similar trends in the left dentate nucleus and bilateral SN, the latter being statistically significant. T 2 shortening in the SN can be explained by increasing iron deposition as part of the physiological aging process and has been extensively reported in the literature. On the other hand, the age-dependent decrease in T 1 of the SN has not been as extensively explored. A recent study that assessed the relation between R 1 of the SN and age showed findings similar to our results. Histopathological studies of the SN have shown that there is nearly a 10% decrease in the number of neuromelanin-containing neurons per decade in neurologically intact individuals. 
Because neuromelanin inherently has a T 1 -shortening effect, in theory this loss should manifest as T 1 lengthening with age, but the data indicate a different effect to be dominant. The findings seen here may be an outcome of the combination of iron deposition and extraneuronal melanin deposition that are also seen with normal aging, both of which are expected to shorten T 1 (44,. In this study, T 1 and T 2 in the SN were determined to decrease with age in a linear or quadratic pattern. There is currently no consensus in the neuroimaging literature on whether a linear or quadratic model is the best fit for regression analysis of age and relaxometry. In addition, there is no physiologic reason to assume that the entire brain should conform to 1 model uniformly over the other. Our results suggest that for the more dynamically changing frontal WM regions, the quadratic model may be a better fit than the linear model, especially for T 1 ( Figure 4B). Two major differences in sex relaxometry were seen in this study, the first being different effects of aging on certain WM regions for men and women. In older age groups, men were observed to have higher relaxation time measurements in frontal and parietal WM compared with women. A few studies that looked at age and sex interactions in the past have shown that frontotemporal volume loss with age is more prominent in men, although a few other imaging studies have shown no such interaction (8,. Coffey et al. found that there was a greater age-related increase in sulcal and Sylvian CSF volumes with a lower size of parietal-occipital regions in men compared with women. The effects that sex has on aging as seen in our study are an additional piece of evidence that could reflect the greater predilection of men toward neurodegenerative processes and neurocognitive decline, which become more prominent with age. The second major difference in sex relaxometry that was identified in this study was in the mean relaxometry of temporal regions, the cerebellum, and pons. Similar sex effects seen previously have been attributed to sexual dimorphism that arose from how sex steroids affected microscopic processes such as glial proliferation, myelination, the presence of paramagnetic substances, and macrostructural phenotypes of gray and WM volumes (7,8,32,54,. In right-handed subjects, several areas of hemispheric asymmetry were identified in frontal, parietal, and temporal WM, the internal capsule region, and dentate nuclei. These regional differences hint at underlying microstructural distinctions that stem from asymmetry in the motor cortex and WM connectivity. Previous attempts to evaluate cerebral laterality with techniques such as morphometry, DTI, and functional MRI have shown that several subtle macro-and microstructural differences in cerebral hemispheres can be identified, although there is no single predominant pattern that has emerged. In this study, the genu of the CC showed significantly lower T 1 and higher T 2 values compared with the splenium. Previous DTI studies have shown higher fractional anisotropy in the splenium of the CC compared with the genu region. Thus, these 2 regions of the CC are known to have measurable differences on diffusion MRI. 
Several factors such as axonal fiber density, diameter of fibers, orientation, degree of myelination, and overall microstructural integrity that affect the diffusion metrics could also have an effect on the relaxometry characteristics of the CC and explain our findings, although the exact relation between these factors remains unexplored. We also found interesting regional variation in thalami relaxometry. For this analysis, it was not possible to anatomically segment the thalami into the component nuclei. Rather than analyze each thalamus in its entirety, we divided it into posteromedial and anterolateral components. The posteromedial segment approximately included the regions of pulvinar and medial nuclei, whereas the anterolateral segment included the anterior and lateral regions. For both hemispheres, the T 1 and T 2 of posteromedial thalami were higher by approximately 100 and 5 ms, respectively, compared with the lateral portions. The exact cause of these differences is unclear, although a differential in gray-white matter composition, unique nuclear arrangement, and differences in associated WM pathways may explain some of these findings. Several relaxometry studies have been attempted in normal subjects and in patients with multiple sclerosis. Because thalami are frequently studied in multiple sclerosis, our findings could have implications in designing future relaxometry studies in patients, as it may be necessary to analyze the medial and lateral portions of the thalami separately. This study utilized the original MRF technique with 2D acquisitions and an in-plane resolution of 1.2 mm. The lack of 3D whole-brain data limited the ability of selecting brain regions and necessitated analysis using the time-intensive ROI method. Future iterations of MRF acquisitions should seek to address these limitations with improved in-plane resolution and 3D acquisition capabilities while improving processing speeds and patient comfort. Relaxometry measurements from certain regions such as the genu of the CC are limited by the presence of field inhomogeneity and banding artifacts. These artifacts are more typical for all types of balanced steady-state free precession-based sequences and are commonly seen near air-tissue interface, where large field inhomogeneity is introduced. The incidence of these artifacts could be considerably reduced in future studies by using the fast imaging with steadystate precession-based MRF acquisition technique. Limitations of this study include the lack of details about study participant medical history that may affect brain anatomy and microstructure, including history of caffeine and alcohol intake, smoking, and diseases such as diabetes mellitus, hypertension, endocrinopathies, or current medications, and these factors could potentially alter relaxation parameters. No minimental state examination or psychological testing was administered to the participants as part of this study, although all participants demonstrated understanding of the consent form. Our ROIs included deep gray nuclei and WM regions; cortical gray matter was not analyzed. In conclusion, this pilot study introduces MRF as a rapid multiparametric in vivo quantitation tool in normative brain imaging and demonstrates that it can identify and quantify differences in brain parenchyma related to age, sex, hemisphere, and anatomy. This T 1 and T 2 normative database can be used as a reference for future MRF studies in various disease states. 
Dedicated efforts to improve in-plane resolution, facilitate 3D coverage, and reduce inhomogeneity artifacts are underway to develop an efficient and powerful quantitation tool for applications in neuroimaging and beyond.
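As a brief methodological footnote to the hemispheric and within-region comparisons reported above, which were performed with paired t tests, the sketch below shows the form of that comparison on placeholder left/right T1 values for a single region; the numbers are invented for the example and are not study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_subjects = 53                                          # right-handed participants
left_t1 = rng.normal(850.0, 25.0, n_subjects)            # placeholder regional T1 (ms), left
right_t1 = left_t1 + rng.normal(5.0, 10.0, n_subjects)   # paired right-hemisphere values

t_stat, p_value = stats.ttest_rel(left_t1, right_t1)     # paired t test across hemispheres
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```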
iHelp: An Intelligent Online Helpdesk System Due to the importance of high-quality customer service, many companies use intelligent helpdesk systems (e.g., case-based systems) to improve customer service quality. However, these systems face two challenges: 1) case retrieval measures: most case-based systems use traditional keyword-matching-based ranking schemes for case retrieval and have difficulty capturing the semantic meaning of cases; and 2) result representation: most case-based systems return a list of past cases ranked by their relevance to a new request, and customers have to go through the list and examine the cases one by one to identify their desired cases. To address these challenges, we developed iHelp, an intelligent online helpdesk system, to automatically find problem-solution patterns from past customer-representative interactions. When a new customer request arrives, iHelp searches and ranks the past cases based on their semantic relevance to the request, groups the relevant cases into clusters using a mixture language model and symmetric matrix factorization, and summarizes each case cluster to generate recommended solutions. Case and user studies have been conducted to show the full functionality and the effectiveness of iHelp.
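The abstract mentions clustering relevant cases via symmetric matrix factorization. The sketch below illustrates only the generic idea (factorizing a nonnegative case-similarity matrix A as H·Hᵀ with a standard damped multiplicative update and reading cluster labels from H); the update rule, parameters, and toy similarity matrix are assumptions and this is not the iHelp implementation.

```python
import numpy as np

def symmetric_nmf(A, k, iters=200, beta=0.5, eps=1e-9, seed=0):
    """Approximate a nonnegative symmetric similarity matrix A by H @ H.T with H >= 0.

    Cluster membership can then be read off as the largest entry in each row of H.
    Generic multiplicative-update sketch, not the iHelp system's implementation.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    H = rng.random((n, k))
    for _ in range(iters):
        numer = A @ H
        denom = H @ (H.T @ H) + eps
        H *= (1.0 - beta) + beta * numer / denom   # damped multiplicative update
    return H

# Toy similarity matrix with two obvious blocks (e.g., cosine similarities of cases).
A = np.array([[1.0, 0.9, 0.1, 0.0],
              [0.9, 1.0, 0.2, 0.1],
              [0.1, 0.2, 1.0, 0.8],
              [0.0, 0.1, 0.8, 1.0]])
H = symmetric_nmf(A, k=2)
clusters = H.argmax(axis=1)     # cluster label per case
print(clusters)
```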
Structural factors associated with hallux limitus/rigidus: a systematic review of case control studies. STUDY DESIGN Systematic review of case control studies. OBJECTIVES To identify and analyze demographic and structural factors associated with hallux limitus/rigidus. METHODS A literature search was conducted across several electronic databases (Medline, EMBASE, CINAHL, and PubMed) using the following terms: hallux limitus, hallux rigidus, metatarsophalangeal joint, and big toe. Methodological quality of included studies was evaluated using the Quality Index. To evaluate the magnitude of differences between cases and controls, odds ratios were calculated for dichotomous variables and effect sizes (Cohen d) were calculated for continuous variables. RESULTS The methodological quality of the 7 included studies was moderate, with Quality Index scores ranging from 6 to 11 out of a possible score of 14. The overall mean age for the case group was 44.8 years (mean range, 23.4-54.9 years) and for the control group was 39.6 years (mean range, 23.4-58.8 years). There was a similar distribution of males and females across case and control groups. All studies used plain film radiography to assess foot structure. Cases were found to have a dorsiflexed first metatarsal relative to the second metatarsal, a plantar flexed forefoot on the rearfoot, reduced first metatarsophalangeal joint range of motion, a longer proximal phalanx, distal phalanx, medial sesamoid, and lateral sesamoid, and a wider first metatarsal and proximal phalanx. Measures of foot posture and arch height were not found to substantially differ between cases and controls. CONCLUSIONS This review of case control studies indicates that several variables pertaining to the structure of the first metatarsophalangeal joint may be associated with hallux limitus/rigidus. These findings have implications for the conservative and surgical treatment of the condition.
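The review quantifies group differences with odds ratios for dichotomous variables and Cohen's d (pooled standard deviation) for continuous variables. A minimal Python sketch of both calculations follows; the 2×2 counts and measurement values are hypothetical placeholders, not data from the included studies.

```python
import numpy as np

def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds ratio from a 2x2 table of a dichotomous structural factor."""
    return (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)

def cohens_d(group1, group2):
    """Cohen's d for a continuous variable using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    s_pooled = np.sqrt(((n1 - 1) * np.var(group1, ddof=1) +
                        (n2 - 1) * np.var(group2, ddof=1)) / (n1 + n2 - 2))
    return (np.mean(group1) - np.mean(group2)) / s_pooled

# Hypothetical numbers purely to show the calculations.
print(odds_ratio(30, 20, 15, 35))                     # dichotomous factor (2x2 counts)
cases = np.array([62.0, 65.5, 70.2, 68.1, 64.3])      # e.g., a radiographic length measure (mm)
controls = np.array([58.0, 60.1, 59.5, 63.2, 61.0])
print(cohens_d(cases, controls))
```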
Learning Discriminative Projection With Visual Semantic Alignment for Generalized Zero Shot Learning Zero Shot Learning (ZSL) aims to solve the classification problem with no training sample, and it is realized by transferring knowledge from source classes to target classes through the semantic embeddings bridging. Generalized ZSL (GZSL) enlarges the search scope of ZSL from only the seen classes to all classes. A large number of methods are proposed for these two settings, and achieve competing performance. However, most of them still suffer from the domain shift problem due to the existence of the domain gap between the seen classes and unseen classes. In this article, we propose a novel method to learn discriminative features with visual-semantic alignment for GZSL. We define a latent space, where the visual features and semantic attributes are aligned, and assume that each prototype is the linear combination of others, where the coefficients are constrained to be the same in all three spaces. To make the latent space more discriminative, a linear discriminative analysis strategy is employed to learn the projection matrix from visual space to latent space. Five popular datasets are exploited to evaluate the proposed method, and the results demonstrate the superiority of our approach compared with the state-of-the-art methods. Beside, extensive ablation studies also show the effectiveness of each module in our method. I. INTRODUCTION With the development of deep learning technique, the task of image classification has been transfered to large scale datasets, such as ImageNet, and achieved the level of human-beings. Does it mean that we are already to solve large-scale classification problems? Two questions should be answered: 1) Can we collect enough samples of all the classes appeared all over the world for training? 2) Can the trained model with limited classes be transfered to other classes without retraining? The first question cannot be given an affirmative answer because there are 8.7 million classes only in animal species and over 1000 new classes are emerging everyday. Therefore, many researchers moved their focus to the second question by employing transfer learning, and Zero-shot Learning (ZSL). ZSL tries to recognize the classes that have no labeled data available during training, and is usually implemented by The associate editor coordinating the review of this manuscript and approving it for publication was Gang Li. employing auxiliary semantic information, such as semantic attributes or word embeddings, which is similar to the process of human recognition of new categories. For example, a child who has not seen a ''zebra'' before but knows that a ''zebra'' looks like a ''horse'' and has ''white and black stripes'', will be able to recognize a ''zebra'' very easily when he/she actually sees a zebra. Since the concept of ZSL was first proposed, many ZSL methods have been proposed and most of them try to solve the inherent domain shift problem -, which is caused by the domain gap between the seen classes and unseen classes. Although these methods can alleviate the domain shift problem and achieve certain effect, their performance are limited due to their negligence of unseen classes. To fully solve the domain shift problem, Fu et al. assumed that the labeled seen samples and the unlabeled unseen samples can be both utilized during training, which is often called transductive learning. 
This type of method can significantly alleviate the domain shift problem and achieve the state-of-the-art performance -, but the unlabeled VOLUME 8, 2020 This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/ unseen data usually is inaccessible during training in realistic scenarios. In addition, conventional inductive learning often assumes that the upcoming test data belongs to the unseen classes, which is also unreasonable in reality because we cannot have the knowledge of the ascription of the future data in advance. Therefore, Chao et al. suggested to enlarge the search scope for test data form only the unseen classes to all classes, including both seen and unseen categories, which is illustrated in Fig. 1. To better solve the domain shift problem on the more realistic GZSL setting, many synthetic based methods have been proposed -. They often train a deep generative network to synthesize unseen data from its corresponding attribute by applying the frameworks of Generative Adversarial Network (GAN) or Variational Auto-Encoder (VAE), and then the synthesized data and the labeled seen data are combined to train a supervised close-set classification model. The synthetic based methods can also achieve state-of-the-art performance, but there is a serious problem that when a totally new object emerges the trained model will inevitably fail unless new synthetic samples are generated and retrained with previous samples. To solve the above mentioned problems, in this article, we proposed a novel method to learn discriminative projections with visual semantic alignment in a latent space for GZSL, and the proposed framework is illustrated in Fig. 2. In this framework, to solve the domain shift problem, we define a latent space to align the visual and semantic prototypes, which is realized by assuming that each prototype is a linear combination of others, including both seen and unseen ones. With this constraint, the seen and unseen categories are combined together and thus can reduce the domain gap between them. Besides, to make the latent space more discriminative, a Linear Discriminative Analysis (LDA) strategy is employed to learn the projection matrix from visual space to latent space, which can significantly reduce the within class variance and enlarge the between class variance. At last, we conduct experiments on five popular datasets to evaluate the proposed method. The contributions of our method is summarized as follows, 1) We proposed a novel method to solve the domain shift problem by learning discriminative projections with visual semantic alignment in latent space; 2) A linear discriminative analysis strategy is employed to learn the projection from visual space to latent space, which can make the projected features in the latent space more discriminative; 3) We assume that each prototype in all three spaces, including visual, latent and semantic, is a linear sparse combination of other prototypes, and the sparse coefficients for all three spaces are the same. This strategy can establish a link between seen classes and unseen classes, reduce the domain gap between them and eventually solve the domain shift problem; 4) Extensive experiments are conducted on five popular datasets, and the result shows the superiority of our method. Besides, detailed ablation studies also show that the proposed method is reasonable. 
The main content of this article is organized as follows: In section II we briefly introduce some related existing methods for GZSL. Section III describes the proposed method in detail, and Section IV gives the experimental results and makes comparison with some existing state-of-the-art methods on several metrics. Finally in section V, we conclude this article. II. RELATED WORKS In this section, we will briefly review some related ZSL and GZSL works for the domain shift problem. A. COMPATIBLE METHODS Starting from the proposed ZSL concept, many ZSL methods have been emerging in recent several years. Due to the existence of the gap between the seen and unseen classes, an inherent problem, called domain shift problem, limits the performance of ZSL. These methods often project a visual sample into semantic space, where Nearest Neighbor Search (NNS) is conducted to find the nearest semantic prototype and its label is assigned to the test sample. Kodirov et al. tried to use an autoencoder structure to preserve the semantic meaning from visual features, and thus to solve the domain shift problem. Zhang et al. exploited a triple verification, including an orthogonal constraint and two reconstruction constraints, to solve the problem and achieved a significant improvement. Akata et al. proposed to view attribute-based image classification as a label-embedding problem that each class is embedded in the space of attribute vectors, they employed pair-wise training strategy that the projected positive pair in the attribute space should have shorter distance than that of negative pair. However, the performance of these method are limited due to their negligence of unseen classes during training. In addition, conventional ZSL assumes that the upcoming test sample belongs to the target classes, which is often unreasonable in realistic scenarios. Therefore, Chao et al. extended the search scope from only unseen classes to all classes, including both seen and unseen categories. Furthermore, Xian et al. re-segmented the five popular benchmark datasets to avoid the unseen classes from overlapping with the categories in ImageNet. Beside, they also proposed a new harmonic metric to evaluate the performance of GZSL, and release the performance of some state-of-the-art method on the new metric and datasets. From then on, many methods have been proposed on this more realistic setting. For example, Zhang et al. proposed a probabilistic approach to solve the problem within the NNS strategy. Liu et al. designed a Deep Calibration Network (DCN) to enable simultaneous calibration of deep networks on the confidence of source classes and uncertainty of target classes. Pseudo distribution of seen samples on unseen classes is also employed to solve the domain shift problem on GZSL. Besides, there are many other methods developed for this more realistic setting,. B. SYNTHETIC BASED METHODS To solve the domain shift problem, synthetic based methods have attracted wide interest among researchers since they can obtain very significant improvement compared with traditional compatible methods. Long et al. firstly tried to utilize the unseen attribute to synthesize its corresponding visual features, and then train a fully supervised model by combining both the seen data and the synthesized unseen features. Since then, more and more synthetic based methods have being proposed,,,, and most of them are based on GAN or VAE because adversarial learning and VAE can facilitate the networks to generate more realistic samples,. 
CVAE-ZSL exploits a conditional VAE (cVAE) to realize the generation of unseen samples. Xian et al. proposed a f-CLSWGAN method to generate sufficiently discriminative CNN features by training a Wasserstein GAN with a classification loss. Huang et al. tried to learn a visual generative network for unseen classes by training three component to evaluate the closeness of an image feature and a class embedding, under the combination of cyclic consistency loss and dual adversarial loss. Dual Adversarial Semantics-Consistent Network (DASCN) learns two GANs, namely primal GAN and dual GAN, in a unified framework, where the primal GAN learns to synthesize semantics-preserving and inter-class discriminative visual features and the dual GAN enforces the synthesized visual features to represent prior semantic knowledge via semantics-consistent adversarial learning. Although these synthetic based methods can achieve excellent performance, they all suffer from a common serious problem that when an object of a new category emerges, the model should be retrained with the new synthesized samples of the new category. Different from these GAN or VAE based synthetic methods, our approach is a compatible one, which does not have the previous mentioned problem, and it can still accept new category without retraining even though there will be a little performance degradation. C. TRANSDUCTIVE METHODS Fu et al. tried to include the unlabeled unseen data in training, which is often called transductive learning, to solve the domain shift problem and achieved a surprising improvement. Unsupervised Domain Adaptation (UDA) formulates a regularized sparse coding framework, which utilizes the unseen class labels' projections in the semantic space, to regularize the learned unseen classes projection thus effectively overcoming the projection domain shift problem. QFSL maps the labeled source images to several fixed points specified by the source categories in the semantic embedding space, and the unlabeled target images are forced to be mapped to other points specified by the target categories. Zhang et al.proposed a explainable Deep Transductive Network (DTN) by training on both labeled seen data and unlabeled unseen data, the proposed network exploits a KL Divergence constraint to iteratively refine the probability of classifying unlabeled instances by learning from their high VOLUME 8, 2020 confidence assignments with the assistance of an auxiliary target distribution. Although these transductive methods can achieve significant performance and outperform most of conventional inductive ZSL methods, the target unseen samples are usually inaccessible in realistic scenarios. III. METHODOLOGY A. PROBLEM DEFINITION Let Y = {y 1,, y s } and Z = {z 1,, z u } denote a set of s seen and u unseen class labels, and they are disjoint Y ∩Z = ∅. Similarly, let A Y = {a y1,, a ys } ∈ R ls and A Z = {a z1,, a zu } ∈ R lu denote the corresponding s seen and u unseen attributes respectively. Given the training data in 3-tuple of N seen samples: (x 1, a 1, y 1 from N seen images. When testing, the preliminary knowledge is u pairs of attributes and labels:( a 1, z 1 ),, ( a u, z u ) ⊆ A Z Z. Zero-shot Learning aims to learn a classification function f : X u → Z to predict the label of the input image from unseen classes, where x i ∈ X u is totally unavailable during training. B. 
OBJECTIVE In this subsection, we try to propose an novel idea to learn discriminative projection with visual semantic alignment for generalized zero shot learning, the whole architecture is illustrated in Fig. 2. 1) SAMPLING FROM PROTOTYPES Suppose we have already know the prototypes of seen classes, the seen features should be sampled from these prototypes, so we can have the following constraint, where, P s is the prototypes of seen categories, Y s is the one-hot labels of seen samples, and 2 F denotes for the Frobenius norm. 2) PROTOTYPE SYNTHESIS Here we think each class prototype can be described as the linear combination of other ones with corresponding reconstruction coefficients. The reconstruction coefficients are sparse because the class is only related with certain classes. Moreover, to make the combination more flexible, we define another latent space, and construct a sparse graph in all three space as, where, H is the coefficient matrix; P = , P s and P u are visual prototypes of seen classes and unseen classes respectively; C = , C s and C u are the prototypes of seen classes and unseen classes respectively in latent, and are the balancing parameters. We apply diag(H) = 0 to avoid the trivial solution. 3) VISUAL-SEMANTIC ALIGNMENT In the latent space, the prototypes are the projections from both visual space and semantic space, so the alignment can be represented as, where, W 1 and W 2 are the projection matrices from visual space and semantic space respectively. 4) LINEAR DISCRIMINATIVE PROJECTION In visual space, the features might not be discriminative, which is illustrated in Fig. 2, so the direct strategy is to cluster them within class and scatter them between classes. Linear Discriminative Analysis is the proper choice and we can maximize the following function to achieve such purpose, where, S B and S W are the between-class scatter matrix and within-class scatter matrix respectively. C. SOLUTION Since we have already defined the loss function for each constraint, we can combine them and obtain the final objective as follows, where, and are the balancing coefficients. 1) INITIALIZATION Since Eq. 5 is not joint convex over all variables, there is no close-form solution simultaneously. Thus, we propose an iterative optimization strategy to update a single unresolved variable each time. Because proper initialization parameters can not only improve the model performance but also increase the convergence speed, we further split the solution into two sub problems, i.e., initializing the parameters with reduced constraints, and iterative optimizing them with the full constraints. Initializing H: Since A is known in advance, we initialize H first with the last term of Eq. 2. We exploit the following formulation as the loss function for H, To solve the constraint diag(H) = 0, we calculate H once per column, where H i is the i th column of H and the i th entry of H i is also removed, A \i is the matrix of A excluding the ith column. Initializing P s : We use Eq. 1 to initialize P s, and the closed-form solution can be obtained as follows, Initializing P u : Since there is no training data for unseen classes, we cannot use the similar initialization strategy as P s to initialize P u. However, we have already get H with Eq. 7 in advance, it is easy to utilize P s and H to calculate P u. The simplified loss function can be formulated as follows, By computing the derivative of Eq. 
10 with respected to P u and setting it to zero, we can obtain the following solution, Since C is unknown till now, we cannot calculate W 1 withe Eq. 3. The only way for W 1 is to optimize Eq. 4, from which we can deduce the following formulation, If we define =, then W 1 can be solved by obtaining the eigenvector of S −1 W S B. Initializing C: Since W 1 and P are already known, it is easy to initialize C with the first item of Eq. 3, and the solution is, Initializing W 2 : By employing the second item of Eq. 3, W 2 can be solved with following formulation, 2) OPTIMIZATION Since the initialized value of each variable has already been obtained, the optimization of them can be executed iteratively by fixing others. Updating H: Similar as that for initializing H, we can obtain H once per column with the following loss function, By taking the derivative of L H i with respect to H i, and setting the result to 0, we can obtain the solution of H i as follows, Updating P s : By fixing other variables except P s, we can obtain the following loss function from Eq. 5, which can be expanded as, Eq. 17 can be simplified to AP s + P s B = C, which is a well-known Sylvester equation and can be solved efficiently by the Bartels-Stewart algorithm. Therefore, Eq. 17 can be implemented with a single line of code P s = sylvester( A, B, C) in MATLAB. Updating P u : Similar as that for P s, we fix other variables except P u, and obtain, By taking the derivative of L P u with respect to P u, and set the result to 0, we can obtain the following equation, Similarly, if we set can be simplified to AP u + P u B = C, which can also be solved efficiently with P u = sylvester( A, B, C) in MATLAB. Updating C: If we only let C variable and make others fixed, Eq. 5 can be reduced as, By taking the derivative of L C with respect to C, and set the result to 0, we can obtain the solution of C as follows, Updating W 2 : As for W 2, Eq. 5 can be simplified as, By taking the derivative of L W 2 with respect to W 2, and set the result to 0, we can obtain the solution of W 2 as follows, Updating W 1 : Similar as W 2 for W 1, Eq. 5 can be reduced as, Due to the direct derivative of Eq. 22 will cause the negative order of W 1, we rewrite it as follows, where, is a coefficient and set as the maximum eigenvalue of S −1 W S B here. By taking the derivative of L W 1 with respect to W 1, and set the result to 0, we can obtain the solution of W 1 as follows, After these steps, the test sample can be classified by projecting it into the latent space and finding the nearest neighbor of it from C. The algorithm of the proposed method is described in Alg. 1. IV. EXPERIMENTS In this section, we first briefly review some datasets applied in our experiments, then some settings for the experiments are given, and at last we show the experiment results and ablation study to demonstrate the performance of the proposed method. A. DATASETS In this experiment, we utilize five popular datasets to evaluate our method, i.e., SUN (SUN attribute), CUB (Caltech-UCSD-Birds 200-2011), AWA1 (Animals with Attributes), AWA2 and aPY (attribute Pascal and Yahoo). Among them, SUN and CUB are fine-grained datasets while AWA1/2 and aPY are coarse-grained ones. The detailed information of the datasets is summarized in Tab. 1, where ''SS'' denotes the number of Seen Samples for training, ''TS'' and ''TR'' refer to the numbers of unseen class samples and seen class samples respectively for testing. 
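The iterative updates in Section III-C reduce to two standard linear-algebra operations: a Sylvester-equation solve of the form AX + XB = Q for the prototype updates, and a generalized eigendecomposition of S_W^{-1} S_B for the discriminative projection W1. The Python sketch below illustrates both with SciPy on toy matrices; the matrix sizes, random data, and the small regularizer added for numerical stability are assumptions, and this is not the authors' MATLAB implementation. Algorithm 1, summarized next, applies these operations at each iteration.

```python
import numpy as np
from scipy.linalg import solve_sylvester, eigh

rng = np.random.default_rng(0)
d, c = 64, 10                 # toy sizes: visual feature dimension, number of classes
k = c - 1                     # at most c-1 discriminative directions from LDA

# --- Sylvester step: each prototype update has the form A X + X B = Q. ---
A = rng.standard_normal((d, d)); A = A @ A.T + d * np.eye(d)   # well-conditioned toy matrices
B = rng.standard_normal((c, c)); B = B @ B.T + c * np.eye(c)
Q = rng.standard_normal((d, c))
X = solve_sylvester(A, B, Q)                  # SciPy analogue of MATLAB's sylvester(A, B, C)
print(np.allclose(A @ X + X @ B, Q))          # True: the update equation is satisfied

# --- LDA step: projection directions are eigenvectors of S_W^{-1} S_B. ---
features = rng.standard_normal((500, d))
labels = rng.integers(0, c, 500)
mean_all = features.mean(axis=0)
S_W = np.zeros((d, d)); S_B = np.zeros((d, d))
for y in range(c):
    Xy = features[labels == y]
    mu = Xy.mean(axis=0)
    S_W += (Xy - mu).T @ (Xy - mu)                               # within-class scatter
    S_B += len(Xy) * np.outer(mu - mean_all, mu - mean_all)      # between-class scatter
# Generalized symmetric eigenproblem S_B w = lambda S_W w; keep the top-k directions.
eigvals, eigvecs = eigh(S_B, S_W + 1e-6 * np.eye(d))
W1 = eigvecs[:, np.argsort(eigvals)[::-1][:k]]                   # projection to the latent space
```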
Algorithm 1 (training procedure). Input: the set of visual features of the seen classes, X_s; the set of one-hot labels of X_s, Y_s; the set of semantic attributes of both seen and unseen classes, A; the number of optimization iterations, iter; and the hyper-parameters. Output: the projection matrices W_1 and W_2; the visual prototypes of the seen and unseen classes, P_s and P_u; and the latent prototypes of the seen and unseen classes, C_s and C_u. Steps: 1: initialize H with Eq. 7, one column at a time; 2: initialize P_s and P_u with Eq. 8 and Eq. 10, respectively; 3: initialize W_1 with the eigenvectors of S_W^{-1} S_B from Eq. 11; 4: initialize C with Eq. 12; 5: for k = 1 to iter do 6: update H with Eq. 15, one column at a time; 7: update P_s with Eq. 17 by applying P_s = sylvester(A, B, C); 8: update P_u with Eq. 19 by applying P_u = sylvester(A, B, C); 9: update C with Eq. 21; 10: update W_2 with Eq. 23; 11: update W_1 with Eq. 26; 12: end for; 13: return W_1, W_2, P_s, P_u, and C.

Moreover, we use the same split setting proposed by Xian et al. for all comparisons with the state-of-the-art methods listed in Tab. 2.

B. EXPERIMENTAL SETTING We use the ResNet features released by Xian et al. as our training and testing samples, and all settings, including the attribute and class splits, are the same as theirs. In addition, the model has six hyper-parameters. One of them is used only to control the regularization terms, so we fix it to the small value 1 × 10^-4. As for the other five parameters, because different datasets usually perform well with different values, we choose them from the set {0.001, 0.01, 0.1, 1, 10, 100, 1000} by adopting a cross-validation strategy. Specifically, cross-validation for ZSL differs from conventional cross-validation in machine learning: instead of inner splits of the training samples within each class, ZSL requires inter-class splits that in turn treat part of the seen classes as unseen. In our experiments, 20% of the seen classes are selected as validation unseen classes, and the parameters with the best average performance over 5 runs are selected as the final parameters for each dataset. It should be noted that these parameters may not be the most suitable for the test set, because the labels of the test data are strictly inaccessible during training.

C. COMPARISON WITH BASELINES In this subsection, we conduct experiments to compare our method with baseline methods. In addition to the methods evaluated by Xian et al., we also compare our method with some newly proposed frameworks, such as GFZSL, LAGO, PSEUDO, KERNEL, TRIPLE, LESAE, LESD, and VZSL. Where feasible, we cite the results directly from the benchmark or from the original papers; otherwise, we re-implement the methods as described in their papers. We use the harmonic mean H to evaluate our model under the GZSL setting, defined as H = (2 × acc_tr × acc_ts) / (acc_tr + acc_ts), where acc_tr and acc_ts are the accuracies on test samples from the seen and unseen classes, respectively, and we adopt the average per-class top-1 accuracy as the final result. Since our method utilizes both seen and unseen semantic attributes and focuses on the more realistic GZSL setting, we do not report results for the conventional ZSL setting. The results of our method and the compared methods are recorded in Tab. 2, and the best result in each column is highlighted in bold.
From this table, we can clearly discover that our method can outperform the state-of-the-art methods on both ts and H. Concretely, our method can improve ts by 0.8% on SUN, 4.0% on CUB, 6.5% on AWA1, 5.7% on AWA2 and 5.2% on APY, and enhance H by 0.6% on SUN, 0.2% on CUB, 9.8% on AWA1, 7.7% on AWA2 and 8.0% on APY respectively compared with the best methods LESAE and TRIPLE. Besides, compared to those existing methods that have high tr but low ts and H, such as DAP and CONSE, our method can achieve more balanced performance on ts and tr and eventually obtain a significant improvement on H. We ascribe this improvement to the discriminative projection with LDA and the prototype synthesis with both seen and unseen classes, because the first one can make the projected features from same class cluster and from different classes disperse, and the second one combines both seen and unseen classes into a unified framework to alleviate the domain shift problem. D. ABLATION STUDY 1) EFFECT OF LATENT SPACE In our method, we utilize the latent space as the intermediate space for both visual and semantic features and we have claimed that this space can obtain more discriminative projection and alleviate the domain shift problem. Therefore, it is necessary to verify whether this space can achieve such statement. In this subsection, we remove the latent prototypes from Eq. 2 and Eq. 3, modify the discriminative projection item with LDA, and redefine the three loss functions as follows, We replace the three items L syn, L eqnarray and L LDA in Eq. 5 and re-optimize it, the performance with the new loss function is illustrated in Fig. 3, form which it can be clearly seen that the accuracies with the latent space are higher than those without the latent space on all five datasets. To be specific, we can obtain more improvement on SUN and CUB than on AWA and APY, especially on the metric ts. We attribute this phenomenon to that the learned vectors in latent space can preserve more discriminative characteristic, and the employment of unified synthesis framework on both seen and unseen classes can well alleviate the domain shift problem. 2) EFFECT OF LDA In our method, we utilize the LDA strategy to project visual features into latent space to make them more discriminative, so it is necessary to find how much this mechanism can improve the final performance. In this experiment, we remove the loss item L LDA from Eq. 5, and conduct the evaluation on the five popular datasets. The experimental results are illustrated in Fig. 4, from which it can be clearly observed that the method with LDA can significantly outperform that without LDA constraint. This phenomenon reveals that the LDA constraint plays a very important role in improving the performance due to its powerful ability of learning discriminative features in latent space. Moreover, to more intuitively display the improvement of our method, we also show the distributions of unseen samples on AWA1 with and without LDA in latent space with t-SNE. The results are illustrated in Fig. 5, from which it can be discovered that the distribution with LDA is more compact than that without LDA in each class, especially those classes at the bottom of the figure. This situation further prove that LDA is necessary for our method to learn discriminative features in latent space. 3) DIFFERENT DIMENSION OF LATENT SPACE Since we apply latent space in our method, It is necessary to discuss the effect of the dimension of the latent space on the final performance. 
In our experiment, we take AWA1 as an example and change the dimension of the latent space from 5 to 60 to show the performance change. The performance curves are recorded in Fig. 6, from which it can be clearly seen that the curves monotonically increase for both ts and H, and nearly stop increase when the dimension is larger than 50. This phenomenon reveals that we can obtain better performance when we have larger dimension in latent space, but this increasing will stop when it reaches the number of classes. E. ZERO SHOT IMAGE RETRIEVAL In this subsection, we conduct experiments to show zero shot retrieval performance of our proposed method. In this task, we apply the semantic attributes of each unseen category as the query vector, and compute the mean Average Precision (mAP) of the returned images. MAP is a popular metric for evaluating the retrieval performance, it comprehensively evaluates the accuracy and ranking of returned results, and defined as, where, r i is the number of returned correct images from the dataset corresponding to the ith query attribute, p i (j) represents the position of the jth retrieved correct image among all the returned images according to the ith query attribute. In this experiment, the number of returned images equals the number of the samples in unseen classes. For the convenience of comparison, we employ the standard split of the four datasets, including SUN, CUB, AWA1 and aPY, which can be found in, and the results are shown in Tab. 3. The values of the baseline methods listed in Tab. 3 are directly cited from. The results show that our method can outperform the baselines on all four datasets, especially on the coarse-grained dataset AWA1, which reveals that our method can make the prototypes in latent space more discriminative. V. CONCLUSION In this article, we have proposed a novel method to learn discriminative features with visual-semantic alignment for generalized zero shot learning. in this method, we defined a latent space, where the visual features and semantic attributes are aligned. We assumed that each prototype is the linear combination of others and the coefficients are the same in all three spaces, including visual, latent and semantic. To make the latent space more discriminative, a linear discriminative analysis strategy was employed to learn the projection matrix from visual space to latent space. Five popular datasets were exploited to evaluate the proposed method, and the results demonstrated the superiority compared with the stateof-the-art methods. Beside, extensive ablation studies also showed the effectiveness of each module of the proposed method. PENGZHEN DU received the Ph.D. degree from the Nanjing University of Science and Technology, in 2015. He is currently an Assistant Professor with the School of Computer Science and Engineering, Nanjing University of Science and Technology. His research interests include computer vision, evolutionary computation, robotics, and deep learning. HAOFENG ZHANG received the B.Eng. and Ph.D. degrees from the School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China, in 2003 and 2007, respectively. From December 2016 to December 2017, he was an Academic Visitor with the University of East Anglia, Norwich, U.K. He is currently a Professor with the School of Computer Science and Engineering, Nanjing University of Science and Technology. His research interests include computer vision and robotics. JIANFENG LU (Member, IEEE) received the B.S. 
degree in computer software and the M.S. and Ph.D. degrees in pattern recognition and intelligent system from the Nanjing University of Science and Technology, Nanjing, China, in 1991, 1994, and 2000, respectively. He is currently a Professor and the Vice Dean of the School of Computer Science and Engineering, Nanjing University of Science and Technology. His research interests include image processing, pattern recognition, and data mining.
Amineptine is a new compound derived from the tricyclic antidepressants. Its mechanism of action is principally dopaminergic and, to a lesser extent, serotoninergic. The following are reported here: the results of single-dose trials (cerebral electrophysiology) and trials of treatment of depression (monopolar and bipolar, involutional, neurotic and reactive, and in chronic schizophrenia). The spectrum of activating antidepressant therapeutic activity of this drug is discussed, as are other areas in which amineptine has seemed effective in children and adults. Finally, general, cardiac, and ocular acceptability and tolerability in the elderly are reported, together with an analysis of cases of overdose.
Effect of Extended Nursing on the Behavioral and Psychological Symptoms and Cognitive Dysfunction of Patients with Moderate and Severe Alzheimer's Disease The purpose of this investigation is to study the effect of extended nursing on the behavioral and psychological symptoms and cognitive dysfunction of patients with moderate and severe Alzheimer's disease. A total of 102 patients with moderate and severe Alzheimer's disease admitted to the People's Hospital of Nanjing Medical University from February 2016 to March 2018 were randomly divided into an observation group and a control group. The control group received routine health education and nursing guidance after discharge, while the observation group additionally received an extended nursing scheme for 1 year, including daily living ability training, body function training, language training, memory training, music therapy, psychological nursing support, and offline collective activities. The scores of the behavioral pathology assessment scale of Alzheimer's disease, the Montreal cognitive assessment, and the Barthel activities of daily living index were compared between the two groups. There was no significant difference in the scores of the behavioral pathology assessment scale of Alzheimer's disease, the Montreal cognitive assessment, or the Barthel activities of daily living between the two groups on admission. One year later, the behavioral pathology assessment scale scores of the observation group were significantly lower than those of the control group (p<0.05), and the Montreal cognitive assessment scores and Barthel activities of daily living scores of the observation group were significantly higher than those of the control group (p<0.05). The extended nursing scheme of this study significantly improved the behavioral and psychological symptoms of Alzheimer's disease patients, including anxiety, fear, hallucination, delusion, emotional disorder, aggressive behavior, behavioral disorder, and circadian rhythm disorder. It also significantly improved the cognitive ability of Alzheimer's disease patients and their ability to handle daily life activities.
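The between-group score comparisons reported above (tested at p < 0.05) are of the kind shown in this minimal Python sketch; the score arrays are fabricated placeholders purely to illustrate the test and are not study data, and the study's own statistical software and exact test are not specified here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Placeholder 1-year Montreal cognitive assessment scores for the two arms.
observation = rng.normal(18.0, 3.0, 51)   # extended-nursing group
control = rng.normal(15.5, 3.0, 51)       # routine-education group

t_stat, p_value = stats.ttest_ind(observation, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant at 0.05: {p_value < 0.05}")
```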
This application is related to Korean Application No. 99-59822, filed Dec. 21, 1999, the disclosure of which is hereby incorporated herein by reference. The present invention relates to electronic generation circuits and methods, and more particularly, to reference voltage source circuits and methods. It is generally desirable to maintain stable internal power supply voltage levels in integrated circuits (ICs) in order to prevent damage and maintain desired operational characteristics. Accordingly, integrated circuits often include power supply voltage regulation circuits that generate internal power supply voltages from externally supplied power supply voltages. These internal power supply voltage regulation circuits often use reference voltages produced by reference voltage generation circuits. Such reference voltage generation circuits may take many forms. One type of reference voltage generation circuit includes a MOS transistor that has its drain and gate terminals tied together, and which has an associated threshold voltage that is used to generate a reference voltage. However, the threshold voltage of such a MOS transistor typically varies responsive to temperature and process variations. Accordingly, the accuracy of such reference voltage generation circuits may be sensitive to variations in temperature and process. A conventional reference voltage generation circuit may use complementary circuits having respective positive and negative temperature coefficients to reduce sensitivity to temperature changes. Such circuits are described, for example, in "VARIABLE VCC DESIGN TECHNIQUES FOR BATTERY OPERATED DRAMS," Symposium on VLSI Circuit Digest of Technical Papers, pp. 110-111, 1992. Referring to FIG. 1, a conventional temperature compensating reference voltage generation circuit includes a power supply voltage terminal that receives a power supply voltage Vcc, a ground voltage terminal that is connected to a power supply ground Vss, and an output terminal at which a reference voltage VREF is produced. In further detail, the reference voltage generation circuit includes a current limiting resistor R1 coupled between the power supply voltage Vcc and a node N1. The reference voltage generation circuit further includes a voltage divider circuit including a second resistor R2 connected to the node N1, and first and second NMOS transistors M1, M2 that are serially connected between a node N2 and the power supply ground Vss, and have their gate terminals coupled to the node N1 and the power supply voltage Vcc, respectively. A PMOS transistor M3 is coupled between the output terminal and the power supply ground Vss, and has its gate terminal coupled to the node N2. If the "on" resistance of the first and second NMOS transistors M1, M2 is denoted Req, and the threshold voltage of the PMOS transistor M3 is denoted |Vtp|, the reference voltage VREF may be expressed as:

VREF = (1 + Req/R2) * |Vtp|    (1)

In equation (1), the temperature coefficient of the PMOS transistor threshold voltage |Vtp| typically is negative, while the temperature coefficient of the on resistance Req of the first and second NMOS transistors M1, M2 is typically positive. Accordingly, the two temperature coefficients can offset one another, so that the reference voltage VREF is generated substantially independent of temperature variation.
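To make the cancellation in equation (1) concrete, here is a minimal numerical sketch of how a positive temperature coefficient in Req can offset a negative temperature coefficient in |Vtp|. All component values and temperature coefficients below are illustrative assumptions chosen so that the two effects roughly cancel; they are not values from the patent or from FIG. 1.

```python
def vref(temp_c, r2=10e3, req_25=5e3, tc_req=+0.003, vtp_25=0.7, tc_vtp=-0.001):
    """Evaluate VREF = (1 + Req/R2) * |Vtp| of equation (1) with simple linear
    temperature models for Req and |Vtp|.

    The resistor value, on-resistance, threshold voltage, and temperature
    coefficients are illustrative assumptions, not values from the patent.
    """
    dt = temp_c - 25.0
    req = req_25 * (1.0 + tc_req * dt)   # on-resistance rises with temperature
    vtp = vtp_25 * (1.0 + tc_vtp * dt)   # |Vtp| falls with temperature
    return (1.0 + req / r2) * vtp

for t in (0, 25, 85):
    print(f"{t:3d} C  VREF = {vref(t):.4f} V")   # stays near 1.05 V across temperature
```

Mismatched coefficients leave a residual positive or negative temperature characteristic, which is the nonideality discussed next.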
A reference voltage VREF that is largely insensitive to temperature variation can thus be obtained by offsetting |Vtp|, which has a negative temperature coefficient, against Req, which has a positive temperature coefficient. However, the temperature characteristics of the PMOS transistor M3 and the NMOS transistors M1, M2 are opposite to each other and also typically non-linear. For example, above a critical voltage Vcr (e.g., 1.2 V), the reference voltage VREF produced by the conventional reference voltage generating circuit of FIG. 1 may increase with increasing temperature. However, below the critical voltage Vcr, the reference voltage VREF may decrease with increasing temperature, such that the reference voltage VREF(Hot) at a relatively high temperature is less than the reference voltage VREF(Cold) at a relatively low temperature, as shown in FIG. 2. Such a negative temperature characteristic may be undesirable when producing a reference voltage for a power supply circuit. As the reference voltage decreases with increasing temperature, the level of the power supply voltage that is generated based on the reference voltage may also decrease. This can cause the operating speeds of circuits receiving the power supply voltage to decrease. This phenomenon can represent an obstacle to effectively using low power supply voltages, such as 3.3 V. According to embodiments of the present invention, a reference voltage source circuit includes a reference voltage generation circuit that receives an external power supply voltage and produces a first reference voltage with a first temperature characteristic, and a level shifter circuit that produces a second reference voltage with the first temperature characteristic from the first reference voltage and outputs the second reference voltage as a reference voltage. In particular, the reference voltage generation circuit may use a circuit configuration that is configurable to produce reference voltages in first and second ranges with respective positive and negative temperature characteristics, with the reference voltage generation circuit being configured to produce the first reference voltage in the first range with a positive temperature characteristic. The level shifter circuit may be operative to provide a voltage drop between the first and second reference voltages such that the second reference voltage is in the second range. In embodiments of the present invention, the reference voltage generation circuit includes a first resistor having a first terminal connected to a power supply node and a second terminal connected to a first node at which the first reference voltage is produced, and a second resistor having a first terminal coupled to the first node. A first NMOS transistor has a source terminal connected to a drain terminal of the first NMOS transistor at a third node, a gate terminal connected to the power supply node, and a drain terminal connected to a power supply ground. A PMOS transistor has a source terminal connected to the first node, a gate terminal connected to the second node, and a drain terminal connected to the power supply ground. In other embodiments of the present invention, the level shifter circuit includes an NMOS transistor having a source terminal and a gate terminal coupled to the first node and a drain terminal connected to a fourth node at which the second reference voltage is produced.
In still other embodiments of the present invention, the reference voltage generation circuit includes a current limit circuit coupled to a power supply node, a voltage divider circuit coupled to the current limit circuit at a first node at which the first reference voltage is produced and to a power supply ground, and a voltage source circuit coupled to the voltage divider circuit at the first node and at a second node. The level shifter circuit includes a voltage drop circuit coupled between the first node and a third node at which the second reference voltage is produced, and a current pass circuit coupled between the third node and the power supply ground. In method embodiments of the present invention, a first reference voltage with a first temperature characteristic is generated. The first reference voltage is level shifted to produce a second reference voltage with the first temperature characteristic. For example, a reference voltage generation circuit may be configured to produce the first reference voltage in a first range with a positive temperature characteristic, wherein the reference voltage generation circuit uses a circuit configuration that is configurable to produce reference voltages in the first range and a second range with respective positive and negative temperature characteristics. A voltage drop may be provided between the first and second reference voltages such that the second reference voltage is in the second range.
UPDATE (September 18, 2014): Solitary Watch received the following statement via email from North Carolina Department of Public Safety spokesperson Keith Acree: The evolving lockdown situation at Scotland Correctional Institution has affected about 600 inmates in close custody regular population housing. The medium custody (~540) and minimum custody (~240) populations have not been affected nor have those on control status (~230). The entire prison population today is 1,663. We implement lockdowns when needed to ensure the safety of inmates and staff and to prevent injuries. The December lockdown was prompted by a series of fights between large groups of inmates at Scotland that resulted in injuries to inmates and staff. Since the beginning of 2014, the institution has recorded 61 actual or attempted assaults on staff and 20 actual or attempted inmate on inmate assaults. At this point, the lockdown for close custody regular population (RPOP) has stepped down to a point that we call “managed observation”. Close custody RPOP inmates are now allowed about 4 hours of out-of-cell time daily (compared to about 8 hours before the Dec. 28 fights that began the lockdown). Visiting, outdoor recreation, telephone use and canteen privileges have resumed. Vocational and educational programs are in session and the prison’s two Correction Enterprises plants (a sewing plant and the Braille plant) are operating normally. Inmates continue to receive hot meals brought to their cells. All activities are occurring in small groups. Religious services have not yet resumed. A new chaplain began work this week. Since the lockdown began Dec. 28, restrictions have been lifted in 11 progressive steps, based on inmate behavior and cooperation, to reach the point where we are today. Katy Poole has been serving as acting administrator at Scotland CI since Aug. 1 when Sorrell Saunders retired. ≡≡≡≡≡≡ Across the United States, even prisoners who have not been placed in solitary confinement or any form of “segregation” can be subjected to a “lockdown” in which they may be held in solitary-like conditions, confined to their cells nearly round-the-clock. Brief lockdowns are a common occurrence, and lockdowns lasting months or more are not unusual. Individuals subjected to lockdown are generally denied even the pro-forma review processes afforded to most others placed in solitary confinement. In the “Close Custody” unit–a single celled, high-security unit–at North Carolina’s Scotland Correctional Institution, nearly 600 men have been on indefinite lockdown since December 28, 2013. Individuals subjected to the lockdown have been confined to their cells for 22 to 23 hours a day for eight months and counting. When asked by Solitary Watch about the status of Scotland, North Carolina Department of Public Safety (NC DPS) spokesperson Keith Acree stated that he was unaware that the prison was on lockdown. In January of this year, the Laurinburg Exchange reported on the lockdown: According to Keith Acree, spokesperson for the state department, the institution’s “closed custody” population, which numbers about 800, has been confined to cells since a “series of fights between inmates and minor assaults on staff members” occurred shortly after Christmas. Acree said injuries to staff members were “nothing serious,” but that several were “hit or bumped. . .” A lockdown means prisoners cannot have visitors, make calls, or leave their cells for meals. 
They cannot visit the canteen, Acree said, but orders from the canteen can be delivered to their cells. Acree said he could not remember the last time the institution was on lockdown, but he was not aware of the current lockdown until he received an inquiry from The Laurinburg Exchange. In 2011, the prison was one of six in the state placed on lockdown after a surge of gang violence. About a dozen people from Scotland’s Close Custody population have written to Solitary Watch describing conditions at the prison. Some people wrote to describe the conditions at the prison in general, while others detailed particular incidents. One man recalls the day the Scotland Correctional Institute was put on lockdown: On December 28, 2013, two individual fights took place at about 5:35 PM. No one was stabbed or cut, and no staff was hurt. Prison officials labeled the incident a gang fight and shut down the whole facility. For almost a month we were not allowed out of our cells or allowed to take showers. When they did allow us to take showers, we had to do so in cuffs once a week. Another man wrote to describe the general conditions at the prison since the lockdown was put into effect: We don’t get but two hours out of our cells a day. In that two hours, 24 people have to use the phone, take showers and get anything done that requires any assistance by the staff because once you’re in your cell, it’s like your forgotten. Then you spend 22 hours in this room. . . The things that go on here are uncalled for. This is supposed to be a place of rehabilitation but it does no one any good the way the staff at SCI mistreats people and writes you up for actions you didn’t commit. It just sends everyone’s minds or actions and feelings back to square one. The following comes from a man who describes the restrictions at the prison as counterproductive to the point where he’s “about to lose [his] mind”: This prison has been on 22-hour-a-day lockdown for months. . . When I got here, I wanted a chance to earn my GED, but his prison is not helping me to better myself in any way. I have not been able to eat hot meals or go outside for fresh air ever in months. The treatment here is cruel and unusual and I’m about to lose my mind behind these doors. Another member of the Close Custody population elaborates on the varying levels of restrictions seen since the lockdown began: While on lockdown, we’ve been through different stages. Stage one, we were on lockdown for 24 hours a day without being allowed to shower. It was like this for a month. Then the officers started taking us to the shower one day out of the week with handcuffs on so tight that it made it difficult for us to wash. Stage two, they let 12 of us out of our cells to rec in the dayroom for one hour. Next, they let 24 of us out for two hours. We haven’t had any outside rec since December 28, 2013, and our skin and health is showing that. Another man writes to convey the intrusiveness of his confinement, portraying how little privacy he was given, even while showering: The first month there was no recreation. Everyone was confined to their cells for 24 hours a day. There were no showers. When they started allowing showers, you had to go in full restraints with two officers standing at the shower watching with sticks out. . . No visitors have been allowed for the past three months, nor are we provided with any religious services. . . Since we have been on lockdown, we have been having trouble with the officers doing their jobs.
If we ask them for writing paper, envelopes or request forms, they will not bring it, especially if we’re housed on the upper tier. One inmate asked an officer for some toilet paper. She said, “I’m not walking upstairs to give you any toilet paper. You better use a shirt,” and left the block. He also describes an instance when another prisoner had fallen in his cell and his pleas for help were ignored by staff: The major problem we have is that officers do not respond when we hit out call buttons. . . Nobody ever comes to see what we want until we start kicking on the cell doors. There were several incidents where an inmate was feeling dizzy and pushing his cell button every couple of minutes. Nobody came. He passed out and we had to kick on the doors for about 20 minutes before anyone came. Another member from the Close Custody population describes a similar incident: An inmate’s back gave out on him and he fell to the floor. He started banging on the door with his brush for 15 minutes but no one came to check on him. So we started kicking on the doors, and kicked for almost an hour before an officer came. She looked in his cell and started laughing at him. She left and came back with another officer. She looked in at him again and laughed. They both left and came back with a sergeant, who looked at the inmate and said, “Why don’t you just do us all a favor and die.” Then the sergeant called nurse and they came and took him to Medical – after he’d laid on the floor for about an hour and 10 minutes. . . We’ve been asking why they are punishing 600 inmates for something four people were involved in. Those inmates were put in segregation, found guilty of their charges and punished for them. But so are we. One man discusses a health condition he’s been diagnosed with, yet is not being treated for: Recently I was diagnosed with high blood pressure. For the last two and half weeks, my blood pressure hasn’t been checked and I haven’t received my medication to treat it. Another man writes of problems he has had sending and receiving mail: Due to the lockdown, I wrote a grievance to the North Carolina Department of Public Safety and mailed it to them on 2/20/14, so I thought. . . But it was opened, taken out of the stamped envelope and signed by the unit manager and the screening officer and returned to me almost a month later. Before my grievance was returned, my mail started coming in late after mail had already been passed out and sometimes not until the next morning. Then I would either not receive mail or it would come looking like it’d been deliberately cut up. . . He closes his letter: I have not seen my family since my trial ended and I would love to see them. . . No religious services, no visits, no type of outside activity, no books from the library, no school – all this, for no reason. . . I, along with other inmates, are being punished for no reason at all. I have not caused any trouble and do not deserve this cruel and unusual punishment, which is a violation of my Eighth Amendment rights. Another prisoner describes the disturbing situation, emphasizing that the deprivations faced by these men are unwarranted: To think violence or disrespect is right? That is exactly what they’re doing to us: Disrespecting us and violating us as human beings. We don’t get to stretch our muscles, we don’t get any sunlight. . . We are treated like M-con status inmates, and we haven’t done anything to deserve it. . . They tell our families that they are understaffed, but that isn’t our problem.
We are imprisoned inside a prison. . . We have been on lockdown since December 28, 2013, and enough is enough. Since these letters were written to Solitary Watch, Scotland Correctional Institution has modified the conditions of confinement in Close Custody. According to people held at the prison, the men are now allowed out of their cells twice a day for approximately two hours and allowed outside for recreation for one hour twice a week. However, they have not resumed hot meals or any religious services for people held in the unit.
CareHeroes Web and Android™ Apps for Dementia Caregivers: A Feasibility Study. The purpose of the current feasibility study was to examine the use, utility, and areas for refinement of a newly developed web-based and Android™ application (app) (i.e., CareHeroes) with multiple features to support individuals caring for loved ones with Alzheimer's disease or other forms of dementia (AD). The study was performed over an 11-week period with triads of AD caregivers, assigned home care case managers, and primary care providers (PCP). The study involved quantitative and qualitative methodologies. Eleven AD caregivers (seven daughters, two sons, and two spouses), six case managers, and five PCPs participated. Data demonstrate participants were mostly satisfied with the multiple features and ability to access and use CareHeroes. Barriers for use include concerns about time constraints and not being familiar with technology. Although the study findings are promising, a longer term study to evaluate the impact of the CareHeroes app is indicated.
Kumasi, Dec 22, GNA- The Komfo Anokye Teaching Hospital (KATH) in Kumasi is to spend 300 million dollars on a "Networking Information System" to ensure the successful take-off of the National Health Insurance Scheme (NHIS) in the medical facility next year. Dr Anthony Nsiah-Asare, Chief Executive Officer of the KATH, announced this at the opening of a two-day seminar on the NHIS for 100 personnel of the Hospital in Kumasi on Tuesday. He said the NHIS was aimed at providing quality health care for the people. Dr Nsiah-Asare said every contributor to the scheme would pay a minimum of 72,000 cedis as premium. He warned that authorities at the Hospital would sanction any member of staff who collects the premium illegally. Dr Joe Bonney, a Consultant of the Ghana Health Service, who spoke on "The NHIS in Ghana", said every district was supposed to establish district mutual health insurance schemes to cater for the health needs of their localities.
In Vitro Cross-Resistance Profile of Nucleoside Reverse Transcriptase Inhibitor (NRTI) BMS-986001 against Known NRTI Resistance Mutations ABSTRACT BMS-986001 is a novel HIV nucleoside reverse transcriptase inhibitor (NRTI). To date, little is known about its resistance profile. In order to examine the cross-resistance profile of BMS-986001 to NRTI mutations, a replicating virus system was used to examine specific amino acid mutations known to confer resistance to various NRTIs. In addition, reverse transcriptases from 19 clinical isolates with various NRTI mutations were examined in the Monogram PhenoSense HIV assay. In the site-directed mutagenesis studies, a virus containing a K65R substitution exhibited a 0.4-fold change in 50% effective concentration (EC50) versus the wild type, while the majority of viruses with the Q151M constellation (without M184V) exhibited changes in EC50 versus wild type of 0.23- to 0.48-fold. Susceptibility to BMS-986001 was also maintained in an L74V-containing virus (0.7-fold change), while an M184V-only-containing virus induced a 2- to 3-fold decrease in susceptibility. Increasing numbers of thymidine analog mutation pattern 1 (TAM-1) pathway mutations correlated with decreases in susceptibility to BMS-986001, while viruses with TAM-2 pathway mutations exhibited a 5- to 8-fold decrease in susceptibility, regardless of the number of TAMs. A 22-fold decrease in susceptibility to BMS-986001 was observed in a site-directed mutant containing the T69 insertion complex. Common non-NRTI (NNRTI) mutations had little impact on susceptibility to BMS-986001. The results from the site-directed mutants correlated well with the more complicated genotypes found in NRTI-resistant clinical isolates. Data from clinical studies are needed to determine the clinically relevant resistance cutoff values for BMS-986001.
LYN

Function

Lyn has been described as having an inhibitory role in myeloid lineage proliferation. Following engagement of the B cell receptors, Lyn undergoes rapid phosphorylation and activation. LYN activation triggers a cascade of signaling events mediated by Lyn phosphorylation of tyrosine residues within the immunoreceptor tyrosine-based activation motifs (ITAM) of the receptor proteins, and subsequent recruitment and activation of other kinases including Syk, phospholipase Cγ2 (PLCγ2) and phosphatidylinositol 3-kinase. These kinases provide activation signals, which play critical roles in proliferation, Ca²⁺ mobilization and cell differentiation. Lyn plays an essential role in the transmission of inhibitory signals through phosphorylation of tyrosine residues within the immunoreceptor tyrosine-based inhibitory motifs (ITIM) in regulatory proteins such as CD22, PIR-B and FcγRIIb1. Their ITIM phosphorylation subsequently leads to recruitment and activation of phosphatases such as SHIP-1 and SHP-1, which further downmodulate signaling pathways, attenuate cell activation and can mediate tolerance. In B cells, Lyn sets the threshold of cell signaling and maintains the balance between activation and inhibition. Lyn thus functions as a rheostat that modulates signaling rather than as a binary on-off switch. LYN is reported to be a key signal mediator for estrogen-dependent suppression of human osteoclast differentiation, survival, and function. Lyn has also been implicated in the insulin signaling pathway. Activated Lyn phosphorylates insulin receptor substrate 1 (IRS1). This phosphorylation of IRS1 leads to an increase in translocation of Glut-4 to the cell membrane and increased glucose utilization. In turn, activation of the insulin receptor has been shown to increase autophosphorylation of Lyn, suggesting a possible feedback loop. The insulin secretagogue glimepiride (Amaryl®) activates Lyn in adipocytes via the disruption of lipid rafts. This indirect Lyn activation may modulate the extrapancreatic glycemic control activity of glimepiride. Tolimidone (MLR-1023) is a small-molecule Lyn activator that is currently under Phase 2a investigation for Type II diabetes. In June 2016, the sponsor of these studies, Melior Discovery, announced positive results from their Phase 2a study with tolimidone in diabetic patients, and the continuation of additional clinical studies. Lyn has been shown to protect against hepatocellular apoptosis and promote liver regeneration through the preservation of hepatocellular mitochondrial integrity.

Pathology

Much of the current knowledge about Lyn has emerged from studies of genetically manipulated mice. Lyn-deficient mice display a phenotype that includes splenomegaly, a dramatic increase in numbers of myeloid progenitors and monocyte/macrophage tumors. Biochemical analysis of cells from these mutants revealed that Lyn is essential in establishing ITIM-dependent inhibitory signaling and for activation of specific protein tyrosine phosphatases within myeloid cells. Mice that expressed a hyperactive Lyn allele were tumor-free and displayed no propensity toward hematological malignancy. These mice have reduced numbers of conventional B lymphocytes, down-regulated surface immunoglobulin M and costimulatory molecules, and elevated numbers of B1a B cells. With age, these animals developed a glomerulonephritis phenotype associated with a 30% reduction in life expectancy.
OPEC has sent invitations to 12 producing countries that are not members of the organization, to discuss market rebalancing measures a day before OPEC’s official meeting on November 30. The countries, according to Eulogio del Pino, are Russia, Kazakhstan, Azerbaijan, Oman, Egypt, Bahrain, Mexico, Colombia, Trinidad and Tobago, Bolivia, Norway, and Canada. China is the notable exception to this list, as both a major producer and major consumer of crude oil. China was also referred to as a wild card by one analyst, capable of undermining OPEC’s freeze efforts. The Asian nation has been buying a lot of crude over the last year and a half, after prices slumped enough, and currently has substantial reserves whose actual size nobody really knows for sure. If OPEC reaches a deal and prices rise, says Jodie Gunzberg, and if China decides to use these reserves instead of buying more expensive crude on international markets, the effect of the production freeze may well be wiped out in short order. The presence of Canada on the list is also interesting: the country sells almost all of its export crude to the U.S., and the commodity is vital for some regional economies as it is the main source of government revenues. It’s doubtful whether Canada would have any interest in capping production in tune with OPEC. So far, according to Del Pino, only Russia and Kazakhstan have accepted the invitation to the November 29 meeting. Russia has been consistent in its stated support for a production freeze, but nothing specific is coming from Moscow. On the contrary, Energy Minister Alexander Novak, who has become one of the most quoted people in the energy media, is being as vague as he can, repeating that the oil market rebalancing needs a concerted effort on the part of all stakeholders. At the same time, he hasn’t even made clear whether Russia would freeze its production or be willing to cut it from the historic highs of 11.11 million bpd reached last month. The United States is also absent from the list, at least as of today, although an invitation, whether to the November 29 or the November 30 meeting, appears to be on hold pending the outcome of the US presidential election. By Irina Slav for Oilprice.com
Activation and ecological risk assessment of heavy metals in dumping sites of Dabaoshan mine, Guangdong province, China

ABSTRACT: In Dabaoshan mine, dumping sites were the largest pollution source for the local environment. This study analyzed the activation and ecological risk of heavy metals in waste materials from five dumping sites. Results indicated that the acidification of waste materials was severe at all dumping sites, and pH decreased below 3.0 at four of the five sites. There was a drastic variation in Cu, Zn, Pb, and Cd concentrations across sites. Site A, with 12915.3 mg kg−1 Pb and 7.2 mg kg−1 Cd, and site C, with 1936.2 mg kg−1 Cu and 5069.0 mg kg−1 Zn, were severely polluted. Higher concentrations of water-soluble Cu were probably the critical constraint for local pioneer plants. A significant positive correlation was found between the concentrations of water-soluble and HOAc-extractable elements, and the regression analysis showed that, compared with Cu, Zn and Cd, Pb was more difficult to transform from the HOAc-extractable to the water-soluble fraction. The concentration of water-soluble metals should be an important index, alongside the concentration of HOAc-extractable metals, in assessing the ecological risks, availability, and toxicity of heavy metals. The modified ecological risk index indicated that all dumping sites had very high potential ecological risks. It is necessary to decrease the availability of heavy metals to reduce the impact of waste materials on the environment.
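The abstract refers to a modified ecological risk index, but the exact modification is not given in this excerpt. The sketch below therefore implements the standard Hakanson potential ecological risk index as a baseline assumption, with placeholder background values; the concentrations simply reuse figures quoted above for illustration.

```python
def potential_ecological_risk(measured, background, toxic_response):
    """Hakanson potential ecological risk: E_r^i = T_r^i * C_i / C_n^i, RI = sum(E_r^i).

    The paper's 'modified' index is not specified here, so this standard form
    is an assumption; the background values used below are placeholders, not study data.
    """
    e_r = {m: toxic_response[m] * measured[m] / background[m] for m in measured}
    return e_r, sum(e_r.values())

measured   = {"Cu": 1936.2, "Zn": 5069.0, "Pb": 12915.3, "Cd": 7.2}   # mg/kg, figures from sites A and C above
background = {"Cu": 17.0,   "Zn": 47.3,   "Pb": 36.0,    "Cd": 0.06}  # placeholder reference values
toxic      = {"Cu": 5,      "Zn": 1,      "Pb": 5,       "Cd": 30}    # Hakanson toxic response factors

e_r, ri = potential_ecological_risk(measured, background, toxic)
print(e_r, ri)  # an RI well above 600 corresponds to a very high potential ecological risk
```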
Fabrication and Characterization of Nanocomposite Flexible Membranes of PVA and Fe3O4

Composite polymer membranes of poly(vinyl alcohol) (PVA) and iron oxide (Fe3O4) nanoparticles were produced in this work. X-ray diffraction measurements demonstrated the formation of Fe3O4 nanoparticles of cubic structure. The nanoparticles were synthesized by a coprecipitation technique and added to PVA solutions at different concentrations. The solutions were then used to generate flexible membranes by a solution casting method. The size and shape of the nanoparticles were investigated using scanning electron microscopy (SEM). The average size of the nanoparticles was 20 ± 9 nm. Raman spectroscopy and Fourier-transform infrared spectroscopy (FTIR) were utilized to investigate the structure of the membranes, as well as their vibration modes. Thermal gravimetric analysis (TGA) and differential scanning calorimetry (DSC) demonstrated the thermal stability of the membranes and their degree of crystallinity. Electrical characteristics of the thin membranes were examined using impedance spectroscopy as a function of the nanoparticles' concentrations and temperature. The resistivity of the fabricated flexible membranes could be adjusted by controlled doping with suitable concentrations of nanoparticles. The activation energy decreased with the nanoparticles' concentrations due to the increase in charge carrier concentration. Therefore, the fabricated membranes may be applied in practical applications that involve the recycling of nanoparticles over multiple application cycles.

Introduction

Poly(vinyl alcohol) (PVA) is a water-soluble synthetic polymer produced from poly(vinyl acetate) through a hydrolysis process. PVA exhibits a long flexible chain and a high concentration of polar groups relative to its molar mass. The PVA polymer is biocompatible and has superior adhesion characteristics; thus, its hydrogels are used for many biomedical applications. It also has many applications in the production of biodegradable blends, since it is water-soluble, readily forms thin films and membranes, and can be blended with other natural polymers. However, the electrical conductivity of PVA is low, since it is a proton-conducting material, which limits its device applications in the pure form. Therefore, precise control of its electrical conductivity is essential to facilitate its utilization for practical device applications, including bioimplantable devices, resistive switching devices, and magnetically controlled drug delivery devices. Plasticizers, such as glycerol (GL), are a novel class of materials with many outstanding properties, including thermal steadiness, nonflammability, nondispersal, and extraordinary ionic conductivity. Many plasticizers also have high electrical conductivity; thus, they can be used as additives to PVA to enhance and control its electrical conductivity. Moreover, the doping of PVA membranes with GL enables control of their flexibility and increases their durability. Nanoparticles are grains of materials with dimensions on the nanometer scale, and they hold novel physical and chemical features that are dissimilar to those of their bulk form. Therefore, controlled blending of PVA with nanoparticles permits taking advantage of the novel characteristics of nanoparticles to produce custom-designed nanocomposite membranes for device applications.
The characteristics of PVA membranes may be designed to target specific applications through a thorough choice of the additive nanoparticles' sizes, types, concentrations, and shapes. Such flexible membranes take advantage of the superior characteristics of nanoparticles while maintaining their flexibility and other attractive features. Furthermore, using magnetic nanoparticles such as iron oxide (Fe3O4) allows the recycling of nanoparticles by dissolving the PVA in water and retrieving the nanoparticles with a magnet. This supports the green utilization of nanoparticles, which can be employed for numerous application cycles. The retrieved nanoparticles can be used for the generation of new PVA-based membranes and applied to further flexible device applications. Recently, researchers synthesized Fe3O4 nanoparticles by a surfactant-free sonochemical reaction and added them to polyvinyl alcohol (PVA) to form flexible membranes. They found that the nanoparticles enhanced both the thermal stability and the flame-retardant characteristics of the PVA matrix. Furthermore, a different investigation was performed on ribbons of Fe3O4 nanoparticles and PVA polymers with various concentrations. Atomic force microscopy analysis revealed that the encapsulation of PVA with Fe3O4 decreased the agglomeration, made the morphology of the nanoparticles more spherical, resulted in better dispersion of the nanoparticles, and decreased surface roughness. To show that Fe3O4 nanoparticles can potentially serve as a carrier of a protein that keeps the antigenicity of a conjugate, a method was developed to synthesize 3-aminopropyltrimethoxysilane-PVA-magnetite nanoparticles modified with anti-protein kinase C (anti-PKC). The conjugate includes Fe3O4 nanoparticles that are bound covalently to the anti-PKC antibody. The conjugation can help with the localization of cellular PKC, as well as the inhibition of its function. The action of anti-PKC conjugated via Fe3O4 was verified by detecting PKC by Western blot. The utilization of flexible membranes for device applications requires detailed identification of their electrical characteristics, including phase and structure transitions, electrical conductivity, and polarization mechanisms, as well as charge transport. Here, impedance spectroscopy represents an exceptional tool that provides insight into polymer membrane characteristics. It enables the identification of both the conductivity and the capacitance of the produced membranes, the effect of grain boundaries, the impact of blocking electrodes, etc. The above investigations did not examine the effect of adding Fe3O4 nanoparticles to PVA on its electrical resistivity, nor did they test the influence of the Fe3O4 nanoparticles' concentration on adjusting the electrical resistivity of a PVA membrane. Therefore, this investigation presents the fabrication and thorough characterization of flexible composite membranes that include PVA, glycerol, and Fe3O4 nanoparticles. The influence of the nanoparticles' concentrations on the structure and electrical characteristics of the fabricated membranes is investigated. This work represents a continuation of our previous investigations of producing PVA nanoparticle composites for device applications [11,12,15,17].
The previous investigations revealed that the modification of PVA with nanoparticles and plasticizers permits fine adjustments of the electrical and mechanical properties of its membranes.

Results and Discussion

The morphology of the synthesized Fe3O4 nanoparticles was examined using SEM images, presented in Figure 1. The figure revealed an agglomerate of nanoparticles. The nanoparticle size was calculated from the SEM images, with an average size of 20 ± 9 nm. The agglomeration of the nanoparticles was due to their magnetic nature (magnetic interaction). It should be noted that the SEM image was fuzzy due to imaging distortion, since the nanoparticles were magnetic. The composition of the generated nanoparticles was confirmed by the EDS measurements, as depicted in the inset of Figure 1.

The degree of crystallinity estimated from the XRD pattern, taken as the percentage of the total peak area attributable to the crystalline peaks, yielded 1.5%. However, this value can be misleading, since many works have reported that the low intensity of the XRD spectrum of Fe3O4 nanoparticles is due to their magnetic nature. The figure also reveals the reference pattern with Miller indices that are specified following the above structure. The XRD results agree well with the EDS measurements above.

The inset in Figure 3 reveals a picture of a PVA-GL-Fe3O4 (10%) membrane. The picture demonstrates its flexibility. The dark color is due to the nanoparticles. DSC measurements of the produced PVA-GL-Fe3O4 membranes are presented in Figure 3a, with temperatures up to 250 °C. During the heating of the membranes, two processes were observed. The glass transition temperature (Tg) shifted from 100 °C for pure PVA-glycerol to 70 °C with 20% of Fe3O4, indicating the relaxation of the crystalline domains of PVA due to the nanoparticles' inclusion inside the crystalline lattices.
As the concentration of nanoparticles increased, the blocking of crosslinking during the drying step of the membranes increased, which led to looser molecular packing. This effect, along with some large, poorly distributed nanoparticles within the membranes, caused the shift of Tg to a lower temperature. The melting temperature (Tm) shifted slightly from 220 to 225 °C due to the melting of the crystalline domains. However, the small shift means that the PVA polymer kept its crystallinity with increasing nanoparticles' concentrations. The shift was an indication of the crosslinking between the nanoparticles and PVA. This is also confirmed by both the FTIR and Raman analyses below.

TGA analysis of the PVA-GL-Fe3O4 membranes was utilized to identify the stability and the loading amount of Fe3O4, and its results are depicted in Figure 3b, with temperatures up to 550 °C. The figure revealed the moisture removal, degradation, and decomposition processes. The weight lost up to about 259 °C was due to the elimination of the moisture content. The degradation stages from 259 °C to about 390 °C were assigned to PVA chain degradation, while those from 340 °C to 410 °C were assigned to carbonation. The final decomposition stage of the PVA chains started at around 430 °C. It should be noted that an error margin in wt% was due to losing nanoparticles during solution preparation and casting. The results revealed a small shift with increasing nanoparticle wt%, which indicated a decent stability of the prepared membranes. In addition, the above findings were in agreement with other investigations, which revealed that the addition of Fe3O4 nanoparticles to PVA membranes enhances their thermal stability.

The FTIR technique was used to examine the functional groups of PVA and the effects of the Fe3O4 nanoparticles' concentrations on them, as shown in Figure 4a. The broad band with a peak at 3268 cm−1 was due to the OH group of PVA. The 2907, 1100, and 1148-1709 cm−1 bands referred to C-H, C-O, and C-O-C stretching, respectively. Increasing the Fe3O4 nanoparticles' percentage caused the intensity of the O-H and C-H peaks to decrease because of inter- or intra-molecular hydrogen bonding, besides the complex formation of the Fe3O4 nanoparticles with the OH groups of PVA. However, it was observed that PVA kept its mechanical strength and elasticity.
This conclusion was consistent with other investigations, which revealed that the addition of Fe3O4 nanoparticles did not influence the mechanical integrity of the PVA membranes.

Raman spectroscopy was utilized to estimate the crystallinity of the PVA-GL-Fe3O4 membranes. As shown in Figure 4b, the central peak was at 2914 cm−1, and it referred to the stretching vibration of CH2. The peaks at 1150 cm−1 and 1450 cm−1 were assigned to C-H and OH stretching vibrations, respectively. The intensity of the peaks decreased with the nanoparticles' concentrations and shifted to lower wavenumbers, which indicated a decrease in the degree of crystallinity of PVA due to its interactions with the nanoparticles.

Electrical impedance characterization as a function of the nanoparticles' contents and temperature was conducted for the PVA-GL-Fe3O4 membranes. Figure 5 illustrates the Z' versus Z'' results for all prepared membranes at various nanoparticles' concentrations and temperatures. The measurements presented in the figure can be extrapolated into semicircles, with the radius of each semicircle a measure of the membrane's dc resistance. All figures revealed that increasing the temperature decreased the semicircle radius, referring to the decrease of the dc resistance and the increase in the dc conductivity. This was mainly assigned to the increase of the carrier concentration, as well as the growing rate of electron transfer from the valence to the conduction band.
At elevated temperatures, sufficient energy was attained by the ions, which empowered them to move and diffuse toward the metal electrodes due to the potential difference. The figure illustrates single semicircles for all measurements, which revealed that a membrane may be represented by a single parallel RC circuit. The semicircle was assigned to the charge transfer via the kinetic process at high frequencies (at high frequencies, ion transport is insignificant due to its relatively long relaxation time). Fitting the semicircles can be used to estimate the dc resistance and capacitance of each membrane. Here, the resistance and capacitance represented the effects of both the grain boundaries and the depletion regions within each membrane.
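As an illustration of the semicircle fitting mentioned above, the following is a minimal sketch of extracting R and C by least-squares fitting a single parallel RC model, Z(ω) = R / (1 + iωRC), to Nyquist data. The authors report using the ZView software for this step; the SciPy-based routine and the synthetic data below are stand-in assumptions for demonstration only.

```python
import numpy as np
from scipy.optimize import least_squares

def parallel_rc_impedance(freq_hz, R, C):
    """Impedance of a single parallel RC element: Z = R / (1 + j*omega*R*C)."""
    omega = 2.0 * np.pi * freq_hz
    return R / (1.0 + 1j * omega * R * C)

def fit_rc(freq_hz, z_measured):
    """Fit R and C by minimizing the real and imaginary residuals jointly."""
    def residuals(params):
        R, C = params
        z_model = parallel_rc_impedance(freq_hz, R, C)
        return np.concatenate([(z_model - z_measured).real,
                               (z_model - z_measured).imag])
    guess = [np.max(z_measured.real), 1e-9]      # crude starting values
    result = least_squares(residuals, guess, bounds=([0, 0], [np.inf, np.inf]))
    return result.x                               # fitted (R, C)

# Synthetic example standing in for one membrane at one temperature.
freq = np.logspace(0, 6, 60)                      # 1 Hz to 1 MHz, as in the measurements
z_true = parallel_rc_impedance(freq, R=2.2e6, C=4.7e-10)
R_fit, C_fit = fit_rc(freq, z_true + np.random.normal(0, 1e3, freq.size))
print(f"R = {R_fit:.3e} ohm, C = {C_fit:.3e} F")
```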
The dc resistances (R) extracted from the impedance results in Figure 5 can be used to evaluate the electrical resistivity using the equation ρ = RA/l, where l represents the membrane thickness and A is the cross-sectional area of the electrode. Figure 6a shows the dependence of the natural logarithm of the resistivity on the inverse temperature for the PVA-GL-Fe3O4 membranes with different nanoparticles' concentrations. The figure reveals a decrease in resistivity (i.e., an increase in conductivity) with increasing temperature. At 25 °C, increasing the nanoparticles' concentration decreased the resistivity. In contrast, the resistivity increased with increasing nanoparticles' concentration at 75 and 100 °C. This observation was in agreement with the results reported by other researchers.

A better understanding of this dependence of the resistivity on the nanoparticles' content can be obtained from the activation energy (Ea). Therefore, the results were fitted to linear equations, as presented by the solid lines in the figure. The fitting lines revealed that, other than for the 0% membrane, increasing the nanoparticles' concentration reduced the slope of the curve. The slope of a fitting line can be used to extract the activation energy by utilizing the Arrhenius equation ρ = ρ0 exp(Ea / (kB T)), with ρ0 a temperature-independent constant and kB the Boltzmann constant; the temperature T is measured in Kelvin. The dependence of the activation energy on the nanoparticles' concentration for the PVA-GL-Fe3O4 membranes is presented in Figure 6b. Other than for the 0% PVA-GL-Fe3O4 membrane, the figure revealed a decrease of the activation energy with the nanoparticles' concentration. This decrease in the activation energy reflects the rise in the concentration of charge carriers as a result of the increasing nanoparticles' concentration. Increasing the nanoparticles' concentration facilitated charge transport within the polymer-nanoparticle matrix; herein, the nanoparticles provided a network of paths for the charges to flow through. The resistivity of a pure PVA + GL membrane was balanced by adding Fe3O4 nanoparticles, which are semiconducting in nature. Since the values of resistivity for the PVA + GL and the nanoparticles were comparable, the addition of low concentrations of nanoparticles did not contribute to enhancing the electrical conductivity, but instead disturbed the charge conduction paths inside the membranes. The semiconducting influence of the nanoparticles became more dominant at high concentrations.
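To make the activation energy extraction concrete, here is a minimal sketch that converts fitted dc resistances to resistivity (ρ = RA/l) and then fits ln ρ against 1/T to obtain Ea from the Arrhenius relation ρ = ρ0 exp(Ea/(kB T)). The membrane geometry and the resistance values below are illustrative assumptions, not data from the paper.

```python
import numpy as np

K_B = 8.617333e-5  # Boltzmann constant in eV/K

def activation_energy(temps_c, resistances_ohm, thickness_m, area_m2):
    """Fit ln(rho) = ln(rho0) + Ea/(kB*T) and return Ea in eV.

    rho = R * A / l for a membrane of thickness l measured between plane
    electrodes of area A (the values passed below are illustrative only).
    """
    temps_k = np.asarray(temps_c, dtype=float) + 273.15
    rho = np.asarray(resistances_ohm, dtype=float) * area_m2 / thickness_m
    slope, intercept = np.polyfit(1.0 / temps_k, np.log(rho), 1)
    return slope * K_B   # Ea = slope * kB, since ln(rho) = ln(rho0) + (Ea/kB)*(1/T)

# Illustrative dc resistances (ohm) at three temperatures for one membrane.
temps = [25, 75, 100]
resistances = [2.2e6, 6.5e5, 3.8e5]
print(f"Ea = {activation_energy(temps, resistances, 150e-6, 1.0e-4):.3f} eV")
```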
Synthesis of Nanoparticles

Fe3O4 nanoparticles were prepared by a simple coprecipitation method, as described in the reference, with some modifications. In particular, two equal volumes of 0.1 M FeCl2 and 0.2 M FeCl3 aqueous solutions with 1 wt% of trisodium citrate were poured into a glass beaker under mechanical stirring. During stirring, the pH of the mixture was adjusted to 9 with a 2 M NaOH aqueous solution. The faint brown product was collected by centrifugation at 7000 rpm, washed with ethanol and deionized water three times, and then dried at 80 °C overnight.

Membrane Preparation

Composite flexible membranes were fabricated using a solution casting technique. A solution of 10 wt% PVA was prepared by dissolving PVA granules (10 g) in 100 mL of distilled water at a temperature of 80 °C under vigorous stirring. Once the PVA granules had dissolved completely, ethanol (50 mL) was introduced to the solution while still stirring, yielding a viscous solution. Glycerol at a concentration of 1 wt% was introduced to the PVA solution as a plasticizer. The nanoparticles were added to the PVA solution while the solution was placed in a sonicator for 30 min; this process is essential to guarantee the uniform dispersion of the nanoparticles within the membranes. The PVA-GL-Fe3O4 solution was cast on top of aluminum foil and left to dry inside an oven at 80 °C for more than 4 h in ambient air.

Characterization

The sizes, morphology, and composition of the nanoparticles were examined using a Nova NanoSEM 450 scanning electron microscope (SEM) (FEI, Lausanne, Switzerland) equipped with an energy-dispersive X-ray spectroscopy (EDS) attachment. A PerkinElmer Spectrum 400 FT-IR/FT-NIR Fourier-transform infrared (FTIR) spectrometer (Waltham, MA, USA) was utilized to examine the structure, as well as the vibration modes, of the membranes. A Thermo Fisher Scientific DXR Raman spectrometer (Waltham, MA, USA) was utilized to investigate the crystallinity of the membranes. PerkinElmer differential scanning calorimetry (DSC, Jade DSC) and thermogravimetric analysis (TGA, Pyris 6 TGA) systems were used to investigate the stability, melting point, and crystallinity indices of the membranes. The TGA was performed at a heating rate of 10 °C/min between 20 and 700 °C, while the DSC measurements were performed in a temperature range between 20 and 250 °C. A PANalytical Empyrean X-ray diffraction (XRD) system was used to investigate the composition and structure of the produced nanoparticles by utilizing Cu-Kα radiation with a wavelength of 1.5406 Å. The XRD measurements were performed over a 2θ range of 10-80° with a 0.02° step size.
An impedance-gain-phase analysis system from Solartron (model 1260A) was utilized to study the electrical characteristics of the membranes. The membranes were tested by adopting a capacitor scheme, where each membrane was located between a pair of stainless-steel electrodes on a test stage with temperature control. The electrical characterizations were established as a function of temperature over a frequency range of 1–10^6 Hz. The Solartron system detected the electrical impedance as a function of the frequency (f) of the ac signal, Z(ω), where ω = 2πf. The system also identified the phase angle as a function of frequency, θ(ω). The electrical impedance could be expressed in terms of real and imaginary parts, Z′(ω) and Z″(ω), respectively, where Z(ω) = Z′(ω) − iZ″(ω), with i the imaginary unit. The impedance components were depicted on Nyquist plots with frequency as an implicit variable using the Zview software. Zview was also utilized to fit the impedance measurements and determine the equivalent resistance. Conclusions Flexible polymer membranes of poly(vinyl alcohol) (PVA), glycerol (GL), and iron oxide (Fe3O4) nanoparticles were fabricated and characterized. The nanoparticles were produced using a coprecipitation method with an average size of 20 ± 9 nm, then added to PVA-GL to produce solutions with different nanoparticles' concentrations. Flexible membranes were then fabricated by utilizing a solution casting method. X-ray diffraction measurements demonstrated the formation of Fe3O4 nanoparticles with a cubic structure. The composition of the nanoparticles was further confirmed using energy-dispersive x-ray spectroscopy (EDS) measurements. Differential scanning calorimetry (DSC), Fourier-transform infrared (FTIR) spectroscopy, and Raman analysis revealed that the PVA membranes maintained their crystallinity with increasing nanoparticles' concentrations. The thermogravimetric analysis (TGA) demonstrated a decent stability of the prepared membranes. Electrical impedance characterizations demonstrated that the membranes may be represented as a single parallel RC circuit. The resistivity of the membranes exhibited a negative temperature coefficient, and it decreased with the nanoparticles' concentrations. This caused a decrease of the activation energy with the nanoparticles' concentrations because of the decrease in the scattering of charge carriers at the grain boundaries. The resistivity of the fabricated flexible membranes could be tuned by introducing suitable concentrations of the nanoparticles. Hence, the generated membranes may be nominated as a potential flexible material for flexible electronic devices. Data Availability Statement: The data presented in this study are contained within the article.
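To make the single parallel RC equivalent-circuit picture used in the impedance analysis above concrete, the following sketch computes Z(ω) = R/(1 + iωRC) over the measured frequency range and prints the real and imaginary parts that would be plotted on a Nyquist diagram; the R and C values are illustrative assumptions, not fitted values from the study.

import numpy as np

def parallel_rc_impedance(freq_hz, R, C):
    """Impedance of a resistor R in parallel with a capacitor C."""
    omega = 2 * np.pi * freq_hz
    return R / (1 + 1j * omega * R * C)

# Illustrative values only: R = 1 Mohm, C = 1 nF, frequencies from 1 Hz to 1 MHz
freqs = np.logspace(0, 6, 7)
Z = parallel_rc_impedance(freqs, R=1e6, C=1e-9)
for f, z in zip(freqs, Z):
    # Nyquist convention: plot Z' on the x axis and -Z'' on the y axis
    print(f"{f:>9.0f} Hz  Z' = {z.real:12.1f} ohm   -Z'' = {-z.imag:12.1f} ohm")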
The National Bank of Cambodia (NBC) and a microfinance industry insider hailed one of Cambodia’s top 10 MFIs, Sathapana Limited, for changing its majority shareholder to a commercial bank. They said it proves that a positive and strong funding source is essential for MFIs to pursue their business operation and expansion goals. However, an economist questioned whether the change will make the institution shift from a social to a commercial interest. On November 1 Maruhan Japan Bank became Sathapana’s majority shareholder by buying a 95.1 per cent stake in the MFI. Sim Senacheert, president and CEO of Prasac MFI, said the buyout is reflective of changes within the industry, and proves they can seek strong and good partners for long-term investment in Cambodia’s microfinance industry. He said that most MFIs’ initial shareholders are development institutions, which help MFIs to provide loans to people in order to improve their living standards. Furthermore, they want to build confidence and attract purely private-sector companies to partner with MFIs. “Whenever they see that we are strong enough to run by ourselves, they [the development institutions] will exit,” he said. He said his institution will also face the same thing in the future. “Of course, we will do the same but we don’t know when it will come,” he added. Bun Mony, CEO of Sathapana, also said the change proves the strength and great achievement of his institution, which has a strong position in the market, adding that the participation of a new shareholder is evidence of this to the public. “In my view, it is the great success of Sathapana. We gained the trust of this Japanese corporation to invest directly with us,” he said. “We’re very optimistic and foresee Maruhan helping us to fulfill our ambitions for business expansion nationwide as well as to increase total assets to US$1 billion in the next 10 years.” Chan Sophal, president of the Cambodia Economic Association, said the change will have positive and negative impacts on the industry. “It is normal. It depends on the objective whether or not they still have a social mission to help the poor or shift to a commercial purpose. The positive is that they will have more funding for their business operations by giving more loans to people. But the negative is that if they can earn more, they will get return on equity – meaning they’ll have a commercial interest. “I hope that they will not try to make high returns from their investment, but will still maintain their social objective,” he added.
Non-invasive Detection of Unique Molecular Signatures in Laser-Induced Retinal Injuries. Unintentional laser exposure is an increasing concern in many operational environments. Determining whether a laser exposure event caused a retinal injury currently requires medical expertise and specialized equipment that are not always readily available. The purpose of this study is to test the feasibility of using dynamic light scattering (DLS) to non-invasively detect laser retinal injuries through interrogation of the vitreous humor (VH). Three grades of retinal laser lesions were studied: mild (minimally visible lesions), moderate (Grade II), and severe (Grade III). A pre-post-treatment design was used to collect DLS measurements in vivo at various time points, using a customized instrument. VH samples were analyzed by liquid chromatography/tandem mass spectrometry (LC-MS/MS) and relative protein abundances were determined by spectral counting. DLS signal analysis revealed significant changes in particle diameter and intensity in laser-treated groups as compared with control. Differences in protein profile in the VH of the laser-treated eyes were noted when compared with control. These results suggest that laser injury to the retina induces upregulation of proteins that diffuse into the VH from the damaged tissue, which can be detected non-invasively using DLS.
Comparing identically designed grayscale (50 phase level) and binary (5 phase levels) splitters: actual versus modeled performance Performance of diffractive optics is determined by high-quality design and a suitable fabrication process that can actually realize the design. Engineers who are tasked with developing or implementing a diffractive optic solution into a product need to take into consideration the risks of using grayscale versus binary fabrication processes. In many cases, grayscale design doesn't always provide the best solution or cost benefit during product development. This fabrication dilemma arises when the engineer has to select a source for design and/or fabrication. Engineers come face to face with reality in view of the fact that diffractive optic suppliers tend to provide their services on a "best effort basis". This can be very disheartening to an engineer who is trying to implement diffractive optics. This paper will compare and contrast the design and performance of a 1-to-24-beam, two-dimensional beam splitter fabricated using fifty-phase-level grayscale and five-phase-level binary fabrication methods. Optical modeling data will be presented showing both designs and the performance expected prior to fabrication. An overview of the optical testing methods used will be discussed, including the specific test equipment and metrology techniques used to verify actual optical performance and fabricated dimensional stability of each optical element. Presentation of the two versions of the splitter will include data on fabrication dimensional errors, split beam-to-beam uniformity, split beam-to-beam spatial size uniformity and splitter efficiency as compared to the original intended design performance and models. This is a continuation of work from 2005, Laser Beam Shaping VI.
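The uniformity and efficiency metrics mentioned in the abstract can be illustrated with a small sketch; the definitions used below (beam-to-beam uniformity error as (max − min)/(max + min), efficiency as total split-beam power over incident power) are common conventions assumed here for illustration, not necessarily the exact definitions used in the paper, and the powers are placeholder values.

def uniformity(beam_powers):
    """Beam-to-beam uniformity error: (max - min) / (max + min)."""
    p_max, p_min = max(beam_powers), min(beam_powers)
    return (p_max - p_min) / (p_max + p_min)

def efficiency(beam_powers, incident_power):
    """Fraction of the incident power delivered to the desired split beams."""
    return sum(beam_powers) / incident_power

# Placeholder powers (mW) for a 24-beam splitter and a 100 mW incident beam
powers = [3.1, 3.0, 2.9, 3.2] * 6
print(f"uniformity = {uniformity(powers):.3f}, efficiency = {efficiency(powers, 100.0):.2f}")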
GLENDALE, Ariz. (CBSNewYork/AP) — The New York Rangers acquired All-Star defenseman Keith Yandle in a trade with the Arizona Coyotes on Sunday as the team makes a push for another deep playoff run this year. The Rangers, who lost to the Kings in the Stanley Cup Finals last season, made the big move ahead of Monday’s trade deadline. Yandle has been one of the league’s best offensive-minded defensemen with 41 points in 63 games this season and has been the anchor of Arizona’s power play unit. The 28-year-old is a four-time All-Star who has led the Coyotes in scoring the past three seasons. New York also got defenseman Chris Summers and a 2016 fourth-round pick in the deal, trading defenseman John Moore, top prospect Anthony Duclair, a conditional first-round draft pick in 2016 and a second-rounder this year. Arizona agreed to retain 50 percent of Yandle’s salary. The Rangers are third in the Eastern Conference, five points behind Montreal, and two behind the Islanders in the Metropolitan Division. Yandle has been a core member of the Coyotes since they drafted him in the fourth round of the 2005 draft. An assistant captain in Arizona, he has been the subject of trade rumors for the last several seasons and was finally moved with a year remaining on a five-year, $26 million contract signed in 2011. Moore, 24, should give the Coyotes depth on the blue line. He was the 21st overall pick in the 2009 draft and had a career-best season in 2013-14, with four goals and 11 assists. Moore has a goal and five assists in 38 games this season. The big piece of the deal for the Coyotes was Duclair. The 19-year-old left wing was a third-round pick in 2013 and already has NHL experience, scoring a goal with six assists in 18 games for the Rangers this season. He also has nine goals and 16 assists in 20 games with Quebec of the QMJHL. The Coyotes are hoping he can help set a foundation for the future with Max Domi, the 12th overall draft pick in 2013 who is expected to have a big impact next season. He and Duclair played on a line together for the gold-medal Canadian team in the 2015 World Junior Championships.
The death of Trayvon Martin has sparked a fury in most corners of Black America and each detail is read eagerly by many people who are fiercely advocating for George Zimmerman, 28, the White-Hispanic neighborhood watchman who shot him in the chest at point-blank range, to be tried for his murder. White America is saying not so fast. The strict racial lines in the sand are apparent in other areas of the poll as well. 73 percent of Black people believe that Zimmerman would have been arrested if Trayvon had been White. Interestingly enough, only 33 percent of non-Hispanic White people believe that race plays a part in how the gunman is being treated. Not only does White America predominantly believe in Zimmerman’s innocence, 43 percent of them believe that there is too much media coverage surrounding the case — compared to only 16 percent of Black people who feel the same, according to an earlier Pew Research Center poll. Over 3,000 adults were polled by Gallup, with a margin of error of plus or minus 2 percentage points.
Having It All: Women, Work, Family, and the Academic Career The gender gap in academic careers at colleges and universities has been persistent and resistant to change. While women have made significant progress as incoming faculty, their ascent in the academic hierarchy, their quest for pay equity, and their move into senior academic and administrative positions has been slow. To be certain, there has been ongoing attention to, and improvement of progress for, women in the workplace in all sectors and at all levels, yet challenges remain and they have been stubborn. The integration of faculty work and family life, particularly for female faculty in postsecondary institutions, has been and remains a major focus of faculty members, academic researchers, administrators, and institutions and plays a significant role for increased representation and equity for female faculty. The challenges facing women who yearn for an academic career, particularly a tenure-track one, can be associated with melding family and career. On the basis of insights from qualitative and quantitative research, it is clear that academic mothers face challenges at home and in the workplace when it comes to achieving greater parity and representation. Although faculty positions are flexible and enjoy great autonomy, tenure-track positions are time-consuming and can be difficult to manage. Structural impediments, workplace norms, and gender stereotypes can make the path for female faculty additionally challenging.
SU-F-T-464: Development of a Secondary Check Procedure to Evaluate Flatness and Symmetry Discrepancies Detected During Daily Morning QA. PURPOSE A daily QA device is used to monitor output, flatness and symmetry constancy for all linac photon and electron energies. If large deviations from baseline in flatness or symmetry are reported, it becomes necessary to crosscheck the measurements with a second device. Setting up another device such as Matrixx (IBA Dosimetry) can be time consuming, due to its warm-up time, and trained personnel may not be readily available to analyze the results. Furthermore, this discrepancy is frequently isolated to a single energy. Unaffected energies could still be used, avoiding further patient delays, if a method to gather data for offline analysis could be developed. We find that optically stimulated luminescent dosimeters (OSLDs) provide a quick, simple, and inexpensive solution to this important clinical problem. METHODS The exact geometry of the detectors on the daily tracker (Keithley Therapy Beam Evaluator) was reproduced by placing nanoDot OSLDs (Landauer) on a solid water phantom. A combination of bolus and solid water was placed on top to provide buildup and prevent air gaps. Standard daily measurements of output, flatness and symmetry were taken for 2 photon energies (6x, 10x) and 5 electron energies (6e, 9e, 12e, 15e, 18e) using the tracker. These measurements were then repeated with the OSLD phantom. RESULTS The time it took to set up the OSLD phantom was comparable to that of the tracker. The inline and crossline OSLD phantom measurements of flatness and symmetry agreed with the tracker results to within 2%. CONCLUSION OSLDs provide a good solution for a quick second check when questionable flatness and symmetry results are detected with the tracker during daily QA.
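As an illustration of the kind of offline analysis described above, the sketch below computes simple flatness and symmetry indices from a set of off-axis detector (or OSLD) readings; the definitions and readings are generic placeholders and not necessarily those implemented by the tracker or by the authors.

def flatness(readings):
    """Flatness as (max - min) / (max + min) * 100 over in-field readings."""
    d_max, d_min = max(readings), min(readings)
    return 100.0 * (d_max - d_min) / (d_max + d_min)

def symmetry(left, right):
    """Symmetry as the maximum percent difference between mirrored points."""
    return 100.0 * max(abs(l - r) / ((l + r) / 2.0) for l, r in zip(left, right))

# Placeholder inline profile: readings at mirrored off-axis positions plus center
inline = [0.982, 0.995, 1.000, 0.993, 0.979]
print(f"flatness = {flatness(inline):.2f} %")
print(f"symmetry = {symmetry(inline[:2], inline[:-3:-1]):.2f} %")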
As the end to a three-day ceasefire drew closer on Thursday, Israeli officials said they were willing to unconditionally extend the lull in fighting, only to be rebuffed by Hamas officials, who threatened to renew fire Friday morning if Jerusalem did not accede to their demands. Palestinian delegations and Israeli officials are currently holding indirect talks in Egypt aimed at putting a conclusive end to a month of fighting, but officials say a large gap remains between the sides’ demands. Egyptian officials spent Wednesday shuttling between the sides to get approval for an extension on the truce. An unnamed Israeli official was widely reported on Wednesday night as saying that Israel was keen to extend the 72-hour ceasefire “unconditionally.” However, Hamas officials were quick to say that there was no agreement on an extension and that negotiations in Cairo were still ongoing. The Izz ad-Din al-Qassam Brigades, Hamas’s armed wing, said it will start firing as soon as the ceasefire ends on Friday at 8 a.m., Israeli news site Walla reported, citing the Hamas news agency Al-Risala. Hamas deputy leader Mussa Abu Marzouk, part of the Palestinian delegation holding talks in Cairo, denied overnight there was yet any agreement. “There is no agreement to extend the ceasefire,” he wrote on Twitter. “Any news about the extension of the truce is unfounded,” added Hamas spokesman Sami Abu Zuhri. Senior Hamas figure Ismail Radwan told Al-Risala that there has been no agreement on extending the ceasefire because Hamas demands have not been met. Pundits suggested that the threat is an attempt to bully Israel into giving in to some of the demands that Hamas presented at the ongoing indirect talks. Hamas is seeking a lifting of the Israeli blockade, extended fishing rights in the Mediterranean, the opening of a seaport and airport in the Strip, and the opening of the Rafah crossing into Egypt. Israeli officials have indicated they will only agree to lift the blockade in exchange for the Strip being disarmed, and would like the Palestinian Authority to manage the Gaza side of the Rafah border terminal. Israel launched Operation Protective Edge on July 8 to halt rocket fire from the Gaza Strip at Israeli towns and cities and destroy a network of tunnels, dug by Hamas under the border, that were used to launch terror attacks inside Israeli territory. Egypt has emerged as the prime moderator in talks between Israel and the Palestinians to negotiate an enduring ceasefire. The Egyptian initiative is based on an immediate ceasefire followed by negotiations for the long-term status. Hamas political bureau member Azat Al-Rashak told al-Risala the Palestinian delegation has not received an answer from Israel to the demands it presented to the Egyptians and warned that “the resistance is ready to continue fighting.” IDF Chief of Staff Benny Gantz warned Wednesday that the IDF would retaliate strongly if the terror organization resumes its attacks on Israel. “We aren’t done,” he said, as tens of thousands of army reservists, called up for the Gaza operation, were released from duty. “If there are incidents, we will respond to them.” Reuters reported on Wednesday that Britain, France, and Germany have offered to revive an international delegation that monitored the Rafah crossing in the past. 
The so-called EU Border Assistance Mission started working in 2005 to calm Israeli fears over security at the crossing, and in particular what was being brought into the Gaza Strip. However, the mission suspended its operations in 2007 after Hamas seized control of Gaza and pushed out PA officials. AFP contributed to this report.
Active Batch Selection for Fuzzy Classification in Facial Expression Recognition Automated recognition of facial expressions is an important problem in computer vision applications. Due to the vagueness in class definitions, expression recognition is often conceived as a fuzzy label problem. Annotating a data point in such a problem involves significant manual effort. Active learning techniques are effective in reducing human labeling effort to induce a classification model as they automatically select the salient and exemplar instances from vast amounts of unlabeled data. Further, to address the high redundancy in data such as image or video sequences as well as to account for the presence of multiple labeling agents, there have been recent attempts towards a batch mode form of active learning where a batch of data points is selected simultaneously from an unlabeled set. In this paper, we propose a novel optimization-based batch mode active learning technique for fuzzy label classification problems. To the best of our knowledge, this is the first effort to develop such a scheme primarily intended for the fuzzy label context. The proposed algorithm is computationally simple, easy to implement and has provable performance bounds. Our results on facial expression datasets corroborate the efficacy of the framework in reducing human annotation effort in real world recognition applications involving fuzzy labels.
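The paper's optimization-based selection criterion is not reproduced here, but the sketch below illustrates the general idea of batch-mode active selection for fuzzy (soft) labels: score each unlabeled instance by the entropy of its predicted label distribution and pick the top-k as the next batch to annotate. The scoring rule and the toy predictions are illustrative assumptions only, not the authors' algorithm.

import numpy as np

def label_entropy(probs):
    """Entropy of each predicted fuzzy-label distribution (rows sum to 1)."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def select_batch(pred_probs, k):
    """Return indices of the k most uncertain unlabeled instances."""
    scores = label_entropy(pred_probs)
    return np.argsort(scores)[::-1][:k]

# Toy predictions over 3 expression classes for 6 unlabeled face images
preds = np.array([[0.90, 0.05, 0.05],
                  [0.40, 0.35, 0.25],
                  [0.34, 0.33, 0.33],
                  [0.70, 0.20, 0.10],
                  [0.50, 0.50, 0.00],
                  [0.60, 0.30, 0.10]])
print(select_batch(preds, k=2))  # indices of the two most ambiguous images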
Functions of resolvin D1-ALX/FPR2 receptor interaction in the hemoglobin-induced microglial inflammatory response and neuronal injury Background Early brain injury (EBI) has been thought to be a key factor affecting the prognosis of subarachnoid hemorrhage (SAH). Many pathologies are involved in EBI, with inflammation and neuronal death being crucial to this process. Resolvin D1 (RvD1) has shown superior anti-inflammatory properties by interacting with lipoxin A4 receptor/formyl peptide receptor 2 (ALX/FPR2) in various diseases. However, its role in the central nervous system (CNS) remains poorly described. Thus, the goal of the present study was to elucidate the potential functions of the RvD1-ALX/FPR2 interaction in the brain after SAH. Methods We used an in vivo model of endovascular perforation and an in vitro model of hemoglobin (Hb) exposure as SAH models in the current study. RvD1 was used at a concentration of 25 nM in our experiments. Western blotting, quantitative polymerase chain reaction (qPCR), immunofluorescence, and other chemical-based assays were performed to assess the cellular localizations and time course fluctuations in ALX/FPR2 expression, evaluate the effects of RvD1 on Hb-induced primary microglial activation and neuronal damage, and confirm the role of ALX/FPR2 in the function of RvD1. Results ALX/FPR2 was expressed on both microglia and neurons, but not astrocytes. RvD1 exerted a good inhibitory effect on the microglial pro-inflammatory response induced by Hb, possibly by regulating the IRAK1/TRAF6/NF-κB or MAPK signaling pathways. RvD1 could also potentially attenuate Hb-induced neuronal oxidative damage and apoptosis. Finally, the mRNA expression of IRAK1/TRAF6 in microglia and GPx1/bcl-xL in neurons was reversed by the ALX/FPR2-specific antagonist Trp-Arg-Trp-Trp-Trp-Trp-NH2 (WRW4), indicating that ALX/FPR2 could mediate the neuroprotective effects of RvD1. Conclusions The results of the present study indicated that the RvD1-ALX/FPR2 interaction could potentially play dual roles in the CNS, inhibiting Hb-promoted microglial pro-inflammatory polarization and ameliorating Hb-induced neuronal oxidative damage and death. These results shed light on a good therapeutic target (ALX/FPR2) and a potentially effective drug (RvD1) for the treatment of SAH and other inflammation-associated brain diseases. Introduction polarization, which is characterized by the expression of many anti-inflammatory- or phagocytosis-related genes (IL-10, Arg1, and CD206). Several studies have shown that neurological function after SAH can be improved by regulating microglial polarization (through the promotion of anti-inflammatory polarization or the inhibition of pro-inflammatory polarization). However, few studies have investigated the specific effects of RvD1 on microglia. RvD1 has been shown to inhibit TNF-α and IL-1β secretion as well as NF-κB pathway activation in LPS-stimulated microglia in vitro, although LPS is a bacterial-derived substance, which is not consistent with the aseptic inflammatory response after SAH observed in this study. RvD1 has also been observed to enhance IL-4-induced anti-inflammatory polarization of the microglial cell line BV2, which was achieved by enhancing the nuclear transfer and DNA binding ability of PPARγ. What is more, these effects could be blocked by ALX/FPR2 inhibitors. Therefore, we speculated that RvD1 might also regulate the microglial polarization induced by Hb. 
Neuronal injury or death is the key determinant of poor prognosis after SAH, with neurons demonstrated to undergo significant apoptosis in the cortex, subcortical, and hippocampal areas. The factors that cause neuronal apoptosis are highly complex and include cerebral ischemia, microcirculation failure, subarachnoid blood stimulation, and inflammation. For example, some studies have shown that the inflammatory factor TNF-α can significantly activate the TNF-α receptor of neurons and activate the downstream apoptosis pathway to induce neuronal death. However, few studies have investigated whether ALX/FPR2 could function in neurons. ALX/FPR2 activation has been shown to promote the growth of neuronal axons and dendrites, and some studies have demonstrated that treatment with ALX/FPR2 agonists could promote neural stem cell migration and differentiation, which were achieved by promoting F-actin aggregation. Therefore, the goal of the current study was to determine whether RvD1 can inhibit the apoptosis or synaptic damage of neurons after SAH. Materials and methods Reagents RvD1 (CAS no. 872993-05-0, Cayman Chemical Company, MI, USA), Hb (Sigma, Darmstadt, Germany), and WRW4 (cat. no. 2262, Tocris Bioscience, MO, USA) were purchased from the indicated commercial suppliers. The chemical structure of RvD1 is shown in the Supplementary Material Fig. S1. RvD1 was added 30 min before Hb stimulation according to previously published articles. The concentration of RvD1 used in the present study was based on in vitro experiments in macrophages, BV2 microglial cell lines, and primary alveolar epithelial type 2 cells. Meanwhile, we also did a simple dose-response experiment in microglia and neurons, respectively (Supplementary Material Fig. S2). Finally, the concentration of 25 nM was chosen for the whole experiment. Animal experiments Twenty-eight male Sprague-Dawley rats (RRID: RGD_10395233) weighing 270-310 g were purchased from the Animal Core Facility of Nanjing Medical University. Our study was approved by the Experimental Animal Ethics Committee of Nanjing Drum Tower Hospital (approval number: 2018020003). Rats were maintained in a comfortable environment at a constant temperature of 26 ± 2°C with a 12-h light/dark cycle and free access to water and a standard chow diet. The endovascular perforation model of SAH in rats was generated as described in a previous study. Briefly, rats were trans-orally intubated and mechanically ventilated with 3% isoflurane anesthesia during the operation. A 4-0 monofilament nylon suture was inserted from the external carotid artery into the right internal carotid artery, after which the bifurcation of the anterior and middle cerebral arteries was punctured. Similar procedures were performed for the sham operation group, but the suture was withdrawn without artery perforation. A heated blanket was used to warm the rats until they recovered from anesthesia. Rats were killed by isoflurane anesthesia at each time point followed by decapitation and the removal of the brains. The basal cortex tissue was sampled and stored at -80°C. Primary microglial cell culture Primary microglial cells from the cerebral cortex were cultured as previously described. Briefly, the meninges of the brains from neonatal (1 day) mice were carefully removed and then digested in 0.25% trypsin (Gibco, USA) at 37°C for 10 min. Subsequently, the tissue was triturated with warm culture medium and filtered through a 70-μm strainer (Sigma). 
After the suspension was centrifuged at 1000 r/min for 5 min, the remaining cells were resuspended in Dulbecco's modified Eagle's medium (DMEM, GIBCO, USA) supplemented with 10% fetal bovine serum (FBS, Biological Industries, USA). Then, the cells were seeded into flasks, and after approximately 10 days, when the glial cells reached confluency, the flasks were shaken and non-adherent cells were collected and transferred to plates to obtain microglia. Approximately 2 days after seeding onto plates, the microglia were in a resting state and could be used for experiments. Primary neuron culture For neuron cultures, the cerebral cortex from a fetal rat at embryonic day 18 was used. The culture protocols were essentially the same as those described above, except that the cells were seeded in plates pre-coated with 0.1 mg/ml poly-D-lysine. Four hours after seeding, the medium was completely replaced with neurobasal medium supplemented with 1% GlutaMAX (GIBCO, USA) and 2% B27. Subsequently, after 7 days of cultivation, neurons were available for experiments. ELISA Primary microglial cells in 24-well plates were preincubated with 25 nM RvD1 for 30 min, which was followed by the addition of Hb (20 μM) and another incubation for 1, 4, 12, and 24 h. Subsequently, the culture medium was centrifuged to obtain the supernatants, which were then stored at -80°C. The protein levels of TNF-α and IL-10 were detected by ELISA (Multi Sciences Biotech, Hangzhou, China) according to the manufacturer's instructions. Quantitative PCR (qPCR) Primary microglial cells in 12-well plates were preincubated with 25 nM RvD1 for 30 min, which was followed by treatment with Hb (20 μM) for 1, 4, 12, and 24 h, respectively. Primary neurons were pre-incubated with 25 nM RvD1 for 30 min, which was followed by treatment with Hb (50 μM) for 12 h. Total RNA was extracted from cells using TRIzol reagent following the manufacturer's instructions. After quantifying the concentration and purity of the extracted RNA with a Bio-Photometer (Eppendorf, Germany), 1.0 μg of RNA was reverse transcribed to cDNA with a reverse transcription mix (Vazyme, Nanjing, China). Finally, qPCR was performed with SYBR Green mix (Roche, Switzerland) and a PCR thermocycler system (Applied Biosystems, USA). The primers used for qPCR are shown in Table 1. An internal control (GAPDH) was used to normalize the expression of each gene, and the 2^(−ΔΔCt) method was used to determine the relative gene expression. For in vitro cell staining, primary antibodies against Iba-1 (1:200, RRID: AB_2224402) and p65 (1:200, 1:1000, cat. no. D14E12, CST) were used for microglia staining, while a primary antibody against MAP2 (1:500, cat. no. ab183830, Abcam) was used to stain neurons. The protocols and the secondary antibodies used for in vitro cell staining were the same as those used to stain brain sections. Cell viability assay Approximately 2 × 10^4 primary neurons per well were seeded into 96-well plates, and after 7 days, the cells were treated with 25 nM RvD1 for 30 min followed by Hb stimulation for 24 h. Then, the medium was completely replaced with new medium containing 10% of the Cell Counting Kit-8 (CCK-8) reagent (Dojindo Laboratories, Kumamoto, Japan) and incubated for 2 h at 37°C. Subsequently, the absorbance at 450 nm was measured, and the results were calculated as the relative cell viability. 
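For reference, the relative-expression calculation mentioned above (the 2^(−ΔΔCt) method with GAPDH as the internal control) can be sketched as follows; the Ct values are placeholders for illustration, not data from this study.

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change by the 2^(-delta-delta-Ct) method.

    ct_target / ct_ref: Ct of the gene of interest / housekeeping gene (GAPDH)
    in the treated sample; *_ctrl: the same values in the control sample.
    """
    delta_ct_treated = ct_target - ct_ref
    delta_ct_control = ct_target_ctrl - ct_ref_ctrl
    ddct = delta_ct_treated - delta_ct_control
    return 2.0 ** (-ddct)

# Placeholder Ct values: target gene vs GAPDH, Hb-treated vs control sample
print(relative_expression(22.1, 17.8, 25.6, 17.9))  # fold change vs control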
Live/dead cell double staining assay A Live/Dead Cell Double Staining Kit (KGAF001, Key-GEN BioTECH, Jiangsu, China) was used to assess cell viability following the manufacturer's instructions. First, 5 μl of reagent A (PI) and B (calcein AM) were mixed with 10 ml of PBS. Then, the cells were washed twice with sterile pre-heated PBS, the prepared staining solution described above was added, and the cells were incubated at room temperature for 5-10 min. Finally, the cells were washed twice with PBS and then promptly observed under a fluorescence microscope. MDA content detection The total tissue protein was extracted with PBS buffer, and the required reagents were added according to the manufacturer's instructions (S0131, Beyotime, Shanghai, China). SOD enzyme activity test Total tissue protein was extracted in PBS buffer, and the required reagents were prepared according to the manufacturer's instructions (S0101, Beyotime, Shanghai, China). Reagent 1 (1 ml), reagent 2 (0.1 ml), reagent 3 (0.1 ml), reagent 4 (0.1 ml), standard sample (0.5 ml), and sample (0.5 ml) were added to the centrifuge tube, respectively, and placed at room temperature for 10 min. Then, the absorbance in each well was measured at 550 nm, and the SOD activity in the sample was calculated according to the standard curve. ROS detection The ROS content in primary neurons was determined using the DCFH-DA (D6883, Sigma) method. The original culture medium of neurons was replaced with DMEM medium supplemented with 10 μM DCFH-DA. Then, after incubating in an incubator for 20 min, the medium was removed, and the cells were washed three times with preheated DMEM without DCFH-DA. Then, the ROS content in cells was immediately observed under an inverted fluorescence microscope. Statistical analysis GraphPad Prism (RRID: SCR_002798) Windows version 8.0 was used to perform statistical analyses. Two experimental groups were compared by two-tailed unpaired Student's t tests. Three or more groups were compared by one-way ANOVA followed by post hoc Tukey's tests. Two-way ANOVA was used to analyze the interaction effect of time courses and treatments. Differences were considered significant at P < 0.05, presented as * or #; ns: not significant. No outlier tests were performed in the present study, no statistical methods were used to predetermine the sample size, and the data are presented as the means ± SD; n is the number of animals or independent cell samples. (Figure caption fragment: c ALX/FPR2 expression pattern in primary microglia, neurons, and astrocytes (n = 5); the data were analyzed by one-way ANOVA and Tukey's post hoc multiple comparison; *p < 0.05, ***p < 0.001 compared with the sham group for b or with the astrocyte group for c; bar = 50 μm.) Results ALX/FPR2 is elevated after SAH and primarily expressed in neurons and microglia, rather than astrocytes The location of ALX/FPR2 expression has remained controversial. Therefore, we conducted double immunofluorescence and western blot analyses to observe which cells expressed ALX/FPR2. As shown in Fig. 1a, the rat cerebral cortex staining results showed that ALX/FPR2 was highly expressed in neurons (marked by NeuN), whereas little expression was observed in microglia (marked by Iba1) and astrocytes (marked by GFAP). However, different results were obtained from in vitro experiments (Fig. 1c), where ALX/FPR2 exhibited the highest expression in primary neurons, followed by primary microglia, while primary astrocytes did not exhibit ALX/FPR2 expression. 
Regarding the fluctuation in ALX/FPR2 expression after SAH over time, as shown in Fig. 1b, the expression of ALX/FPR2 protein in the rat brain significantly increased from 24 h to 3 days before decreasing on the 7th day after SAH. Hb induces significant microglial pro-inflammatory polarization We constructed the SAH model of primary microglia by Hb stimulation in vitro and performed qPCR to assess mRNA expression of related genes. As shown in Fig. 2, under the stimulation of 20 μM Hb, the primary microglia were obviously activated, as evidenced by obvious changes in the polarization phenotype index. For pro-inflammatory polarization (Fig. 2b), the trend of the changes in TNF-α and IL-1β cytokine levels was essentially the same. The transcription of these genes significantly increased and peaked from 1 h after Hb stimulation, and gradually decreased with levels remaining higher than that of the control group. iNOS expression was not significantly altered at 1 h but increased and peaked at 4 h before gradually decreasing. CD86, which is expressed on the cell membrane surface, significantly increased at 1 h and peaked at 4 h before gradually decreasing, but the overall increase was not as obvious as that of the previous markers. For anti-inflammatory polarization (Fig. 2a), IL-10 showed a time course expression pattern similar to that of TNF-α and IL-1β, peaking after 1 h before gradually decreasing until the 24-h time point. CD206 expression also peaked after 1 h, exhibiting higher levels than that observed in the control group after 4 h, although its expression sharply decreased from 12 to 24 h. Arg1 expression gradually increased after a significant decrease after 1 h and rebounded at the 24-h time point, which was significantly higher than that of the control group. (Figure caption fragment: the data are shown as the relative changes of the experimental group versus the control group (baseline) and were analyzed by two-way ANOVA followed by Sidak post hoc multiple comparison; *p < 0.05, **p < 0.01, ***p < 0.001; n is the number of independent cell samples.) RvD1 attenuates Hb-induced microglial pro-inflammatory polarization After confirming that the SAH in vitro model was essentially consistent, we added 25 nM RvD1 to the culture medium 30 min in advance to observe its effect on microglial cell polarization and performed qPCR to detect the mRNA expression changes. As shown in Fig. 3, for pro-inflammatory cytokines, RvD1 significantly inhibited the expression of TNF-α and IL-1β induced by Hb at 1 and 4 h. Especially for IL-1β, the inhibition effect lasted for 24 h, while TNF-α expression rebounded at 24 h. RvD1 also significantly inhibited the iNOS and CD86 expression. For iNOS, the inhibition was the most obvious at 4 h, with no subsequent significant difference observed at other time points. Regarding CD86 expression, significant attenuation was observed at 1 and 4 h, but at 24 h, a significant rebound was observed, similar to that detected for TNF-α. For IL-10, expression was significantly decreased by RvD1 at 1 h but increased significantly at 4 and 12 h, suggesting the potential effect of RvD1 to promote the transformation of the anti-inflammatory response. For CD206, RvD1 also significantly reduced its expression in the early period (1 and 4 h), but in the later period (12 and 24 h), no significant difference was observed compared with that detected in the Hb stimulation group. 
The expression of Arg1 was significantly promoted by RvD1 at 1 and 4 h, while no significant difference could be observed in the later phase. These results suggested that RvD1 indeed had a significant anti-inflammatory effect and that it could potentially promote the anti-inflammatory polarization of microglia. RvD1 inhibits the protein expression of TNF-α but promotes that of IL-10 In the previous assays, we only assessed the polarization phenotype index, i.e., the changes in mRNA expression. As shown in Fig. 4, we also assessed the changes in the protein expression of TNF-α and IL-10 by ELISA. The protein expression of TNF-α and IL-10 was significantly promoted after Hb stimulation, which increased at 1 h and then gradually increased, peaking at 24 h (Fig. 4a, c). After RvD1 treatment, the protein expression of TNF-α was significantly inhibited at 4 h but not at other time points, while the protein expression of IL-10 significantly increased at 1 and 4 h, with no significant difference observed in the later phase (Fig. 4b, d). These results provided further evidence of the effects of RvD1 on microglia-related inflammation, suggesting that RvD1 could show a superior anti-inflammatory effect. RvD1 possibly functions by regulating the IRAK1/TRAF6/NF-κB signaling pathway After confirming that RvD1 could have a good anti-inflammatory effect, we continued to assess its potential mechanisms. To this end, we used immunofluorescence staining and qPCR analyses to assess the changes in key proteins in the pro-inflammatory signaling pathways. Fifteen minutes after Hb stimulation, microglia showed obvious activation, as evidenced by the large amount of p65 nuclear translocation observed (Fig. 5a, d). However, after RvD1 treatment, p65 nuclear translocation was significantly inhibited. It was worth mentioning that although RvD1 inhibited the activation of this inflammatory pathway, the morphology of microglia was not significantly different from that of the Hb stimulation group, both of which showed an increase in mitosis and in cytoplasm volume, appearing as "fried egg" (Fig. 5a). The mRNA expression was significantly inhibited at 1 h after RvD1 treatment for both IRAK1 and TRAF6, while only for IRAK1 at 4 h. Meanwhile, they both slightly rebounded at 12 h, with no significant difference observed (Fig. 5b, c). NF-κB and MAPKs signaling activities are inhibited by RvD1 To further assess the inhibitory effects of RvD1 on the inflammatory pathways, we assessed the levels of proteins downstream in the NF-κB and MAPK pathways by western blot analysis. The results shown in Fig. 6 demonstrated that the proteins of the two pathways were obviously activated after Hb stimulation, suggesting that they could strongly promote the pro-inflammatory polarization of microglia. Although the phosphorylation levels of p65, JNK, p44, p42, and p38 remained higher than those of the control group after the application of RvD1 in the Hb + RvD1 group, the phosphorylation levels were significantly inhibited when compared with the Hb stimulation group, suggesting that RvD1 had a regulatory effect on both pathways. With respect to microglia, both pathways were regulated by IRAK1/TRAF6. RvD1 ameliorates Hb-induced neuronal oxidative stress and synaptic damage Oxidative stress of neurons is the primary factor associated with neuronal damage. As we observed that ALX/FPR2 was expressed in a large number of neurons, in this experiment, primary neurons were used to assess whether RvD1 had a direct effect on neurons. 
To this end, we observed the effects of RvD1 on the oxidative stress of neurons and synaptic damage after Hb treatment. ROS levels in neurons from the Hb-treated group were significantly higher than those of the control group, indicating that Hb caused notable oxidative stress, while ROS-positive cell staining significantly reduced after RvD1 application (Fig. 7a, b). The results of cell viability, MDA content, and SOD enzyme activity assays also showed similar results (Fig. 7c-e). After Hb treatment, the viability of primary neurons decreased, the MDA content increased, and SOD enzyme activity decreased. Compared with the Hb-treated group, the viability of cells significantly increased and the content of MDA decreased after the application of RvD1, and although SOD enzyme activity increased, no significant difference was observed. For the synaptic damage experimental results, we also observed that the synapses stained for microtubule-associated protein 2 (MAP2) significantly reduced after Hb treatment, while the application of RvD1 could reverse this damage (Fig. 7f, g). RvD1 can reduce the death of neurons in vitro caused by Hb The results showed that a large number of primary neurons died after Hb stimulation. As shown in Fig. 8a, c, numerous dead cells appeared in the Hb-treated group, and their number significantly decreased after RvD1 application. The western blot results of apoptosis-related proteins showed that Hb stimulation caused an increase in bax and cleaved caspase-3 levels, but did not significantly downregulate bcl-xL and caspase-3. Compared with that observed in the Hb stimulation group, the expression of bax protein decreased after the application of RvD1, but no significant difference was observed. Furthermore, the expression of bcl-xL protein significantly increased, while that of cleaved caspase-3 decreased, whereas total caspase-3 levels did not (Fig. 8b, d). RvD1-mediated effects on microglia and neurons are dependent on ALX/FPR2 We showed that microglia and neurons expressed the receptor ALX/FPR2 and observed some functions of RvD1 on primary microglia and neurons. To further confirm whether the effects of RvD1 are based on an RvD1-ALX/FPR2 interaction, we used the ALX/FPR2-specific antagonist WRW4 to investigate whether it can abolish the effects of RvD1. The results (Fig. 9a) showed that the mRNA expression of upstream signaling pathway genes (IRAK1/TRAF6), as well as of the downstream factors (IL-1β/TNF-α), was reversed by the addition of WRW4 when comparing the Hb + RvD1 group to the Hb + RvD1 + WRW4 group. With respect to neurons (Fig. 9b), the mRNA expression of the antioxidant gene glutathione peroxidase 1 (GPx1) and the anti-apoptosis gene bcl-xL was significantly abolished by WRW4. The expression of other genes such as heme oxygenase 1 (Ho-1) and bax showed no significant differences when comparing the various groups, indicating that they might not be good indicators of RvD1 activity. Discussion In the present study, we investigated the expression pattern of ALX/FPR2 and observed the functions of the RvD1-ALX/FPR2 interaction; the anti-inflammatory effects of this interaction potentially occur through regulation of IRAK1/TRAF6 signaling activities, and RvD1 has the potential to inhibit Hb-induced neuronal damage or apoptosis. ALX/FPR2 has been widely studied on neutrophils and monocytes. It is a promiscuous receptor that can bind to many types of ligands and exert different functions. (Figure caption fragment: c the activity of neurons was measured by the CCK-8 method (n = 6); d, e MDA content and SOD activity (n = 3);
f the synapse changes of neurons, shown as white arrows; g the number of synaptic intersections in the visual field analyzed by ImageJ (n = 5); the data were analyzed by one-way ANOVA and Tukey's post hoc multiple comparison; **p < 0.01, ***p < 0.001, #p < 0.05, ##p < 0.01; bar = 50 μm for ROS staining and 40 μm for MAP2 staining; n is the number of independent cell samples.) In the CNS, Aβ protein phagocytosis via ALX/FPR2 was investigated in the Alzheimer's disease model. However, the cell localization of ALX/FPR2 has seldom been explored in the brain. The results of our in vivo and in vitro experiments showed that ALX/FPR2 was highly expressed in neurons, moderately expressed in microglia, and not expressed in astrocytes. The ALX/FPR2 expression results in neurons were consistent with the findings of a study by Ho, providing a theoretical basis and research direction for studying the analgesic effects of RvD1. For microglia, the observed ALX/FPR2 expression was essentially the same as that observed in a number of other studies, but the lack of ALX/FPR2 expression in astrocytes was different from the results of other studies. However, these tissue immunofluorescence results were not fully observed in the present study, which might be due to the high expression of neurons covering up the expression of microglia. ALX/FPR2 expression increased significantly after SAH and was maintained from 24 h to approximately 3 days, suggesting that ALX/FPR2 might play an important role in the pathophysiological process after SAH. Based on the published studies of ALX/FPR2 and our above results, it might also be feasible to use ALX/FPR2 as a potential target in the treatment of SAH. (Fig. 8 caption: Influence of RvD1 on neuronal death induced by Hb. The primary neurons were cultured in a medium containing 50 μM Hb for 24 h, and RvD1 was added at a concentration of 25 nM 30 min before Hb stimulation. a, c Live/dead cell staining and the corresponding quantitative statistical results (n = 3); the red color shows PI staining, indicating dead cells; the green color shows calcein AM staining, indicating live cells. b, d The western blot results of Bax, bcl-xL, cleaved caspase-3, and caspase-3, and the corresponding semi-quantitative statistical results (n = 3). The data were analyzed by one-way ANOVA and Tukey's post hoc multiple comparison; *p < 0.05, ***p < 0.001, #p < 0.05, ##p < 0.01; ns, no significant difference; n is the number of independent cell samples; bar = 50 μm.) The polarization phenotype of microglia induced by Hb indicated that the pro-inflammatory and anti-inflammatory polarization phenotypes were not totally opposite. Indeed, it has been shown that NF-κB plays dual roles in the acute and resolution stages of inflammation, because it not only increases the levels of pro-inflammatory factors but also promotes the expression of anti-inflammatory factors. In the present study, the same results were also observed. Specifically, in the early phase (1 and 4 h), when the expression of TNF-α and IL-1β significantly increased, the expression of IL-10 and CD206 also significantly increased. These results suggested that the polarization of microglia was complex even under the stimulation of a single factor, and that the polarization direction continued to change with time. Nevertheless, a weakness of the present study is that the indicators could not fully assess the polarization phenotype, indicating that more samples and indicators are needed to verify these results in the future. 
After the application of RvD1, the polarized phenotype of microglia further changed. The expression of pro-inflammatory markers was significantly inhibited in the early stage, where IL-1β was inhibited throughout the time course, while some pro-inflammatory markers such as TNF-α, iNOS and CD86 rebounded to different degrees over 24 h. These results confirmed those of earlier studies performed by Serhan et al. and Hong et al. showing that RvD1 acts on microglia to reduce cytokine production. For the anti-inflammatory index, IL-10 and CD206 were also significantly inhibited after 1 or 4 h but increased in the later period. Among them, IL-10 was significantly different between the Hb treatment and RvD1 groups, while the levels of CD206 were not significantly different. Arg1 was markedly upregulated in the early stage and gradually downregulated in the later stage. These results further confirmed the complexity of microglial polarization. (Fig. 9 caption: WRW4 reverses the effect of RvD1 on microglia and neurons. The ALX/FPR2-specific antagonist WRW4 (10 μM) and RvD1 (25 nM) were added 30 min before Hb stimulation, and microglia were then stimulated with 20 μM Hb for 1 h, while primary neurons were stimulated with 50 μM Hb for 12 h. a mRNA expression of IRAK1/TRAF6/IL-1β/TNF-α for microglia (n = 3). b mRNA expression of Ho-1/GPx1/bcl-xL/bax for neurons (n = 3). The data were analyzed by one-way ANOVA and Tukey's post hoc multiple comparison; *p < 0.05, **p < 0.01, ***p < 0.001 compared with the Hb + RvD1 group; ns, no significant difference; n is the number of independent cell samples.) However, it was undeniable that RvD1 had a significant regulatory effect on microglia, especially in the inhibition of the pro-inflammatory phenotype. When we further assessed the protein expression changes of TNF-α and IL-10, increased IL-10 levels were observed at 1 and 4 h, which was essentially consistent with the previous results. In contrast, TNF-α was significantly inhibited at 4 h; although this could also explain the inhibitory effects of RvD1, the effective time course was too short, suggesting that post-transcriptional regulation or protein-level modification was involved. Subsequently, we continued to assess the expression or activation of upstream and downstream proteins of pro-inflammatory factors to elucidate the underlying mechanisms. The results showed that the transcriptional expression of TRAF6 and its upstream factor IRAK1 was upregulated by Hb and the Hb-induced upregulation was inhibited by RvD1. The activation of many proteins in the downstream NF-κB and MAPK pathways also showed similar changes, which was essentially consistent with the observed phenotypic change. All of the above data indicated that RvD1 could inhibit the microglial pro-inflammatory response, possibly by regulating IRAK1/TRAF6 signaling activities. Neuronal death is an important pathological phenomenon after SAH and is also the most studied aspect of this process. In the present study, we observed the effects of RvD1 on neuronal apoptosis and oxidative stress after SAH in vitro. We confirmed that RvD1 had a direct protective effect on primary neurons, which was primarily reflected in the improvements in cell viability, inhibition of the production of oxidative stress products, the avoidance of synaptic injury and a reduction in cell death induced by Hb. These results were consistent with those of previous studies. For example, Peritore et al. 
observed that ALX/FPR2 gene knockout significantly increased the apoptosis of brain neurons in a depression model. The results of some studies also suggested other potential factors. For instance, He et al. showed that ALX/FPR2 could mediate neuronal apoptosis, which was inhibited by WRW4, a specific inhibitor of ALX/FPR2. In addition, Ying et al. observed that ALX/FPR2 was expressed in neuronal cell lines, which could increase the susceptibility of these cells to Aβ protein, while other ALX/FPR2 ligands, such as humanin and W peptide, could inhibit this susceptibility and reduce Aβ-induced neuronal apoptosis. Based on the results of the present study and other studies, we can draw a preliminary conclusion that ALX/FPR2 is expressed in neurons, but its specific function is determined by ligands, where ALX/FPR2 can promote either neuronal apoptosis or neuron growth through binding different ligands. It was not investigated how ALX/FPR2 mediated the protective effects of RvD1 on neurons in the present study. RvD1 could promote bcl-xL expression; meanwhile, inhibitory effects of RvD1 were also observed on the increase in cleaved caspase-3 and bax levels induced by Hb. Regarding the related proteins upstream of the ALX/FPR2 signaling pathway, no detailed assessments were made, which was also a deficiency of the present study and would be further addressed in future studies. However, the results of some other relevant studies might provide some indications. For instance, Fan et al. showed that RvD1 functioned through the PI3K-Akt-caspase-3 pathway in rats with cerebral hemorrhage in vivo and neuronal Hb stimulation in vitro. However, there are few relevant results for other signaling pathways, which need further study in the future. An additional limitation of the present study is the small sample size; a follow-up study with a larger sample size is necessary. In addition, animal experiments are needed to further explore the role of the RvD1-ALX/FPR2 interaction in the CNS. Conclusion In the present study, ALX/FPR2 was shown to be expressed in neurons and microglia. The RvD1-ALX/FPR2 interaction exerted superior inhibitory effects on Hb-induced microglial pro-inflammatory polarization, possibly by negatively regulating the signaling activities of IRAK1/TRAF6/NF-κB or MAPKs. With respect to neurons, the RvD1-ALX/FPR2 interaction could also ameliorate Hb-induced neuronal oxidative damage and death (Fig. 10). All of the above results indicated a novel therapeutic target (ALX/FPR2) and drug (RvD1) for the treatment of SAH and other associated diseases. Additional file 1: Figure S1. Chemical structure of RvD1. Figure S2. Dose-response experiments of RvD1 in microglia and neurons. The primary microglia and neurons were cultured in a medium containing 20 μM or 50 μM Hb for 12 h, respectively. RvD1 was added at a concentration of 25 nM or 75 nM 30 min before Hb stimulation. A: TNF-α mRNA expression changes in microglia. B: bcl-xL mRNA expression changes in neurons. The data were analyzed by one-way ANOVA and Tukey's post hoc multiple comparison. **p < 0.01, #p < 0.05; ns, no significant difference. n is the number of independent cell samples.
A planned water shutdown scheduled by the city’s Public Utilities Department will occur Thursday, Jan. 5 in sections of La Jolla Shores. Impacted areas are: 8000-8400 La Jolla Shores Drive, 8000-8100 Calle del Cielo, 2300-2400 Paseo Dorado, 2300-2400 Avenida de la Playa, 7800 Dorado Court, 2300 Calle de la Garza, 2300 Calle del Oro and 2300-2400 Vallecitos. The shutdown in these areas, part of the water main replacement project in the neighborhood, will be from 8 a.m. until 6 p.m. As a result of work being conducted, there will be construction noise in the affected areas. Customers in these areas have been provided with 3-day advance notification of the shutdown. Anyone with questions or concerns can call Tisa Aguero at 619-527-7539, Evardo Lopez at 619-990-1573 or Bernard Powell at 619-527-3945.
Depth-of-field characteristic analysis of the imaging system with scattering medium The depth-of-field (DOF) characteristic of the imaging system with scattering medium is analyzed based on the analytical model of ambiguity function as a polar display of the optical transfer function (OTF) in this paper. It is indicated that the scattering medium can help re-collect more high spatial frequencies, which are normally lost with defocusing in traditional imaging systems. Therefore, the scattering medium can be considered not as an obstacle for imaging but as a useful tool to extend the DOF of the imaging system. To test the imaging properties and limitations, we performed optical experiments in a single-lens imaging system.
The Future of High-Energy Astrophysical Neutrino Flavor Measurements We critically examine the ability of future neutrino telescopes, including Baikal-GVD, KM3NeT, P-ONE, TAMBO, and IceCube-Gen2, to determine the flavor composition of high-energy astrophysical neutrinos, i.e., the relative number of $\nu_e$, $\nu_\mu$, and $\nu_\tau$, in light of improving measurements of the neutrino mixing parameters. Starting in 2020, we show how measurements by JUNO, DUNE, and Hyper-Kamiokande will affect our ability to determine the regions of flavor composition at Earth that are allowed by neutrino oscillations under different assumptions of the flavor composition that is emitted by the astrophysical sources. From 2020 to 2040, the error on inferring the flavor composition at the source will improve from $>40\%$ to less than $6\%$. By 2040, under the assumption that pion decay is the principal production mechanism of high-energy astrophysical neutrinos, a sub-dominant mechanism could be constrained to contribute less than 20\% of the flux at 99.7\% credibility. These conclusions are robust in the nonstandard scenario where neutrino mixing is non-unitary, a scenario that is the target of next-generation experiments, in particular the IceCube-Upgrade. Finally, to illustrate the improvement in using flavor composition to test beyond-the-Standard-Model physics, we examine the possibility of neutrino decay and find that, by 2040, combined neutrino telescope measurements will be able to limit the decay rate of the heavier neutrinos to below $1.8\times 10^{-5} (m/\mathrm{eV})$~s$^{-1}$, at 95\% credibility. I. INTRODUCTION High-energy astrophysical neutrinos in the TeV-PeV energy range, discovered by the IceCube Neutrino Observatory, offer unprecedented insight into astrophysics [6, and fundamental physics. On the astrophysical side, they may reveal the identity of the most energetic non-thermal sources in the Universe, located at cosmological-scale distances away from us. These neutrinos attain energies well beyond the reach of terrestrial colliders, granting access to a variety of Standard Model and beyond-the-Standard-Model (BSM) physics scenarios. Because of their small interaction cross sections, neutrinos are unlikely to interact en route to Earth, so the information they carry about distant sources and high-energy processes reaches us with little to no distortion. Detecting these neutrinos and extracting that information is challenging for the same reason, requiring cubic-kilometer or larger detectors to overcome their low detection rate. The sources of the observed flux of high-energy astrophysical neutrinos (still unidentified today, save for two promising instances) are presumably hadronic accelerators where high-energy protons and nuclei interact with surrounding matter and radiation to make high-energy neutrinos. Different neutrino production mechanisms yield different flavor compositions at the source, and during their journey to Earth over cosmological distances, neutrinos oscillate, i.e., they undergo flavor conversions. The standard theory of neutrino oscillation allows us to map a given flavor composition at the source to an expected flavor composition at Earth. Here, large-scale neutrino telescopes detect them; the flavor composition of the neutrino flux results from comparing the number of events with different morphologies, which roughly reflects the number of neutrinos of each flavor [31,. 
Additionally, if there is more than one mechanism of neutrino production, each producing neutrinos with a different flavor composition, constraining the average flavor composition amounts to asking how large the fractional contribution of each mechanism can be in order for it to be detected. At present, however, our ability to perform such a precise flavor reconstruction and recover the flavor composition at the source is hampered by two important yet surmountable limitations. First, the prediction of how a given flavor composition at the source maps to a flavor composition at Earth relies on our knowledge of the values of the neutrino mixing parameters that drive the oscillations. Because these are not precisely known, such predictions are uncertain. Second, measuring the flavor composition in neutrino telescopes is challenging, and suffers from large statistical and systematic uncertainties. This prevents us from distinguishing between predictions that are similar but based on different assumptions of neutrino production. In this work, we show that these limitations will be overcome in the next two decades, thanks to new terrestrial and astrophysical neutrino experiments that are planned or under construction. Oscillation experiments that use terrestrial neutrinos (JUNO, DUNE, Hyper-Kamiokande (HK), and the IceCube-Upgrade) will reduce the uncertainties in the mixing parameters and put the standard oscillation framework to the test. Large-scale neutrino telescopes (Baikal-GVD, IceCube-Gen2, KM3NeT, P-ONE, and TAMBO) will detect more high-energy astrophysical neutrinos and improve the measurement of their flavor composition. To show this, we make detailed, realistic projections of how the uncertainty in the predicted flavor composition at Earth of the isotropic flux of high-energy neutrinos, and in its measurement, will evolve over the next two decades. Our main finding is that, by 2040, we will be able to precisely infer the flavor composition at the sources, including possibly identifying the contribution of multiple neutrino-production mechanisms, even if oscillations are non-unitary [49, 60, 63]. Further, we illustrate the upcoming power of flavor measurements to probe BSM neutrino physics using neutrino decay [35, 45, 47, 52, 55]. This article is organized as follows. In Section II we revisit the basics of neutrino mixing, especially as it pertains to high-energy astrophysical neutrinos, and introduce the formalism of neutrino decay and non-unitary neutrino evolution. In Section III we introduce the future neutrino experiments that we consider in our analysis and their measurement goals. In Section IV we present the statistical method that we use to produce the allowed regions of flavor composition at Earth. In Section V we present our results. In Section VI, we summarize and conclude. In the appendices, we show additional analysis cases that we do not explore in the main text. FIG. 1. Top: Time evolution of the measurement precision of the mixing parameters [91]; NuFit 5.0 is the latest fit. Projections from 2020 to 2040 have best-fit values fixed at the current values from NuFit 5.0, but uncertainties reduced due to JUNO, DUNE, and Hyper-Kamiokande (HK) measurements, following our simulations. The boxes at the top show the start dates and projected estimated running times for these experiments. Bottom: Time evolution of the expected error on the unitarity of the neutrino flavor mixing matrix; values taken from Refs.. II. FLAVOR COMPOSITION OF HIGH-ENERGY ASTROPHYSICAL NEUTRINOS A.
Flavor composition at the sources In astrophysical sites of hadronic acceleration, protons and heavier nuclei are accelerated to energies well beyond the PeV scale. Likely candidate acceleration sites feature high particle densities, high baryon content, and matter that moves at relativistic bulk speeds, such as the jets of gamma-ray bursts and active galactic nuclei. There, high-energy protons interact with ambient matter and radiation [68, 69], generating secondary pions and kaons that decay into high-energy neutrinos. The physical conditions at the sources determine what neutrino production channels are available and affect the maximum energy of the parent protons and the energy losses of the secondaries. This, in turn, determines the relative number of neutrinos and anti-neutrinos of each flavor that are produced, i.e., the flavor composition at the sources. We parametrize the flavor composition at the source via the flavor ratios $(f_{e,S}, f_{\mu,S}, f_{\tau,S})$, where $f_{\alpha,S} \in [0,1]$ is the ratio of the flux of $\nu_\alpha$ and $\bar\nu_\alpha$, with $\alpha = e$, $\mu$, or $\tau$, to the total flux. We do not separate neutrinos and anti-neutrinos because high-energy neutrino telescopes are unable to make this distinction on an event-by-event basis, with the exception of the Glashow resonance triggered by high-energy $\bar\nu_e$ [33]. Neutrinos and anti-neutrinos may be distinguished statistically by measuring the inelasticity distribution of detected events; see Ref. for the first measurement of the flavor composition using this observable. Henceforth, we use $\nu_\alpha$ to mean both neutrinos and anti-neutrinos of flavor $\alpha$. Flavor ratios are normalized to one, i.e., $\sum_\alpha f_{\alpha,S} = 1$, and if there are additional neutrino species, the sum also includes them; see Section II F for details. Presently, because the identity of the high-energy astrophysical neutrino sources is unknown, there is considerable uncertainty as to the dominant neutrino production mechanism and the physical conditions at production. In addition, these may be different at different neutrino energies. However, because the flavor ratios reflect the neutrino production mechanism, we can use them, after accounting for oscillations en route to Earth as discussed in Section II B, to reveal the production mechanism and help identify the neutrino sources. In our analysis, we explore all possible flavor ratios at the sources, but showcase three physically motivated benchmark scenarios commonly discussed in the literature: full pion decay, muon damping, and neutron decay; they are summarized in the block below. In the full pion decay scenario, charged pions generate neutrinos via $\pi^+ \to \mu^+ + \nu_\mu$, followed by $\mu^+ \to e^+ + \nu_e + \bar\nu_\mu$, and their charge-conjugated processes. In this case, the flavor ratio is $\left(\tfrac{1}{3}, \tfrac{2}{3}, 0\right)_S$. This is the canonical expectation for the flavor ratios at the sources. In the muon-damped scenario, the intermediate muons cool via synchrotron radiation induced by strong magnetic fields harbored by the sources. As a result, only the $\nu_\mu$ coming directly from pion decay have high energy. In this case, the flavor composition is $(0, 1, 0)_S$. The flavor composition may transition from the full pion decay scenario to the muon-damped scenario at an energy determined by the onset of synchrotron losses; see, e.g., Refs.. Observing this transition would reveal the magnetic field strength of the sources and help identify them; this might be possible in IceCube-Gen2 if the transition occurs at PeV energies. In the neutron decay scenario, $\bar\nu_e$ exclusively are generated in the beta decay of neutrons or short-lived isotopes produced by spallation or photodisintegration of cosmic rays. In this case, the flavor composition is $(1, 0, 0)_S$.
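For convenience, the three benchmark compositions just described can be collected in one place; this is simply a restatement of the values quoted above, in the $(f_{e,S} : f_{\mu,S} : f_{\tau,S})$ notation used throughout:

\begin{align*}
  \text{full pion decay:} \quad & \boldsymbol{f}_S^{\pi} = \left(\tfrac{1}{3} : \tfrac{2}{3} : 0\right)_S , \\
  \text{muon-damped:}     \quad & \boldsymbol{f}_S^{\mu} = \left(0 : 1 : 0\right)_S , \\
  \text{neutron decay:}   \quad & \boldsymbol{f}_S^{n}   = \left(1 : 0 : 0\right)_S , \qquad \sum_{\alpha} f_{\alpha,S} = 1 .
\end{align*}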
The neutron decay scenario is unlikely, since neutrinos from beta decay are significantly less energetic than those from pion decay. Already, flavor-ratio measurements disfavor this production scenario at $\geq 2\sigma$; we keep it in our discussion because it remains a useful benchmark. B. Standard neutrino oscillations Because the neutrino flavor states, $|\nu_e\rangle$, $|\nu_\mu\rangle$, $|\nu_\tau\rangle$, and the energy eigenstates of the free-particle Hamiltonian, i.e., the mass eigenstates $|\nu_1\rangle$, $|\nu_2\rangle$, $|\nu_3\rangle$, are different, neutrinos change flavor, or oscillate, as they propagate from their sources to Earth. Oscillations alter the neutrino flavor ratios that reach Earth. Below, we describe how this occurs within the standard oscillation scenario; for comprehensive reviews, see Refs.. Later, in Sections II E and II F, we introduce alternative flavor-transition mechanisms. In the standard oscillation scenario, the flavor and mass states are related via a unitary transformation, i.e., $|\nu_\alpha\rangle = \sum_i U_{\alpha i}^{*} |\nu_i\rangle$, where $\alpha = e, \mu, \tau$, and $U$ is the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) lepton mixing matrix. We adopt the standard parametrization of $U$ as a $3\times 3$ complex "rotation" matrix, in terms of three mixing angles, $\theta_{12}$, $\theta_{23}$, and $\theta_{13}$, and one phase, $\delta_{CP}$. If neutrinos are Majorana fermions, $U$ contains two additional phases that do not affect oscillations. Neutrino oscillation experiments and global fits to their data have determined the values of the mixing angles to the few-percent level and have started to corner the $\delta_{CP}$ phase. In this work, we use and build on the recent NuFit 5.0 global fit, which uses data from 31 different analyses of solar, atmospheric, reactor, and accelerator neutrino experiments. The characteristic neutrino oscillation length is $L_{\rm osc} = 4\pi E/\Delta m^2_{ij}$, where $E \gg m_i$ is the neutrino energy and $\Delta m^2_{ij} \equiv m_i^2 - m_j^2$ is the difference between the squared masses of the mass eigenstates, with $i, j = 1, 2, 3$. High-energy astrophysical neutrinos, with energies between 10 TeV and 10 PeV, have $L_{\rm osc} \ll 1$ pc. Thus, compared to the cosmological-scale distances over which these neutrinos propagate and the energy resolutions of neutrino telescopes, the oscillations are rapid and cannot be resolved. Instead, we are sensitive only to the average $\nu_\alpha \to \nu_\beta$ flavor-transition probability, i.e., $\bar P_{\alpha\beta} = \sum_i |U_{\alpha i}|^2 |U_{\beta i}|^2$. The average probability depends only on the mixing angles and the CP-violation phase. We adopt this approximation in our standard oscillation analysis. Our choice is further motivated by the fact that the isotropic high-energy neutrino flux is the aggregated contribution of multiple unresolved sources, each located at a different distance, so that the individual oscillation patterns coming from each source are smeared by the spread of the distribution of distances, leaving the average flavor-transition probability as the only accessible quantity. Because the complex phases in $U$ do not contribute to the average flavor-transition probability, leptonic CP violation does not affect the flavor composition at Earth. However, in the standard parametrization of $U$ that we use, the value of $\delta_{CP}$ still impacts the probability, since the relevant moduli can be written as $X_k \pm Y \cos\delta_{CP}$, where $X_1$, $X_2$, $X_3$, $X_4$, and $Y$ are computable functions of the mixing angles, but not of $\delta_{CP}$. In other words, $\cos\delta_{CP}$ contributes to the content of the $\nu_1$ and $\nu_2$ mass eigenstates in the $\nu_\mu$ and $\nu_\tau$ flavor states. However, the effect of $\delta_{CP}$ on the flavor-transition probability is weak because it is suppressed by $\sin\theta_{13} \ll 1$. Table I shows the current best-fit values and uncertainties of the mixing parameters from NuFit 5.0.
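As a concrete illustration of this averaging, the following minimal Python sketch builds the standard-parametrization PMNS matrix, forms the averaged transition probability $\bar P_{\alpha\beta} = \sum_i |U_{\alpha i}|^2 |U_{\beta i}|^2$, and maps the pion-decay source composition to Earth. The helper names and the numerical mixing values are illustrative placeholders (roughly NuFit-5.0-like), not the exact inputs or code used in this analysis.

import numpy as np

def pmns(s12sq, s23sq, s13sq, dcp):
    """Standard-parametrization PMNS matrix from sin^2(theta_ij) and delta_CP (radians)."""
    s12, s23, s13 = np.sqrt([s12sq, s23sq, s13sq])
    c12, c23, c13 = np.sqrt([1.0 - s12sq, 1.0 - s23sq, 1.0 - s13sq])
    ed = np.exp(1j * dcp)
    return np.array([
        [c12 * c13, s12 * c13, s13 * np.conj(ed)],
        [-s12 * c23 - c12 * s23 * s13 * ed, c12 * c23 - s12 * s23 * s13 * ed, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * ed, -c12 * s23 - s12 * c23 * s13 * ed, c23 * c13],
    ])

def averaged_probability(U):
    """Oscillation-averaged transition matrix, P[alpha, beta] = sum_i |U_ai|^2 |U_bi|^2."""
    W = np.abs(U) ** 2
    return W @ W.T

# Illustrative mixing values, roughly NuFit-5.0-like placeholders (not the exact inputs used here).
U = pmns(s12sq=0.304, s23sq=0.573, s13sq=0.022, dcp=np.radians(197.0))
P = averaged_probability(U)

f_source = np.array([1.0 / 3.0, 2.0 / 3.0, 0.0])  # full pion decay benchmark
f_earth = f_source @ P                            # f_{beta,Earth} = sum_alpha f_{alpha,S} P_ab
print(np.round(f_earth, 3))                       # close to (0.30, 0.36, 0.34) for these inputs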
The values in Table I depend on the choice of the unknown neutrino mass ordering, which is labeled as normal, if $\nu_1$ is the lightest, or inverted, if $\nu_3$ is the lightest. In the main text, our present-day results and projections are derived assuming the distributions of values of the mixing parameters under the normal mass ordering; see Section III for details. Normal ordering has until recently been favored over inverted ordering at a significance of $3\sigma$, but this preference has weakened in light of the most recent data. Results of our analyses assuming an inverted ordering are very similar; we show them in Appendix A. The "solar" mixing parameters $\theta_{12}$ and $\Delta m^2_{21}$ are measured in solar neutrino experiments and the reactor experiment KamLAND. The angle $\theta_{13}$ is precisely measured in reactor experiments, e.g., Daya Bay. The "atmospheric" parameters $\theta_{23}$ and $\Delta m^2_{32}$ are measured in atmospheric and long-baseline accelerator experiments. The phase $\delta_{CP}$ is measured in long-baseline experiments. Presently, the only significant correlation among the mixing angles and $\delta_{CP}$ is between $\theta_{23}$ and $\delta_{CP}$ (see Fig. A1), which we take into account below in our sampling of values of the mixing parameters. Using the best-fit values of the mixing parameters, Fig. 2 shows the expected flavor composition at Earth for the benchmark scenarios introduced above. Later, in Section V, we show that the present-day experimental uncertainties in the mixing parameters and in the measurement of the flavor ratios prevent us from distinguishing between these benchmark scenarios, but that future improvements will allow us to do so by 2040. D. Flavor composition measurements in neutrino telescopes The flavor composition of a sample of events detected by a neutrino telescope is inferred from correlations in their energies, directions, and morphologies. The morphology of an event, i.e., the spatial and temporal distribution of the collected light associated with it, correlates particularly strongly with the flavor of the neutrino that triggered it. In ice-based and water-based neutrino telescopes, the morphologies detected so far are showers (mainly from $\nu_e$ and $\nu_\tau$), tracks (mainly from $\nu_\mu$), and double bangs (from $\nu_\tau$). Showers, also known as cascades, are generated by the charged-current (CC) deep-inelastic neutrino-nucleon scattering of a $\nu_e$ or a $\nu_\tau$ in the ice or water. The scattering produces a particle shower in which charged particles emit Cherenkov radiation that is detected by photomultipliers embedded in the detector volume. Neutral-current (NC) interactions of all flavors also yield showers, though their contribution to the event rate is subdominant to that of CC interactions because the NC cross section is smaller and because, at a fixed shower energy, higher-energy neutrinos are required to make a NC shower than a CC shower. Tracks are generated by the CC deep-inelastic scattering of a $\nu_\mu$. This creates an energetic final-state muon that can travel several kilometers, leaving a visible track of Cherenkov light in its wake. In addition, the momentum transferred to the nucleon produces a shower centered on the interaction vertex. Tracks can also be produced by $\nu_\tau$ CC interactions where the tau promptly decays into a muon, which happens approximately 18% of the time, and where the showers generated by the production and decay of the tau cannot be separated. Double bangs, or double cascades, are uniquely made in the CC interaction of $\nu_\tau$. The neutrino-nucleon scattering triggers a first shower and produces a final-state tau that, if energetic enough, decays far from the first shower to trigger a second, identifiable shower.
The first double-bang events were only recently observed at IceCube. There are other identifiable, but as yet undetected, morphologies associated with $\nu_\tau$ CC interactions; e.g., when the $\nu_\tau$ interacts outside the detector but the decay of the tau is visible. Identifying flavor on an event-by-event basis is effectively unfeasible. Showers generated by the CC interaction of $\nu_e$ and $\nu_\tau$ of the same energy look nearly identical, which leads to a degeneracy in measuring their flavor ratios, and so do the showers generated by the NC interaction of all flavors of neutrinos of the same energy. Tracks may be made by final-state muons from $\nu_\mu$ CC interactions or by the decay into muons of final-state taus from $\nu_\tau$ CC interactions. To address this limitation, future neutrino telescopes may be able to use timing information to distinguish $\nu_e$-induced electromagnetic showers from hadronic showers, originating mainly from $\nu_\tau$'s, by using the difference in their late-time Cherenkov "echoes" from low-energy muons and neutrons. This will require using photomultipliers with a low level of "delayed pulses" that could mimic muon and neutron echoes. We do not include echoes in our analyses. Thus, the flavor composition is reconstructed collectively for a sample of detected events, using statistical methods. All flavor measurements use starting events, where the neutrino interacts within the detector volume and all three morphologies are distinguishable. References reported IceCube measurements of the flavor composition based exclusively on starting events using 3, 5, and 7.5 years of data, respectively. These analyses are statistically limited because of the low event rate of ∼8 neutrinos per km$^3$ per year above 60 TeV, including the background of atmospheric neutrinos. Flavor measurements are improved by complementing them with through-going tracks, which occur when $\nu_\mu$'s interact outside the instrumented volume, producing muons that cross part of the detector. Because through-going tracks are more numerous, when combined with starting events they appreciably tighten the flavor measurements. Reference reported the only IceCube measurement of the flavor composition of this type to date, based on 4 years of starting events and 2 years of through-going tracks. Detailed analyses of the flavor-composition sensitivity that use through-going tracks require knowing the detector effective areas for these events, which, however, are not available outside the IceCube Collaboration. Therefore, we base our analyses instead on the estimated projected IceCube (and IceCube-Gen2) sensitivities to flavor composition from Ref., using combined starting events and through-going tracks. Figures 2 and 5 show the present-day estimated IceCube sensitivity to flavor ratios, based on a combination of 8 years of starting events and through-going tracks. The size of the sensitivity contour is representative of the present-day sensitivity of IceCube; it has been manually centered on the most likely best-fit composition assuming neutrino production in the full pion decay scenario. TABLE I. Current best-fit values of the mixing parameters and their $1\sigma$ uncertainties, taken from the global fit to oscillation data NuFit 5.0, assuming normal or inverted neutrino mass ordering. We include only the parameters that affect the average flavor-transition probabilities of high-energy astrophysical neutrinos: the mixing angles $\theta_{12}$, $\theta_{23}$, $\theta_{13}$, and the phase $\delta_{CP}$.
Under decay, the flavor composition at Earth is determined by the flavor content of the surviving mass eigenstates, i.e., $f_{\alpha,\oplus} = \sum_i f_{i,\oplus}\, |U_{\alpha i}|^2$, where $f_{i,\oplus}$ is the fraction of surviving $\nu_i$ in the flux that reaches Earth, and depends on the neutrino lifetimes, energies, and traveled distances. By comparing the flavor composition at Earth under decay to the flavor composition measured in neutrino telescopes, we constrain the lifetime of the decaying neutrinos. As illustration, we explore the case of invisible neutrino decay, in which the two heaviest mass eigenstates decay to species that are undetectable in neutrino telescopes, e.g., into a sterile neutrino or into a low-energy active neutrino. For example, if the mass ordering is normal and neutrinos have a Dirac mass, $\nu_2$ and $\nu_3$ could decay into a right-handed $\nu_1$ and a new scalar; if it is inverted, $\nu_1$ and $\nu_2$ could decay into a right-handed $\nu_3$ and a new scalar. In our discussion, we focus only on decay in the normal ordering and we take the lightest neutrino, $\nu_1$, to be stable. Figure 5 shows the flavor content $|U_{\alpha i}|^2$ of the mass eigenstates. In the extreme case of complete decay, all unstable neutrinos have decayed upon reaching Earth, and the flavor composition of the flux is determined by the flavor content of $\nu_1$, i.e., $f_{\alpha,\oplus} = |U_{\alpha 1}|^2$. If a fraction of the unstable neutrinos survive, the flavor composition is a combination of their flavor contents, as above. In order to estimate bounds on the neutrino lifetime, we turn to a concrete model, in which we assume that $\nu_2$ and $\nu_3$ have the same lifetime-to-mass ratio $\tau/m$ and only $\nu_1$ is stable. We calculate the diffuse flux of high-energy neutrinos produced by a nondescript population of extragalactic sources, including the effect of neutrino decay during propagation, following Ref.. We adopt the formalism of invisible decay from Refs.. We assume that each neutrino source produces neutrinos with the same power-law energy spectrum $\propto E^{-\gamma}$, where the value of the spectral index $\gamma$ is common to neutrinos and anti-neutrinos of all flavors. We assume $\gamma = 2.5$, corresponding to the neutrino flux adopted to produce the IceCube projections of the sensitivity to flavor composition that we use. For the number density of the neutrino sources at redshift $z$, we use the generic parametrization from Ref., where different values of $n$ describe different candidate source populations and $z_c$ is a critical redshift above which their evolution is flat. We take $n = 1.5$ and $z_c = 1.5$, which roughly corresponds to the expected distribution of active galactic nuclei sources. The diffuse flux of $\nu_\alpha$ with energy $E$ detected at Earth is the sum of the contributions from all sources, integrated over redshift, where $H(z) = H_0 \sqrt{\Omega_\Lambda + \Omega_m (1+z)^3}$ is the Hubble parameter, $H_0 = 67.4$ km s$^{-1}$ Mpc$^{-1}$ is the Hubble constant, $\Omega_m = 0.315$ is the energy density of matter, and $\Omega_\Lambda = 1 - \Omega_m$ is the energy density of vacuum. We integrate over the neutrino sources up to $z_{\rm max} = 4$, beyond which we expect a negligible contribution to the neutrino flux. The flavor-transition probability in the presence of invisible decay is $P_{\alpha\beta} = \sum_i |U_{\alpha i}|^2\, |U_{\beta i}|^2\, Z_i$, where $Z_i$ is the redshift-dependent decay suppression factor introduced in Ref.. Since the neutrino mass $m_i$ and lifetime $\tau_i$ appear together in this factor, we perform our analysis in terms of the ratio $m_i/\tau_i$. Because $\nu_1$ is stable, $Z_1 = 1$, while for $\nu_2$ and $\nu_3$ the suppression factor takes the form given in Ref., with coefficients $a \approx 1.67$, $b = 1 - a$, and $c \approx 1.43$ for our choice of values of the cosmological parameters. Under decay, the flavor composition changes with neutrino energy (see, e.g., Refs.); a schematic sketch of the flavor mapping under decay is given below.
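The following sketch illustrates the flavor mapping under invisible decay, treating the per-eigenstate survival factors $Z_i$ as inputs, since the redshift- and energy-dependent expression adopted in the text is not reproduced here. It assumes the `pmns` helper from the earlier sketch is available; the function name and the example numbers are illustrative only.

import numpy as np

def decayed_flavor_ratios(f_source, U, Z):
    """Flavor ratios at Earth under invisible decay, with per-eigenstate survival factors
    Z_i (Z_i = 1: no decay; Z_i = 0: complete decay). The redshift- and energy-dependent
    Z_i of the text's decay formalism is not reproduced here; Z is treated as an input."""
    W = np.abs(U) ** 2             # W[alpha, i] = |U_{alpha i}|^2
    f_mass = (f_source @ W) * Z    # surviving nu_i fractions (unnormalized)
    f_earth = W @ f_mass           # project mass-eigenstate content back onto flavors
    return f_earth / f_earth.sum() # renormalize to the flux that is actually detectable

# Complete decay of nu_2 and nu_3: the composition collapses to the flavor content of nu_1.
U = pmns(0.304, 0.573, 0.022, np.radians(197.0))   # helper from the earlier sketch
f_pion = np.array([1.0 / 3.0, 2.0 / 3.0, 0.0])
print(decayed_flavor_ratios(f_pion, U, Z=np.array([1.0, 0.0, 0.0])))   # ~ |U_{alpha 1}|^2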
In our analysis, we compute the average flavor composition at Earth over the energy interval from $E_{\rm min} = 60$ TeV to $E_{\rm max} = 10$ PeV, i.e., we average the energy-dependent flavor ratios, weighted by the detected neutrino spectrum, over this interval. So far, we have assumed that the $3\times 3$ mixing matrix $U$ is unitary, i.e., that the flavor states $\nu_\alpha$, and thus also the mass eigenstates $\nu_i$, form a complete basis. The assumption of unitarity imposes constraints on the elements of $U$. However, the "true" mixing matrix could be larger than $3\times 3$, as a result of the three active neutrinos mixing with additional states, such as a fourth, "sterile" neutrino. In this case, $U$ is a $3\times 3$ submatrix of the larger, true mixing matrix. Relaxing the assumption of the unitarity of $U$ leads to a broader range of allowed flavor composition at Earth due to the active neutrinos mixing with the new states [49, 63]. This is true even if the new states are too massive to be kinematically accessible. We will examine how much the prediction of the allowed flavor composition at Earth relies on the assumption of the unitarity of neutrino mixing, and how much it affects the ability of future neutrino telescopes to infer the flavor composition at the source. In the case of non-unitary mixing, a flavor state can be written as $|\nu_\alpha\rangle = N_\alpha^{-1/2} \sum_{i=1}^{3} U_{\alpha i}^{*} |\nu_i\rangle$, where the normalization $N_\alpha \equiv \sum_{i=1}^{3} |U_{\alpha i}|^2$ ensures that $|\nu_\alpha\rangle$ is a properly normalized state, i.e., that $\langle\nu_\alpha|\nu_\alpha\rangle = 1$. The non-unitary (NU) average flavor-transition probability is then $\bar P^{\rm NU}_{\alpha\beta} = \sum_i |U_{\alpha i}|^2 |U_{\beta i}|^2 / (N_\alpha N_\beta)$. The flavor ratios at Earth are computed in analogy to the standard case, i.e., for a given flavor composition at the source, they follow from applying $\bar P^{\rm NU}_{\alpha\beta}$ to the source ratios. However, because some active neutrinos oscillate away into sterile states, the sum over active flavors at Earth is no longer unity, i.e., $f_{e,\oplus} + f_{\mu,\oplus} + f_{\tau,\oplus} < 1$. Since neutrino telescopes can only measure the flavor composition of the flux of active neutrinos, we renormalize the flavor ratios as $\tilde f_{\alpha,\oplus} = f_{\alpha,\oplus} / \sum_{\beta = e,\mu,\tau} f_{\beta,\oplus}$. Below, we show our results in the case of non-unitarity exclusively in terms of these renormalized flavor ratios. To lighten the notation, below we refer to them simply as $f_{\alpha,\oplus}$. A schematic sketch of this renormalization is given below.
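A minimal sketch of the renormalization just described, for an assumed (illustrative) non-unitary $3\times 3$ submatrix; the function name and the perturbed matrix element are placeholders, not fits to any experimental constraint, and the `pmns` helper from the earlier sketch is assumed to be available.

import numpy as np

def nonunitary_flavor_ratios(f_source, U_sub):
    """Renormalized flavor ratios at Earth for a non-unitary 3x3 submatrix U_sub of a larger
    mixing matrix (extra states assumed kinematically closed), following the expressions above."""
    W = np.abs(U_sub) ** 2
    N = W.sum(axis=1)                             # row normalizations N_alpha
    P_nu = (W / N[:, None]) @ (W / N[:, None]).T  # averaged non-unitary transition probability
    f_earth = f_source @ P_nu                     # active-flavor ratios, not summing to 1
    return f_earth / f_earth.sum()                # renormalize over active flavors only

# Toy example: perturb one element of an otherwise unitary matrix to mimic a submatrix of a
# larger mixing matrix; the 0.9 factor is purely illustrative.
U3 = pmns(0.304, 0.573, 0.022, np.radians(197.0))  # helper from the earlier sketch
U_sub = U3.copy()
U_sub[2, 2] *= 0.9
print(nonunitary_flavor_ratios(np.array([1 / 3, 2 / 3, 0.0]), U_sub))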
III. NEXT-GENERATION EXPERIMENTS In the next two decades, oscillation experiments that use terrestrial neutrinos will significantly improve the precision of the mixing parameters. In parallel, future neutrino telescopes will precisely measure the flavor composition of astrophysical neutrinos. Combined, they will provide the opportunity to pinpoint the flavor composition at the sources and thus help identify the origin of the high-energy astrophysical neutrinos. In this section, we describe how we model these future experiments. A. Future oscillation experiments Figure 1 summarizes our projected evolution of the measurement precision of the mixing parameters, using a combination of next-generation terrestrial neutrino experiments. Presently, as seen in Table I, $\sin^2\theta_{12}$ and $\sin^2\theta_{23}$ are known to within 4%, from the NuFit 5.0 global fit. We consider the future measurement of $\sin^2\theta_{12}$ by JUNO, and of $\sin^2\theta_{23}$ and $\delta_{CP}$ by HK and DUNE. We assume there will be no improvement on $\sin^2\theta_{13}$. Presently, $\sin^2\theta_{13}$ is measured to ∼3% by Daya Bay. This is because, while JUNO, HK, and DUNE are sensitive to $\sin^2\theta_{13}$, no single one of them is expected to achieve better precision than Daya Bay, assuming their nominal exposures. For example, DUNE will only reach 7% resolution with its nominal exposure. Below we describe the oscillation experiments that we use in our predictions. JUNO, the Jiangmen Underground Neutrino Observatory, will be a 20-kt liquid scintillator detector located in Guangdong, China. It will measure the oscillation probability $P(\bar\nu_e \to \bar\nu_e)$ of 2-8-MeV reactor neutrinos at a baseline of 53 km. JUNO seeks to determine the neutrino mass ordering and precisely measure $\sin^2\theta_{12}$ and $\Delta m^2_{21}$. Its nominal sensitivity to $\sin^2\theta_{12}$ is 0.54% after 6 years of data-taking, which is the value we adopt in this work. JUNO is under construction and will start taking data in 2022. To simulate the time evolution of the sensitivity to $\sin^2\theta_{12}$, we simulate JUNO following Ref.. We take the reactor neutrino flux from Refs. and the inverse-beta-decay cross sections from Ref.. We include a correlated flux uncertainty of 2%, an uncorrelated flux uncertainty of 0.8%, a spectrum shape uncertainty of 1%, and an energy scale uncertainty of 1%. We do not include matter effects in the computation of the oscillation probability, because they only shift the central value of $\sin^2\theta_{12}$ and not its sensitivity, and we do not consider backgrounds. With 6 years of collected data, our simulated sensitivity to $\sin^2\theta_{12}$ is 0.46%. We then take the time evolution of our simulated sensitivity and scale it by a factor of 1.17, so that our 6-year sensitivity matches that of Ref.. DUNE, the Deep Underground Neutrino Experiment, is a long-baseline neutrino oscillation experiment made of large liquid argon time projection chambers. It will measure the appearance and disappearance probabilities, $P(\nu_\mu \to \nu_e)$ and $P(\nu_\mu \to \nu_\mu)$, in the 0.5-5-GeV range using accelerator neutrinos, in neutrino and antineutrino modes. DUNE seeks to determine the mass ordering and measure $\delta_{CP}$ and $\sin^2\theta_{23}$ precisely. We use the official analysis framework released with the DUNE Conceptual Design Report, which is comparable to the DUNE Technical Design Report. DUNE will start taking data in 2026 using a staged approach. At the start of the beam run, its far detector will have two modules with a total fiducial volume of 20 kton, with a 1.2 MW beam. After one year, an additional detector will be deployed, and then after 3 years of running, the last detector will be installed, totaling 40 kton of liquid argon. Then, the beam will be upgraded to 2.4 MW. We follow this timeline and assume an equal running time for neutrino and antineutrino modes to simulate the time evolution of the mixing-parameter measurements. At completion, the nominal projected sensitivity of DUNE envisions a 300 kt·MW·year exposure, which corresponds to 7 years of collected data. HK is the multipurpose water Cherenkov successor to Super-K, with a fiducial mass of 187 kt, under construction in Kamioka, Japan. This long-baseline experiment will measure the appearance and disappearance probabilities of accelerator neutrinos. It operates at slightly lower energies (∼0.6 GeV) and with a shorter baseline (295 km) than DUNE. Like DUNE, HK will also measure $\delta_{CP}$ and $\sin^2\theta_{23}$ precisely. It will start operation in 2027 with a projected nominal exposure of 10 years using one Cherenkov tank as far detector. Our simulations of HK are modified from that of Ref., which follows Refs.. We adjusted the systematic errors on signal and background normalizations to match the official expected sensitivities to $\sin^2\theta_{23}$, $\Delta m^2_{32}$, and $\delta_{CP}$. Figure A1 shows our projected DUNE and HK sensitivities to $\sin^2\theta_{23}$ and $\delta_{CP}$, using their nominal exposures. Beyond the mixing parameters, we will examine how robust our results are to oscillations being non-unitary. The lower panel of Fig. 1 shows the global limits on non-unitarity from current and future experiments, quantified as the deviation from $N_\alpha = 1$ for the $\alpha = e, \mu, \tau$ rows.
The 2015 values are taken from Ref., and the 2020 and 2038 values are from Ref.. While future experiments may limit the non-unitarity in the $e$ and $\mu$ rows to the $\mathcal{O}(1\%)$ level, the non-unitarity in the $\tau$ row will remain relatively unchanged from its present value of 17%. The IceCube-Upgrade will extend the current IceCube detector by 2025, with the addition of seven new closely-packed strings, including a number of calibration devices and sensors designed to help improve ice modeling. The sensitivity of the IceCube-Upgrade to $\nu_\tau$ appearance will play a major role in constraining $|U_{\tau i}|^2$. The ORCA subdetector of KM3NeT, to be deployed in the Mediterranean Sea, is expected to perform similar measurements. B. Neutrino telescopes IceCube is an in-ice Cherenkov neutrino observatory that has been in operation for nearly a decade. The experiment comprises a cubic kilometer of clear Antarctic ice, instrumented with 86 vertical strings, each of which is equipped with 60 digital optical modules (DOMs) to detect Cherenkov light from neutrino-nucleon interactions. After 7.5 years of data-taking, IceCube has seen 103 High-Energy Starting Events (HESEs), of which 48.4 are above 60 TeV and expected to be of astrophysical origin. In 10 years, IceCube has seen 100-150 through-going tracks per year of astrophysical origin above 1 TeV. As mentioned in Section II D, we base our analysis on projections of the measurement of the flavor composition in IceCube (and IceCube-Gen2), shown originally in Ref., that estimate the sensitivity obtained by combining starting events and through-going tracks collected over 8 or 15 years (in the latter case, combined with 10 years of IceCube-Gen2), as such an analysis has not been performed on real data yet. Figures 2 and 5 show the 99.7% credible-region (C.R.) contour for 8 years of IceCube from Ref.. IceCube-Gen2 is the planned extension of IceCube. It will add 120 new strings to the existing experiment, leading to an instrumented volume of 7.9 km$^3$ and an effective area that varies from 7 to 8.5 times that of IceCube between 100 TeV and 1 PeV. Here, we assume a full-array effective start date of 2030. Later we detail how we use the IceCube and IceCube-Gen2 projections to estimate projections also for the other neutrino telescopes. KM3NeT is the successor to ANTARES, located in the Mediterranean Sea. The high-energy component, called KM3NeT/ARCA, will be deployed as two 115-string arrays with 18 DOMs each, 100 km off the coast of Sicily, and should be complete by 2024. Based on a projected event rate of 15.6 cosmic-neutrino-induced cascades per year, we estimate the exposure of KM3NeT to be 2.4 times that of IceCube. Baikal-GVD is a gigaton-volume detector that expands on the existing NT-200 detector in Lake Baikal, Siberia. The first modules are already installed, and the detector has been operating since 2018 with an effective volume of 0.35 km$^3$. This will rise to 1.5 km$^3$ in 2025, when the detector is complete, consisting of 90 strings with 12 DOMs each. Baikal-GVD has already seen at least one candidate neutrino cascade event with a reconstructed energy of 91 TeV. P-ONE, the Pacific Ocean Neutrino Experiment, is a planned water Cherenkov experiment, to be deployed in the Cascadia Basin off Vancouver Island, using Ocean Networks Canada infrastructure that is already in place. P-ONE is expected to be complete in 2030 and will include 70 strings, with 20 DOMs each, deployed in a modular array covering a cylindrical volume with a 1 km height and 1 km radius.
TAMBO, the Tau Air-Shower Mountain-Based Observatory, is a proposed array of water-Cherenkov tanks to be located in a deep canyon in Peru. TAMBO will search for Earth-skimming $\nu_\tau$ in the 1-100 PeV range. It is expected to detect approximately 7 $\nu_\tau$ per year in the energy range considered here. Because it is sensitive to a single flavor, TAMBO will be particularly helpful in breaking the $\nu_e$-$\nu_\tau$ degeneracy in measuring the flavor composition. Unlike the other future neutrino telescopes, whose projected sensitivity we obtain by scaling the IceCube sensitivity (see below), we model the contribution of TAMBO to the projected flavor likelihood in 2040 as a counting likelihood in the number of detected $\nu_\tau$, where $\bar N \approx 70$ is the expected number of $\nu_\tau$ detected between 2030 and 2040 and $N$ is the number of events if $f_{\tau,\oplus}$ deviates from the assumed true value of 0.34. Table II shows which neutrino telescopes are expected to contribute to flavor measurements in 2020, 2030, and 2040, and their combined exposures. Reference presented the projected sensitivity of 8 and 15 years of IceCube, and of 15 years of IceCube plus 10 years of IceCube-Gen2, in the form of iso-contours of posterior density in the plane of flavor compositions at Earth. We use these as likelihood functions $\mathcal{L}(f_{\alpha,\oplus})$ that represent the sensitivity of the flavor measurements, i.e., $\mathcal{L}_{\rm IC8}$ and $\mathcal{L}_{\rm IC15}$ for 8 and 15 years of IceCube, which we use for our 2020 and 2030 projections, and $\mathcal{L}_{\rm IC+Gen2}$ for 15 years of IceCube plus 10 years of IceCube-Gen2, which we use for our 2040 projections. We are interested in assessing the flavor sensitivity achieved by combining all of the available neutrino telescopes in 2040. However, with the exception of IceCube and IceCube-Gen2, detailed projections for the sensitivity to flavor composition in upcoming neutrino telescopes are unavailable. Therefore, we estimate 2040 projections for the other neutrino telescopes ourselves, based on the projections for IceCube-Gen2. First, we single out the contribution of 10 years of IceCube-Gen2 via $\ln\mathcal{L}_{\rm Gen2} \equiv \ln\mathcal{L}_{\rm IC+Gen2} - \ln\mathcal{L}_{\rm IC15}$. Second, we estimate the combined sensitivity of Baikal-GVD, KM3NeT, and P-ONE by rescaling the IceCube-Gen2 contribution by the exposures of these telescopes in 2040 (see Table II). Third, we add to that the contribution of 15 years of IceCube and of TAMBO. Thus, in 2040, we calculate the flavor sensitivity as $\ln\mathcal{L}_{\rm comb} = S \ln\mathcal{L}_{\rm Gen2} + \ln\mathcal{L}_{\rm IC15} + \ln\mathcal{L}_{\rm TAMBO}$, where $S$ is the effective IceCube-Gen2-equivalent exposure, i.e., the summed 2040 exposures of IceCube-Gen2, KM3NeT, Baikal-GVD, and P-ONE in units of the IceCube-Gen2 exposure; a sketch of this combination is given below. Based on the projections presented above, the estimated exposures by 2040 for the individual experiments are $\varepsilon_{\rm Gen2} = 81.6$ km$^3$ yr for IceCube-Gen2, $\varepsilon_{\rm KM3NeT} = 42.1$ km$^3$ yr for KM3NeT, $\varepsilon_{\rm GVD} = 24.3$ km$^3$ yr for Baikal-GVD, and $\varepsilon_{\rm P-ONE} = 31.6$ km$^3$ yr for P-ONE. Figures 2 and 5 show our 99.7% C.R. contour for all neutrino telescopes combined in 2040. All of the projected contours of flavor-composition sensitivity in our analysis are centered on the flavor composition at Earth corresponding to the full pion decay chain, computed using the best-fit values of the mixing parameters from NuFit 5.0, i.e., $(0.30, 0.36, 0.34)_\oplus$. While the position on which the contours are centered may be different in reality, their size is representative of the sensitivity of IceCube in 2020, of the combination of IceCube and IceCube-Gen2 in 2030 and 2040, and of the combination of all available neutrino telescopes in 2040. Later, in Section V A, we use these likelihoods to infer the sensitivity to flavor composition at the sources based on the flavor composition measured at Earth.
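A schematic version of this combination is sketched below. The exposures are the 2040 values quoted above; the TAMBO term is written as a simple Poisson counting likelihood, which is an assumption standing in for the exact form used in the analysis, and `lnL_gen2` and `lnL_ic15` are placeholders for interpolations of the published flavor-sensitivity contours.

from math import lgamma, log

# 2040 exposures quoted in the text, in km^3 yr.
EXPOSURES = {"IceCube-Gen2": 81.6, "KM3NeT": 42.1, "Baikal-GVD": 24.3, "P-ONE": 31.6}

NBAR_TAMBO = 70.0   # expected nu_tau events in TAMBO between 2030 and 2040 (from the text)
F_TAU_TRUE = 0.34   # assumed true nu_tau fraction at Earth

def lnL_tambo(f_earth):
    """Stand-in counting likelihood for TAMBO (the text's exact form is not reproduced):
    a Poisson comparison of the predicted nu_tau count against the nominal expectation."""
    n_pred = NBAR_TAMBO * f_earth[2] / F_TAU_TRUE
    return NBAR_TAMBO * log(n_pred) - n_pred - lgamma(NBAR_TAMBO + 1.0)

def lnL_combined(f_earth, lnL_gen2, lnL_ic15):
    """ln L_comb = S ln L_Gen2 + ln L_IC15 + ln L_TAMBO, with S the Gen2-equivalent exposure.
    lnL_gen2 and lnL_ic15 are callables, e.g. interpolations of published flavor contours."""
    S = sum(EXPOSURES.values()) / EXPOSURES["IceCube-Gen2"]   # ~2.2 for the exposures above
    return S * lnL_gen2(f_earth) + lnL_ic15(f_earth) + lnL_tambo(f_earth)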
IV. STATISTICAL METHODS We present results in a Bayesian framework, as 68% or 99.7% credible regions (C.R.) or intervals. These represent the iso-posterior contours within which 68% or 99.7% of the marginalized (integrated over nuisance parameters) posterior mass is located. In this section, we describe in detail how we obtain the regions of the neutrino flavor composition at Earth, $\boldsymbol{f}_\oplus \equiv (f_{e,\oplus}, f_{\mu,\oplus}, f_{\tau,\oplus})$, given different assumptions about the flavor composition at the sources, $\boldsymbol{f}_S \equiv (f_{e,S}, f_{\mu,S}, f_{\tau,S})$, and under the three flavor-transition scenarios introduced in Section II: standard oscillations, non-unitary mixing, and neutrino decay. We assess the compatibility of a given flavor composition $\boldsymbol{f}_\oplus$ with the probability distribution of the mixing parameters, either today or in the future, and with our prior belief about what the flavor composition at the source is. To do this, we adopt the Bayesian approach first introduced in Ref.. The posterior probability of $\boldsymbol{f}_\oplus$ is $\mathcal{P}(\boldsymbol{f}_\oplus) = \int {\rm d}\boldsymbol{\theta} \int {\rm d}\boldsymbol{f}_S\, \mathcal{L}(\boldsymbol{E}|\boldsymbol{\theta})\, \pi(\boldsymbol{\theta})\, \pi(\boldsymbol{f}_S)\, \delta\!\left[\boldsymbol{f}_\oplus - \tilde{\boldsymbol{f}}_\oplus(\boldsymbol{f}_S, \boldsymbol{\theta})\right]$. Here, $\mathcal{L}$ is the likelihood function, defined as the probability of obtaining a particular set of measurements $\boldsymbol{E}$ in oscillation experiments conditional on the mixing parameters being $\boldsymbol{\theta} \equiv (\sin^2\theta_{12}, \sin^2\theta_{23}, \sin^2\theta_{13}, \delta_{CP})$. The prior on the mixing parameters is $\pi(\boldsymbol{\theta})$, and the prior on the flavor ratios at the source is $\pi(\boldsymbol{f}_S)$. Below, we describe how to compute these functions. The Dirac delta ensures that we account for all combinations of $\boldsymbol{f}_S$ and $\boldsymbol{\theta}$ that produce the specific flavor ratios $\boldsymbol{f}_\oplus$ at Earth. Inside the Dirac delta, the flavor ratios at Earth, $\tilde{\boldsymbol{f}}_\oplus(\boldsymbol{f}_S, \boldsymbol{\theta})$, are computed using the average flavor-transition probability for standard oscillations, the decay expression of Section II E for neutrino decay, and the renormalized expression of Section II F for non-unitary mixing. In the case of neutrino decay, $\boldsymbol{f}_\oplus$ also depends on the $\nu_i$ fractions, $f_{i,\oplus}$ (see Section II E), while in the case of non-unitary mixing, $\boldsymbol{\theta}$ represents the elements of the non-unitary mixing matrix instead of the standard mixing parameters (see Section II F). To compute the likelihood $\mathcal{L}$, we construct a $\chi^2$ test statistic that incorporates the combined information from future oscillation experiments (JUNO, DUNE, HK) on the mixing parameters. We fix the best-fit values of the mixing parameters to the current NuFit 5.0 best fit (see Table I); for these, we assume normal mass ordering in the main text and inverted mass ordering in our appendices. In our projections, we assume that the measurement of each mixing parameter $\theta_i$ will have a normal distribution, and that the measurements of different mixing parameters will be uncorrelated, except for $\delta_{CP}$ and $\sin^2\theta_{23}$. With this, the sensitivity associated with each experiment $E$ is $\chi^2_E(\boldsymbol{\theta}) = \sum_{ij} (\theta_i - \theta_{i,{\rm bf}})\, (\sigma^2)^{-1}_{ij}\, (\theta_j - \theta_{j,{\rm bf}})$, where $\sigma^2_{ij}$ is the covariance matrix for the parameters $\theta_i$, $\theta_j$. The likelihood of the combined set of future experiments is $\mathcal{L}(\boldsymbol{\theta}) \propto \exp\left(-\tfrac{1}{2}\sum_E \chi^2_E\right)$, where the sum runs over NuFit 5.0 and each of the relevant experiments described above. TABLE II. Projected 68% C.R. uncertainties on the allowed flavor ratios at Earth, $f_{\alpha,\oplus}$ ($\alpha = e, \mu, \tau$), and on the inferred flavor ratios at the astrophysical sources, $f_{\alpha,S}$. For this table, we assume standard oscillations and set the true value of the flavor ratios at the sources to $\left(\tfrac{1}{3}, \tfrac{2}{3}, 0\right)_S$, coming from the full pion decay chain (see Section II C). For the prior $\pi(\boldsymbol{\theta})$, we sample uniformly from $\sin^2\theta_{12}$, $\sin^2\theta_{13}$, and $\sin^2\theta_{23}$. For the prior on the flavor composition at the source, $\pi(\boldsymbol{f}_S)$, we explore two alternatives separately. In both, we ensure that the prior is normalized by demanding that $\int {\rm d}f_{e,S} \int {\rm d}f_{\mu,S}\, \pi(\boldsymbol{f}_S) = 1$; we only need to integrate over $f_{e,S}$ and $f_{\mu,S}$ because $f_{\tau,S} = 1 - f_{e,S} - f_{\mu,S}$. The two alternatives are:
1. Every flavor composition at the source is equally likely, and we let it vary over all the possibilities. In this case, $\pi(\boldsymbol{f}_S) = 2$, the uniform density on the flavor triangle. 2. The flavor composition is fixed to one of the three benchmark scenarios: full pion decay ($\boldsymbol{f}_S^\pi \equiv (\tfrac{1}{3}, \tfrac{2}{3}, 0)$), muon-damped ($\boldsymbol{f}_S^\mu \equiv (0, 1, 0)$), or neutron decay ($\boldsymbol{f}_S^n \equiv (1, 0, 0)$). In this case, $\pi(\boldsymbol{f}_S) = \delta(\boldsymbol{f}_S - \boldsymbol{f}_S^\pi)$ for pion decay, and similarly for the other benchmarks. In practice, we build the posterior function by randomly sampling values of $\boldsymbol{\theta}$ and $\boldsymbol{f}_S$ from their respective priors, computing the corresponding value of $\tilde{\boldsymbol{f}}_\oplus(\boldsymbol{f}_S, \boldsymbol{\theta})$, and assigning it a weight $\mathcal{L}(\boldsymbol{\theta})$. Using the sampled values of $\tilde{\boldsymbol{f}}_\oplus$, we build a kernel density estimator that is proportional to the posterior distribution; a minimal sketch of this sampling procedure is given below. Figure 2 shows the 99.7% C.R. of the flavor composition at Earth for the years 2020 and 2040, assuming standard oscillations, obtained using the statistical method outlined above. The larger gray regions are sampled from a flat prior in source composition, while each of the colored regions assumes 100% pion decay (red), muon-damped decay (orange), or neutron decay (green). Table II shows the 68% C.R. sensitivity to each of the flavor ratios for the different combinations of neutrino telescopes. These are shown for the year 2020, using the distribution of mixing parameters from NuFit 5.0, and for the years 2030 and 2040, using the projected sensitivity to the mixing parameters of the combined JUNO, DUNE, and HK, with their true values fixed at the best-fit values of NuFit 5.0. Figure 2 shows that, for a given flavor composition at the source, the allowed region of flavor composition at Earth shrinks approximately by a factor of ten between 2020 and 2040. When allowing the flavor composition at the source to vary over all possible combinations instead, the allowed flavor region at Earth shrinks approximately by a factor of 5 between 2020 and 2040. In this case, the improvement is smaller because the prior volume is larger since, in addition to sampling over the mixing parameters, we sample also over all possible values of $\boldsymbol{f}_S$. The reduction in the size of the allowed flavor regions from 2020 to 2040 stems mainly from the improved measurement of $\sin^2\theta_{12}$ by JUNO, which shrinks the regions along the $f_{e,\oplus}$ direction, and of $\sin^2\theta_{23}$ by DUNE and HK, which shrinks the regions along the $f_{\mu,\oplus}$ direction. We keep the uncertainty on $\sin^2\theta_{13}$ fixed at its current value (see Section III). While we account for improvements in the measurement of $\delta_{CP}$ over time, the effect of $\delta_{CP}$ on flavor transitions is weak (see Section II B). In Fig. 2, we also include the estimated 2020 IceCube 8-year flavor sensitivity, the projected 2040 IceCube 15-year + IceCube-Gen2 10-year flavor sensitivity, and an "all-telescope" sensitivity that additionally includes the contributions of Baikal-GVD, KM3NeT, P-ONE, and TAMBO. These contours are produced under the assumption of a "true" flavor ratio at Earth of about $(0.30, 0.36, 0.34)_\oplus$, coming from the full pion decay chain; see Section III B for details. The uncertainty in the flavor measurement shrinks by roughly a factor of 2 between 2020 and 2040. This improvement stems from the larger event sample size and, to a lesser extent, the inclusion of TAMBO, which measures the $\nu_\tau$-only neutrino flux. For the remaining neutrino telescopes, which are sensitive to all the neutrino flavors, these projections use the same morphology confusion matrix as recent IceCube analyses.
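A minimal sketch of this sampling procedure, under the simplifying assumptions stated in the comments; the function names and arguments (`sample_theta`, `sample_fS`, `flavor_at_earth`, `log_like`) are hypothetical stand-ins for the ingredients defined above, not the analysis code itself.

import numpy as np
from scipy.stats import gaussian_kde

def flavor_posterior_kde(sample_theta, sample_fS, flavor_at_earth, log_like, n=100_000, seed=0):
    """Minimal sketch of the sampling described above: draw (theta, f_S) from their priors,
    propagate each draw to the flavor ratios at Earth, weight by L(theta), and return a
    weighted KDE over (f_e, f_mu); f_tau follows from normalization."""
    rng = np.random.default_rng(seed)
    thetas = sample_theta(rng, n)                       # draws from pi(theta)
    f_sources = sample_fS(rng, n)                       # draws from pi(f_S)
    f_earth = np.array([flavor_at_earth(fs, th) for fs, th in zip(f_sources, thetas)])
    log_w = np.array([log_like(th) for th in thetas])
    weights = np.exp(log_w - log_w.max())               # stabilize before exponentiating
    return gaussian_kde(f_earth[:, :2].T, weights=weights)

def sample_fS_flat(rng, n):
    """Uniform prior on the source flavor triangle (f_e + f_mu + f_tau = 1)."""
    a = rng.random((n, 2))
    a.sort(axis=1)
    return np.column_stack([a[:, 0], a[:, 1] - a[:, 0], 1.0 - a[:, 1]])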
Using the same confusion matrix is a conservative assumption, as these rates are expected to improve for IceCube thanks to the calibration devices of the IceCube-Upgrade, and are expected to be better in Baikal-GVD, KM3NeT, and P-ONE due to the reduced scattering of Cherenkov photons in water compared to ice. Additional improvement may come from combining all of the available neutrino telescopes in a global observatory. The change from 2020 to 2040 is most striking when we focus on the two most likely neutrino production scenarios: full pion decay and muon damping. In 2020, their 99.7% C.R. overlap, which makes it challenging to distinguish between them, especially because of the large uncertainty with which IceCube currently measures the flavor composition. In contrast, by 2040, their flavor regions will be well separated, at the level of many standard deviations. This, combined with the roughly factor-of-two reduction in the uncertainty of flavor measurements, will allow IceCube-Gen2 to unequivocally distinguish between the full-pion-decay and muon-damped scenarios, and realistically help identify the population of sources at the origin of the high-energy astrophysical neutrinos, as we will discuss in Sec. V A. Robustness against non-unitary mixing.-The right panel of Fig. 2 shows that our conclusions hold even if neutrino mixing is non-unitary. The allowed regions with and without unitarity in the mixing, in the left vs. right panels of Fig. 2, have approximately the same size. This means that our ability to pinpoint the dominant mechanism of neutrino production is not affected by the existence of additional neutrino mass states. Our analysis of non-unitarity assumes that the new mass eigenstates are too heavy to be produced in weak interactions. This is not the case for additional neutrinos motivated by the short-baseline oscillation anomalies, in which case large deviations from the allowed standard-oscillation flavor regions are possible, because light ($< 100$ MeV) sterile neutrinos could be produced at the sources, and the sum over mass eigenstates would then run over all mass states, active and sterile, with masses smaller than the mass of the parent pion. Inferring the flavor composition at the sources.-Ultimately, we are interested in learning about the identity of the sources of high-energy neutrinos and the physical conditions that govern them. To illustrate the improvement over time in the reconstruction of the flavor composition at the source, we compute the posterior probability of $\boldsymbol{f}_S$ as $\mathcal{P}(\boldsymbol{f}_S) = \int {\rm d}\boldsymbol{\theta}\, \mathcal{L}(\boldsymbol{\theta})\, \mathcal{L}\!\left(\boldsymbol{f}_\oplus(\boldsymbol{f}_S, \boldsymbol{\theta})\right) \pi(\boldsymbol{\theta})\, \pi(\boldsymbol{f}_S)$, where $\mathcal{L}(\boldsymbol{f}_\oplus(\boldsymbol{f}_S, \boldsymbol{\theta}))$ is the (projected) constraint on the flavor composition at Earth from neutrino-telescope observations and $\pi(\boldsymbol{f}_S)$ is the prior on the flavor composition at the source. We assume $f_{\tau,S} = 0$ and put a uniform prior on $f_{e,S}$. Our results update those from Ref. by improving on them in four different ways. First, for the 2015 and 2020 results, we use $\mathcal{L}(\boldsymbol{\theta})$ taken directly from the NuFit 5.0 $\chi^2$ profiles, which include two-parameter correlations, compared to Ref., which assumed Gaussian, uncorrelated likelihoods centered around the NuFit 3.2 best-fit values. Second, for the 2020 and 2040 projections, we use more recent and accurate projections of $\mathcal{L}(\boldsymbol{f}_\oplus)$ for IceCube and IceCube-Gen2, from Ref., instead of the early estimate from Ref. used in Ref..
Third, for the 2040 projections, we build detailed projected likelihoods $\mathcal{L}(\boldsymbol{\theta})$ by combining the results of simulating different oscillation experiments (see Section III A), versus Ref., which assumed an estimated reduction in the parameter uncertainties in the near future and perfect knowledge of the parameters in the far future. Finally, we now include in our projection not only IceCube-Gen2, as in Ref., but also the combination of all upcoming TeV-PeV neutrino telescopes. Figure 3 shows our results. FIG. 3. In each case, we show the best-fit value of $f_{e,S}$, its 68% C.R. interval and, in parentheses, its 99.7% C.R. interval. The 2020 (measured) curve is based on the measurement of the flavor composition $\mathcal{L}(\boldsymbol{f}_\oplus)$ reported by IceCube in Ref. (following Ref., we convert the frequentist likelihood reported therein into a probability density) and the mixing-parameter likelihood $\mathcal{L}(\boldsymbol{\theta})$ from NuFit 5.0. The curves for 2020 (projected) and 2040 are based on projections of $\mathcal{L}_{\rm exp}$ from Ref., and $\mathcal{L}(\boldsymbol{\theta})$ built by combining projections of different oscillation experiments, as detailed in Section IV. For the 2020 and 2040 curves, we assume that the real value of $f_{e,S} = 1/3$, coming from the full pion decay chain. We fix $f_{\tau,S} = 0$, i.e., we assume that sources do not produce $\nu_\tau$. We assume that $\nu_\tau$ are not produced in the sources, i.e., that $f_{\tau,S} = 0$, as in the full-pion-decay and muon-damped scenarios, since producing them would require producing rare mesons in the sources, like $D_s^\pm$. Using the 2015 IceCube measurements of the flavor composition, the preferred value is $f_{e,S} \approx 0$, favoring muon-damped production, as was first reported in Ref.. To produce our 2020 and 2040 projections, we assume that the true flavor composition at Earth is that from the full pion decay chain (see Section III B), and attempt to recover it. Figure 3 shows that, by 2040, using the projected sensitivity to the flavor composition of 15 years of IceCube plus 10 years of IceCube-Gen2, and the projected reduction in the uncertainty in the mixing parameters, we should be able to recover the true value of $f_{e,S}$ to within 2% at 68% C.R., or within 21% at 99.7% C.R. By combining all of the available TeV-PeV neutrino telescopes in 2040, $f_{e,S}$ could be measured to within 15% at 99.7% C.R. The improvement in the precision of $f_{e,S}$ is driven by the larger sample size of the future neutrino telescopes, as discussed in Section III B. Revealing multiple production mechanisms.-It is conceivable that the diffuse flux of high-energy astrophysical neutrinos is due to more than one population of sources and that each population generates neutrinos with a different flavor composition. Alternatively, even if there is a single population of neutrino sources, each one could produce neutrinos via multiple mechanisms, each yielding its own flavor composition. Given the expected improvements in the precision of the mixing parameters and flavor measurements, we study whether we can identify subdominant neutrino production mechanisms by measuring the flavor composition; a sketch of the corresponding decomposition is given below. The left panel of Fig. 4 shows the 2040 projected sensitivity to the fractions of the diffuse flux that can be attributed to each of the three benchmark production scenarios: full pion decay ($k_\pi$), muon-damped ($k_\mu$), and neutron decay ($k_n$), where $k_\pi + k_\mu + k_n = 1$. The flavor composition at the source combining all three mechanisms is $\boldsymbol{f}_S = k_\pi \boldsymbol{f}_S^\pi + k_\mu \boldsymbol{f}_S^\mu + k_n \boldsymbol{f}_S^n$.
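A minimal sketch of this decomposition, with hypothetical stand-ins (`lnL_flavor`, `lnL_theta`, `flavor_at_earth`) for the likelihoods and flavor mapping defined above:

import numpy as np

F_PI = np.array([1 / 3, 2 / 3, 0.0])   # full pion decay
F_MU = np.array([0.0, 1.0, 0.0])       # muon-damped
F_N = np.array([1.0, 0.0, 0.0])        # neutron decay

def source_composition(k):
    """Blend of the three benchmark mechanisms, f_S(k) = k_pi f^pi + k_mu f^mu + k_n f^n."""
    k_pi, k_mu, k_n = k
    return k_pi * F_PI + k_mu * F_MU + k_n * F_N

def log_posterior_k(k, theta, flavor_at_earth, lnL_flavor, lnL_theta):
    """Unnormalized log-posterior of the mechanism fractions for one draw of the mixing
    parameters; in the analysis described above this is sampled (integrated) over theta."""
    if np.any(np.asarray(k) < 0) or not np.isclose(sum(k), 1.0):
        return -np.inf                                  # uniform prior restricted to the k simplex
    f_earth = flavor_at_earth(source_composition(k), theta)
    return lnL_flavor(f_earth) + lnL_theta(theta)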
To produce Fig. 4, we assume that $k_\pi = 1$, and compute how well we can recover that value, given the projected combined sensitivity $\mathcal{L}(\boldsymbol{f}_\oplus)$ of all the neutrino telescopes and the projected combined likelihood $\mathcal{L}(\boldsymbol{\theta})$ of all the oscillation experiments. The posterior probability of the fractions $\boldsymbol{k} = (k_\pi, k_\mu, k_n)$ at the source is $\mathcal{P}(\boldsymbol{k}) = \int {\rm d}\boldsymbol{\theta}\, \mathcal{L}(\boldsymbol{\theta})\, \mathcal{L}\!\left(\boldsymbol{f}_\oplus(\boldsymbol{f}_S(\boldsymbol{k}), \boldsymbol{\theta})\right) \pi(\boldsymbol{\theta})\, \pi(\boldsymbol{k})$, where $\pi(\boldsymbol{k})$ is a uniform prior on $\boldsymbol{k}$. The left panel of Fig. 4 shows that, while the "true" value of $k_\pi = 1$ is within the favored region, lower values of $k_\pi$ are also allowed, with the same significance, at the cost of increasing the contribution of muon-damped and neutron-decay production. The value of $k_\pi$ is anti-correlated with the values of $k_\mu$ and $k_n$: lowering the contribution of pion-decay production to $k_\pi < 1$ decreases $f_{e,S}$ and $f_{\mu,S}$, but the former is compensated by the correlated increase in $k_n$ and the latter by the correlated increase in $k_\mu$. Remarkably, the contribution of neutron-decay production cannot be larger than 40%. In some astrophysical sources, especially the ones that do not accelerate hadrons past PeV energies, the production of TeV-PeV neutrinos via neutron decay might be strongly suppressed, since beta decay yields neutrinos of lower energy than pion decay. Below we explore the sensitivity to $\boldsymbol{k}$ in the limit of no neutrino production via neutron decay. The right panel of Fig. 4 shows our results if we restrict production to only the pion decay and muon-damped scenarios, i.e., to $k_\pi$ and $k_\mu = 1 - k_\pi$. At present, using the 2015 IceCube measurements of the flavor composition and the NuFit 5.0 measurements of the mixing parameters, the entire range of $k_\pi$ is allowed even at 68% C.R. By 2040, the constraints are significantly stronger: $k_\pi$ can be measured to within 5% at the 68% C.R. and to within 20% at the 99.7% C.R. FIG. 4. Sensitivity to the fraction of the diffuse flux of high-energy neutrinos that is contributed by the three benchmark scenarios. The real value is assumed to be $k_\pi = 1$, i.e., production only via full pion decay. Left: allowing for production via the three benchmark scenarios. Right: allowing for production only via the full pion decay and muon-damped scenarios, in IceCube (IC), IceCube-Gen2 (IC-Gen2), and future neutrino telescopes combined, and accounting for the uncertainties in the mixing parameters. In each case, we show the best-fit value of $k_\pi$, its 68% C.R. interval and, in parentheses, its 99.7% C.R. interval. In practice, searches for the neutrino production mechanism will use not only the flavor composition but also the energy spectrum. In the muon-damped scenario, the synchrotron losses of the muons would leave features in the energy spectrum that are not expected in the full pion decay scenario, and which may indicate the strength of the magnetic field of the sources. Presently, there is little sensitivity to these features in the energy spectrum, but improved future sensitivity may help break the degeneracy between $k_\pi$ and $k_\mu$. B. Testing new neutrino physics: neutrino decay Figure 5 shows that, by 2040, the higher precision to which we will know the mixing parameters will also allow us to perform more precise tests of new physics, which we illustrate by considering the case of neutrino decay (see Section II E) [35, 45, 47, 52, 55]. The flavor contents $|U_{\alpha i}|^2$ of the mass eigenstates $\nu_i$ are required to compute the flavor composition at Earth under decay. Figure 5 shows the uncertainty in them, in 2020 and 2040.
If all the eigenstates but one decay completely en route to Earth, the allowed flavor composition at Earth matches the flavor content of the one remaining eigenstate. If multiple eigenstates survive, the flavor composition is a combination of the flavor contents of the surviving eigenstates. Figure 5 shows the allowed region of flavor composition that results from all possible combinations $k_1 |U_{\alpha 1}|^2 + k_2 |U_{\alpha 2}|^2 + k_3 |U_{\alpha 3}|^2$, where $k_1 + k_2 + k_3 = 1$ and each $k_i \in [0, 1]$. Reference showed an earlier version of this region, generated using the 2015 uncertainties of the mixing parameters from Ref.. Under the assumption that $\nu_2$ and $\nu_3$ decay into invisible products with the same decay rate $m/\tau$ (see Section II E), we estimate upper limits on their common decay rate or, equivalently, lower limits on their common lifetime, for the year 2020, using either the IceCube 2015 measurement or the projected 8-year IceCube sensitivity, and for 2040, using either IceCube data alone or the flavor measurement of all future neutrino telescopes combined. To do this, we compare the expected flavor composition at Earth computed for different values of the decay rate to a "no decay" scenario, where the flavor composition is computed under standard oscillations for different choices of the flavor composition at the source. We use the likelihood of the mixing parameters, $\mathcal{L}(\boldsymbol{\theta})$, and the likelihood of flavor measurements in neutrino telescopes, $\mathcal{L}(\boldsymbol{f}_\oplus)$, to translate any decay-induced deviation of $\boldsymbol{f}_\oplus$ away from the "no decay" scenario into a bound on the decay rate. The posterior probability of the decay rate $m/\tau$ is built analogously, where $\pi(m/\tau)$ is a uniform prior on the decay rate and the flavor composition at Earth is computed following the decay formalism of Section II E. The left panel of Fig. 6 shows the resulting posterior distributions, computed assuming that the flavor composition at the source is $\boldsymbol{f}_S = \boldsymbol{f}_S^\pi \equiv \left(\tfrac{1}{3} : \tfrac{2}{3} : 0\right)_S$. The posteriors reach their peak as $m/\tau \to 0$, favoring longer lifetimes; we thus place upper limits on the decay rates. These become more constraining over time, as $\mathcal{L}(\boldsymbol{\theta})$ and $\mathcal{L}(\boldsymbol{f}_\oplus)$ become narrower. They translate into lower limits on the lifetimes that improve from $\tau/m \geq 2.4 \times 10^3$ (eV/$m$) s, using 2015 data, to $5.6 \times 10^5$ (eV/$m$) s in 2040. The right panel of Fig. 6 shows the corresponding lower limits on the lifetime as a function of the neutrino mass. We have highlighted the allowed interval of masses assuming normal ordering by shading out the regions that are respectively disfavored for each of the mass eigenstates due to constraints on the mass splittings from oscillation experiments and limits on the sum of the masses from the latest global fit including cosmological observations and terrestrial experiments. A realistic analysis needs to take into account the uncertainties on the flavor composition at the source. To this end, we explore two alternative choices of the flavor composition at the source: varying over all possible values of $\boldsymbol{f}_S$ ("$\boldsymbol{f}_S$ free"); and production via full pion decay, but allowing its contribution to the neutrino flux to vary below its nominal value of 100%, with a half-Gaussian prior with a 10% width ("$\boldsymbol{f}_S$ constr."), with the rest of the flux coming from the muon-damped scenario. Table III shows the 95% C.R. upper limits on the decay rate for the three cases. In the most conservative case, "$\boldsymbol{f}_S$ free," we find the same decay-rate limit with 2020 and 2040 data, $m/\tau \lesssim 2 \times 10^{-4}$. This corresponds to a transition energy between full decay and no decay at $E \simeq (m/\tau)/H_0 \approx 100$ TeV, close to the lower limit of our energy window; a numerical check is sketched below.
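A quick numerical check of this transition energy, under the assumption (not stated explicitly in the text) that $m/\tau$ is expressed in eV s$^{-1}$:

# Quick check of the decay/no-decay transition energy E ~ (m/tau) / H0 quoted above,
# assuming m/tau is expressed in eV s^-1 (a unit convention assumed here).
M_OVER_TAU = 2e-4                # eV s^-1, the "f_S free" 95% C.R. limit quoted above
KM_PER_MPC = 3.0857e19
H0 = 67.4 / KM_PER_MPC           # s^-1, from H0 = 67.4 km s^-1 Mpc^-1
E_transition_eV = M_OVER_TAU / H0
print(E_transition_eV / 1e12)    # ~ 92 TeV, i.e., roughly 100 TeV, as stated in the text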
For any smaller decay rate, only a small fraction (exponentially suppressed) of the neutrinos in the energy window would have decayed during propagation, causing negligible changes to the flavor composition integrated over energy. This leads to a strong degeneracy between the flavor composition at the source and the decay rate. By choosing instead the "$\boldsymbol{f}_S$ constr." prior, the degeneracy is largely lifted. This illustrates that any future bounds on neutrino decay will need to be carefully weighed against our understanding of the flavor composition at the source. However, note that we only use the flavor information to test decay. If there are indeed hints of neutrino decay, the measured energy spectrum will also provide crucial information. The limits that we find are for the case of invisible decay and are, therefore, more conservative than the case of visible decay. For visible decays, the heavier mass eigenstates decay into the lightest one and can still be detected in neutrino telescopes. In the normal mass ordering, where $\nu_1$ is the lightest neutrino, visible decay leads to a larger surviving fraction of $\nu_1$, moving the flavor composition further away from the flavor composition expected from full pion decay, and potentially strengthening the limits on the decay rate. However, by 2040, and assuming that the measured flavor composition is centered on $\boldsymbol{f}_\oplus^\pi$, as in the projected measurement contours in Fig. 5, only decays that leave $\nu_2$ as the dominant surviving neutrino in the flux will still be allowed. For a detailed treatment of the nuances of visible decay, see Ref.. The right panel of Fig. 6 shows that our lower limits on the neutrino lifetime are far from the lower limit stemming from early-Universe constraints. Although those limits assume a scalar-mediated decay from heavier to lighter mass eigenstates, decays to completely invisible products should not produce appreciably weaker bounds, owing to the self-interactions induced by such a new mediator. Our limits are independent of early-Universe cosmology and are thus not susceptible to modifications of $\Lambda$CDM nucleosynthesis or recombination. For example, models in which a late-time phase transition leads to neutrino decay easily evade the cosmological limits, making our constraints dominant. TABLE III. 95% C.R. upper limits on the decay rate $m/\tau$, by year, for different combinations of neutrino telescopes and oscillation-parameter inputs, assuming a neutrino spectral index of $\gamma = 2.5$ and production via the full pion decay chain (fourth column, $\boldsymbol{f}_S = (\tfrac{1}{3}, \tfrac{2}{3}, 0)$), or allowing the source flavor composition to vary freely (fifth column, "$\boldsymbol{f}_S$ free"). The last column assumes a pion-decay fraction of 100%, with a 10% half-Gaussian uncertainty at the source, with the remaining neutrino flux from the muon-damped scenario. VI. SUMMARY AND CONCLUSIONS The flavor composition of the high-energy astrophysical neutrino flux has long been regarded as a versatile tool to learn about high-energy astrophysics and test fundamental physics. However, in practice, present-day uncertainties in the neutrino mixing parameters and in the measurement of the flavor composition in neutrino telescopes limit its reach. Fortunately, this situation will change over the next two decades, thanks to the significant progress that is expected from terrestrial neutrino experiments.
We have found that the full potential of flavor composition will finally be fulfilled over the next 20 years, thanks to a host of new neutrino oscillation experiments that will improve the precision of the mixing parameters using terrestrial neutrinos, and to new neutrino telescopes that will improve the measurement of the flavor composition of high-energy neutrinos.

Regarding the neutrino mixing parameters, by 2040, improved measurements of θ12 by JUNO and of θ23 by DUNE and Hyper-Kamiokande will reduce the size of the allowed flavor regions at Earth predicted by standard oscillations by a factor of 5-10 compared to today. Additionally, the IceCube-Upgrade, together with the previously mentioned experiments, will provide improved constraints on the non-unitarity of the PMNS matrix. This will clearly separate the flavor compositions predicted by different neutrino production mechanisms, at a credibility level well in excess of 99.7%, and will also sharpen the distinction between the expectations from standard and nonstandard oscillations.

Regarding the measurement of the flavor composition of high-energy neutrinos, the deployment of new neutrino telescopes will increase the precision of the measurement thanks to the larger sample of high-energy neutrinos that they will detect. Beyond the continuing operation of IceCube, Baikal-GVD and KM3NeT/ARCA should already be in operation by 2025, P-ONE and IceCube-Gen2 by 2030 (at which point the combined effective volume of neutrino telescopes will exceed the present one by more than an order of magnitude), and TAMBO, dedicated to measuring the ντ flux. From their combined measurements, the uncertainty in the flavor composition is expected to shrink by a factor of 2 from 2020 to 2040. Our projections are conservative: they rely mainly on statistical improvements due to larger exposures and on the inclusion of TAMBO as a dedicated tau-neutrino experiment. Any improvement in the methods used to reconstruct flavor, which we have not considered, will only improve the projections further.

Combining these two improvements, by 2040 we will be able to distinguish with high confidence between similar predictions of the flavor composition at Earth expected from different neutrino production mechanisms. Notably, we will be able to robustly differentiate the flavor composition expected from neutrino production via full pion decay from that expected from muon-damped pion decay, the two most likely production scenarios. The combined effect of smaller allowed flavor regions and more precise flavor measurements means that progress in using flavor measurements to identify the still-unknown sources of the bulk of the high-energy diffuse neutrino flux will be not merely incremental but transformative.

Further, by 2040 we will be able to use the measured flavor composition, and our precise knowledge of the mixing parameters, to infer the flavor composition at the source with high precision. In particular, the average νe fraction at the source will be known to within 6%, a marked improvement over the 42% precision to which it is known today (see Table II). Moreover, if high-energy neutrinos are produced by a variety of production mechanisms, each yielding a different flavor composition, we will be able to identify the dominant and subdominant mechanisms. We find that if production via pion decay is the dominant mechanism, this constrains the contribution from production via neutron decay to be smaller than 40%.
If only production via pion decay and muon-damped decay are allowed, the contribution of the dominant production mechanism can be pinned down to within 20% at the 99.7% credible level. The presence of new physics effects, specifically non-unitarity in the PMNS mixing matrix, only modestly affects the flavor triangle: by 2040, all three canonical source compositions will be distinguishable even in the presence of non-unitary mixing.

We explore neutrino decay into invisible products to illustrate the improvement that we will achieve in testing beyond-the-Standard-Model neutrino physics using the flavor composition. Complete neutrino decay to ν3 or ν1 is strongly disfavored today, and will be excluded at more than 5σ by 2040. Under certain conservative assumptions, we have shown that future observations will be able to constrain the lifetime of the heavier neutrinos to ∼10⁵ (eV/m) s if only ν1 is stable. This is nearly eight orders of magnitude stronger than the limits set by solar neutrino observations and competitive with the bounds that could be obtained from observing a Galactic supernova; however, these limits are significantly weaker than the constraints from early-Universe observables.

Approximately fifty years have passed since the original proposal by Markov to build large detectors to observe high-energy neutrinos. The last ten years have brought us the discovery of the diffuse high-energy astrophysical neutrino flux by IceCube, the discovery of the first few potential astrophysical sources of high-energy neutrinos, and the first measurements of the flavor composition. We have shown that these efforts will come to dramatic fruition in the next two decades, yielding a more complete picture of the Universe as seen with high-energy neutrinos. The future is bright for neutrino hunters.
On the eve of the climate bill vote, Democrats sitting on the fence with their votes received thousands of dollars in campaign contributions from Nancy Pelosi, Henry Waxman and Jim Clyburn, Politico reports. The Democrats say it is a standard procedure at the end of each quarter. Republicans are already saying it's clear evidence that votes were being bought. Majority Whip Jim Clyburn (D-SC) doled out $28,000 on June 24 -- two days before the big vote, and a day when House leaders were doing some heavy-duty arm-twisting -- to reps who eventually voted yes. Clyburn recipients who voted for the bill included a who's-who of battleground district Dems: Steve Driehaus, D-OH ($2,000); Martin Heinrich, D-NM ($2,000); Suzanne Kosmas, D-Fla. ($4,000); Betsy Markey, D-Colo. ($2,000); Carol Shea-Porter, D-NH ($2,000); Baron Hill, D-Ind. ($2,000); Alan Grayson, D-Fla. ($2,000); Leonard Boswell, D-Iowa ($2,000); Jim Himes, D-Conn. ($2,000); Mary Jo Kilroy, D-OH ($2,000); Kurt Schrader, D-Ore. ($2,000); Jerry McNerney, D-Calif. ($2,000) and Tom Perriello, D-Va. ($2,000). On the other hand, Clyburn also gave at least $14,000 to Democrats who voted no despite his pressure: Mike Arcuri, D-NY ($2,000); Marion Berry, D-Ark. ($2,000); Bobby Bright, D-Ala. ($2,000); Chris Carney, D-Penn. ($2,000); Chet Edwards (D-Tx.); Travis Childers, D-Miss. ($2,000); Parker Griffith, D-Ala. ($2,000) and Harry Mitchell, D-NM ($2,000). The same pattern held true for House Speaker Nancy Pelosi, who gave $4,000 to yes-voting Ohio Democrat Zack Space and the same amount to no-voting Chris Carney. House Energy and Commerce Chairman Henry Waxman gave at least $16,000 to yes-voters on June 25, FEC records show.
pointing out its weaknesses in order to build a stronger one. Time and again, the reviewer is preparing to note a faulty line of reasoning when the author himself tackles it and takes it into account. It is a technique which is not only charming in itself but allows Dr Harris to be remarkably gentle with predecessors, praising them while positing the 'false' picture which agrees with their teaching, and then blaming his own thought, not theirs, as he goes on to demolish it. The overall picture of London politics provided may be summarized as follows. At the Restoration most Londoners welcomed the monarchy, but with different expectations of it. Although scarcely any of these were realized as the new regime proved both profligate and expensive, the main division lay between those who wished to tolerate religious dissent and those who did not. This rift remained throughout the reign, and represented the fundamental fault-line separating Whigs from Tories. Nevertheless, it is part of Dr Harris's subtlety of method that he recognizes shifts of opinion as well as long-term differences in it. Thus, he substantiates that amongst all classes support switched from the Whigs to the Tories between 1679 and 1683. Some niggling criticisms of detail could be made. The Excise was contracted at the Restoration, and not extended as Dr Harris believes, while hostility to the French among Londoners was powerful before the late 1660s, from which he dates its inception. But it is more interesting to consider the wider implications of his ideas. Despite his unfailing courtesy to Christopher Hill, it is clear that he has dealt another blow to those who persist in seeing the seventeenth century in terms of a class struggle. On the other hand, his work marries well with the recent heavy stress upon religious tensions in the period, whether in John Morrill's essays upon the Civil War or Jonathan Clark's portrayal of England as a 'confessional state' until 1832. To historians of the reign of Charles II he performs two principal services. One is to consider, far more deeply than before, the nature of the opposition to the Whigs in the metropolis: in fact, one could mimic J. R. Jones's seminal work and dub the second half of the book The First Tories. Second, by showing how much popular support Charles's government enjoyed in London throughout the last decade of the reign, it greatly increases our impression of its strength in these years. In fact, can we now describe the 'Exclusion Crisis' as any sort of crisis at all? Ronald Hutton, Department of History, University of Bristol
Beta-Blockers and Fetal Growth Restriction in Pregnant Women With Cardiovascular Disease. BACKGROUND The effects of β-adrenergic blockers on the fetus are not well understood. We analyzed the maternal and neonatal outcomes of β-adrenergic blocker treatment during pregnancy to identify the risk of fetal growth restriction (FGR). METHODS AND RESULTS We retrospectively reviewed 158 pregnancies in women with cardiovascular disease at a single center. Maternal and neonatal outcomes were analyzed in 3 groups: the carvedilol (α/β-adrenergic blocker; α/β group, n=13), β-adrenergic blocker (β group, n=45), and control groups (n=100). Maternal outcome was not significantly different between the groups. FGR occurred in 1 patient (7%) in the α/β group, in 12 (26%) in the β group, and in 3 (3%) in the control group; the incidence of FGR differed significantly between the β group and the control group (P<0.05). The β group included propranolol (n=22), metoprolol (n=12), atenolol (n=6), and bisoprolol (n=5), and the incidence of FGR with these individual medications was 36%, 17%, 33%, and 0%, respectively. CONCLUSIONS As a group, β-adrenergic blockers were significantly associated with FGR, although the incidence of FGR varied with the individual β-blocker. Carvedilol, an α/β-adrenergic blocker, had no association with FGR. More controlled studies are needed to fully establish such associations. (Circ J 2016; 80: 2221-2226).
The e-revolution and post-compulsory education: using e-business models to deliver quality education The best practices of e-business are revolutionising not just technology itself but the whole process through which services are provided, and post-compulsory educational institutions can learn important lessons from them. This book aims to move debates about ICT and higher education beyond a simple focus on e-learning by considering the provision of post-compulsory education as a whole. It considers what we mean by e-business, why e-business approaches are relevant to universities and colleges, and the key issues this raises for post-secondary education.
A Study of EEG Correlates of Unexpected Obstacle Dodging Task and Driving Style In this thesis, we study the EEG correlates of surprise status and driving style. Accidents caused by lack of alertness and awareness have a high fatality rate, especially in night-time driving environments. Driving becomes extremely dangerous in situations such as the appearance of an unexpected obstacle in the middle of the road. Using virtual reality (VR) technology, we developed a realistic driving environment to provide stimuli to subjects in our research. The VR scene designed for our experiment simulates driving a car on a freeway at night. Independent Component Analysis (ICA) is used to decompose the sources in the EEG data. ICA combined with power spectrum analysis and correlation analysis is employed to investigate the EEG activity related to surprise level and driving style. According to our experimental results, the appearance of an ERP at CPz is highly correlated with surprise status. Furthermore, the level of surprise can be estimated from the amplitude of the ERP. An extended analysis of driving style was also carried out in the experiments. We observed that the magnitudes of the ERP power spectrum at 10 Hz and 20 Hz differ with driving style.
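As a rough, self-contained illustration of the kind of pipeline described in this abstract (ICA decomposition of multichannel EEG followed by power-spectrum analysis around 10 Hz and 20 Hz), the Python sketch below uses scikit-learn's FastICA and SciPy's Welch estimator on synthetic signals. It is only a toy stand-in for the thesis's actual VR-driving EEG processing; the sampling rate, channel count, and source frequencies are assumptions chosen for illustration.

```python
import numpy as np
from scipy.signal import welch             # power spectral density estimate
from sklearn.decomposition import FastICA  # independent component analysis

fs = 250.0                                 # assumed sampling rate, Hz
t = np.arange(0, 60, 1 / fs)               # 60 s of synthetic "EEG"
rng = np.random.default_rng(0)

# Two synthetic sources: a 10 Hz rhythm and a 20 Hz rhythm, plus noise
sources = np.vstack([
    np.sin(2 * np.pi * 10 * t),
    np.sin(2 * np.pi * 20 * t),
]) + 0.3 * rng.standard_normal((2, t.size))

# Mix the sources into 4 "scalp channels"
mixing = rng.standard_normal((4, 2))
eeg = mixing @ sources                     # shape: (n_channels, n_samples)

# Decompose the channel data into independent components
ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(eeg.T).T    # shape: (n_components, n_samples)

# Compare spectral power near 10 Hz and 20 Hz for each component,
# mirroring the band-power comparison used in the driving-style analysis
for i, comp in enumerate(components):
    freqs, psd = welch(comp, fs=fs, nperseg=1024)
    p10 = psd[np.argmin(np.abs(freqs - 10))]
    p20 = psd[np.argmin(np.abs(freqs - 20))]
    print(f"component {i}: power @10 Hz = {p10:.3g}, @20 Hz = {p20:.3g}")
```

In the thesis's setting, the components would instead come from EEG recorded during the VR driving task, and the 10 Hz and 20 Hz band powers of the relevant component would then be compared across driving styles.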
It was already expected that San Francisco 49ers safety Dashon Goldson would be one of the hottest commodities on the market when free agency begins on Tuesday. In fact, Goldson is already being pursued by multiple teams on the eve of free agency. According to Adam Schefter of ESPN.com, four teams are in play to sign the two-time Pro Bowler. The 49ers, Detroit Lions, Tampa Bay Buccaneers and Philadelphia Eagles are all in the running to sign Goldson with the Lions and Buccaneers likely the front-runners. Goldson has made the Pro Bowl in consecutive seasons and been an enforcer in the 49ers’ secondary. He made 69 tackles with three interceptions and a forced fumble for San Francisco last season. With Ed Reed seemingly an option for the 49ers, Goldson could decide to head elsewhere to play next season.
Improvements under the hood Apr 14 Many changes have happened in KeeperRL land in the last few weeks. Most of them were in the game’s internals, so they may not be very interesting to you, but they are important for development. I treat KeeperRL as a very long term project, so I spend a lot of time trying to improve its internal architecture. Badly written code, much like a messy bedroom, decreases your morale, and causes you to trip over things as you’re trying to reach your goal. Not to mention bugs! In this spirit I spent a whole week switching to a new serialization library, which is the backbone of the saving and loading system. The new one, Cereal, is more modern than Boost serialization, and easier to use on multiple platforms. As it turned out, it also cut saving and loading times by a factor of two to three, and nearly halved save file size! This encouraged me to do some cleaning and remove a few obsolete classes from the code. Traps, torches, and portals are now regular Furniture, much like all other static things on the map. This was an opportunity to rework portals a little bit, to make them more useful. From now on they will be constructed by imps, and will not time out. You will use them actively, just like stairs. I think that in this form they will be a great addition to the dungeon. Unfortunately, teaching the AI to use them for pathfinding is a much bigger deal, so for now they are there only to the advantage of players. But I will revisit this problem later, because having the AI use portals, and be smart about it, would be really, really fun. As another gameplay change, I deflated the quantities of all resources in the game, except mana. Everything now costs five times less wood, gold, iron, and so on, and you also receive less of everything. The only real effect is on the size of stockpiles that are generated, because every unit of resource in KeeperRL, except mana, exists in the game as an individual item. This used to inflate save file size quite a bit. Going back to technical stuff, I noticed that switching off Vertical Sync in the window configuration, which ties the game’s framerate to the refresh rate of the monitor, fixes some severe frame dropping that I experience on the development build of the game. I’m not sure if this has much effect in the real world, but I added the option to switch off V-Sync in the game’s settings. I’m also contemplating just switching it off by default. I need to research how other games approach this issue. Last, but not least, I had some time to work on the tutorial. It’s going to take the form of very small, detailed tasks for the player to perform as they build their dungeon. As KeeperRL is fairly complex, there will be a large number of these steps, so it’s going to take longer to finish than I expected initially. This is what the tutorial will look like. In addition to giving you instructions, the game also highlights the relevant UI elements for the current task.
Laser epithelial keratomileusis for the correction of hyperopia using a 7.0-mm optical zone with the Schwind ESIRIS laser. PURPOSE To investigate the efficacy of laser epithelial keratomileusis (LASEK) for the correction of hyperopia using a 7.0-mm optical zone and a 9.0-mm total ablation zone diameter with the Schwind ESIRIS flying-spot laser. METHODS Forty-seven patients (70 eyes) were treated with a mean preoperative spherical equivalent refraction of +2.32 diopters (D) (range: 0 to +5.00 D). All eyes underwent LASEK using 15% alcohol with a 20-second application. RESULTS An intact epithelial flap was obtained in 66 (94%) eyes. In 70 eyes at 12 months, the mean spherical equivalent refraction was +0.09 D (range: -0.75 to +1.00 D) with all (100%) eyes within +/- 1.00 D of the intended correction and 60 (86%) eyes within +/- 0.50 D. In 40 eyes with 24-month follow-up, the refractive correction remained stable after 6 months. Hyperopic cylindrical corrections were attempted in 49 eyes (range: +0.25 to +5.00 D) with vector analysis demonstrating a mean 102% correction at 12 to 24 months. In 60 non-amblyopic eyes, uncorrected visual acuity was > or = 20/20 in 47 (78%) eyes. Thirty-three (47%) eyes gained 1 to 2 lines of Snellen decimal equivalent best spectacle-corrected visual acuity, 30 (43%) eyes showed no change, and 7 (10%) eyes lost 1 line. Eight (11%) eyes at 12 to 24 months had grade +/- 1 of paracentral corneal haze and 57 (81%) had no haze. At 12 months (n = 70), the safety index was 1.06 with an efficacy index of 0.95. Analysis of higher order wavefront aberrations showed no significant changes in root-mean-square values post-operatively, except for a significant reduction of fourth order spherical aberration (P <.05). CONCLUSIONS Laser epithelial keratomileusis for hyperopia up to +5.00 D using a 7.0-mm optical zone with the Schwind ESIRIS laser provides excellent refractive and visual outcomes with minimal complications. In eyes followed for 24 months, the refractive correction remained stable after 6 months.
Adaptive Bilateral Extensor for Image Interpolation Acknowledgments I extend my greatest appreciation to my advisor, Dr. Kannappan Palaniappan, for his constant support and guidance, without which this thesis could not be realized. I thank him for his countless hours, patience, and dedication to this thesis. I also thank Dr. Sumit Nath for his guidance in learning principles and applications necessary for conducting research within the field of computer science. His thoughts and ideas were always a meaningful contribution. I wish to acknowledge the support and assistance of Dr. Filiz Bunyak. Her willingness to assist in any way possible is greatly appreciated. I extend my gratitude to Dr. Jeffery Uhlmann. His willingness to listen and offer feedback has provided invaluable insight for this work. I also extend my sincere thanks to him for allowing me the pleasure to assist in the teaching of computer science 2050, algorithm design and programming II. Finally, I dedicate this thesis to my wife, Nicole for her continued love and support throughout my graduate school career.
A prominent Chinese political campaigner was sentenced to 13 years in jail on Wednesday, a court in central China said. Qin Yongmin was found "guilty of subversion of state power," the Wuhan City Intermediate People's Court said on its official website. The 64-year-old, first jailed as a "counter-revolutionary" from 1981 to 1989, has already spent a total of 22 years in prison. At the time of his arrest in January 2015, Qin was the head of the pro-democracy "China Human Rights Watch" group, which circulated online statements denouncing government policies, as well as organising discussion groups. Qin had "refused to cooperate with the court" and remained completely silent during his trial in May, lawyer Lin Qilei had previously told AFP. His lawyers did not immediately respond to requests for comment on the verdict on Wednesday. The verdict comes a day after Liu Xia, the widow of Chinese Nobel dissident Liu Xiaobo, left China for Germany. She had been held under de facto house arrest -- despite no charges -- since 2010, when her husband received the Nobel Peace Prize. Liu Xiaobo, a veteran of the 1989 Tiananmen Square protests, died last year while serving an 11-year jail sentence for "subversion". Frances Eve, a researcher at Chinese Human Rights Defenders, said Qin was "prosecuted for his belief in a democratic China as well as his actions in advocating for human rights." "Authorities have been unable to build a case against him despite three years of investigation," Eve said. The veteran activist was last convicted and sentenced to prison in late 1998 after he and other activists sought to officially register the China Democracy Party. He was released in December 2010. Upon his release, Qin said police had told him not to speak with journalists, while several of his supporters who had hoped to meet him had disappeared and were believed to be in police custody. But Qin told AFP at the time that he would continue to advance human rights because "I must do what I must do."
Enabling student participation in syllabus design through film nominations and voting: an action research project This article uses the 'students as partners' framework to examine the implications of an action research project conducted as part of a film studies module, delivered at a transnational tertiary education provider, a Sino-British university in China. The action research project consisted of the implementation of a system of film nomination and voting that allowed students to actively participate in one element of the syllabus design, namely, the choice of films to be screened and discussed in a segment of the module's curriculum, spanning 3 out of the total 14 weeks of the semester. Using as a dataset a series of semi-structured interviews with students who participated in the project, the article analyses their attitudes towards the process of nomination and voting, and points to future directions of research. By focusing on the intended democratic stakes of the project, the article argues that although the students evidenced some of the expected benefits of the collaboration, they also discursively privileged the role, the experience and the perspective of the teacher over their own. Introduction This article critically reflects on the findings of an action research project conducted while teaching film studies in a transnational educational context at a Sino-British joint venture university based in China. For the purposes of this article, the term 'action research' is understood via Geoffrey E. Mills (2013: 8) as both an evidence-based 'systematic inquiry' into aspects of a teaching practice and a 'problem-solving approach', oriented towards increasing understanding and 'effecting positive changes'. The working definition of transnational higher education is the one formulated by Sally Stafford and John Taylor (2016: 625), as 'the delivery of programmes overseas by a parent institution either operating directly or in association with an international partner'. The transnational nature of the educational provision at the university which hosted this project is evidenced by the fact that its students, while studying in China, are awarded British university degrees, a process underwritten by the quality assurance protocols that the university has in place. I taught two film studies modules at this university, namely, European Cinema for several years - the course that provided the idea for the project discussed here - and Foundations in Film Studies, the module during the delivery of which this action research project was developed. Given the transnational framework of the project, it is perhaps worth noting that ever since they were first introduced, these film studies modules have been taught to either a monocultural student group (during one academic year, the students enrolled were all from China) or, more often than not, to a student cohort which, for the most part, was culturally and nationally homogeneous, with the host nationals far outnumbering the much smaller international student contingent. At the time when the project was carried out, the latter was the case.
In order to counteract the tendency towards what scholars have called the 'geo-cultural segregation' (Johan and Rienties, 2016: 227) which often emerges in classrooms of mixed constituency, the onus is usually on the teacher to create a culturally inclusive environment, and a sense of community and belonging among students of different cultural and national backgrounds, by fostering 'cross-cultural learning links' (Johan and Rienties, 2016: 235). This need to minimise potential segregation legitimises further the collaborative nature of the project that I designed and implemented. The impetus for the action research project came from the observation of patterns in student feedback on the films selected in the European Cinema module. This particular type of student feedback cropped up regularly in the module questionnaires that students filled in at the end of the semester, which are standardised for the entire university. It was usually volunteered in sections of the questionnaire which invited comments on what students enjoyed about the module and/or on how students thought it could be improved. The opinions ranged from some students finding the films studied in the module 'inspiring' and 'beautiful' to others finding them 'boring'. Although contradictory and mixed opinions of this kind tend to be inconclusive and difficult to translate into actions improving the quality of the teaching, the pedagogical insight they generated in the form of a working hypothesis was that students may have felt slightly alienated from, and ambivalent about, an important element of their learning experience, namely, the films with which they were asked to critically engage on a weekly basis. This lack of student involvement in the choice of films was identified as the problem that the project subsequently sought to address through devising a collaborative model of film selection, whereby, as module leader, I partnered with the students in deciding the films to be screened and analysed in 3 of the 14 weeks of the semester. The project was meant to question and unsettle one of the enduring practices in film studies syllabus design, namely, the habit of structuring classes 'on the basis of our individual scholarly predilections' (Tomasulo, 2001: 111) and, in the case of film selection in particular, on the basis of individual aesthetic tastes. As Anne Burns (2015: 188) has rightly remarked, the 'impetus' for action research projects is 'a perceived gap between what actually exists and what participants desire to see exist'. It is a similar type of realm of possibility that this study has ultimately sought to reach, while at the same time raising questions about the effectiveness and limitations of such an intervention. In this regard, it is worth mentioning that 'it is not expected that new action strategies will solve a problem immediately', and that a more prudent forecast is that what will be generated is a 'new stage of clarification of the situation', prompting 'further action strategies' (: 7). Literature review The responsibility for the choice of films in film studies courses customarily rests with module leaders. To my knowledge, there are as yet no studies exploring areas and modes of collaboration between students and instructors on this element of syllabus design in film studies. 
There is, however, a relatively wide range of research into cooperative models of syllabus and assessment design in higher education in other disciplines, such as the literature on 'collaborative course development' in management education (Kaplan and Renard, 2015). The body of scholarship to which this project is most indebted, and to which it broadly pertains, is the growing literature on 'students as partners' in higher education, which places a premium on the relationality of the educational process. In one of the most representative entries in this literature, Alison Cook-Sather, Catherine Bovill and Peter Felten counterpose this model of understanding the relationship between student and academic staff to the 'student as consumer' model and to the managerial discourses associated with it, defining the student-faculty partnership as a 'collaborative, reciprocal process through which all participants have the opportunity to contribute equally, although not necessarily in the same ways, to curricular or pedagogical conceptualisation, decision making, implementation, investigation or analysis' (: 6). Importantly, as the authors proceed to point out, within the partnership framework, the previously distinct roles of student and faculty are rendered more permeable and more responsive to each other, with both positioned as 'learners as well as teachers' (: 7). At the same time, the framework does not force a 'false equivalency' between the two, but instead acknowledges that, while equally valuable, the types of insight contributed by each partner are bound to differ (: 7). This is consistent with Glynis Cousin's advocacy for 'threshold concepts' and a type of liminality, which she evocatively summarises as 'neither teacher-centred nor student-centred'. The idea of partnership in education has also been fruitfully linked to the notion of 'self-authorship', a 'distinctive mode of making meaning' associated with adulthood, presumed to emerge between the ages of 17 and 30, and which is conducive to forging learning partnerships (Baxter Magolda, 2004: 2). There are precedents and alternatives for the kind of reconceptualisation of the role of the student in relation to the teacher that the 'students as partners' model proposes. One of them is the concept of 'radical collegiality'. An important component in the case that Michael Fielding (1999: 22) made for this type of 'authentic, inclusive collegiality' was thinking of students 'not merely as objects of teachers' professional gaze, but as agents in the process of transformative learning'. According to Fielding (1999: 23), this type of radical collegiality emerges in the context of 'teaching conceived of and practised as a pedagogy of care', with a 'dialogic imperative which binds both student and teacher in ways which, on occasions and in particular circumstances, begin to disrupt the settled roles and forms of teacher-student interaction'. The cognate terminology is even richer than this brief overview has thus far indicated. The 'students as partners' model can be subsumed under the more elusive, 'broad church' of 'student engagement', which some scholars have gone as far as considering 'a state of mind' (: 478).
Also of note here is the literature on the 'student as co-producer' in higher education (McCulloch, 2009;Streeting and Wise, 2009;Carey, 2013), which is based upon the assumption that 'at the individual level, co-production is already happening all the time because new skills, knowledge and understanding are "produced" through a combination of student effort, pedagogy and the learning environment' (Streeting and Wise, 2009: 3). However, framing the relationship between teachers and students as co-production goes beyond merely acknowledging and making this explicit, towards creating and enhancing opportunities where this type of relationship can flourish. At the same time, scholarship has shown that while 'there is a pull in the direction of the co-producer and learning communities models' (Little and Williams, 2010: 126), the 'student as consumer' model and its attendant 'complaints culture' is still predominant, and more 'fundamental change' is needed than merely elevating the student 'from an informant to a consultant' (Carey, 2013: 258). Ambitious claims have been made about the potential of the 'student as producer' framework to 'radicalise the mainstream' and mount 'an intelligent resistance' to the 'market-based system of higher education' (Neary, 2013: 588). For all the high-mindedness of these claims, however, it remains unclear how this potential could be actualised on a large scale, beyond the limited success of some institutional efforts and programmes. An element of overlap between the literature on the student as (co-)producer, the research on employing students as 'pedagogic consultants' and the scholarship on 'students as partners' is their shared emphasis on student participation. The understanding of participation with which this scholarship often operates is based on Sherry Arnstein's 'Ladder of Participation'. Arnstein has proposed a spectrum of citizen participation ranging from non-participation and lack of power, through tokenistic forms of participation, and culminating with more authentic forms of empowerment through participation, among which 'partnership' is listed. Catherine Bovill and Catherine Bulley have cogently adapted Arnstein's influential model to account for different levels of student participation in curriculum design, with the lowest rung of the ladder being occupied by what the authors call the 'dictated curriculum', where students have no input, and the highest rung, which, by the authors' own admission, is actually extremely uncommon -a theoretical possibility more than anything else -being represented by students being in full control of the curriculum, with the second-highest rung being the partnership of a 'negotiated curriculum'. In between the extreme ends of this continuum, Bovill and Bulley distinguish between intermediary forms of participation, where choice is circumscribed. The level in Bovill and Bulley's (2011: 6) model which best describes the action research project detailed in this article is a set-up they call 'wide choice from prescribed choices', whereby a specific area of the curriculum is open for negotiation and, within the confines of this area, students enjoy a high degree of freedom. Research design and methodology Like many action research projects, this study also broadly subscribes to Kemmis and McTaggart's highly influential visualisation of action research as a two-phase spiral with four steps (planning, acting, observing and reflecting), with the present reflection happening at the end of the first cycle in this model. 
In education, reflection is by no means confined to the repertoire of methods and steps associated with action research projects. As Stephen Brookfield (2017: 30) has eloquently put it, critical reflection is an intrinsic dimension of teaching, a 'sustained and intentional process of identifying and checking the accuracy and validity of our teaching assumptions'. The action research methodology only brings this process to the fore, giving it relief and shape. The action research intervention that forms the subject of this article was to decide the films screened and discussed in a film studies module in Weeks 9, 12 and 13 of a semester, through a transparent nomination and voting system, in response to module questionnaires received in previous years. As module leader, I collaborated with the students on nominating films for these weeks, but I excluded myself from the vote so that the winning films would be entirely decided by the students. In other words, out of a total of 12 films screened in the module, 3 of them were chosen by the students who participated in the vote. In order to collect and analyse the students' views on this pedagogical strategy, which I implemented in the last section of the module, I adopted a qualitative approach, whereby I gathered data through a series of semi-structured interviews. There were 163 students enrolled in the module, and all of them were invited to participate. Only a fraction of this number agreed to participate, with three interviews conducted face-to-face, and six as online written communication in the virtual learning environment used by the university. The Sage Encyclopedia of Communication Research Methods mentions online interviews as a recent method of data collection, which can take the form either of synchronous communication or of asynchronous communication, standardised or non-standardised. While I chose standardised interviews, with all the interviewees receiving roughly the same questions, the in-person interviews were comparatively and unsurprisingly more 'free form' than the others. The face-to-face interview offers obvious advantages, in that it affords a richer, more layered experience, where verbal communication is supplemented by non-verbal communication. As Janet Salmons (2012: 2) has pointed out, 'technology is more than a simple transactional medium', and computer-mediated communications often miss out on non-verbal signals (chronemic, paralinguistic, kinesic and proxemic). On the other hand, during the face-to-face interviews, one of the intrinsic problems of self-reporting data -the aim to please -was experienced by the interviewer to be more present and more noticeable than in the online written communication. In this regard, the fact that both types of interviews were conducted served to counterbalance the advantages and shortcomings of each format. I was interested in the opinions both of students who chose not to nominate and vote for films and of students who did, and in fact the participants represented both categories. The interviews were collected over the course of two weeks. The research process complied with the ethical protocols of the university: a participant information sheet was provided for the students and they all signed a consent form. Direct quotations were anonymised through the use of pseudonyms (Participants 1 to 9). 
I transcribed the three face-to-face interviews myself (Participants 7, 8 and 9), and subsequently applied an inductive approach to identify recurrent ideas across all nine interviews, and organised the data according to these emerging themes. To preserve the authenticity of the process, I did not correct the grammatical mistakes and awkward turns of phrase, but kept them as such in the verbatim quotations. These semi-structured interviews were supplemented by participant observation. The module Foundations in Film Studies is an introductory course which aims to equip students with an understanding of the main modes of filmic expression, enabling them to develop an entry-level command of specialised terminology, which students are then able to apply in the assignments of the course: a shot-by-shot analysis of a film extract and a film review. The segments of the syllabus that were open for film nomination and voting were the weeks where a workshop on the second assignment (the film review) was delivered and the weeks focused on the topics Introduction to Documentary and Introduction to Animation. The films were meant to serve as case studies for these topics. The winning titles were The Imitation Game (Morten Tyldum, 2014), My Octopus Teacher (Pippa Ehrlich and James Reed, 2020) and Coco (Lee Unkrich, 2017), choices that seem to indicate a certain predilection for contemporary, highly conventional film-making. Interestingly enough, although there were Chinese and Japanese films nominated, they did not win the vote in any of the three categories. Several students who engaged in the process informally admitted in class that the winning titles were films that they had watched before, and, in this respect, familiarity seems to have been a factor of decision. Several of the students interviewed for the project said that they had not voted for the films that ended up winning, and so it has not been possible to probe in more depth the reasons why these particular films won and not the other nominated films. The reasons why the choice was limited to only a quarter (25 per cent) of the total number of films were manifold. The higher education environment is mostly defined by expert input, the assumption being that teachers have in-depth understanding of their field and can deploy this disciplinary knowledge to make decisions that stand to benefit the students. As Kevin Gannon (2020: 87) has rightly pointed out, as faculty, we 'possess a great deal of power', but, 'paradoxically, we use it most effectively when we give it away'. In the co-production model (and, one might add, also in the 'students as partners' model), power is perceived as shared, which can be a daunting prospect for the students (Little and Williams, 2010: 118). Hence, limiting the range of decision making can be reassuring, and can yield more positive results. This hypothesis, deriving from the existing scholarship, was in fact corroborated by the students I interviewed for the project. Student participation is not always desirable or possible, but, when it happens, the students need support in order to feel comfortable about the choices they are given (Bovill and Bulley, 2011). The research questions that this action research sought to answer were: 1. How did the students experience the nomination and voting activity? 2. How can this activity be improved for future use? 
The manner in which students' opinions are made to inform and transform teaching practice is not only a desideratum of the action research approach, but is also in accordance with one of the main tenets of the 'students as partners' model, which regards students as 'legitimate informants' (Feuerverger and Richards, cited in : 16). Findings A subgroup of questions in the interviews was aimed at shedding light on how students perceived an important aspect of the experience of nominating and voting, namely, the dynamic between module leader and students, their respective roles in the process, and the weight ascribed to these roles. Several participants privileged the teacher's role over their own and that of their peers. Participant 1 was the most emphatic on this point, noting: When I first try to familiarise myself with a certain field like film studies, I need a teacher or expert to ensure that the general direction is on the right track even if it is the part of nominating films, since I'm probably unable to identify something wrong. Later, in response to the question regarding the number of weeks open for voting, the student reinforced the previously made observations by adding: The number is three and I think it is appropriate to control the number in a relatively low state. Generally speaking, the module leader should have more power. As I answered above, I need someone to guide me. This view was taken to an extreme by Participant 4, who went as far as suggesting that students should not nominate films but only vote from a list of nominations decided solely by the module leader: I feel it would not be a good idea to let student to nominate the film, it is possible we only nominate those we have already watched to save time, if you could just let us vote for the list that offer by module leader, cause I really look forward to some recommendations of new types of film (even those films that are only for education purposes). For Participant 5, this privileging of the teacher's role was a matter of trust in their expertise: It is essential for the module leader to participate in the nomination in my point of view, because she is more professional in this field and she may have a more comprehensive understanding for films than students. Students can trust her aesthetic cognition of films. A related subgroup of questions concerned other areas of the module where student choice and decision making could be introduced and encouraged. The students were generally reserved and even sceptical about this possibility, once again downplaying or downright dismissing the value of the student contribution. For instance, Participant 8 commented that 'Choosing film is good but if students have too many rights to choose, too many things, maybe it is not good for their study.' Asked why, the student answered: 'Because I think students do not know many things about courses, modules. Maybe something they think is good is not really good.' When provided with an example of other areas of a module in which students could get involved (co-designing the assessment), the student added: I don't want to participate in this. I am not a person who is very critical thinking. I just , I'm too lazy. I just want our teacher to give me this assessment and I do this. 
In a move that was not uncommon in the interviews I gathered, Participant 3 dissociated her reactions and her level of involvement from those of her classmates, when she welcomed the opportunity to have more say in the module, allowing for fluctuations of disposition and interest, but expressed doubts about the overall efficacy of this strategy when applied to the larger student cohort: Personally, I'm glad that I have the chance to make some decisions in a module. However, it often depends on the topic and my mental and physical status. Nevertheless, I do think that most of the students do not know what they want to learn from the module. Therefore, offering too many choices can be really annoying sometimes, especially during seminars. Participant 1 was the most positive about the prospect of having options in a module and a chance to partake in the decision making, noting that: having options enables me to have a sense of learning and being alerted, I mean I can have a substantial feeling that I am doing something meaningful or relevant to my course, instead of being at sea. However, she also made sure to add that she liked to be involved in decision making, but only on the condition that 'the teacher circumscribes the agenda setting'. Interestingly enough, this privileging of the teacher's role was sometimes accompanied by a concern with her workload. For instance, Participant 8 was in favour of keeping the set number of weeks open for voting, saying: 'three is okay, if it is too many, maybe it is not easy for you to prepare the lecture'. In other words, the flexibility shown by the teacher, and her willingness to incorporate student choices in the syllabus, were seen as generating more work and causing difficulty. This sentiment was echoed by Participant 9 who, addressing the interviewer directly, remarked that in the process of nomination and voting, 'students don't have a lot of trouble, but you have a lot of trouble'. At least some of the student participants suggested improvements to the process, partly to alleviate this perceived burden or strain on the module leader. For instance, when Participant 3 suggested using a 'statistical system' in the virtual environment rather than the discussion forum, she added that this would be 'a good way to lessen teacher's workload'. She also recommended (similarly to Participant 6) that voting happen not independently and privately in the virtual environment, but in class, either during the break or before the start of teaching activities, so that students could discuss the list with their classmates and provide instant feedback to the teacher. All student participants were forthcoming with recommendations about how the process could be improved. For Participant 7, what would have potentially increased the number of students participating in both nominating and voting was incentivising them through a reward system, such as an electronic badge for each nomination made and/or vote cast. Participant 2 was adamant that what would have helped was using the WeChat/Weixin polls functionality, rather than the virtual learning environment. Voting through WeChat, China's most popular messaging app, was also endorsed by Participant 9. Additionally, Participants 6, 7 and 8 expressed a desire to see the process streamlined by having students click on film titles rather than typing them in a forum, with an instant display of the final results at the end, rather than waiting for the module leader to tabulate the scores and announce the winning titles. 
Regarding the timing of the nomination and voting, several participants suggested that these processes should take place earlier in the semester, in its first half or around mid-term. The reasons given were, as expected, the assessment deadline pressure specific to the end of the semester and the lack of time to invest in the voting procedure, with Participant 1 reporting a 'hurry feeling'. Consisting of 30 and 16 titles respectively, the length of the nomination lists was deemed either fine/appropriate (Participants 3 and 7) or too long (Participants 1, 2, 8). Only one participant tentatively suggested that the lists should be longer. When the nomination lists were judged too long, it was sometimes in correlation to an avowed lack of patience in carefully reading through the lists and researching the films in order to make an informed decision. For instance, Participant 1 remarked: I needed to search for some films that I hadn't watched in the lists to have a basic understanding, which was essential for me before voting. I felt that sometimes my patience was worn out and I would skip some. This was similar to the experience recounted by Participant 8, who said: 'When I see the list, I only see half and I have no patience to see the last part.' To the follow-up question 'So you chose films from the first half of the lists?', the participant replied in the affirmative. These observations about the nomination lists being too long were typically followed by the suggestion to add short descriptions of the films or a justification of their selection to speed up the process of deliberation preceding the actual vote. For instance, Participant 7 recommended to 'add some reasons to nominate this kind of film' or to clarify the 'benefit from this film'. Occasionally, a problem was identified but no solution proposed, such as when Participant 5 remarked that 'After the nominations are carried out, the learning aim for each week was not as clear as usual.' Asked directly whether the nomination and voting system was a good idea and whether it made a difference to their experience of the module, most interviewed students concurred and made appreciative statements. Participants 3 and 8 framed the intervention as a manifestation of the respect shown by the module leader to the students, with Participant 3 adding: It helps the communication between the students and teachers. It offers students a sense of their opinions are valuable, which is actually something that a lot of students (especially Chinese) need. Different affects came into play in the students' experience of the nomination and voting, with Participant 1 noting, for instance, that she felt 'sad' when the films she voted for did not win, and Participant 2 saying, counter-intuitively, that he loved documentaries so much that he abstained from nominating any because he would have really 'cared' if they won or not. Intriguingly, after recounting this vivid reaction, Participant 2 also noted that the whole process 'really doesn't matter, because the chosen films for teaching are adequate', with a similar neutral stance being voiced by Participant 7, who stated that it was not 'necessary'. Participant 5 saw the benefits mostly from the perspective of the teacher, rather than from that of the students, when remarking: 'It is a good idea since you will have a chance to see what your students prefer and find their interests, which is helpful for teaching and module structuring.' 
One of the things that participant observation added to the insights derived from the interview process was to notice a decrease in student involvement from Week 9 to Week 13, which is consistent with the students' recommendation of an earlier timing for the intervention. Analysis One of the demonstrable gains of this process of co-opting students in syllabus design through film nomination and voting, acknowledged in the specialised literature as a desirable outcome, is the development of 'meta-cognitive awareness' (: 112). As some of the interview quotations have illustrated, the nature of the intervention encouraged students to think of the time and effort required for the design, planning and support of teaching activities, and for the use of technology-assisted approaches. The opportunity to shape the subject material also enabled students to actively reflect on how they learnt, how much choice they wanted, and how they interacted with their peers and their tutor in the process of learning. From the teacher's perspective, this process of meta-cognitive awareness was already pronounced, as a function of the specific transnational and intercultural context in which this project was carried out. As Betty Leask (2006: 1) has stated, 'transnational programs are complex sites of intercultural engagement' and are 'based on institutional contractual arrangements which are in themselves sites of intercultural interaction'. In addition to these contractual arrangements, there is also an 'unwritten contract' (Leask, 2006: 1) that students have with their transnational educational provider, which materialises in expectations that teachers be not only knowledgeable in their field and skilled 'managers of the learning environment', but also 'efficient intercultural learners' (Leask, 2006: 6). For Kam Louie (2005: 17), this requirement that teachers become adept at intercultural engagement relies upon them developing and cultivating a mindset which he describes as 'meta-cultural sensitivity and awareness'. This is necessary because 'being the more powerful partner in the teacher-student relationship, the cultural baggage carried by the teachers has a much more dominant effect than that carried by the students' (Louie, 2005: 23). Jude Carroll in many ways complements the work done by Louie, in that she also emphasises the importance of teachers' self-awareness when dealing with students from a different culture and educational system. This self-awareness can be undermined by teachers' tendency to 'not see themselves as carriers of culture' and to consequently take for granted things that they would be better served to make explicit, such as 'the appropriate way for students to interact with teachers' (Carroll, 2005: 27). As an antidote to this tendency, Carroll (2005: 29) recommends not only providing clear instructions at each step of the way, but also a committed effort of 'moving beyond spontaneous first reactions to identify what you were assuming would happen', searching, in other words, for hidden assumptions about what constitutes 'normal' student behaviour. This constant vigilance for things that might require explicit explanation is imperative in transnational settings, where cultural miscommunication can be exacerbated by the language difficulties that students who are non-native English speakers regularly experience. In terms of the cultural assumptions I brought to the process of voting, I was careful not to do things that could be perceived as interfering in any way with the results.
For instance, I did not introduce the nominated films to the students beyond providing basic information such as title, year of release and name of director, because I thought there was a potential for a particular choice of words to sway them in favour of certain films. However, based on the interviews, it seems the students would have welcomed such an interference and in fact felt the need for it. Moreover, some of them recommended that the process of voting be reconceived as a communal experience in which they would get to discuss the nominations with their peers in class before casting their vote. These responses made me question my own preconceptions about how voting should take place, and inspired me to consider adapting it in the future in the manner suggested by the students. In terms of Carroll's call for becoming more explicit, there was once again a positive takeaway from the interviews, in that they revealed the students' need to have their choices more clearly integrated into the overall conceptual structure of the module. Although I take responsibility for this unmet need, it is also possible that the student who commented that during the weeks when the films voted for by the students were discussed, the learning aims were less clear than before might have reacted from an unjustified fear that student participation would somehow jeopardise the module structure, as the plan for those weeks did not fully originate from the teacher. The findings of this article could be examined further in light of the project's specific transnational framework. Although some international students participated in the process of nomination and voting, none of them volunteered to be interviewed; therefore, the findings exclusively reflect the views and experience of the Chinese students, who constituted the module's overwhelming majority. Therefore, the intercultural dynamic of the project mainly played out between myself, an academic educated in the UK and using British standards of teaching and assessment, and my Chinese students, speaking for and from within their own culture. While there are indisputable cultural variables in learning and teaching, I tend to agree with scholars who argue that paying attention to 'contextual factors' which have an impact on learning is not the same thing as postulating that 'students' approaches to learning are culturally determined' (Richardson and Sun, 2016: 116). In accounting for the difficulties inherent in intercultural teaching, as well as for its opportunities, scholars and practitioners often fall back on the notion of 'cultures of learning' (Cortazzi and Jin, 2013). Lixian Jin and Martin Cortazzi (2006: 9-10) have produced useful insights into aspects of Chinese learners' 'linguistic and educational socialisation', including a culturally specific form of acquisition of literacy that has resulted in a common learning cycle in Chinese education being 'demonstration-mimesis-practice-performance', with the classroom interaction very much teacher-centred. The comments made by the participants in the interviews I conducted would seem to confirm at the very least a level of ease with teacher-centredness, which is likely to be a legacy of the students' Chinese secondary education. 
Another finding of this research is that students' perception of the voting and nomination process aligned well with my intentions of embedding into this pedagogical intervention, and emphasising through it and beyond, a key professional value, namely, respect for individual learners. This value holds a central place in the 'students as partners' model, alongside reciprocity and shared responsibility, and it is generally understood as a mixture of openness to other perspectives and a 'withholding of judgment' (: 2). Coupled with the inbuilt inclusivity of the project, this has allowed for a more democratic dynamic between students and the teacher to emerge, one that interestingly triggered mixed reactions. The partnership model in higher education was designed to challenge a situation in which students are treated by teachers as 'the people we teach to, not the people we are in class with', and to replace it with a relationship that encourages 'curiosity and common inquiry' (: 10). However, although the intervention discussed in this article attempted to disrupt the rigid hierarchy of higher education, the hierarchical boundaries were often re-established and reinforced discursively by the students in some of their comments quoted in the previous section. Paradoxically, this could be seen as an unconscious move to revert to more traditional, and thus 'safer', modes of interaction, by asserting and justifying the power imbalance between the students and the teacher. There are various ways in which this move can be conceptualised. It could be seen as a manifestation of lingering support for what Mano Singham has called 'the authoritarian classroom'. Pointing to instances of 'legalistic' wording frequently used in syllabuses, to a language of 'edicts' and orders, Singham (2005: 52) remarks that 'students don't seem to be offended by being ordered in course syllabi' and goes on to interpret the 'authoritarian syllabus' as 'just the visible symptom of a deeper underlying problem, the breakdown of trust in the student-teacher relationship'. In his discussion of capitalist realism, without explicitly referencing Singham, Mark Fisher (2009: 30) continued this train of thought, by vividly describing the impossible position that teachers are asked to occupy as somewhere in between 'being facilitator-entertainers and disciplinarian-authoritarians'. The irony for Fisher (2009: 30) resided in the fact that teachers are still 'interpellated by students as authority figures', despite the fact that 'disciplinary structures are breaking down in institutions'. A less pessimistic take would be that, as Philippa Levy, Sabine Little and Natalie Whelan (: 3) have noted, there is still 'considerable tension between the ideal of partnership and the effects of consumerist discourse and academic hierarchy', and this tension is by no means an easy fix or something that can be dealt with appropriately in the context of any one course, but rather something, one could argue, for the mitigation of which constant efforts have to be made. Given these persistent challenges, the action research project discussed in this article, and the lessons learnt from it, can perhaps best be understood using James Lang's notion of 'small teaching'. Lang's inspiration for this term is a sports analogy ('small ball'), referring to a style of tactical play in baseball characterised by incremental, methodical advances. 
In coining the term 'small teaching', Lang (2016: 26) started from the following realisation, to which many teachers can relate: 'As much as I frequently felt the urge to shake up my teaching practices with radical new innovations, I mostly didn't.' He proposed instead an approach that emphasises the value and 'potency of small shifts' (Lang, 2016: 27). This is how one might also think of this project, as a pedagogical intervention meant to unlock 'small' ideas for how to gradually and meaningfully improve. Conclusion This action research project adds to the literature on engaging students as partners, and provides analysis of an example of student-teacher co-creation of an element of syllabus design (the film choice) in a discipline (film studies), in which this topic does not seem to have inspired much discussion. The pedagogical intervention analysed in this article has revealed that while students are generally aware of the positive connotations and implications of entering a partnership with the teacher, they maintain a clearer allegiance to a model where they defer to the teacher's judgement and decisions, and do not fully invest in collaborative alternatives. The latter could be due, in part at least, to the procedural shortcomings inherent in any first trial. Many of the suggestions for improvement made by the students, primarily related to convenience of use and assistance with the deliberation involved in the process of voting for films, are not only valid but also entirely actionable. The number of students voting at the beginning of the project was higher than at its end and, based on this observed waning of interest, it would seem advisable -if the experiment were to be repeated at the same institution with similarly large student cohorts -to concentrate on increasing the number of participating students through a more carefully guided set-up, while reducing the number of weeks in which the films are decided collaboratively. The intervention discussed in this article illustrates the dual understanding of student engagement as 'a process' and, at the same time, as 'an outcome' (Dunne and Owen, 2013: 2), as it was something that I did with the students in order to boost their participation in classroom activities. Key areas that could be explored further in future cycles of this project include devising ways of counteracting the students' tendency to undervalue and even devalue their and their peers' views in the process of collaboration with the teacher, organising and framing such initiatives in a manner that ends up galvanising participation to a larger extent than has been the case in the initial iteration of this experiment, and exploring ways of harnessing motivations to greater engagement.
Comparison Analysis of Data Augmentation using Bootstrap, GANs and Autoencoder When observations are scarce, data augmentation is a well-known technique for improving predictive accuracy: it enlarges the training set by generating new samples, thereby avoiding the cost of collecting additional data. This paper presents a comparative analysis of three data augmentation methods, namely the Bootstrap, Generative Adversarial Networks (GANs) and Autoencoders, for increasing the number of samples. The comparison is applied to eight binary classification datasets from public data repositories. The evaluation proceeds in three steps: first, new samples are generated with each data augmentation method; second, the generated samples are combined with the original data; finally, predictive performance is validated on four classifier models. The experimental results showed that augmenting the data with Autoencoders and GANs achieved better predictive performance than training on the original data alone, whereas augmenting with the Bootstrap method gave the lowest predictive performance.
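The three approaches differ mainly in how the synthetic samples are produced. The sketch below illustrates only the simplest of them, bootstrap resampling, on a made-up tabular dataset; the data, sizes and classifier are placeholders rather than the datasets or models used in the paper, and the GAN and Autoencoder variants would replace the resampling step with samples drawn from a trained generative model.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def bootstrap_augment(X, y, n_new, rng):
    """Draw n_new rows (with replacement) from (X, y) as synthetic samples."""
    idx = rng.integers(0, len(X), size=n_new)
    return X[idx], y[idx]

# Placeholder data: a small synthetic binary classification problem.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Step 1: generate new samples; Step 2: combine them with the original training data.
X_new, y_new = bootstrap_augment(X_tr, y_tr, n_new=len(X_tr), rng=rng)
X_aug = np.vstack([X_tr, X_new])
y_aug = np.concatenate([y_tr, y_new])

# Step 3: validate on held-out data with one of several candidate classifiers.
clf = RandomForestClassifier(random_state=0).fit(X_aug, y_aug)
print("accuracy with bootstrap augmentation:", accuracy_score(y_te, clf.predict(X_te)))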
Systematic characterization of human prostatic fluid proteins with two-dimensional electrophoresis. We present a systematic analysis of human prostatic fluid with two-dimensional gel electrophoresis (the ISO-DALT system) and a characterization of normal and disease-related protein patterns. A reference map for prostatic fluid proteins was established by analysis of pooled prostatic fluids from 80 men (age less than or equal to 50 years) without prostatic lesions. Proteins in prostatic fluid that share immunogenicity with serum proteins were identified by use of antibody to whole human-serum protein in an affinity-column fractionation of a reference pool and differential analysis of the absorbed (serum components) and unabsorbed (non-serum components) fractions. Individual prostatic fluids from 30 patients (eight with prostatic cancer, 10 with prostatitis and benign prostatic hyperplasia, six with benign prostatic hyperplasia alone, and six with asymptomatic chronic prostatitis) were scored qualitatively with respect to the presence or absence of 57 major prostatic fluid proteins. Statistically significant, disease-correlated alterations were observed for at least eight of the proteins so scored.
The present invention relates to an ink sheet cartridge, and to an exchangeable ink sheet set which can be attached to the ink sheet cartridge. In general, a thermal printer employs an ink ribbon cartridge, which eases replacement of the ink ribbon and thus handling of the thermal printer. When the thermal printer is configured as a line printer, a wide ink sheet is used. Japanese Patent Provisional Publication No. 2001-277627 discloses an ink sheet cartridge having such a wide ink sheet. The ink sheet cartridge disclosed in this publication includes a supply core tube, a take-up core tube, and a cartridge body. Spools are attached to both ends of the supply core tube and the take-up core tube so that each of the core tubes is rotatably attached via the spools to right and left side walls of the cartridge body. In the ink sheet cartridge, an intermediate connector is interposed between one of the spools and an end portion of the supply core tube. FIG. 18 is a schematic front view of the intermediate connector 103 and an end portion of the supply core tube 101. As shown in FIG. 18, the intermediate connector 103 has a resilient pawl 104 which engages with a mating groove 102 formed at the end portion of the core tube 101. The resilient pawl 104 has an arm portion 104a and a pawl portion 104b. The resilient pawl 104 has the form of a letter “L” and extends from a maximum diameter portion 103a of the intermediate connector 103 in an axial direction of the core tube 101. When the intermediate connector 103 is inserted into the end portion of the core tube 101, the resilient pawl 104 is also inserted into the mating groove 102. Then, the resilient pawl 104 engages with the mating groove 102. The above-mentioned configuration shown in FIG. 18 requires the intermediate connector 103 to connect the spool (not shown) to the core tube 101. Therefore, the configuration prevents a non-regular ink sheet (e.g., one provided by a non-regular vendor, different from the regular vendor of the ink sheet cartridge having the configuration shown in FIG. 18) from being erroneously attached to the cartridge body (not shown in FIG. 18).
Ray method for unstable resonators. The previously developed ray-optical method for unstable, symmetric, bare resonators with sharp-edged stop and circular mirrors is reviewed here. A deductive stepwise procedure is presented, with emphasis on the physical implications. It is shown how the method can accommodate other edge configurations such as those produced by rounding, and also more complicated nonaxial structures such as the half-symmetric resonator with internal axicon. For the latter, the ray approach categorizes those rays that must be eliminated from the equivalent aligned unfolded symmetric resonator, and it identifies the canonical diffraction problems that must be addressed to account for shadowing and scattering due to the axicon tip. Effects due to shielding or truncation of the axicon tip are also considered. Approximate calculations of the eigenvalues for the lowest-loss modes illustrate the effects due to various tip shielding lengths and spacings of the axicon from the output mirror.
Indeed. Rather than a bold stride into the vanguard of the battle against climate change, the new proposals from the E.P.A. offer just enough progress to shuffle along with a world that unfailingly falls short of delivering what is needed. The models reviewed by the I.P.C.C. suggest that to make it “as likely as not” that global temperatures remain below the 2-degree threshold throughout this century, we may need to cut greenhouse gas emissions by more than half by 2050 — only 36 years from now — and much more after that. The problem is, we haven’t even started. In the first decade of this century, emissions actually grew at twice the pace of the preceding three decades, fueled mostly by China and its vast appetite for coal. And even if every country, including the United States, were to deliver on the pledges made in Cancún, the world’s greenhouse gas emissions in 2020 would still be greater than in 2005. This is not to discount the political boldness of the Obama administration’s proposal. The mining and energy companies that stand to lose money from the rules will almost certainly take them to court. They will be assailed mercilessly in Congress, where energy corporations have good friends and where a big chunk of the Republican Party does not believe in the science of climate change. It is not to minimize the diplomatic obstacles to a deal on how to share the burden. It is certainly not to minimize the technological challenge. The most efficient mitigation paths evaluated by the I.P.C.C. rely on the deployment of technologies that don’t really exist yet on the scale needed. For instance, in the absence of large-scale carbon capture and storage, the economic cost of staying below the 2-degree limit would more than double. But the Earth doesn’t care about any of this. What makes all this dithering so agonizing is that staying under the 2-degree ceiling would be surprisingly affordable, if the world started now to make reasonable emission cuts, while investing in developing the future technologies that could largely replace fossil fuels by the end of the century. According to the I.P.C.C report, the effort would slow the growth in the world’s annual consumption by no more than 0.14 percentage points, a tiny portion of the typical 3 to 4 percent annual rise in global output. Even Professor Jackson’s tough calculus may be easier to solve than he expected. Based on the most recent I.P.C.C. report, the decline in carbon intensity required to achieve his egalitarian growth target by midcentury without breaching the 2-degree ceiling could be less than half of what he estimated. And yet to make progress one must first take the road. Baby steps will not take the world where we need to go.
Federal Government's $10b IT bill now rivalling Newstart Allowance welfare spend

The Federal Government is now spending as much on information technology projects in the public service as it is on its major social welfare program, the Newstart Allowance.

Key points:
- Cost of Government IT jumps to nearly $10 billion, rivalling the amount spent on the Newstart Allowance
- Labor is leading a Senate inquiry following several IT bungles
- The Government will be forced to cut some of its largest contracts

While welfare programs are being consolidated and plans to drug test recipients are announced, the cost of Government IT has jumped to nearly $10 billion. The spiralling costs — up from $5.9 billion in 2012-13 — have not always resulted in better outcomes for the public, with the tax office and the Australian Bureau of Statistics facing embarrassing IT bungles. That has prompted a Labor-led Senate inquiry into mismanagement and waste, data leaks, privacy breaches and a series of website outages.

The Digital Transformation Agency — once seen as Prime Minister Malcolm Turnbull's pet project — has released a new report containing a scathing review of the bureaucrats managing the projects. It found public servants were too afraid to make major changes to IT procurement and were not talking with other departments to avoid duplication. "A fear of external scrutiny of decisions — such as through Senate estimates and audits — leads to a low-risk appetite and a culture where it is 'not OK to fail'," the report said. "This means that old and familiar ICT solutions are preferred to newer and more innovative, but perceivably riskier, solutions."

The Government committed to spend $9 billion on IT in 2015-16 — including software and customer service websites — and another $1.4 billion on staff. It expects to spend $9.6 billion on the Newstart Allowance this financial year. To save money and boost competition, Assistant Minister for Digital Transformation Angus Taylor has capped future IT contracts at $100 million, or three-year terms. That means the Government will not be able to continue some of its largest contracts, such as the $484 million paid to IBM by the Department of Human Services (DHS) over four years. By capping the contracts, the Government hopes to secure better value for money from smaller companies, which may provide better outcomes. Three major companies — Boeing, IBM and Telstra — were awarded 24 per cent of the overall IT budget in 2015-16, with many contracts exceeding $100 million.

Mr Taylor said the culture of the public service needs to change to reward "entrepreneurial spirit". He said there had been "substantial cuts" to IT budgets before the Coalition was elected, and that spending was then increased to "get our systems back going again". "We don't need to keep them at the level they've been in the last year or two, we know that there's potential to bring those down," Mr Taylor said. "And a lot of this is actually through using smaller service providers, mostly local, to be able to do that."

'A symptom of a broader problem'

Shadow minister for the digital economy Ed Husic said the Government was trying to blame public servants for the increasing costs, rather than taking leadership. Mr Husic, who led Labor's calls for the Senate inquiry, said the Government did not have enough "digital literacy" to turn the problem around. "You can see the scale of this when you measure the cost of the digital spending against what is being spent on Newstart payments," he told the ABC. 
"The ATO website has been crashing repeatedly over the course of the last six to eight months, and no senior minister has thought it important to step forward and explain what has gone wrong. "This is a symptom of a broader problem in Government." Topics: federal-government, government-and-politics, budget, community-and-society, welfare, turnbull-malcolm, australia First posted
Comment on: Treatment strategies for clozapine-induced hypotension: A systematic review

We read with interest the systematic review by Tanzer et al. 1 where they summarize 13 case reports and case series of patients with hypotension secondary to clozapine use. The authors should be commended for their efforts in seeking answers for this challenging adverse effect of clozapine. Although recommendations were provided for the management of orthostasis in ambulatory patients in this context, there was minimal attention to patients presenting with profound hypotension and shock secondary to clozapine. Vasodilatory shock is a form of circulatory failure in which there is a global inability to effectively deliver oxygen to tissues and to utilize oxygen at the cellular level, due to profound arterial vasodilation. 2 Vasodilatory shock is a medical emergency that warrants prompt intervention to restore arterial blood pressure and to reverse the inciting cause. The syndrome is highly morbid, and death rates can be up to 80% depending on the underlying cause. 3 In their systematic review, five of the studies identified by Tanzer et al. were of patients with refractory shock secondary to maintenance clozapine use or massive ingestion. What is very concerning about these cases, and deserving of more attention, are the characteristics of the shock state, specifically a catecholamine vasopressor load that suggests a general lack of response to adrenoreceptor agonists (Table 1). Unresponsiveness to high dosages of catecholamine vasopressors has consistently been demonstrated to be a poor prognosticator in refractory vasodilatory shock. Specifically, when norepinephrine dosing rates exceed 1 µg/kg/min or 100 µg/min, mortality rates are in excess of 80-90%. 9,10 Indeed, the cases summarized by Tanzer et al. required such high doses of catecholamines (Table 1), and in most cases necessitated alternative agents for rescue from life-threatening hypotension. Requirement of high dosages of catecholamines is problematic, because it signals the presence of (a) an uncorrected source that vasopressors cannot fix and (b) prolonged hypotension leading to organ ischemia and multiple organ failure. Further, this excessive adrenoreceptor stimulation from escalating dosages increases the opportunity for (c) malignant cardiac arrhythmias, and (d) tissue and organ ischemia. 2,3 Because clozapine is a potent alpha-adrenergic antagonist, saturation of these receptors results in an environment where catecholamines are unable to engage adrenoreceptors to produce their vasoconstrictive effects, leaving patients at high risk of experiencing the latter three consequences described above. Therefore, in the case of clozapine ingestion, providing catecholamines and increasing them to toxic dosages without a beneficial hemodynamic effect is not a useful strategy for managing these patients. So as not to delay restoration of perfusing pressures, we would caution against reliance on catecholamines in this setting. Tanzer et al. 
have partially suggested this approach with the avoidance of epinephrine due to paradoxical hypotension (also known as the 'reverse epinephrine effect'). However, careful observation for an atypical response to norepinephrine is warranted. If an atypical response is noted, immediate application of non-catecholamine vasopressors, such as vasopressin or angiotensin II, should be deployed. All clinicians should be aware that clozapine is a more potent alpha-antagonist compared with other antipsychotics. The risk of hypotension related to this mechanism is relevant to all fields of medicine, in all levels of care. However, there may be gaps of knowledge as it relates to management of shock secondary to clozapine use. Attention to this issue in the United States may have waned following removal of the reverse epinephrine effect
Error Analysis of Finite Element Methods for Space-Fractional Parabolic Equations

We consider an initial/boundary value problem for one-dimensional fractional-order parabolic equations with a space fractional derivative of Riemann-Liouville type and order $\alpha \in (1,2)$. We study a spatial semidiscrete scheme with the standard Galerkin finite element method with piecewise linear finite elements, as well as fully discrete schemes based on the backward Euler method and the Crank-Nicolson method. Error estimates in the $L^2(0,1)$- and $H^{\alpha/2}(0,1)$-norm are derived for the semidiscrete scheme, and in the $L^2(0,1)$-norm for the fully discrete schemes. These estimates cover both smooth and nonsmooth initial data, and are expressed directly in terms of the smoothness of the initial data. Extensive numerical results are presented to illustrate the theoretical results.

1. Introduction. We consider the following initial/boundary value problem for a space fractional-order parabolic differential equation (FPDE) for $u(x,t)$:
$$\partial_t u - {}^R_0 D^\alpha_x u = f \quad \text{in } D \times (0,T],$$
subject to homogeneous Dirichlet boundary conditions and the initial condition $u(\cdot,0) = v$, where $\alpha \in (1,2)$ is the order of the derivative, $f \in L^2(0,T; L^2(D))$, ${}^R_0 D^\alpha_x u$ refers to the Riemann-Liouville fractional derivative of order $\alpha$, defined in (2.1) below, and $T > 0$ is fixed. In the case $\alpha = 2$, the fractional derivative ${}^R_0 D^\alpha_x u$ coincides with the usual second-order derivative $u''$, and then model (1.1) recovers the classical diffusion equation. The classical diffusion equation is often used to describe diffusion processes. The use of a Laplace operator in the equation rests on a Brownian motion assumption on the random motion of individual particles. However, over the last few decades, a number of studies have shown that anomalous diffusion, in which the mean square variance grows faster (superdiffusion) or slower (subdiffusion) than in a Gaussian process, offers a superior fit to experimental data observed in some processes, e.g., viscoelastic materials, soil contamination, and underground water flow. In particular, at a microscopic level, the particle motion might be dependent, and can frequently take very large steps, following some heavy-tailed probability distribution. The long-range correlation and large jumps can cause the underlying stochastic process to deviate significantly from Brownian motion for the classical diffusion process. Instead, a Levy process is considered to be more appropriate. The macroscopic counterpart is space fractional diffusion equations (SpFDEs) of the form (1.1), and we refer to the literature for the derivation and relevant physical explanations. Numerous experimental studies have shown that SpFDEs can provide an accurate description of the superdiffusion process. Because of the extraordinary modeling capability of SpFDEs, their accurate numerical solution has become an important task. A number of numerical methods, prominently the finite difference method, have been developed for the time-dependent superdiffusion process in the literature. The finite difference scheme is usually based on a shifted Grünwald formula for the Riemann-Liouville fractional derivative in space. The stability, consistency and convergence of the finite difference scheme with the Crank-Nicolson scheme in time have been established. In these works, the convergence rates are provided under the a priori assumption that the solution $u$ to (1.1) is sufficiently smooth, which unfortunately is not justified in general, cf. Theorem 3.2. In this work, we develop a finite element method for (1.1). 
It is based on the variational formulation of the space-fractional boundary value problem developed and recently revisited in the literature. We establish $L^2(D)$- and $H^{\alpha/2}(D)$-norm error estimates for the space semidiscrete scheme, and $L^2(D)$-norm estimates for the fully discrete schemes, using analytic semigroup theory. Specifically, we obtain the following results. First, in Theorem 3.1 we establish the existence and uniqueness of a weak solution $u \in L^2(0,T; \widetilde H^{\alpha/2}(D))$ of (1.1) (see Section 2 for the definitions of the function spaces and the operator $A$), and in Theorem 3.2 we show an enhanced regularity $u \in C((0,T]; \widetilde H^{\alpha-1+\epsilon}_L(D))$ with $\epsilon \in [0,1/2)$, for $v \in L^2(D)$. Second, in Theorems 4.2 and 4.1 we show that the semidiscrete finite element solution $u_h(t)$, with a suitable discrete initial value $u_h(0)$, satisfies an a priori error bound in the $L^2(D)$- and $H^{\alpha/2}(D)$-norms ($l = 0, 1$), with $h$ being the mesh size and any $\epsilon \in [0, 1/2)$. Further, we derive error estimates for the fully discrete solution $U^n$, with $\tau$ being the time step size and $t_n = n\tau$, for the backward Euler method and the Crank-Nicolson method. For the backward Euler method, in Theorems 5.1 and 5.2, we establish error estimates for smooth and nonsmooth data, and for the Crank-Nicolson method, in Theorems 5.3 and 5.4, we prove analogous bounds. These error estimates cover both smooth and nonsmooth initial data, and the bounds are expressed directly in terms of the initial data $v$. The case of nonsmooth initial data is especially interesting in inverse problems and optimal control. The rest of the paper is organized as follows. In Section 2, we introduce preliminaries on fractional derivatives and related continuous and discrete variational formulations. Then in Section 3, we discuss the existence and uniqueness of a weak solution to (1.1) using a Galerkin procedure, and show the regularity pickup by semigroup theory. Further, the properties of the discrete semigroup $E_h(t)$ are discussed. The error analysis for the semidiscrete scheme is carried out in Section 4, and that for fully discrete schemes based on the backward Euler method and the Crank-Nicolson method is provided in Section 5. Numerical results for smooth and nonsmooth initial data are presented in Section 6. Throughout, we use the notation $c$ and $C$, with or without a subscript, to denote a generic constant which may change at different occurrences, but is always independent of the solution $u$, time $t$, mesh size $h$ and time step size $\tau$.

2. Fractional derivatives and variational formulation. In this part, we describe fundamentals of fractional calculus and the variational problem for the source problem with a Riemann-Liouville fractional derivative, and discuss the finite element discretization.

2.1. Fractional derivatives. We first briefly recall the Riemann-Liouville fractional derivative. For any positive non-integer real number $\beta$ with $n-1 < \beta < n$, $n \in \mathbb{N}$, the left-sided Riemann-Liouville fractional derivative ${}^R_0 D^\beta_x u$ of order $\beta$ of a function $u \in C^n$ is defined in (2.1) through the left-sided Riemann-Liouville fractional integral operator ${}_0 I^\gamma_x$ of order $\gamma > 0$; the right-sided versions of the fractional-order integral and derivative are defined analogously. Now we introduce some function spaces. For any $\gamma \ge 0$, we denote by $H^\gamma(D)$ the Sobolev space of order $\gamma$ on the unit interval $D = (0,1)$, and by $\widetilde H^\gamma(D)$ the set of functions in $H^\gamma(D)$ whose extension by zero to $\mathbb{R}$ belongs to $H^\gamma(\mathbb{R})$. Analogously, we define $\widetilde H^\gamma_L(D)$ (respectively, $\widetilde H^\gamma_R(D)$) to be the set of functions $u$ whose extension by zero is in $H^\gamma(-\infty, 1)$ (respectively, $H^\gamma(0, \infty)$). 
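For reference, the standard definitions of the left-sided Riemann-Liouville fractional integral and derivative (the usual convention for these operators, stated here for $\gamma > 0$ and $n-1 < \beta < n$) are
\[
{}_0 I^{\gamma}_x u(x) = \frac{1}{\Gamma(\gamma)} \int_0^x (x-s)^{\gamma-1} u(s)\, ds,
\qquad
{}^R_0 D^{\beta}_x u(x) = \frac{d^n}{dx^n} \bigl( {}_0 I^{\,n-\beta}_x u \bigr)(x),
\]
and the right-sided analogues integrate from $x$ to $1$ and carry the factor $(-1)^n$ in front of the $n$-th order derivative.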
The proper variational formulation is given by: find $u \in U \equiv \widetilde H^{\alpha/2}(D)$ such that $A(u, \phi) = (f, \phi)$ for all $\phi \in U$, where $A(\cdot,\cdot)$ is the sesquilinear form associated with the fractional derivative. It is known that the sesquilinear form $A(\cdot,\cdot)$ is coercive on the space $U$, i.e., there is a constant $c_0$ such that for all $\phi \in U$
(2.3) $\Re A(\phi, \phi) \ge c_0 \|\phi\|_U^2$,
where $\Re$ denotes taking the real part, and continuous on $U$, i.e., $|A(\phi, \psi)| \le C \|\phi\|_U \|\psi\|_U$ for all $\phi, \psi \in U$. Then, by the Riesz representation theorem, there exists a unique bounded linear operator $A: U \to U^{*}$ associated with the form $A(\cdot,\cdot)$. The next result shows that the linear operator $A$ is sectorial, which means that the resolvent set $\rho(A)$ contains the sector $\Sigma_\theta = \{z : \theta \le |\arg z| \le \pi\}$ for $\theta \in (0, \pi/2)$, and that $\|(zI - A)^{-1}\| \le M/|z|$ for $z \in \Sigma_\theta$ and some constant $M$. Then we have the following important lemma, for which we sketch a proof for completeness. The next corollary is an immediate consequence of Lemma 2.1. We define the discrete operator $A_h: U_h \to U_h$ by $(A_h \psi, \chi) = A(\psi, \chi)$ for all $\psi, \chi \in U_h$. The lemma below is a direct corollary of properties (2.3) and (2.4) of the bilinear form $A(\cdot,\cdot)$. Next we recall the Ritz projection $R_h: \widetilde H^{\alpha/2}(D) \to U_h$ and the $L^2(D)$-projection $P_h: L^2(D) \to U_h$, respectively defined by $A(R_h u, \chi) = A(u, \chi)$ and $(P_h u, \chi) = (u, \chi)$ for all $\chi \in U_h$. We shall also need the adjoint problem in the error analysis. Similar to (2.5), we define the adjoint operator $A^{*}$ through the adjoint sesquilinear form.

3. Variational formulation of the fractional-order parabolic problem. The variational formulation of problem (1.1) is to find $u(t) \in U$ such that $(\partial_t u, \phi) + A(u, \phi) = (f, \phi)$ for all $\phi \in U$ and almost every $t \in (0,T]$, and $u(0) = v$. We shall establish the well-posedness of the variational formulation (3.1) using a Galerkin procedure, and an enhanced regularity estimate via analytic semigroup theory. Further, the properties of the discrete semigroup are discussed.

3.1. Existence and uniqueness of the weak solution. First we state the existence and uniqueness of a weak solution, following a Galerkin procedure. To this end, we choose a basis $\{\phi_k(x) = \sqrt{2}\,\sin(k\pi x)\}$ that is orthogonal in both $L^2(D)$ and $H^1_0(D)$ and orthonormal in $L^2(D)$. In particular, by the construction, the $L^2(D)$-orthogonal projection operator $P$ onto $\mathrm{span}\{\phi_k\}$ is stable in both $L^2(D)$ and $H^1_0(D)$, and by interpolation, it is also stable in $H^\gamma(D)$ for any $\gamma \in [0,1]$. Now we fix a positive integer $m$, and look for a solution $u_m(t)$ of the form $u_m(t) = \sum_{k=1}^m c_k(t)\,\phi_k$. The existence and uniqueness of $u_m$ follows directly from the standard theory for ordinary differential equation systems. With the finite-dimensional approximation $u_m$ at hand, one can deduce the following existence and uniqueness result. The proof is rather standard, and it is given in Appendix A for completeness.

Theorem 3.1. Let $f \in L^2(0,T; L^2(D))$ and $v \in L^2(D)$. Then there exists a unique weak solution $u \in L^2(0,T; \widetilde H^{\alpha/2}(D))$ of (3.1).

Now we study the regularity of the solution $u$ using semigroup theory. By Corollary 2.1 and classical semigroup theory, the solution $u$ to the initial boundary value problem (1.1) with $f \equiv 0$ can be represented as $u(t) = E(t)v$, where $E(t) = e^{-tA}$ is the semigroup generated by the sectorial operator $A$, cf. Corollary 2.1. Then we have improved regularity. Further, we have the following $L^2(D)$ estimate.

3.2. Properties of the semigroup $E_h(t)$. Let $E_h(t) = e^{-A_h t}$ be the semigroup generated by the operator $A_h$. Then it satisfies a discrete analogue of Lemma 3.1. Proof. It follows directly from Remark 2.2 and Lemma 3.1. Last, we recall the Dunford-Taylor spectral representation of a rational function $r(A_h)$ of the operator $A_h$, when $r(z)$ is bounded in a sector in the right half plane.

4. Error estimates for the semidiscrete Galerkin FEM. In this section, we derive $L^2(D)$- and $H^{\alpha/2}(D)$-norm error estimates for the semidiscrete Galerkin scheme, where $v_h \in U_h$ is an approximation to the initial data $v$. 
We shall discuss the case of smooth and nonsmooth initial data, i.e., $v \in D(A)$ and $v \in L^2(D)$, separately.

4.1. Error estimate for nonsmooth initial data. First we consider nonsmooth initial data, i.e., $v \in L^2(D)$. We follow the approach due to Fujita and Suzuki. First, we have the following important lemma. Here we shall use the constant $\theta_1$ and the contour $\Gamma = \{z : z = \rho e^{\pm i \theta_1},\ \rho \ge 0\}$ defined in the proof of Lemma 2.1, and this completes the proof. The next result gives estimates on the resolvent $R(z;A)v$ and its discrete analogue. Proof. By definition, $w$ and $w_h$ respectively satisfy the continuous and discrete resolvent equations. Upon subtracting these two identities, we obtain an orthogonality relation for $e = w - w_h$. This and Lemma 4.1 imply a bound for any test function in $U_h$. By taking the test function to be the finite element interpolant of $w$ and using the Cauchy-Schwarz inequality, we obtain an estimate for $e$; appealing again to Lemma 4.1 with the choice $\phi = w$, we arrive at a bound involving $w$. It remains to bound the $H^{\alpha-1+\epsilon}(D)$-norm of $w$. To this end, we deduce from (4.6) and (4.5) a bound from which the $H^{\alpha/2}(D)$-norm estimate of the error $e$ follows directly. Next we deduce the $L^2(D)$-norm of the error $e$ by a duality argument: given $\varphi \in L^2(D)$, we define the dual solutions $\psi$ and $\psi_h$, respectively, by the continuous and discrete adjoint problems. Meanwhile, the desired bound follows from (4.4) and (4.7). This completes the proof of the lemma.

Now we can state our first error estimate. Proof. Note that the error $e(t) := u(t) - u_h(t)$ can be represented as a contour integral over $\Gamma = \{z : z = \rho e^{\pm i \theta_1},\ \rho \ge 0\}$, with $w = R(z;A)v$ and $w_h = R(z;A_h)P_h v$. By Lemma 4.2, we have the required resolvent bound, and a similar argument also yields the $L^2(D)$-estimate. It then follows from this and the representation (4.8) that it suffices to bound the integral term. We first bound the integral on one part of the contour, and the same bound is also valid for the integral on the other part. Hence we obtain the $H^{\alpha/2}(D)$-estimate. The $L^2(D)$-estimate follows analogously.

5. Error analysis for the fully discrete scheme. Now we turn to error estimates for fully discrete schemes, obtained with either the backward Euler method or the Crank-Nicolson method in time.

5.1. Backward Euler method. We first consider the backward Euler method for approximating the first-order time derivative: for $n = 1, 2, \ldots, N$,
$$\frac{U^n - U^{n-1}}{\tau} + A_h U^n = 0,$$
with $U^0 = v_h$, which is an approximation of the initial data $v$. Consequently $U^n = (I + \tau A_h)^{-n} v_h$. By the standard energy method, the backward Euler method is unconditionally stable, i.e., for any $n \in \mathbb{N}$, $\|(I + \tau A_h)^{-n}\| \le 1$. To analyze the scheme (5.1), we need the following smoothing property; in its proof we bound the integral on the remaining part of the contour, which completes the proof of the lemma.

Now we derive an error estimate for the fully discrete scheme (5.1) in the case of smooth initial data, i.e., $v \in D(A)$. Proof. Note that the error $e^n = u(t_n) - U^n$ can be split into $e^n = \bigl(u(t_n) - u_h(t_n)\bigr) + \bigl(u_h(t_n) - U^n\bigr) =: \varrho^n + \vartheta^n$, where $u_h$ denotes the semidiscrete Galerkin solution with $v_h = R_h v$. By Theorem 4.2, the term $\varrho^n$ satisfies the semidiscrete error bound in terms of $\|Av\|_{L^2(D)}$. Next we bound the term $\vartheta^n$. Note that for $n \ge 1$, by Lemmas 3.2 and 5.1, the desired result follows from the identity $A_h R_h = P_h A$ and the $L^2(D)$-stability of the projection $P_h$.

Next we give an error estimate for $L^2(D)$ initial data $v$. Proof. As before, we split the error $e^n = u(t_n) - U^n$ into $\varrho^n + \vartheta^n$, where $u_h$ denotes the semidiscrete Galerkin solution with $v_h = P_h v$. In view of Theorem 4.1, it remains to estimate the term $\vartheta^n$. By identity (5.4) and Lemmas 5.1 and 3.2, we obtain the desired bound for $n \ge 1$. This completes the proof of the theorem.

5.2. Crank-Nicolson method. Now we turn to the fully discrete scheme based on the Crank-Nicolson method. 
It reads: for $n = 2, 3, \ldots, N$,
$$\frac{U^n - U^{n-1}}{\tau} + A_h U^{n-1/2} = 0, \qquad \text{where } U^{n-1/2} = \tfrac{1}{2}(U^n + U^{n-1}).$$
Therefore we have (5.6). It can be verified by the energy method that the Crank-Nicolson method is unconditionally stable for any $n \in \mathbb{N}$. Proof. The proof of the general case can be found in the literature; we briefly sketch the proof here. By setting $w = 1/z$, the first inequality follows from the bound for $c \le \cos\theta_1$, and the first estimate now follows by the triangle inequality. Meanwhile, we observe a corresponding bound for the $z$ under consideration. This completes the proof of the lemma.

Now we can state an $L^2(D)$-norm estimate for (5.6) in the case of smooth initial data. Proof. As before, we split the error $e^n$ into $\varrho^n + \vartheta^n$, where $u_h$ denotes the semidiscrete Galerkin solution with $v_h = R_h v$. Then, by Theorem 4.2, the term $\varrho^n$ satisfies the semidiscrete error bound. Note that $A_h$ is also sectorial with the same constants as $A$. With $t_n = n\tau$, it suffices to bound the time-stepping error. By Lemma 3.3, there holds the Dunford-Taylor representation. Since $r_{cn}(z)^n z^{-1} R(z; A_h) = O(z^{-2})$ for large $z$, we can let $R$ tend to $\infty$; passing also to the limit in the small-radius part of the contour, there holds the representation over the sector $\{z : z = \rho e^{\pm i \theta_1},\ \rho \ge 0\}$. By applying Lemma 5.2 with $R = 1$, we deduce (5.7). This completes the proof of the theorem.

Now we turn to the case of nonsmooth initial data, i.e., $v \in L^2(D)$. It is known that, in the case of the standard parabolic equation, the Crank-Nicolson method fails to give an optimal error estimate for such data unconditionally, because of a lack of smoothing property. Hence we employ a damped Crank-Nicolson scheme, which is realized by replacing the first two time steps by the backward Euler method. Further, we denote the corresponding rational function by $r_{dcn}(z)$. The damped Crank-Nicolson scheme is also unconditionally stable, and the function $r_{dcn}(z)$ has the following estimates. Proof. We split the error $e^n = u(t_n) - U^n$ as in (5.5). Since the bound on $\varrho^n$ follows from Theorem 4.1, it remains to bound $\vartheta^n$. Let $F_n(z) = e^{-nz} - r_{dcn}(z)^n$. Then it suffices to show the corresponding bound for $n \ge 1$. The estimate is trivial for $n = 1, 2$ by boundedness. For $n > 2$, we split $F_n(z)$ into two parts. Using the fact that $r_{dcn}(z)^n R(z; A_h) = O(z^{-3})$ as $z \to \infty$, we may let $R \to \infty$. Further, by Lemma 5.3, $F_n(z) R(z; A_h) = O(z)$ as $z \to 0$, and consequently, passing to the limit and setting $\Gamma = \{z : z = \rho e^{\pm i \theta_1},\ \rho \ge 0\}$, we have (5.10). Now we estimate the two terms separately. First, by Lemmas 5.2 and 5.3, and repeating the argument for (5.7), we obtain the bound for $n > 2$. As for the other term, we deduce from (5.9) that the integration path can be changed accordingly; further, we deduce from Lemma 5.3 the corresponding estimate. Thus we derive the desired bound for $n > 2$. This completes the proof of the theorem.

6. Numerical results. In this section, we present numerical experiments to verify our theoretical results. To this end, we consider three examples: (a) with smooth initial data, (b) with nonsmooth initial data, and (c) a more general problem with a potential term. We examine separately the spatial and temporal convergence rates at $t = 1$. For the case of nonsmooth initial data, we are especially interested in the errors for $t$ close to zero, and thus we also present the errors at $t = 0.1$, $0.01$, $0.005$, and $0.001$. The exact solutions to these examples are not available in closed form, and hence we compute the reference solution on a very refined mesh. We measure the accuracy of the numerical approximation $U^n$ by the normalized errors $\|u(t_n) - U^n\|_{L^2(D)}/\|v\|_{L^2(D)}$ and $\|u(t_n) - U^n\|_{H^{\alpha/2}(D)}/\|v\|_{L^2(D)}$. The normalization enables us to observe the behavior of the errors with respect to time in the case of nonsmooth initial data. 
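Before turning to the individual examples, it may help to recall the time-stepping schemes in compact form. Both one-step methods can be written as $U^n = r(\tau A_h)^n v_h$, where $r(z)$ is a rational approximation of $e^{-z}$; a standard way of writing the two choices (presented here for orientation, not as the exact notation of the original) is
\[
r_{be}(z) = \frac{1}{1+z}, \qquad r_{cn}(z) = \frac{1 - z/2}{1 + z/2},
\]
and the damped variant applies the backward Euler factor for the first two steps only, i.e., $U^n = r_{be}(\tau A_h)^2\, r_{cn}(\tau A_h)^{\,n-2}\, v_h$ for $n > 2$.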
To study the rate of convergence in space, we use a time step size $\tau = 10^{-5}$ so that the time discretization error is negligible, and we have the space discretization error only.

6.1. Numerical results for example (a): smooth initial data. In Table 1 we show the errors $\|u(t_n) - U^n\|_{L^2(D)}$ and $\|u(t_n) - U^n\|_{H^{\alpha/2}(D)}$ for the backward Euler method. We have set $\tau = 10^{-5}$, so that the error incurred by the temporal discretization is negligible. In the table, 'ratio' refers to the ratio of the errors when the mesh size $h$ (or the time step size $\tau$) halves, and the numbers in brackets denote the theoretical convergence rates. The numerical results show $O(h^{\alpha - 1/2})$ and $O(h^{\alpha/2 - 1/2})$ convergence rates for the $L^2(D)$- and $H^{\alpha/2}(D)$-norms of the error, respectively. In Fig. 2, we plot the results for $\alpha = 1.5$ at $t = 1$ on a log-log scale. The $H^{\alpha/2}(D)$-norm estimate is fully confirmed, but the $L^2(D)$-norm estimate is suboptimal: the empirical convergence rate is one half order higher than the theoretical one. The suboptimality is attributed to the low regularity of the adjoint solution used in Nitsche's trick. In view of the singularity of the term $x^{\alpha-1}$ in the solution representation, cf. Remark 2.1, the spatial discretization error is concentrated around the origin. Table 1. $L^2(D)$- and $H^{\alpha/2}(D)$-norms of the error for example (a), smooth initial data, with $\alpha = 1.25, 1.5, 1.75$ for the backward Euler method and $\tau = 10^{-5}$; in the last column in brackets is the theoretical rate. In Table 2, we let the spatial step size $h \to 0$ and examine the temporal convergence order, and observe $O(\tau)$ and $O(\tau^2)$ convergence rates for the backward Euler method and the Crank-Nicolson method, respectively. Note that for the case $\alpha = 1.75$, the Crank-Nicolson method fails to achieve the optimal convergence order. This is attributed to the fact that $v$ is not in the domain of the differential operator ${}^R_0 D^\alpha_x$ for $\alpha > 1.5$. In contrast, the damped Crank-Nicolson method yields the desired $O(\tau^2)$ convergence rate, cf. Table 3. This confirms the discussion in Section 5.2.

6.2. Numerical results for nonsmooth initial data: example (b). In Tables 4, 5 and 6, we present numerical results for problem (b1). Table 4 shows the observed spatial convergence rates in the $L^2(D)$- and $H^{\alpha/2}(D)$-norms, whereas Table 5 shows that the temporal convergence order is $O(\tau)$ and $O(\tau^2)$ for the backward Euler method and the damped Crank-Nicolson method, respectively. For the case of nonsmooth initial data, we are interested in the errors for $t$ close to zero, and thus we check the error at $t = 0.1$, $0.01$, $0.005$ and $0.001$. From Table 6, we observe that both the $L^2(D)$-norm and the $H^{\alpha/2}(D)$-norm of the error exhibit superconvergence, which theoretically remains to be established. Numerically, for this example, one observes that the solution is smoother than $\widetilde H^{\alpha-1+\epsilon}_L(D)$ for small time $t$, cf. Fig. 3. Similarly, the numerical results for problem (b2) are presented in Tables 7, 8 and 9; see also Fig. 4 for a plot of the results in Table 9. It is observed that the convergence is slower than that for problem (b1), due to the lower solution regularity.

6.3. Numerical results for general problems: example (c). Our theory easily extends to problems with a potential function $q \in L^\infty(D)$: Gårding's inequality holds for the bilinear form, and thus all theoretical results follow by the same argument. The normalized $L^2(D)$- and $H^{\alpha/2}(D)$-norms of the spatial error are reported in Table 10 at $t = 1$ for $\alpha = 1.25, 1.5$ and $1.75$. The results concur with the preceding convergence rates. 
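Since the tables report the ratio of successive errors as the mesh (or time step) is halved, the empirical convergence rate is simply the base-2 logarithm of that ratio. The short script below shows a computation of this kind; the error values in it are made-up placeholders, not the entries of the tables above.

import numpy as np

# Mesh sizes halved repeatedly and the corresponding (placeholder) L2 errors.
h = np.array([1/8, 1/16, 1/32, 1/64])
err = np.array([2.1e-2, 1.05e-2, 5.3e-3, 2.7e-3])

ratio = err[:-1] / err[1:]   # error ratio when h is halved
rate = np.log2(ratio)        # empirical convergence order
print("ratios:", np.round(ratio, 3))
print("empirical rates:", np.round(rate, 3))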
The numerical experiments fully confirmed the convergence of the numerical schemes, but the $L^2(D)$-norm error estimates are suboptimal: the empirical convergence rates are one-half order higher than the theoretical ones. This suboptimality is attributed to the inefficiency of Nitsche's trick, as a consequence of the low regularity of the adjoint solution. Numerically, we observe that the $H^{\alpha/2}(D)$-norm convergence rates agree well with the theoretical ones. The optimal convergence rates in the $L^2(D)$-norm and the $H^{\alpha/2}(D)$-norm estimates for the fully discrete schemes still await theoretical justification.

Appendix A. Proof of Theorem 3.1. Proof. We divide the proof into four steps.
The Dual-frequency Post-correlation Difference Feature for Detection of Multipath and non-Line-of-Sight Errors in Satellite Navigation Global Navigation Satellite Systems provide users with positioning in outdoor environments, however their performance in urban areas is decreased through errors caused by the reception of signals that are reflected on buildings (multipath and non-line-of-sight errors). To detect these errors, we simulated their influence on the correlator output on two carrier frequencies. Based upon the simulations, a feature using the difference between an ideal correlator output and the received correlator output was developed. This article presents the new Dual-frequency Post-correlation Difference feature and compares its performance with two methods from the bibliography. This new feature outperforms the Code-Minus-Carrier and the signal-to-noise ratio difference features in multipath and non-line-of-sight detection. Furthermore it can distinguish between multipath and non-line-of-sight reception. As a consequence, the method can be used to exclude satellites whose signals are affected by reflections, to provide a more accurate navigation solution.
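The core idea can be illustrated with a toy computation: correlate the received signal against the local code on each carrier frequency, compare the resulting correlation shape with the ideal (triangular) shape expected for a line-of-sight signal, and use the two per-frequency mismatches, and their difference, as the detection feature. The sketch below is only a schematic illustration of that idea with invented signal parameters; it is not the feature definition, scaling or thresholds used by the authors.

import numpy as np

def triangle(offsets_chips):
    """Ideal open-loop correlation shape for a line-of-sight BPSK ranging code."""
    return np.clip(1.0 - np.abs(offsets_chips), 0.0, None)

def correlator_output(offsets_chips, multipath_delay, multipath_amp):
    """Line-of-sight correlation plus one delayed, attenuated reflection (toy model)."""
    return triangle(offsets_chips) + multipath_amp * triangle(offsets_chips - multipath_delay)

offsets = np.linspace(-1.5, 1.5, 61)   # correlator offset grid, in code chips

# Invented example: the same reflection distorts the two carriers differently.
out_f1 = correlator_output(offsets, multipath_delay=0.4, multipath_amp=0.5)
out_f2 = correlator_output(offsets, multipath_delay=0.4, multipath_amp=0.2)

# Per-frequency mismatch between received and ideal correlation shapes,
# normalised and then differenced across the two frequencies.
d1 = np.sum((out_f1 / out_f1.max() - triangle(offsets)) ** 2)
d2 = np.sum((out_f2 / out_f2.max() - triangle(offsets)) ** 2)
feature = d1 - d2
print(f"mismatch f1={d1:.3f}, f2={d2:.3f}, dual-frequency difference={feature:.3f}")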
Recently neocon Jennifer Rubin stated she is concerned that Trump might want Ron Paul for the role of Secretary of State in his cabinet. Ron Paul has recently addressed this, saying this is an unlikely scenario. Yet the entirety of Trump’s candidacy has been made up of unlikely scenarios. And while Ron Paul might not align with Donald Trump politically, the similarities of their candidacies are striking; both didn’t buy into special interests, both exposed corruption on national TV, both weren’t afraid to speak their mind, both are fierce critics of the current foreign policy of the United States and both made the establishment shake in their boots. This leaves me with the thought of “why not?”. Why would Trump not want Ron Paul in his cabinet? Ron Paul has proven to be a capable statesman and a patriot, and has a large following. Trump could be compelled to bring him in both because Trump wants a competent and intelligent cabinet and because it will draw many voters to him. Read more
Streamlining verbal feedback: Reflection on a feedback research project in secondary schools This paper is a reflection on action research I conducted in two classrooms to explore the effectiveness of feedback. As a result of this project, I have changed my practice to streamline verbal feedback. Despite its transience, verbal feedback can be made far more effective if it is reduced to key points only. Academic rationale I chose this subject because I use feedback as a key strategy in my classroom. Kluger and DeNisi mention that research consistently ranks feedback as among the strongest interventions at a teacher's disposal. Hattie found feedback has an effect size of 1.3. An effect size of 0.5 is equivalent to one grade leap at GCSE, advancing a learner's achievement by one year, or improving the rate of learning by 50 per cent. An effect size of above 0.4 is above average in educational research, constituting the 'hinge point' at which the impact is greater than just a typical year of academic experience and student growth. In practice, this 1.3 effect size for feedback translates into a leap of over two grades at GCSE level. Students who would, without my intervention, be taking a C grade could reach an A grade through the use of meaningful feedback. I wished to manipulate my feedback effectively enough for this to happen. Also I wanted to assess how effective my current strategies (written feedback in notebooks, whole-class verbal feedback session in a lesson after a major assignment) were, based on student feedback to me. Research design For this project, I undertook action research in two boys-only school classrooms, involving 18 target children, 6 of whom were aged 15-16 years (Year 11) and 12 of whom were aged 13-14 years (Year 9). I gathered data by recording classroom activities and, more importantly, by recording student interviews that I later analysed for patterns and relationships. Also I compared records of student work before and after I gave feedback. Initial reflection In preparation for the action research, I reflected on how I normally gave feedback to my students. In addition, video recordings were made of feedback classes for my students in. I noticed that throughout the academic year I gave feedback in several ways. Written feedback was either short comments or symbols annotating the script, or comments at the end of the script. These comments constituted descriptive feedback tabulating medals (what the student had done well) and missions (what the student needed to do to progress to the next level) (Petty, 2004;Black and Wiliam, 1998). I write these comments in two columns next to a drawn symbol of a medal or of a hand pointing to where to go next. The descriptive feedback is written in short bullet points for easy absorption and reference. For instance, one student, Zohrain, saw the following comments in relation to his assignment next to a picture of a medal: cogently argued excellent use of argument/rhetorical questions varied punctuation used. Next to this was the pointing hand of the mission, advising him to: perfect your accuracy, especially P/Ag (person agreement) work on density of ambitious vocabulary to vault directly to the next band (grade). I give students a few minutes to review this feedback when they receive their notebooks. At the time of the next assignment, they are reminded to review the feedback before proceeding, especially the 'missions'. 
Since errors and achievement vary from student to student, this means that in the next assignment, each student will continue to progress and to build up skills from the point where the last assignment left off. This constitutes ipsative feedback, the idea that a student makes a comparison with the self rather than with norms (achievement of other students) or external criteria alone. This encourages them to act upon developmental feedback to achieve a personal best. Oral feedback mostly takes place in the form of a whole-class feedback lesson or as individual comments when returning a notebook or upon a student request to discuss his assignment. I wanted to determine whether verbal or written feedback was more effective. What could I do to leverage my feedback? For this purpose, I decided to conduct interviews with six of my Year 11 students. Planning In the English language classroom of the Boys' Branch of the Lahore College of Arts and Sciences, I introduced the students of Year 11 to the idea that research would be taking place to learn what they had to say about the teacher's feedback and to consider how effective feedback was and how to make it better. I emphasized that they were not being judged, that their honest response was required to make feedback better, and that there were no right or wrong answers. Although the students belonged to an older age group, they had not experienced action research before in a classroom setting. Although details differed, most of the students who were interviewed seemed to feel that the medal and mission comments were more effective than verbal feedback. One student mentioned that he did not remember much of the verbal feedback. This was surprising because he was one of the best students in that class. In consideration of all this, I planned a lesson in which I would use the medal and mission comments on a convergent assignment. A 'convergent' assignment is the term used by Torrance and Pryor to denote an assignment in which the answers are expected to be similar, as opposed to an open-ended or divergent assignment where a range of original responses may be expected to the same question (the kind of assignment in which I would normally give medal and mission feedback). Furthermore, I planned changes in the teaching practice by reducing feedback to a smaller number of key concepts during the following whole-class feedback session. Action I made changes to my own lesson with Year 11 by introducing medal and mission comments to a convergent assignment and reducing the feedback to fewer core concepts. The six students were then interviewed again, yielding rich data comparing the effect of oral feedback to written feedback and revealing how the change in strategy influenced them. Observation and reflection Some interesting responses were offered in the interviews. However, I was uncertain how far the students were affected by the actual feedback given on this one particular assignment as opposed to how much change was due to the way I gave instructions in the first place or to the cumulative effect of feedback given over the academic year. Reflection in the second cycle In an attempt to untangle how feedback affects students, I wondered what would happen with fewer variables, for example by removing instructional quality as a factor affecting or reinforcing feedback. Planning in the second cycle For the next stage, therefore, I chose students from Year 9 because they had been taught by a different teacher and would be receiving only feedback from me. 
The criterion for selection of 12 participants was their teacher's assessment of their attainment of close to the highest, the lowest, and the average marks in this subject, from a single class and section. The target and control groups were chosen by me at random, each group including a range of attainment in English. (Attainment here refers to scores obtained on English tests/examinations marked by their teacher). Two assignments, similar in nature, would be given to the students to see how far feedback had influenced their performance. Action in the second cycle After the first assignment (part of their normal coursework), their teacher gave me the 12 notebooks to mark. For the students of the target group, I wrote descriptive feedback in the form of an annotated script, and medal and mission comments, and conducted a short verbal feedback lesson in the library, focusing on only two key points. This lesson was recorded on videotape and the assignments of all 12 participants were photographed. In accordance with normal practice, like the rest of their class fellows the control group received no additional verbal feedback class and received only ticks or a vague evaluative comment (such as 'good work'). The next day, I interviewed all 12 participants in groups of two before their language class, which happened to be at the end of the school day. A second assignment of a similar nature (again a part of the normal planned coursework for the class) was attempted in class. I marked and then photographed the work of these 12 students to see if there was any evidence for feedback alone making a noticeable difference in the performance of the target group. I expected the control group to perform at about the same level in the next assignment and the experimental group receiving feedback to have improved their performance, significantly reducing errors pointed out and making some further progress in areas they had already done well in. Observation in the second cycle This time the effects of streamlined feedback were brought into even sharper relief. The second cycle of the action research showed how explicit feedback in small doses affected student performance in the assignment that followed. Despite the range of ability and differing degrees of autonomy possessed by the students involved, the students in the target group showed significant progress due to the implementation of feedback as opposed to those in the control group, most of whom performed at the same level. Reflections on findings and the change in my understanding of feedback Students forget. Among the findings that emerged, students mentioned that they did not remember all oral feedback and that many preferred written feedback to oral feedback. For example, Razaak (Year11) and Umer (Year 9) both commented that it was hard to remember oral feedback: Razaak:... I don't really remember stuff that well and I don't really remember what you say verbally because at the end of the year, the stress and, you know, not much time, so I think the best thing is to just open your notebook and go through your mistakes. That's helpful. Umer: Actually, I like the written ones because when we get out of the classroom we forget actually 50 per cent of the lesson. When we are doing assignments we can look into that which mistakes we have done earlier and which we should not have done in this. 
During the initial interviews, I concluded that the reason why written feedback was effective was because there was a record of the medals and missions that could be referred to for ipsative feedback. However, as a result of the action research, I realized there was another factor contributing to its success. Out of necessity, I always reduced written feedback to a bare minimum (due to constraints of time). When I pared down whole-class verbal feedback to key points in a similar manner, it became equally, if not more, effective. This ties in with what Shute has documented. Furthermore, after the reduction of teaching points in the verbal feedback, even Razaak changed his mind: Razaak: Normally the written feedback for the summary writing is much better but this time the oral one was better because you pointed out the major mistakes that students make Teacher: Was the idea something new or had it been touched upon earlier in the year? Razaak: It had been touched upon earlier but as I said I don't really remember verbal feedback. By the end of the assignment, I had no idea. I had forgotten. Teacher: So it struck you as new again? The interesting thing was that this particular misconception had been addressed several times during the term. However, it seems to have been buried underneath a plethora of feedback offered at the time. Shute has also pointed out how feedback needs to be given in smaller doses to be retained. In relation to the format of feedback, while this highly self-directed student, Razaak, felt that there was no difference in essence between the former casually written mixed comments and the new tabulated medal and mission comments for this kind of assignment, other students of lesser attainment felt the new visual approach made it easier to absorb the feedback. They commented that this feedback was 'new', even though the same learning points had been mentioned in earlier feedback. This seemed to indicate that students who were less self-directed in their learning benefited more from an organized and categorized layout of feedback. Other interesting patterns emerged from the responses of the students. In separate interviews, students of higher attainment from both Year 9 and Year 11 initially said they preferred written feedback to oral feedback. The reason given was that they felt they would not retain all verbal instructions. Such students had a higher level of self-direction and appeared to be reluctant to depend on verbal feedback because of its transient nature. On the other hand, with struggling students the need for accessible help made verbal feedback far more attractive. For example, Hafi liked being able to ask the teacher questions: Hafi: Both were effective but I think the verbal one will give a more clear explanation of what to do Teacher: Why is that? Hafi: We can ask more questions about what to do and we can get a clear understanding to every point This seems to be based on more immediate concerns of understanding concepts than secondary issues of retention. Another pattern that emerges is that across a range of attainment, almost all students preferred customized feedback that related to their own assignments, skills, or errors in particular. Interestingly, when asked if the feedback they received was effective, all students cited examples they found written in their notebooks. Generic comment by the teacher in the verbal feedback class was not referred to even once, unless probed for specifically. 
During the interviews with Year 9 students, I was initially quite dismayed to hear two students from the control group (who had received evaluative feedback only) state with great confidence that they felt encouraged by the feedback and express certainty that their next assignment would show progress. At the time, it felt as if the experiment had floundered badly: if there was no difference between the performance of the control group and the intervention group, then how important could feedback be? Progress would be attributed to natural student growth. However, upon marking the second set of assignments, there were distinct differences between the progress of the control group (nominal) and the experimental group (visible). Progress was significantly greater for the most self-directed student in the experimental group than for students in the low-level or mid-level range. The term 'self-directed students' here refers to students who showed initiative, either in approaching the teacher for clarification on related learning or acting upon written feedback without explicit reminders from the teacher. Nevertheless, even the least self-directed learner of the experimental group showed more visible progress than the most independent learners of the control group. For instance, the most self-directed learner of the experimental group was given feedback on three different areas, of which two types of error disappeared entirely and the third showed improvement. Another student with less self-direction was given equally detailed feedback; however, in the next assignment this student showed partial progress in two areas out of three. Compared to this it was eye-opening to see how so many of the students in the control group were repeating the errors made earlier. Out of these, the most self-directed student made some progress based on (as I deduced) his close following of ticks and loss or gain of marks. The level of autonomy appeared to correspond to how well the student was doing in his class. Impact on my practice and on my view of assessment literacy Initially, I felt my written feedback was of far greater value because it was visible and could be referred to. Now I understand that verbal feedback can hold more value for students who are still struggling with learning. This means that my view of assessment literacy is now perhaps more inclusive. Moreover, while the written medal and mission comments were pared down to essentials out of necessity, due to constraints of time, no such constraint applied to verbal feedback lessons; due to their collective nature I tried to cover all possible errors. Now, however, I have changed my practice by reducing such lessons to between two and four key points of feedback per assignment. I was dismayed to discover, through my reading for this research, that the marks I was giving were depriving the students of the benefits to be gained from my carefully written comments. As a result of this reading, for one topic in the next academic session I planned a short series of assignments that would lead up to a cumulative assignment. The shorter assignments received feedback without marks, merely celebrating what the student could already do (medals) and identifying what more the student could do to attain the next level (missions). This time there was a significant jump in marks (13-23 per cent) for the cumulative task. It is not possible to be sure that this was due exclusively to focus on learning, but it is certainly worth exploring further.
The action research drove home to me the importance of taking into account the less than perfect retention by students, of including only core material in my lessons, of not overcrowding feedback lessons, and of customizing feedback. My classroom practice has changed as a result of these insights. Having viewed myself teaching on the video I made as part of the project, I have also decided to speak more slowly. Moreover, I tend far more frequently to phrase comments in the form of a question (termed 'provocative feedback' by Hargreaves, 2014) instead of an instruction or 'evaluation'. Above all I think it has given me an incentive to create time to hear the students' voice again, something I did regularly as a novice teacher. The degree to which students expressed appreciation of the use of individualized and customized feedback encouraged me to continue using ipsative feedback. If my students are filtering out much of the generic commentary, can I really afford to rely on it as a primary instrument for improvement? Notes on the contributor Rizwana Nadeem is a senior teacher for O Levels at the Boys' Branch, Lahore College of Arts and Sciences, Johar Town Campus in Lahore, Pakistan. Over the last decade, she has had countless enthusiastic, outspoken language students ranging from Year 3 to Year 11, who have taught her that she still has a lot to learn about assessment for learning.
Downtown Digital Design Firm’s Offices Invaded by Police Over a Lego Gun
The actual model of Lego gun purchased by Jeremy Bell. Image courtesy of BrickGun.
Jeremy Bell bought a Lego replica of a semi-automatic pistol and assembled it, yesterday, in the back office of teehan+lax, a downtown user-experience design firm, where he’s a partner. An hour later, he was up against the stairwell wall, being frisked by police on suspicion of weapons possession. Admittedly, the gun, purchased from BrickGun, an online retailer that specializes in Lego replicas of firearms, looked pretty realistic. Bell thinks someone living in one of the apartments with windows facing teehan+lax’s building, at 460 Richmond Street West, might have called in the tip after seeing him wave around the suspicious chunk of plastic from a distance. “I understand why,” he said. “It looks legit. You see a guy in an office with the door closed, putting something together, it looks like a gun. I get it.” Constable Tony Vella of the Toronto Police Service later confirmed to CTV that this was precisely the case. “We have to take all the gun calls seriously because we don’t know what we’re getting involved in,” he told them. The way Bell told the story, both to us over the phone and on his blog, he’d just received the Lego gun kit in the mail on Wednesday and had brought it to work. It sat in its box until the end of the day, at which point he opened it up. “I decided to put it together,” he wrote on his blog earlier today. “I literally assembled it, handed it to a co-worker (who promptly broke it), and then put it back in the box.”
Jeremy Bell, the “culprit.”
Then, he went to go unwind with some colleagues over a game of Call of Duty: Modern Warfare 2. “So we’re sitting there just playing some games,” he said, “and probably like an hour or two later, around seven o’clock or so, I could hear screaming in the elevator.” He took off his headphones and went to investigate. “I’m thinking it’s some kind of domestic dispute outside, until I heard my own name. And then it’s like: oh shit, that’s not good.” Bell describes what happened next on his blog: “I sheepishly opened the door to see what was going on, only to discover a SWAT member crouched in the stairwell yelling at me and another directing a small mirror into the hall. The guy in the door had a weapon with a flashlight pointed at me, so I couldn’t really see what was going on, but I was instructed to put my hands on my head and turn around. With my hands up, I had to lift my shirt and slowly spin around. Once they confirmed I wasn’t packing any Lego heat, I walked backwards towards them, was then cuffed, pulled into the stairwell and thrown against the wall.” Another witness, who claims to work for a different company with offices in the building (but supplies no further information about himself), describes the scene outside the building on his personal blog: “Five cop cars in total. Two ambulances. And a dozen cops all taking positions of cover in downtown Toronto. A drug deal? An explosive? Hmmm.” Jeffrey Remedios, president and co-founder of the indie record label Arts & Crafts, also located in the building, tweeted the following to Bell earlier today: “@jeremybell – hey lego gun: we’re in the office above you. SWAT tried to take down my co-worker last night as he left the building!” If this all seems a little senseless, rest assured that Bell agrees. “I’m a sucker for Lego and when I saw that thing I was like, ‘That thing’s pretty cool,'” he said.
“That, unfortunately, is how shallow it was.” Thanks to reader Ren Bostelaar for the tip. Photo of Jeremy Bell courtesy of his own online bio.
The prehistory of discovery: precursors of representational change in solving gear system problems. Microgenetic research has identified 2 different types of processes that produce representational change: theory revision and redescription. Both processes have been implicated as important sources of developmental change, but their relative status across development has not been addressed. The current study investigated whether (a) the process of representational change undergoes developmental change itself or (b) different processes occupy different niches in the course of knowledge acquisition. College, 3rd-, and 6th-grade students solved gear system problems over 2 sessions. For all grades, discovery of the physical principles of the gear system was consistent with theory revision, but discovery of a more sophisticated strategy, based on the alternating sequence of gears, was consistent with redescription. The results suggest that these processes may occupy different niches in the course of acquiring knowledge and that the processes are developmentally invariant across a broad age range.
COSTA MESA — Boys and girls lined up to enter the Estancia High gymnasium for a special lunch Wednesday. Before they got in, they checked in with one of the coaches responsible for letting them have a free meal during the school's lunch hour. If your name was not on the list football coach Mike Bargas had in his hand, you were not getting in. But everyone in line was there for a legit reason. They all played a part in helping Estancia keep the All-Sports Cup at home. The Eagles' treat for beating cross-town rival Costa Mesa for the trophy awarded to the best high school athletic program in the city was catered food from the Newport Rib Co. Estancia has claimed the huge prize and spread in each of the two years of the competition's existence. This school year's competition, which Costa Mesa United runs, was not even close. The Eagles beat the Mustangs, 120-60. Each boys' and girls' team contributes five points for a rivalry victory and 2.5 points for a rivalry tie; only in football does a team earn 15 points per rivalry win. John Ursini, the owner of Newport Rib Co., created the All-Sports Cup because he wanted to highlight the special rivalry between Estancia and Costa Mesa. "This was modeled after the Lexus Gauntlet Trophy that USC and UCLA does [annually]," said Ursini, who wanted to do more for Estancia and Costa Mesa, which play for the Battle for the Bell trophy in only three sports: football, baseball and basketball. "I always appreciated what the Scotts [the late Jim Scott, and now his son Jim Scott Jr.] do with doing the Bell trophy. [Jim Scott Jr.] can only do so many sports. He just got to the point where it's hard to do [for every sport]. We support that, and he brings the [winning football, baseball, boys' and girls' basketball team] over to Newport Rib Co. and it's great. "For me, [I've] always been kind of a fan of all sports, all teams, everybody counts." Every varsity athlete at Estancia got to taste barbecued chopped beef brisket or pulled pork sandwiches, with barbecued beans, mashed potatoes and coleslaw. Seeing the athletes load up their plates with food, smiling as they walked over to get a drink where the trophy was on display, made Ursini's day. He used to be one of them. He was an athlete at Estancia, graduated from the school in 1982, and later coached soccer from 1989 to 1991. "I think it's just the right thing to do for the kids of the Costa Mesa area," Ursini said. "It seemed like Costa Mesa United was all good for raising money and things, but at the end of the day, the kids I think appreciate just competition, and this represents what competition is all about." Estancia earned the bragging rights against Costa Mesa in many sports this season. The trophy shows which school's sports teams outscored the other. Football earned 15 points, which is understandable because Estancia and Costa Mesa usually play only once a season. That one game this past November was a special one for Estancia. Bargas' Eagles clinched their first undefeated Orange Coast League title by defeating Costa Mesa, 35-6, in the regular-season finale. The league title was the program's first outright championship in 21 years. The boys' golf team finished perfect for the entire regular season, a first for the program. Coach Art Perry led the Eagles to two victories against the Mustangs, as Estancia went 17-0, 14-0 in the Orange Coast League. Baseball also shined under Coach Matt Sorensen.
For the first time under his four-year watch, the Eagles swept Costa Mesa, winning each of the three Orange Coast League games. The three wins gave Estancia 15 points in the All-Sports Cup competition. "It makes a big statement about the swing in power," said Sorensen, whose Eagles were the only baseball program in Newport-Mesa to advance to the CIF Southern Section playoffs this season. "[Costa Mesa Coach] Jim Kiefer has done an unbelievable job there. They just absolutely handled us the first two years that I was here. But everything goes in cycles." Estancia can also be proud of its girls' basketball team, led by co-coaches Xavier Castellano and Judd Fryslie. The Eagles swept their two-game series with Costa Mesa and went on to win the Orange Coast League title by going 10-0, the program's first league crown since 2003. "We've been very fortunate in our athletic programs," said Jessica Gatica, the girls' athletic director at Estancia. "[I'm] incredibly proud of the girls, from last year to this year, we've had a lot of improvements in a lot of our sports. "I definitely feel like the trophy adds to the rivalry, because of course you want to have the bragging rights of having the trophy."
Arterial blood gas changes during cardiac arrest and cardiopulmonary resuscitation combined with passive oxygenation/ventilation: a METI HPS study Objective High-fidelity simulators can simulate physiological responses to medical interventions. The dynamics of the partial arterial pressure of oxygen (PaO2), partial arterial pressure of carbon dioxide (PaCO2), and oxygen pulse saturation (SpO2) during simulated cardiopulmonary resuscitation (CPR) were observed and compared with the results from the literature. Methods Three periods of cardiac arrest were simulated using the METI Human Patient Simulator™ (Medical Education Technologies, Inc., Sarasota, FL, USA): cardiac arrest, chest compression-only CPR, and chest compression-only CPR with continuous flow insufflation of oxygen (CFIO). Results In the first period, the observed values remained constant. In the second period, PaCO2 started to rise and peaked at 63.5mmHg. In the CFIO period, PaCO2 slightly fell. PaO2 and SpO2 declined only in the second period, reaching their lowest values of 44mmHg and 70%, respectively. In the CFIO period, PaO2 began to rise and peaked at 614mmHg. SpO2 exceeded 94% after 2 minutes of CFIO. Conclusions The METI Human Patient Simulator™ accurately simulated the dynamics of changes in PaCO2. Use of this METI oxygenation model has some limitations because the simulated levels of PaO2 and SpO2 during cardiac arrest correlate poorly with the results from published studies. Introduction According to the current European Resuscitation Council recommendations, standard cardiopulmonary resuscitation (CPR) should be performed with a compression-to-ventilation ratio of 30:2. 1 Ventilation during the early phase of CPR has been questioned and reevaluated. 2 Adverse consequences from excessive positive-pressure ventilation could result in increased intrathoracic pressure 3 and decreased coronary perfusion pressure, thereby worsening patient outcomes. 4 The fact that both laypersons 5 and health professionals 6 are reluctant to perform mouthto-mouth ventilation has also contributed to the re-evaluation of ventilation in the early phase of CPR. As a result, the current European Resuscitation Council guidelines encourage CPR providers who witness sudden adult collapse to perform continuous chest compression-only CPR without mouth-to-mouth ventilation. 7 An alternative approach to oxygenation during CPR is continuous flow insufflation of oxygen (CFIO) through a Boussignac tube, nonrebreather facemask (NRB), 11 or nasal oxygen tube. 12 Some studies have shown that CFIO is more effective 11 or equally as effective as the recommended intermittent positive-pressure ventilation with respect to outcomes after resuscitation 10,12 and yields better oxygenation 9 and coronary perfusion pressure during resuscitation efforts. 8 The METI Human Patient Simulator TM (METI HPS) (Medical Education Technologies, Inc., Sarasota, FL, USA) is a sophisticated mannequin with integrated pathophysiological models that can reproduce different clinical scenarios and simulate physiological responses to medical interventions while giving real-time feedback. The METI HPS is currently used as an educational tool for nurses, 13 medical students, 14,15 and medical providers. 16 It has also been used as an experimental tool to simulate bodily responses to extreme environments such as carbon monoxide poisoning during occupational mining accidents. 
17 High-fidelity simulators were designed to test medical devices and interventions, especially in emergency situations, 18,19 before their translation into clinical research or clinical practice, allowing safer clinical human studies and clinical practice. It can be concluded that the METI HPS accurately reproduces real clinical situations. The accuracy of the METI physiological model during oxygen administration and apnea maneuvers was recently investigated. 20 The study showed some discrepancies between the obtained data and the results from the literature. The purpose of this study was to quantitatively observe the dynamics of changes in the METI HPS oxygenation model (changes in arterial blood gas and oxygen pulse saturation) when performing chest compression-only CPR or a combination of chest compression-only CPR and CFIO. Materials and methods A patient was simulated using the version 6 METI HPS in the Simulation Center of the Medical Faculty, University of Maribor (Maribor, Slovenia). This full-scale, highfidelity simulator uses a hybrid (mathematical and mechanical) self-regulating lung model with a real physical system to model pulmonary gas exchange and lung mechanics of a simulated patient. Briefly, uptake and excretion of oxygen, carbon dioxide, nitrous oxide, and a volatile anesthetic were physically created based on the measured concentrations in the bellows of the simulated lung and in a software model representing uptake, distribution, storage, consumption, and/or production in the body. Lung perfusion was also accounted for in this model by modeling the cardiovascular subsystem of the patient being simulated. 20 Unlike this hybrid lung model, all other simulator models, such as cardiovascular (blood pressure) and systemic uptake and distribution (arterial blood gas) models, are mathematical models. Simulation of cardiac arrest and chest compressions were performed as follows. Initially, cardiac arrest was simulated for 10 minutes (cardiac arrest period). The cardiac rhythm of the METI HPS was set as asystole to simulate cardiac arrest during this period. After the first period, resuscitation efforts (chest compression-only CPR period) were simulated for the next 10 minutes. This required minor changes in the software of the METI HPS to simulate the second period. The tidal volume was set to the lowest possible value of 200 mL with a breathing frequency of 40/minute (highest possible) to simulate passive ventilation/oxygenation achieved by chest compression-only CPR. Our goal was to set the breathing frequency as close as possible to the chest compression rate during CPR. Because of the software limitation, the highest possible value was chosen. The respiratory quotient was set at 0.8 (oxygen consumption of 250 mL/min and carbon dioxide production of 200 mL/min) for a standard adult (body weight of 70 kg) and remained constant during the simulation. Cardiac output was set at 25% of the normal value with a heart rate of 100/ minute (the cardiac rhythm was set at sinus rhythm) to simulate the cardiac output achieved by chest compression-only CPR. These values of tidal volume 21 and cardiac output achieved during chest compressions have already been confirmed in previously published studies. 22,23 After the initial 20 minutes, the third 10-minute period (CFIO period) of chest compressiononly CPR with CFIO followed the first and second periods. During the third period, oxygen was applied to the simulator via a nasal cannula (NC), NRB, or combination of the two to perform CFIO. 
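The three-period protocol above packs many settings into prose. The following is a minimal, illustrative Python sketch (not tied to the METI software or any real API; all names are invented) that records those settings in one place. The cardiac-arrest period values other than the rhythm are assumptions for illustration, and the 15 L/min oxygen flow is the CFIO setting reported in the study.

```python
from dataclasses import dataclass

@dataclass
class SimPeriod:
    """One 10-minute phase of the simulated cardiac-arrest protocol (illustrative only)."""
    name: str
    duration_min: int
    cardiac_rhythm: str
    cardiac_output_pct: float  # percent of normal cardiac output
    heart_rate: int            # beats per minute
    tidal_volume_ml: int       # simulated passive ventilation volume
    resp_rate: int             # breaths per minute
    o2_flow_l_min: float       # supplemental oxygen flow (CFIO), 0 if none

# Values taken from the study's description; the respiratory quotient was fixed
# at 0.8 (VO2 250 mL/min, VCO2 200 mL/min) throughout. Arrest-period ventilation
# and output values below are assumptions (apnea, no output) for illustration.
PROTOCOL = [
    SimPeriod("cardiac arrest",              10, "asystole",      0.0,   0,   0,  0,  0.0),
    SimPeriod("compression-only CPR",        10, "sinus rhythm", 25.0, 100, 200, 40,  0.0),
    SimPeriod("compression-only CPR + CFIO", 10, "sinus rhythm", 25.0, 100, 200, 40, 15.0),
]

for p in PROTOCOL:
    print(f"{p.name}: {p.duration_min} min, CO={p.cardiac_output_pct}%, O2={p.o2_flow_l_min} L/min")
```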
The flow of oxygen was set at 15 L/minute in the third period. The partial arterial pressure of oxygen (PaO2) and partial arterial pressure of carbon dioxide (PaCO2) were continuously measured by the METI HPS respiratory gas analyzer module during the simulation and recorded in a data log file for each procedure. The oxygen pulse saturation (SpO2) was simulated with the data from the model and was also continuously recorded in a data log file. The obtained data were compared with the results from the literature. This study was conducted in a simulation center using only a METI HPS high-fidelity simulator. No human or animal subject was included in the study. Therefore, approval from an ethics committee was not needed. Results The dynamics of the changes in PaCO2 are presented in Figure 1. For the first 10 minutes, PaCO2 remained constant (42 mmHg). After the initial 10 minutes (chest compression-only CPR), PaCO2 began to rise and reached its highest value of 63.5 mmHg at the 20-minute mark. At the beginning of the CFIO period, PaCO2 began to slightly decline, reaching 58.9 mmHg using the NC and 61.7 mmHg using the NRB at the end of the CFIO period. PaCO2 continued to rise using the NC + NRB after the 20-minute mark and reached its highest level of 68.1 mmHg at the end of the experiment. The differences in pCO2 from the baseline levels at different time points of the present METI HPS study and various studies from the literature are presented in Table 1. The changes in PaO2 are shown in Figure 2. For the first 10 minutes, PaO2 remained constant (121 mmHg). From the 10-minute mark, it began to decline, reaching its lowest value of 44 mmHg at the 20-minute mark. In the CFIO period, PaO2 began to rise and reached its highest value of 395 mmHg using the NC, 477 mmHg using the NRB, and 614 mmHg using the NC + NRB at the end of the experiment. The differences in pO2 from the baseline levels at the different time points of the present METI HPS study and various studies from the literature are presented in Table 2.
Table note: Values of ΔpCO2 from the present METI HPS study were taken from the nasal cannula oxygen application, which was also used by Hayes et al. 12 in the CFIO period.
Figure 1. Dynamics of changes in PaCO2. Abbreviations: PaCO2, partial arterial pressure of carbon dioxide; mmHg, millimeters of mercury; min, minute; NC, nasal cannula; NRB, nonrebreather facemask.
The changes in SpO2 are presented in Figure 3. After constant levels in the initial period, SpO2 remained unchanged until the 14- and 15-minute marks, when the SpO2 value declined to <94% and <90%, respectively. The SpO2 continued to decline during the chest compression-only CPR period and reached its lowest value of 70% at the end of this period. Already 2 minutes after starting CFIO, the SpO2 increased to >94%; it then increased to the maximal value of 100% after 4 minutes of CFIO regardless of the method of oxygen application and remained at this level until the end of the experiment. The dynamics of the mean arterial pressure are presented in Figure 4. The changes in the mean arterial pressure were similar in all three scenarios. The pressure fell after 2 minutes of cardiac arrest to the lowest level of 17 mmHg and started to rise only after 2 minutes of chest compressions to the maximal level of approximately 50 mmHg; it then remained constant until the end of the experiment.
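Since the simulator writes PaO2, PaCO2, and SpO2 to a data log file, the traces reported above can be inspected with a few lines of standard tooling. The sketch below is hedged: it assumes the log has been exported to a CSV file with hypothetical column names (time_min, PaCO2, PaO2, SpO2), plots the three traces, and marks the 10- and 20-minute period boundaries.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical export of the simulator's data log; file and column names are assumptions.
log = pd.read_csv("meti_hps_log.csv")  # columns: time_min, PaCO2, PaO2, SpO2

fig, axes = plt.subplots(3, 1, sharex=True, figsize=(6, 8))
for ax, col, unit in zip(axes, ["PaCO2", "PaO2", "SpO2"], ["mmHg", "mmHg", "%"]):
    ax.plot(log["time_min"], log[col])
    ax.set_ylabel(f"{col} ({unit})")
    # Mark the boundaries between the three 10-minute periods.
    for t in (10, 20):
        ax.axvline(t, linestyle="--", linewidth=0.8)
axes[-1].set_xlabel("time (min)")
plt.tight_layout()
plt.show()
```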
Table note: Values of ΔpO2 from the present METI HPS study were taken from the nasal cannula oxygen application, which was also used by Hayes et al. 12 in the CFIO period.
Discussion The dynamics of changes in arterial blood gas parameters and oxygen pulse saturation during cardiac arrest and resuscitation efforts followed by chest compression-only CPR combined with CFIO were simulated using the METI HPS, and the data were compared with the results reported in the literature. In the present study, PaCO2 did not change during the cardiac arrest period. This observation is in accordance with those reported by Steen et al. 8 and Idris et al., 24 who found no significant difference in PaCO2 between baseline and after 8 minutes of cardiac arrest and between baseline and after 5 minutes of cardiac arrest, respectively. PaCO2 started to rise when chest compressions were performed. This could have been the result of the increased delivery of carbon dioxide from peripheral tissue into the cardiocirculatory system during chest compressions. The pattern of changes in PaCO2 during the chest compression-only CPR period of the present study is in accordance with previously published animal studies. 25,26 However, Chandra et al. 25 and Berg et al. 27 reported lower values of PaCO2 in animal studies (38.8 ± 6.4 mmHg after 10 min and 41 ± 12 mmHg after 4 min of chest compression-only CPR). This difference may be due to the lower baseline (prearrest) values of PaCO2 in canines (27 ± 1.5 mmHg) 25 and shorter period of cardiac arrest (2 min) 25,27 compared with our study. Dorph et al. 26 observed significantly higher levels of carbon dioxide (92.5 mmHg after 9 minutes of chest compression-only CPR) in pigs with obstructed airway, which prevented passive ventilation and resulted in higher values of PaCO2. During the CFIO period, we observed two different patterns of PaCO2 changes. CFIO using an NC or a combination of an NC and NRB resulted in slightly decreased values of PaCO2 compared with the previous period. Similar values of PaCO2 were reported by Hayes et al. 12 (57 ± 9 mmHg after 6 min of CPR with CFIO). It could be presumed that CFIO together with chest compressions produces passive ventilation and thus eliminates carbon dioxide. Branditz et al. 28 showed that external cardiac chest compressions combined with CFIO generate adequate ventilation, while CFIO generates positive pressure in the lungs. In contrast, when oxygen was applied using the NRB in the present study, an additional rise of PaCO2 was observed, with the highest value being achieved at the end of the CFIO period. It seems that the increased values of PaCO2 in our study could be attributed to insufficient ventilation. This finding is supported by an animal study by Idris et al., 24 who detected similar values of PaCO2 (62 ± 16 mmHg) in the nonventilated group of domestic swine during chest compressions. An inadequate facemask seal, which is a common problem observed with the METI HPS, is considered to be a factor that contributed to insufficient ventilation in our study. The inadequate seal may have led to less pressure in the lungs and thus less effective passive ventilation, resulting in the accumulation of carbon dioxide in the lungs. We set the tidal volume on the METI HPS at the lowest possible value (200 mL) and the highest possible breathing frequency (40/min) to simulate passive ventilation achieved by chest compressions.
Owing to the simulator limitations, we could not set the breathing frequency closer to the recommended rate of chest compressions (i.e., 100-120/min). The tidal volume was similar to that reported by Safar et al., 21 who showed that in anesthetized patients with an open airway, rhythmic firm pressure over the lower half of the sternum at a rate of one compression per second generates an average tidal volume of 156 mL. In more than half of the patients, the tidal volume was larger than the estimated dead space, presuming effective passive ventilation occurred in those patients. 21 Steen et al. 8 also demonstrated that CFIO during mechanical chest compression-active decompression CPR provided adequate ventilation. The airway pressure induced by CFIO was positive during the entire cycle of CPR, thus increasing the functional residual capacity and decreasing physiological dead space. Saissy et al. 10 found significantly greater elimination of carbon dioxide in the CFIO group because of better lung mechanics in this group and concluded that CFIO through a multichannel open tube was as effective as intermittent positive-pressure ventilation during out-of-hospital arrest. Additionally, animal studies have demonstrated large minute volumes generated by chest compressions alone in dogs 25 or through a combination of precordial compression and gasping in pigs. 29 Deakin et al. 30 reported that passive ventilation occurring as a result of compression-only CPR in humans appears to be ineffective in generating tidal volumes adequate for gas exchange. They found that in all patients, the passive tidal volume was significantly lower than the patients' estimated dead space. 30 This finding could be the consequence of reduced respiratory system compliance because the measurement in that study was made 40 to 50 minutes post-arrest, when the respiratory compliance had already decreased due to pulmonary edema and venous congestion; in contrast, measurements were made a few minutes post-cardiac arrest in the above-mentioned previous studies. Deakin et al. 30 also found sustained levels of end-tidal carbon dioxide in most patients during compression-only CPR, suggesting that alveolar gas exchange was occurring despite the low passive tidal volumes measured. Oxygenation is the aim of emergency ventilation. Therefore, CFIO was introduced as a new approach in resuscitation. Our study showed that PaO2 did not change during the cardiac arrest period. The PaO2 during this period is expected to decline due to utilization of oxygen in peripheral tissues for ongoing metabolic processes. A decline in PaO2 during cardiac arrest was observed by Hayes et al. 12 and Idris et al., 24 who found a decrease in PaO2 between baseline and after 7 minutes of cardiac arrest and between baseline and after 5 minutes of cardiac arrest, respectively. In the present study, PaO2 started to fall only after the beginning of the chest compression period. The same decline in PaO2 was described by Chandra et al. 25 with a similar value of PaO2 (40.9 ± 7.5 mmHg) 10 minutes after chest compression-only CPR. After starting CFIO combined with chest compressions, increasing PaO2 values were observed. The highest value was noted after 10 minutes of combined application of oxygen (NC + NRB) during chest compressions. This type of combined application of oxygen is commonly used to prevent desaturation during emergency airway management. 31
Administration of oxygen using an NC is as effective as using an NRB but reaches a lower maximal PaO2 value. The PaO2 values obtained in the present study are higher than those reported by Hayes et al. 12 and are in accordance with those in the animal study conducted by Steen et al., 8 who showed significantly higher average PaO2 values in the CFIO group during 30 minutes of mechanical CPR than in the intermittent positive-pressure ventilation group. Steen et al. 8 used an endotracheal Boussignac tube, which was developed for oxygen administration in the distal trachea through five or eight capillaries molded into the tubing wall and an opening in the main lumen 2 cm above the distal end of the tube. Although the PaO2 values in both studies were similar, oxygen delivery using a Boussignac tube could be more effective in real-life situations than that using an NC or NRB, for which distal delivery of oxygen depends on a patent airway. In the present study, the SpO2 did not change during the cardiac arrest period. In contrast, Steen et al. 8 noted a significant fall of SpO2 (to 86% ± 2%) after an 8-minute-long cardiac arrest period. A delayed fall in SpO2 was observed in our experiment starting in the chest compression-only CPR period. This finding is in accordance with that reported by Lejus et al. 20 Bertrand et al. 9 reported a higher detectable pulse saturation and a higher proportion of patients with a peripheral arterial oxygen saturation of >70% among patients treated with CFIO. Although the peripheral arterial oxygen saturation is commonly considered to be unreliable in low-flow states, the possibility of detecting it may be increased by improved peripheral circulation and oxygenation. A significantly higher coronary perfusion pressure when using mechanical CPR combined with CFIO has also been reported, 8 although this observation did not result in better patient survival. No differences in return of spontaneous circulation, 10 hospital admission, or intensive care unit admission were noted in patients treated with CFIO during CPR versus patients who were mechanically ventilated. 9 The animal study by Hayes et al. 12 also showed no significant difference in the neurological outcome between the different ventilation protocols. In contrast, among adults with witnessed out-of-hospital cardiac arrest and ventricular fibrillation/ventricular tachycardia as the initial recorded rhythm, the neurologically intact survival rate was higher for individuals who received CFIO. 11 Although we did not analyze different survival outcomes because of the nature of the present study, findings from the above-mentioned study suggest that patients receiving minimally interrupted cardiac resuscitation are more likely to survive. Advanced airway management can be time-consuming and may disrupt CPR chest compression continuity. Our study has several limitations. First, instead of manual or mechanical chest compressions, the cardiac output was set at 25% of the normal value and kept constant during the chest compression-only CPR and CFIO periods. The efficiency of chest compressions during resuscitation declines due to fatigue of healthcare professionals, 32 resulting in lower cardiac output achieved by resuscitation efforts. In addition, the pathophysiological changes in the myocardium during cardiac arrest lead to a decrease in myocardial compliance, preventing hemodynamically effective chest compressions. 33 These changes probably result in different arterial gas values in clinical practice.
Second, the airway of the METI HPS was always patent during the study. The airway in cardiac arrest victims is usually closed and should be opened with the airway adjunct to perform CFIO. The METI HPS has been shown to be the most realistic patient simulator among other simulators with respect to airway anatomy. 34 Therefore, the values observed in the present study could differ from those in a study using another high-fidelity simulator. Third, the METI HPS was manipulated to respond to extreme case simulation; therefore, the simulated response may not be physiologically accurate. Owing to these limitations, translation of the results from this study to clinical practice should be done with caution. Another limitation of the present study is that the simulation was limited to only one session for each intervention without repetition. During the first two periods of the study, minor variance in variables was observed, which led to the assumption that further series of experiments would not contribute to the accuracy of the study. This assumption is supported by Cumin et al., 35 who found minor divergence between time series generated with the METI HPS. In conclusion, the METI HPS was proven to be a suitable experimental tool to accurately simulate the dynamics of changes in PaCO2. Use of the METI HPS for simulation of oxygenation changes during cardiac arrest has some limitations because the simulated PaO2 levels during cardiac arrest correlate poorly with the results from published studies. Our study confirmed the delayed decrease in SpO2 already described in the literature. To our knowledge, this is the first study to address the dynamics of arterial blood gas changes during cardiac arrest, resuscitation efforts, and CFIO combined with CPR in a simulated scenario with the METI HPS. Findings from this study show that both transferring the conclusions from METI HPS studies into the clinical environment and testing new equipment on the METI HPS should be done with caution. Further studies are needed to confirm the conclusions of the present study. Data availability All data underlying the findings of the study are published in the article.
If you were to take the entertainment media as gospel - and many do - you could be forgiven for thinking the R&B singer Chris Brown was the only star ever to have assaulted a woman. Brown pleaded guilty to felony assault in 2009 after an altercation that left his then girlfriend, R&B star Rihanna, with significant injuries.
Chris Brown ... pleaded guilty to felony assault in 2009. Credit: Reuters
Since then, it has become de rigueur to remind people that "Chris Brown beats women". A local review of his new album, Fortune, with the rating "no stars ever" and the assertion: "Screw you, don't encourage his actions", went viral online. A group of guerilla activists in Britain this week began attaching stickers to his albums that read "Warning: Do not buy this album! This man beats women." Such is the level of saturation online that an app, Brownout, can now remove all mentions of the singer from your web browsing experience. Let's get one thing straight: Chris Brown has beaten women. He also seems to be an extremely unpleasant man; he recently got a neck tattoo of a woman's battered face, and his unrepentant petulance when it comes to the matter of his past violence is deeply troubling.
Every time I give a talk that includes the Restaurant Opportunities Centers United, whose board of directors I serve on, people ask me how they can figure out which restaurants treat their workers well and badly, so they can figure out the best places to patronize. Sadly, Zagat’s doesn’t include labor practices in its rating system. Now, just in time for your holiday family outings, ROC United has released the first and long-awaited national Diners’ Guide to help you make those choices. The Guide evaluates more than 150 popular restaurants and chains nationwide against 3 criteria: provision of paid sick days, wages of at least $9 per hour for non-tipped workers and $5 per hour of tipped workers, and opportunities for internal advancement. These are good criteria. I don’t want a sick person handling my food, nor do I want them to lose wages or jobs because they’re sick. The minimum wage for tipped workers has remained at a measly $2.13 per hour for nearly 20 years, so every day consumers have to push for a higher standard since Congress won’t. And finally, racial and gender hierarchies are a fact of life in the restaurant industry, with white men getting the best paying jobs at the front of the house. Across the country, ROC United has found that a system that enables internal promotion so that back of the house workers can get access to front of the house jobs, is a key element of restaurants that don’t discriminate. The Guide goes further than telling you where to go. Since it doesn’t cover absolutely every one of the millions of restaurants in this country, ROC United asks diners to simply take a look around and ask a few questions when they eat out. Just opening our eyes will tell us who works where. Are all the waiters white? Are all the bussers Latinos? Are there no black people or women anywhere? It isn’t difficult to ask your waiter what his hourly wages are. And if the restaurant doesn’t meet the standards listed above, there are tear out cards in the back of the guide that you can leave with management to let them know where they can get help to do better. One set of restaurants you might do this with is highlighted in the guide directly. The Darden Group owns and operates nearly 2,000 restaurants nationwide, including Olive Garden, Red Lobster, and LongHorn Steakhouse. ROC-D.C. has identified a pattern of racial discrimination against black workers in particular, which is partly upheld by the lack of internal promotion systems. But its most famous restaurant is the high-end Capital Grille Steakhouse, where black workers say they are routinely told they don’t “meet the standards,” no matter how much serving experience they have. Industry wide, black workers have a particularly tough time getting work in table-service restaurants. The industry has relegated them to fast food. With black unemployment at record levels–16 percent nationally, well over 20 percent in many cities–ROC’s campaign is an urgently important intervention. Ironically, the CEO of the Darden Group is Clarence Otis, Jr.–a highly-awarded African American businessman who used to work for J.P. Morgan. I have no doubt that Otis’ race will feature prominently in Darden’s defense against ROC’s findings, but of course the issue is not his identity or even his intention, but rather the actual impact of the company’s employment practices. You can get your copy of the guide at ROC United’s website. May your eating out be flavored with justice this holiday season.
Fast Bi-layer Neural Synthesis of One-Shot Realistic Head Avatars We propose a neural rendering-based system that creates head avatars from a single photograph. Our approach models a person's appearance by decomposing it into two layers. The first layer is a pose-dependent coarse image that is synthesized by a small neural network. The second layer is defined by a pose-independent texture image that contains high-frequency details. The texture image is generated offline, warped and added to the coarse image to ensure a high effective resolution of synthesized head views. We compare our system to analogous state-of-the-art systems in terms of visual quality and speed. The experiments show significant inference speedup over previous neural head avatar models for a given visual quality. We also report on a real-time smartphone-based implementation of our system. Introduction Personalized head avatars driven by keypoints or other mimics/pose representations are a technology with manifold applications in telepresence, gaming, AR/VR applications, and the special effects industry. Modeling human head appearance is a daunting task, due to the complex geometric and photometric properties of human heads, including hair, the mouth cavity and surrounding clothing. For at least two decades, creating head avatars (talking head models) was done with computer graphics tools using mesh-based surface models and texture maps. The resulting systems fall into two groups. Some are able to model specific people with very high realism after significant acquisition and design efforts are spent on those particular people. Others are able to create talking head models from as little as a single photograph, but do not aim to achieve photorealism. In recent years, neural talking heads have emerged as an alternative to the classic computer graphics pipeline, striving to achieve both high realism and ease of acquisition. The first works required a video of the person being modeled, while more recent ones can create neural head avatars from a handful of photographs (few-shot setting) or a single photograph (one-shot setting), causing both excitement and concerns about potential misuse of such technology. Existing few-shot neural head avatar systems achieve remarkable results. Yet, unlike some of the graphics-based avatars, the neural systems are too slow to be deployed on mobile devices and require a high-end desktop GPU to run in real-time. We note that most application scenarios of neural avatars, especially those related to telepresence, would benefit highly from the capability to run in real-time on a mobile device. While in theory neural architectures within state-of-the-art approaches can be scaled down in order to run faster, we show that such scaling down results in a very unfavourable speed-realism tradeoff. In this work, we address the speed limitations of one-shot neural head avatar systems, and develop an approach that can run much faster than previous models. To achieve this, we adopt a bi-layer representation, where the image of an avatar in a new pose is generated by summing two components: a coarse image directly predicted by a rendering network, and a warped texture image. While the warping itself is also predicted by the rendering network, the texture is estimated at the time of avatar creation and is static at runtime. To enable the few-shot capability, we use a meta-learning stage on a dataset of videos, where we (meta)-train the inference (rendering) network, the embedding network, as well as the texture generation network.
The separation of the target frames into two layers allows us to improve both the effective resolution and the speed of neural rendering. This is because we can use an off-line avatar generation stage to synthesize a high-resolution texture, while at test time both the first component (coarse image) and the warping of the texture need not contain high-frequency details and can therefore be predicted by a relatively small rendering network. These advantages of our system are validated by extensive comparisons with previously proposed neural avatar systems. We also report on the smartphone-based real-time implementation of our system, which was beyond the reach of previously proposed models. Related work As discussed above, methods for the neural synthesis of realistic talking head sequences can be divided into many-shot (i.e. requiring a video or multiple videos of the target person for learning the model) and a more recent group of few-shot/single-shot methods capable of acquiring the model of a person from a single photograph or a handful of photographs. Our method falls into the latter category as we focus on the one-shot scenario (modeling from a single photograph). Along another dimension, these methods can be divided according to the architecture of the generator network. Thus, several methods use generators based on direct synthesis, where the image is generated using a sequence of convolutional operators, interleaved with elementwise non-linearities, and normalizations. Person identity information may be injected into such an architecture, either with a lengthy learning process (in the many-shot scenario) or by using adaptive normalizations conditioned on person embeddings. One recent method effectively combines both approaches by injecting identity through adaptive normalizations, and then fine-tuning the resulting generator on the few-shot learning set. The direct synthesis approach for human heads can be traced back to earlier work that generated the lips of a famous person in a talking head sequence, and further to the first works on conditional convolutional neural synthesis of generic objects. The alternative to direct image synthesis is to use differentiable warping inside the architecture. The X2Face approach applies warping twice, first from the source image to a standardized image (texture), and then to the target image. The Codec Avatar system synthesizes a pose-dependent texture for a simplified mesh geometry. The MarioNETte system applies warping to the intermediate feature representations. The Few-shot Vid-to-Vid system combines direct synthesis with the warping of the previous frame in order to obtain temporal continuity. The First Order Motion Model learns to warp the intermediate feature representation of the generator based on keypoints that are learned from data. Beyond heads, differentiable warping/texturing have recently been used for full body re-rendering. Earlier, the DeepWarp system used neural warping to alter the appearance of eyes for the purpose of gaze redirection, and neural warping has also been used for the resynthesis of generic scenes. Our method combines direct image synthesis with warping in a new way, as we obtain the fine layer by warping an RGB pose-independent texture, while the coarse-grained pose-dependent RGB component is synthesized by a neural network directly. Methods We use video sequences annotated with keypoints and, optionally, segmentation masks, for training.
We denote the t-th frame of the i-th video sequence as x^i(t), the corresponding keypoints as y^i(t), and the segmentation masks as m^i(t). We will use an index t to denote a target frame, and s to denote a source frame. Also, we mark all tensors related to generated images with a hat symbol, e.g. x̂^i(t). We assume the spatial size of all frames to be constant and denote it as H × W.
Fig. 2 (panels: Embeddings, New pose, Image composition): During training, we first encode a source frame into the embeddings, then we initialize adaptive parameters of both inference and texture generators, and predict a high-frequency texture. These operations are only done once per avatar. Target keypoints are then used to predict a low-frequency component of the output image and a warping field, which, applied to the texture, provides the high-frequency component. The two components are then added together to produce an output.
In some modules, input keypoints are encoded as an RGB image, which is a standard approach in a large body of previous works. In this work, we will call it a landmark image. But, contrary to these approaches, at test-time we input the keypoints into the inference generator directly as a vector. This allows us to significantly reduce the inference time of the method. Architecture In our approach, the following networks are trained in an end-to-end fashion:
- The embedder network E(x^i(s), y^i(s)) encodes a concatenation of a source image and a landmark image into a stack of embeddings {e^i_k(s)}, which are used for initialization of the adaptive parameters inside the generators.
- The texture generator network G_tex({e^i_k(s)}) initializes its adaptive parameters from the embeddings and decodes an inpainted high-frequency component of the source image, which we call a texture X̂^i(s).
- The inference generator network G(y^i(t), {e^i_k(s)}) maps target poses into a predicted image x̂^i(t). The network accepts vector keypoints as an input and outputs a low-frequency layer of the output image x̂^i_LF(t), which encodes basic facial features, skin color and lighting, as well as ω^i(t), a mapping between the coordinate spaces of the texture and the output image. Then, the high-frequency layer of the output image is obtained by warping the predicted texture, x̂^i_HF(t) = ω^i(t) ∘ X̂^i(s), and is added to the low-frequency component to produce the final image: x̂^i(t) = x̂^i_LF(t) + x̂^i_HF(t).
During training, we first input a source image x^i(s) and a source pose y^i(s), encoded as a landmark image, into the embedder. The outputs of the embedder are K tensors e^i_k(s), which are used to predict the adaptive parameters of the texture generator and the inference generator. A high-frequency texture X̂^i(s) of the source image is then synthesized by the texture generator. Next, we input the corresponding target keypoints y^i(t) into the inference generator, which predicts a low-frequency component of the output image x̂^i_LF(t) directly and a high-frequency component x̂^i_HF(t) by warping the texture with a predicted field ω^i(t). Finally, the output image x̂^i(t) is obtained as a sum of these two components. It is important to note that while the texture generator is manually forced to generate only a high-frequency component of the image via the design of the loss functions, which is described in the next section, we do not specifically constrain it to perform texture inpainting for occluded head parts. This behavior is emergent from the fact that we use two different images with different poses for initialization and loss calculation.
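To make the two-layer composition concrete, here is a minimal PyTorch-style sketch of the final step described above: the low-frequency image predicted by the inference generator is added to the texture warped by the predicted field. The tensor shapes and the use of grid_sample are assumptions for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def compose_avatar_frame(x_lf, warp_field, texture):
    """Combine the two layers of the output image.

    x_lf       : (B, 3, H, W) low-frequency image predicted by the inference generator
    warp_field : (B, H, W, 2) mapping from output coordinates into texture coordinates,
                 in the [-1, 1] range expected by grid_sample
    texture    : (B, 3, Ht, Wt) pose-independent high-frequency texture
    """
    # High-frequency layer: warp the static texture into the target pose.
    x_hf = F.grid_sample(texture, warp_field, align_corners=False)
    # Final image is the sum of the two layers.
    return x_lf + x_hf
```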
Training process We use multiple loss functions for training. The main loss function responsible for the realism of the outputs is trained in an adversarial way. We also use a pixelwise loss to preserve source lighting conditions and a perceptual loss to match the source identity in the outputs. Finally, a regularization of the texture mapping adds robustness to the random initialization of the model. Pixelwise and perceptual losses ensure that the predicted images match the ground truth, and are respectively applied to the low- and high-frequency components of the output images. Since the usage of pixelwise losses assumes independence of all pixels in the image, the optimization process leads to blurry images, which is suitable for the low-frequency component of the output. Thus the pixelwise loss is calculated by simply measuring the mean L1 distance between the target image and the low-frequency component: (1/HW) ||x^i(t) − x̂^i_LF(t)||_1. On the contrary, the optimization of the perceptual loss leads to crisper and more realistic images, which we utilize to train the high-frequency component. To calculate the perceptual loss, we use the stop-gradient operator SG, which allows us to prevent the gradient flow into the low-frequency component. The input generated image is, therefore, calculated as follows: x̂^i(t) = SG(x̂^i_LF(t)) + x̂^i_HF(t). Following previous works, our variant of the perceptual loss consists of two components: features evaluated using an ILSVRC (ImageNet) pre-trained VGG19 network, and the VGGFace network, trained for face recognition. If we denote the intermediate features of these networks as f^i_{k,IN}(t) and f^i_{k,face}(t), and their spatial size as H_k × W_k, the objectives can be written, analogously to the pixelwise loss, as mean L1 distances between these features for the target and the generated images. Texture mapping regularization is proposed to improve the stability of the training. In our model, the coordinate space of the texture is learned implicitly, and there are two degrees of freedom that can mutually compensate each other: the position of the face in the texture, and the predicted warping. If, after initial iterations, the major part of the texture is left unused by the model, it can easily compensate for that with a more distorted warping field. This artifact of initialization is not fixed during training, and clearly is not the behavior we need, since we want all of the texture to be used to achieve the maximum effective resolution in the outputs. We address the problem by regularizing the warping in the first iterations to be close to an identity mapping, penalizing its mean deviation from the identity grid. Adversarial loss is optimized by both generators, the embedder, and the discriminator networks. Usually, it resembles a binary classification loss function between real and fake images, which the discriminator is optimized to minimize and the generators to maximize. We follow a large body of previous works and use a hinge loss as a substitute for the original binary cross-entropy loss. We also perform relativistic realism score calculation, following its recent success in tasks such as super-resolution and denoising. Additionally, we use the PatchGAN formulation of adversarial learning. The discriminator is trained only with respect to its adversarial loss L^D_adv, while the generators and the embedder are trained via the adversarial loss L^G_adv and also a feature matching loss L_FM. The latter is introduced for better stability of the training. Texture enhancement To minimize the identity gap, previous work suggested fine-tuning the generator weights on the few-shot training set.
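A compact sketch of how the pixelwise/perceptual split with a stop-gradient might be wired up is given below. The feature extractor stands in for the VGG19/VGGFace networks and loss weights are omitted; this is an illustrative reading of the loss design described above, not the paper's reference code.

```python
import torch
import torch.nn.functional as F

def bilayer_losses(x_target, x_lf, x_hf, feature_extractor):
    """Pixelwise loss on the coarse layer, perceptual loss on the full image.

    The stop-gradient (detach) on the low-frequency layer keeps the perceptual
    loss from back-propagating into the coarse branch, as described above.
    """
    # L1 pixelwise loss: target vs. the low-frequency component only.
    loss_pix = F.l1_loss(x_lf, x_target)

    # Perceptual loss: evaluated on the composed image with the coarse part detached.
    x_gen = x_lf.detach() + x_hf
    feats_gen = feature_extractor(x_gen)      # list of intermediate feature maps
    feats_tgt = feature_extractor(x_target)
    loss_perc = sum(F.l1_loss(fg, ft) for fg, ft in zip(feats_gen, feats_tgt))

    return loss_pix, loss_perc
```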
Texture enhancement
To minimize the identity gap, it has been suggested to fine-tune the generator weights on the few-shot training set. Training on person-specific source data leads to a significant improvement in the realism and identity preservation of the synthesized images, but is computationally expensive. Moreover, when the source data is scarce, as in the one-shot scenario, fine-tuning may lead to over-fitting and performance degradation, which has been observed in previous work. We address both of these problems by using a learned gradient descent (LGD) method to optimize only the synthesized texture X̂^i(s). Optimizing with respect to the texture tensor prevents the model from overfitting, while LGD allows us to perform optimization with respect to computationally expensive objectives by doing forward passes through a pre-trained network.

Specifically, we introduce a lightweight loss function L_upd (we use a sum of squared errors) that measures the distance between a generated image and the ground truth in pixel space, and a texture updating network G_upd that uses the current state of the texture and the gradient of L_upd with respect to the texture to produce an update ΔX̂^i(s). During fine-tuning we perform M update steps, each time measuring the gradients of L_upd with respect to the updated texture. The visualization of the process can be seen in Figure 14. More formally, each update is computed as

X̂^i_{m+1}(s) = X̂^i_m(s) + G_upd(X̂^i_m(s), ∇_{X̂^i_m} L_upd),   m = 0, …, M − 1.

The network G_upd is trained by back-propagation through all M steps. For training, we use the same objective L^G_total that was used during the training of the base model, evaluated using a target frame x^i(t) and the generated frame x̂^i(t). It is important to highlight that L_upd is not used as a training objective for G_upd, but simply guides the updates to the texture. Also, the gradients with respect to this loss are evaluated using the source image, while the objective in Eq. 8 is calculated using the target image, which implies that the network has to produce updates for the whole texture, not just the region "visible" in the source image. Lastly, while we do not propagate any gradients into the generator part of the base model, we keep training the discriminator using the same objective L^D_adv. Even though training the updater network jointly with the base generator is possible, and could lead to better quality (following the success of model-agnostic meta-learning methods), we resort to two-stage training due to memory constraints.
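A minimal sketch of the learned-gradient-descent update loop is given below; the renderer `render` and updater `G_upd` are hypothetical stand-ins, and the concatenation of texture and gradient as the updater input is our assumption.

```python
# Sketch of LGD-style texture enhancement (illustration only).
import torch

def enhance_texture(texture, source_image, render, G_upd, num_steps=4):
    # Make the initial texture a leaf tensor so gradients w.r.t. it can be taken.
    texture = texture.detach().requires_grad_(True)
    for _ in range(num_steps):
        # Lightweight guidance loss L_upd: sum of squared errors vs. the source frame.
        loss_upd = ((render(texture) - source_image) ** 2).sum()
        # Gradient w.r.t. the texture only; create_graph=True lets the updater be
        # trained by back-propagating through all unrolled steps.
        (grad,) = torch.autograd.grad(loss_upd, texture, create_graph=True)
        # The updater maps (current texture, gradient) to an additive update ΔX̂.
        texture = texture + G_upd(torch.cat([texture, grad], dim=1))
    return texture
```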
Segmentation
The presence of a static background leads to a certain degradation of our model for two reasons. Firstly, part of the capacity of the texture and the inference generators has to be spent on modeling the high variety of background patterns. Secondly, and more importantly, the static nature of backgrounds in most training videos biases the warping towards an identity mapping. We have therefore found it advantageous to include background segmentation into our model.

We use a state-of-the-art face and body segmentation model to obtain the ground truth masks. Then, we add the mask prediction output m̂^i(t) to our inference generator alongside its other outputs, and train it via a binary cross-entropy loss L_seg to match the ground truth mask m^i(t). To filter out the training signal related to the background, we have explored multiple options. Simple masking of the gradients that are fed into the generator leads to severe overfitting of the discriminator. We also could not simply apply the ground truth masks to all the images in the dataset, since these masks are so accurate that they produce a sharp border between the foreground and the background, leading to border artifacts that emerge after adversarial training. Instead, we have found that masking the ground truth images that are fed to the discriminator with the predicted masks m̂^i(t) works well. Indeed, these masks are smooth and prevent the discriminator from overfitting to the lack of background or to the sharpness of the border. We do not backpropagate the signal from the discriminator and from the perceptual losses to the generator via the mask pathway (i.e. we apply the stop-gradient/detach operator, SG(m̂^i(t)), before applying the mask). The stop-gradient operator also ensures that the training does not converge to a degenerate state (empty foreground).

Implementation details
All our networks consist of pre-activation residual blocks with LeakyReLU activations. We set the minimum number of features in these blocks to 64 and the maximum to 512. By default, we use half the number of features in the inference generator, but we also evaluate our model with full- and quarter-capacity inference parts, with the results provided in the experiments section. We use batch normalization in all the networks except for the embedder and the texture updater. Inside the texture generator, we pair batch normalization with adaptive SPADE layers. We modify these layers to predict pixelwise scale and bias coefficients using feature maps, which are treated as model parameters, instead of being input from a different network. This allows us to save memory by removing additional networks and intermediate feature maps from the optimization process, and to increase the batch size. Also, following prior work, we predict the weights for all 1×1 convolutions in the network from the embeddings {e^i_k(s)}, which includes the scale and bias mappings in AdaSPADE layers and the skip connections in the residual upsampling blocks. In the inference generator, we use standard adaptive batch normalization layers, but also predict the weights for the skip connections from the embeddings. We perform simultaneous gradient descent on the parameters of the generator networks and the discriminator using Adam with a learning rate of 2×10^-4. We use a weight of 0.5 for the adversarial losses and 10 for all other losses, except for the VGGFace perceptual loss (Eq. 5), which is set to 0.01. The weight of the regularizer (Eq. 6) is multiplicatively reduced by 0.9 every 50 iterations. We train our models on 8 NVIDIA P40 GPUs with a batch size of 48 for the base model and a batch size of 32 for the updater model. We set the unrolling depth M of the updater to 4 and use a sum of squared errors as the lightweight objective. Batch normalization statistics are synchronized across all GPUs during training. During inference they are replaced with "standing" statistics, similar to previous work, which significantly improves the quality of the outputs compared to the usage of running statistics. Spectral normalization is also applied in all linear and convolutional layers of all networks. Please refer to the supplementary material for a detailed description of our model's architecture, as well as a discussion of the training and architectural features that we have adopted.

Experiments
We perform evaluation in multiple scenarios. First, we use the original VoxCeleb2 dataset to compare with state-of-the-art systems. To do that, we annotated this dataset using an off-the-shelf facial landmarks detector.
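For illustration, the sketch below converts per-frame landmarks produced by an off-the-shelf detector into the flat keypoint vector consumed by the inference generator; the detector interface, the 68-point layout, and the normalization to [-1, 1] are our assumptions, not details given in the paper.

```python
# Hypothetical preprocessing sketch: `detect_landmarks` stands in for any
# off-the-shelf facial landmark detector returning a [68, 2] array per frame.
import numpy as np

def keypoints_to_vector(frame: np.ndarray, detect_landmarks) -> np.ndarray:
    h, w = frame.shape[:2]
    kp = detect_landmarks(frame).astype(np.float32)   # [68, 2] pixel coordinates
    kp = kp / np.array([w, h], dtype=np.float32)      # scale to [0, 1]
    kp = kp * 2.0 - 1.0                               # shift to [-1, 1]
    return kp.reshape(-1)                             # flat vector (length 136)
```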
Overall, the dataset contains 140,697 videos of 5,994 different people. We also use a high-quality version of the same dataset, additionally annotated with segmentation masks (obtained using a pre-trained segmentation model), to measure how the performance of our model scales with a dataset of significantly higher quality. We obtained this version by downloading the original videos via the links provided in the VoxCeleb2 dataset and filtering out the ones with low resolution. This dataset is therefore significantly smaller and contains only 14,859 videos of 4,242 people, with each video having at most 250 frames (the first 10 seconds). Lastly, we perform ablation studies on both VoxCeleb2 and VoxCeleb2-HQ, and report on a smartphone-based implementation of the method. For comparisons and ablation studies we show the results qualitatively and also evaluate the following metrics:

- Learned perceptual image patch similarity (LPIPS), which measures the overall similarity of the predicted image to the ground truth.
- Cosine similarity between the embedding vectors of a state-of-the-art face recognition network (CSIM), calculated using the synthesized and the target images. This metric evaluates the identity mismatch.
- Normalized mean error of the head pose in the synthesized image (NME). We use the same network that was used for the annotation of the dataset to evaluate the pose of the synthesized image. We normalize the error, which is the mean Euclidean distance between the predicted and the target points, by the distance between the eyes in the target pose, multiplied by 10.
- Multiply-accumulate operations (MACs), which measure the complexity of each method. We exclude from the evaluation the initialization steps, which are calculated only once per avatar.

The test set in both datasets does not intersect with the train set in terms of videos or identities. For evaluation, we use a subset of 50 test videos with different identities (for VoxCeleb2, this subset is the same as in previous comparisons). The first frame in each sequence is used as the source. Target frames are taken sequentially at 1 FPS. We only discuss the most important results in the main paper; for additional qualitative results and comparisons please refer to the supplementary materials.

Comparison with the state-of-the-art methods
We compare against three state-of-the-art systems: Few-shot Talking Heads, Few-shot Vid-to-Vid and the First Order Motion Model. The first system is a problem-specific model designed for avatar creation. Few-shot Vid-to-Vid is a state-of-the-art video-to-video translation system, which has also been successfully applied to this problem. The First Order Motion Model (FOMM) is a general motion transfer system that does not use precomputed keypoints, but can also be used as an avatar system. We believe that these models are representative of the most recent and successful approaches to one-shot avatar generation. We also acknowledge another related method, but do not compare to it extensively due to the unavailability of its source code, pretrained models or pre-calculated results; a small-scale qualitative comparison is provided in the supplementary materials. Additionally, that method is limited to the usage of 3D keypoints, while our method does not have such a restriction. Lastly, since Few-shot Vid-to-Vid is an autoregressive model, we use the full test video sequence for evaluation (25 FPS) and save the predicted frames at 1 FPS.
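For concreteness, the normalized pose error listed among the metrics above could be computed as in the sketch below; the eye-landmark indices and the reading of "multiplied by 10" as scaling the normalizer are our assumptions.

```python
# Illustrative NME computation (assumptions noted above).
import numpy as np

def nme(pred_kp: np.ndarray, target_kp: np.ndarray,
        left_eye_idx: int, right_eye_idx: int) -> float:
    errors = np.linalg.norm(pred_kp - target_kp, axis=-1)      # per-keypoint error
    interocular = np.linalg.norm(target_kp[left_eye_idx] - target_kp[right_eye_idx])
    return float(errors.mean() / (10.0 * interocular))
```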
For quality metrics, we compare the synthesized images to their targets using perceptual image similarity (LPIPS ↓), the identity preservation metric (CSIM ↑) and the normalized pose error (NME ↓). We highlight the model used for the comparison in Figure 5 with a bold marker. We observe that our model outperforms the competitors in terms of identity preservation (CSIM) and pose matching (NME) in settings where the models' complexities are comparable; to better compare with FOMM, we additionally performed a user study, described below. Importantly, the base models in these approaches have high computational complexity, so for each method we evaluate a family of models by varying the number of parameters. The performance comparison for each family is reported in Figure 4 (with Few-shot Talking Heads excluded from this evaluation, since its performance is much worse than that of the compared methods). Overall, we can see that our model family outperforms the competing methods in terms of pose error and identity preservation, while being, on average, up to an order of magnitude faster.

To better compare with FOMM in terms of image similarity, we performed a user study in which we asked crowd-sourced users which generated image better matches the ground truth. In total, 361 users evaluated 1,600 test pairs of images, with each user seeing on average 21 pairs. In 59.6% of the comparisons, the result of our medium model was preferred to the medium-sized model of FOMM.

Another important note concerns how the complexity was evaluated. For Few-shot Vid-to-Vid we additionally excluded from the evaluation the parts responsible for temporal consistency, since the other compared methods are evaluated frame-by-frame and do not have such overhead. Also, for FOMM we excluded the keypoint extractor network, because this overhead is shared implicitly by all the methods via the usage of precomputed keypoints.

We visualize the results for the medium-sized models of each of the compared methods in Figure 5. Since all methods perform similarly when the source and target images have only marginal differences, we show results where the source and the target have different head poses. In this extrapolation setting, our method has a clear advantage, while the other methods either introduce more artifacts or more blurriness.

Evaluation on high-quality images. Next, we evaluate our method on the high-quality dataset and present the results in Figure 6. Overall, in this case, our method is able to achieve a smaller identity gap compared to the dataset with the background. We also show the decomposition between the texture and the low-frequency component in Figure 7. Lastly, Figure 8 shows our smartphone-based implementation: the medium-sized model ported to the Snapdragon 855 (Adreno 640 GPU, FP16 mode) takes 42 ms per frame, which is sufficient for real-time performance, given that the keypoint tracking is run in parallel, e.g. on a mobile CPU.

Ablation study. Finally, we evaluate the contribution of individual components. First, we evaluate the contribution of the adaptive SPADE layers in the texture generator (by replacing them with adaptive batch normalization and per-pixel biases) and of the adaptive skip-connections in both generators. A model with these features removed makes up our baseline. Lastly, we evaluate the contribution of the updater network. The results can be seen in Table 1 and Figure 9.
We evaluate the baseline approach only on the VoxCeleb2 dataset, while the full models with and without the updater network are evaluated on both the low- and high-quality datasets. Overall, we see a significant contribution of each component with respect to all metrics, which is particularly noticeable in the high-quality scenario. In all ablation comparisons, medium-sized models were used.

Table 1: Ablation studies of our approach. We first evaluate the baseline method without AdaSPADE or adaptive skip connections. Then we add these layers, following prior work, and observe a significant quality improvement. Finally, our updater network provides even more improvement across all metrics, especially noticeable in the high-quality scenario.

Fig. 9: Examples from the ablation study on VoxCeleb2 (first two rows) and VoxCeleb2-HQ (last two rows).

Conclusion
We have proposed a new neural rendering-based system that creates head avatars from a single photograph. Our approach models person appearance by decomposing it into two layers. The first layer is a pose-dependent coarse image that is synthesized by a small neural network. The second layer is defined by a pose-independent texture image that contains high-frequency details and is generated offline. At test time it is warped and added to the coarse image to ensure a high effective resolution of the synthesized head views. We compare our system to analogous state-of-the-art systems in terms of visual quality and speed. The experiments show up to an order of magnitude inference speedup over previous neural head avatar models, while achieving state-of-the-art quality. We also report on a real-time smartphone-based implementation of our system.

A Methods
We start by explaining the training process of our method in more detail. Then, we describe the architecture that we use and how different choices affect the final performance. Finally, we provide a more extended explanation of the mobile inference pipeline that we have adopted.

A.1 Training details
We optimize all networks using Adam with a learning rate of 2×10^-4, β1 = 0.5 and β2 = 0.999. Before testing, we calculate "standing" statistics for all batch normalization layers using 500 mini-batches. Below we provide additional details for the losses that we use.

Texture mapping regularization. Below we provide additional implementation details, as well as a better description of the reasons why this loss is used. The training signal that the texture generator G_tex receives is first warped by the warping field ω^i(t) predicted by the inference generator. Because of this, random initializations of the networks typically lead to suboptimal textures, in which the face of the source person occupies a small fraction of the total area of the texture. As training progresses, this leads to a lower effective resolution of the output image, since the optimization process is unable to escape this bad local optimum. In practice, we address the problem by treating the network's output as a delta to an identity mapping, and by applying a magnitude penalty on that delta in the early iterations. As mentioned in the main paper, the weight of this penalty is multiplicatively reduced to zero during training, so it does not affect the final performance of the model.
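Before the formal definition in the next paragraph, the sketch below illustrates the identity-plus-delta parameterization and the decayed magnitude penalty; the construction of the identity grid and the initial penalty weight are our assumptions (the 0.9-per-50-iterations decay follows the schedule stated in the implementation details).

```python
# Sketch of the identity-delta warping and its early-iteration L1 penalty.
import torch

def warp_and_regularize(delta: torch.Tensor, step: int, base_weight: float = 10.0):
    # delta: [B, H, W, 2] predicted offset; the full warp is identity + delta.
    B, H, W, _ = delta.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W),
                            indexing="ij")
    identity = torch.stack([xs, ys], dim=-1).unsqueeze(0).to(delta)  # grid_sample order
    warp = identity + delta
    # L1 magnitude penalty on the delta, decayed multiplicatively during training.
    weight = base_weight * (0.9 ** (step // 50))
    reg = weight * delta.abs().mean()
    return warp, reg
```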
More formally, we decompose the output warping field into a sum of two terms, ω^i(t) = I + Δω^i(t), where I denotes an identity mapping, and apply an L1 penalty, averaged over the number of spatial positions in the mapping, to the second term:

L^i_reg(t) = (1 / (H·W)) · || Δω^i(t) ||_1.

To understand why this regularization helps, we need to briefly describe an implicit property of the VoxCeleb2 dataset. Since it was obtained using a face detector, a weak form of face alignment is present in the training images, with the face occupying more or less the same region. On the other hand, our regularization allows the gradients to initially flow unperturbed into the texture generator. Therefore, gradients with respect to the texture, averaged over the mini-batch, consistently force the texture to produce the high-frequency component of the mean face in the mini-batch. This allows the face in the texture to fill the same area as it does in the training images, leading to better generalization.

Adversarial loss. Below we elaborate in more detail on the type of adversarial loss that is used. Realism scores are calculated for the real and the fake images in each mini-batch of size N; following the relativistic formulation, the score of each image is computed relative to the average score of the images of the opposite type (real vs. generated) within the mini-batch. Moreover, we use the PatchGAN formulation of adversarial learning. In it, the discriminator outputs a matrix of realism scores instead of a single prediction, and each element of this matrix is treated as a realism score for the corresponding patch of the input image. This formulation is used in a large body of related works and improves the stability of adversarial training. If we denote the size of the score matrix s^i(t) as H_s × W_s, the resulting hinge objectives are averaged over all H_s · W_s positions of this matrix and over the mini-batch. This loss serves as the discriminator objective. For the generator, we also calculate the feature matching loss, which has become a standard component of supervised image-to-image translation models. In this objective, we minimize the distance between the intermediate feature maps of the discriminator, calculated using the corresponding target and generated images. If we denote by f^i_k,D(t) the features at the different spatial resolutions H_k × W_k, the feature matching objective averages the distance between these features over all spatial positions and resolutions.

A.2 Architecture description
All our networks consist of pre-activation residual blocks. The layout is visualized in Figures 10-14. In all networks, except for the inference generator and the updater, we set the minimum number of channels to 64, and increase (decrease) it by a factor of two each time we perform upsampling (downsampling). We pick the first convolution in each block to increase (decrease) the number of channels. The maximum number of channels is set to 512. In the inference generator we set the minimum number of channels to 32 and the maximum to 256. Also, all linear layers (except for the last one) have their dimensionality set to 256. Moreover, as described in Figure 11, in the inference generator we employ more efficient blocks, with upsampling performed after the first convolution and not before it. This allows us to halve the number of MACs per inference.
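A sketch of such a block is given below: the first convolution runs at the lower resolution and upsampling happens after it, which roughly halves the MACs of the main path compared to upsampling first. The normalization choice and the skip layout are our illustrative assumptions.

```python
# Sketch of a pre-activation residual upsampling block that upsamples *after*
# the first convolution, so that convolution runs at the lower resolution.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EfficientUpBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.norm1 = nn.BatchNorm2d(in_ch)
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, padding=1)   # runs at low resolution
        self.norm2 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1)  # runs at high resolution
        self.skip = nn.Conv2d(in_ch, out_ch, 1)               # 1x1 skip projection

    def forward(self, x):
        h = self.conv1(F.leaky_relu(self.norm1(x), 0.2))
        h = F.interpolate(h, scale_factor=2, mode="nearest")
        h = self.conv2(F.leaky_relu(self.norm2(h), 0.2))
        s = self.skip(F.interpolate(x, scale_factor=2, mode="nearest"))
        return h + s
```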
In the embedder network (Figure 12), each block operating at the same resolution reduces the number of channels, similarly to what is done in the generators. In fact, the output number of channels in each block is exactly equal to the input number of channels in the corresponding generator block. We borrowed this scheme from prior work and assume that it is done to bottleneck the embedding tensors, which are used for the prediction of the adaptive parameters at high resolution. This forces the generators to use all their capacity to generate the image bottom-up, instead of using a shortcut between the source and the target at high resolution, which is present in the architecture we borrow from. We do not use batch normalization in the embedder network, because we want it to be trained more slowly compared to the other networks. Otherwise, the whole system overfits to the dataset and the textures become correlated with the source image in terms of head pose. We believe that this is related to the VoxCeleb2 dataset, since in it there is a strong correlation in terms of pose between randomly sampled source and target frames. This implies that the dataset is lacking diversity with respect to head movement, and we believe that our system would perform much better either with a better disentangling mechanism for head pose and identity, which we did not come up with, or with a more diverse dataset.

On the contrary, we find it highly beneficial to use batch normalization in the discriminator (Figure 13). This is less memory efficient compared to the classical scheme, since the "real" and "fake" batches have to be concatenated and fed into the discriminator together. We concatenate these batches to ensure that the first- and second-order statistics inside the discriminator's features are not whitened with respect to the label ("real" or "fake"), which significantly improves the quality of the outputs. We also tried using instance normalization, but found it to be more sensitive to hyperparameters. For example, a configuration that works on the high-quality dataset cannot be transferred to the low-quality dataset without instabilities occurring during adversarial training.

We predict adaptive parameters following a procedure inspired by matrix decomposition. The basic idea is to predict a weight tensor for a convolution via a decomposition of the embedding tensor. In our work, we use the following procedure (taken from prior work) to predict the weights for all 1×1 convolutions and adaptive batch normalization layers in the texture and the inference generators:

- Resize all embedding tensors e^i_k(s), with C_k channels, by nearest upsampling to 32×32 resolution for the texture generator, and 16×16 for the medium-sized inference generator.
- Flatten the resized tensor across its spatial resolution, converting it to a matrix of shape C_k × 1024 for the texture generator, and (1/2)C_k × 512 for the inference generator (the first dimensionality has to match the reduced number of channels in the convolutions of the medium-sized model).
- Three linear layers (with no nonlinearities in between) are then applied, performing the decomposition. The resulting matrix should match the shape of the weights, combined with the biases, of each specific adaptive layer.

These linear layers are trained separately for each adaptive convolution and adaptive batch normalization. Each embedding tensor e^i_k(s) is therefore used to predict all adaptive parameters inside the layers of the k-th block in the texture and inference generators. We do not perform an ablation study with respect to this scheme, since it was used in an already published work on a similar topic.
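The sketch below illustrates this weight-prediction procedure for a single adaptive 1×1 convolution. For simplicity it assumes that the number of rows C_k of the flattened embedding equals the number of output channels, and the widths of the three linear layers are our guesses; it is not the exact decomposition used in the cited scheme.

```python
# Illustrative adaptive 1x1 convolution whose weights are predicted from an embedding.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveConv1x1(nn.Module):
    """Predicts the weight and bias of a 1x1 conv from an embedding tensor e_k of
    shape [C_k, h, w]; assumes C_k equals the number of output channels."""
    def __init__(self, in_ch: int, emb_spatial: int = 32):
        super().__init__()
        self.emb_spatial = emb_spatial
        flat = emb_spatial * emb_spatial                  # e.g. 32 * 32 = 1024
        # Three linear layers with no nonlinearities in between ("decomposition").
        self.decomp = nn.Sequential(
            nn.Linear(flat, flat), nn.Linear(flat, flat), nn.Linear(flat, in_ch + 1))

    def forward(self, x: torch.Tensor, emb: torch.Tensor) -> torch.Tensor:
        # Nearest-neighbour resize of the embedding to a fixed spatial size.
        e = F.interpolate(emb.unsqueeze(0), size=(self.emb_spatial, self.emb_spatial),
                          mode="nearest")
        e = e.squeeze(0).flatten(1)                       # [C_k, emb_spatial ** 2]
        params = self.decomp(e)                           # [C_k, in_ch + 1]
        weight = params[:, :-1].reshape(-1, x.shape[1], 1, 1)  # [C_out, C_in, 1, 1]
        bias = params[:, -1]
        return F.conv2d(x, weight, bias)
```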
Finally, the architecture of the texture enhancer is shown in Figure 14; it is standard for image-to-image translation tasks, and the spatial dimensionality and the number of channels in its bottleneck are equal to 128.

A.3 Mobile inference
As mentioned in the main paper, we train our models using PyTorch and then port them to smartphones with Qualcomm Snapdragon 855 chips. For inference, we use the native Snapdragon Neural Processing Engine (SNPE) SDK, which provides a significant speed-up compared to TF-Lite and PyTorch Mobile. In order to convert the models trained in PyTorch into SNPE-compatible containers, we first use the PyTorch-ONNX parser, as it is simple to obtain an ONNX model directly from PyTorch. However, this does not guarantee that the obtained model can be converted into a mobile-compatible container, since some operations may be unsupported by SNPE. Moreover, there are collisions between different versions of the ONNX and SNPE operation sets, with some versions of the operations being incompatible with each other. We solved this problem by using PyTorch 1.3 and SNPE 1.32, but only for the operations used in our inference generator. This is part of the reason why we had to resort to simple layers, like batch normalizations, convolutions and nonlinearities, in our network. All ported models have spectral normalization removed, and their adaptive parameters fixed and merged into the base layers. In our experiments the target platform is the Adreno 640 GPU, utilized in FP16 mode. We do not observe any noticeable quality degradation from running our model in FP16 (although training in FP16 or mixed-precision settings leads to instabilities and early explosion of the gradients). Since our model includes bilinear sampling from the texture (using a predicted warping field), which is not supported by SNPE, we implement it ourselves as part of the application, executed on the CPU after each inferred frame. A GPU implementation should be possible as well, but is more time-consuming to implement. Our reported mobile timings (42 ms, averaged over 100 runs) do not include the bilinear sampling and the copy operations from GPU to CPU. On the CPU, bilinear sampling takes an additional 2 milliseconds, but for a GPU implementation the timing would be negligible.

B.1 Training details for the state-of-the-art methods.
The First Order Motion Model was trained using the config provided with the official implementation of the model. In order to obtain a family of models, we modify the minimum and maximum number of channels in the generator from the default 64 and 512 to 32 and 256 for the medium model, and 16 and 128 for the small model. For Few-shot Vid-to-Vid, we also used the default config from the official implementation, but with slight modifications. Since we train on a dataset with already-cropped videos, we removed the random crop and scale augmentations in order to avoid a domain gap between training and testing. In our case, these augmentations would lead to black borders appearing on the training images and to suboptimal performance on a test set with no such artifacts. In order to obtain a family of models, we also reduce the minimum and maximum number of channels in the generator from the default 32 and 1024 to 32 and 256 for the medium model and 16 and 128 for the small model.

To calculate the number of multiply-accumulate operations, we used an off-the-shelf tool that evaluates this number for all internal PyTorch modules. This way of calculation, while easy, is not perfect as, for example, it does not account for the number of operations in PyTorch functionals, which may be called inside the model.
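As an illustration of such hook-based counting (with the same caveat about operations invoked through functionals), a minimal MAC counter for Conv2d and Linear modules might look as follows; this is not the specific off-the-shelf tool used for the reported numbers.

```python
# Minimal hook-based MAC counter covering only Conv2d and Linear modules.
import torch
import torch.nn as nn

def count_macs(model: nn.Module, example_input: torch.Tensor) -> int:
    total = 0
    def hook(module, inputs, output):
        nonlocal total
        if isinstance(module, nn.Conv2d):
            out_elems = output.numel() // output.shape[0]          # per sample
            kernel_macs = (module.in_channels // module.groups
                           * module.kernel_size[0] * module.kernel_size[1])
            total += out_elems * kernel_macs
        elif isinstance(module, nn.Linear):
            total += module.in_features * module.out_features
    handles = [m.register_forward_hook(hook) for m in model.modules()
               if isinstance(m, (nn.Conv2d, nn.Linear))]
    with torch.no_grad():
        model(example_input)
    for h in handles:
        h.remove()
    return total
```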
Other forms of complexity evaluation would require a significant refactoring of the competitors' code, which lies outside the scope of our comparison. For our model, we have provided accurate complexity estimates.

B.2 Extended evaluations.
We provide extended quantitative data for our experiments in Table 2, and additional qualitative comparisons in Figures 15-17, which extend the comparisons provided in the main paper. We additionally perform a small comparison with a representative mesh-based avatar system in Figure 18 and compare our method with the MarioNETte system in Figure 20. We also extend our ablation study to highlight the contribution of the texture enhancement network in Figure 19. Finally, we show cross-person reenactment results in Figure 21.

Table 2: Numerical data for the comparison of the models. Some of it duplicates the data available in Figure 5 of the main paper. F-s V2V denotes Few-shot Vid-to-Vid, FOMM denotes First Order Motion Model, and NTH denotes Neural Talking Heads. Here we also include an SSIM evaluation, which we found to correlate with LPIPS and therefore excluded from the main paper. We also provide initialization and inference times (in milliseconds) for the medium-sized models of each method, measured on an NVIDIA P40 GPU. We did not include this measurement in the main paper since we cannot calculate it on the target low-performance devices (due to difficulties with porting the competitor models to the SNPE framework), while evaluation on much more powerful (in terms of FLOPs) desktop GPUs may be an inaccurate way to measure performance on less capable devices. We therefore decided to stick with MACs as our performance metric, which is more common in the literature, but still provide the numbers we obtained on desktop GPUs here. We report median values over a thousand iterations with random inputs.

Fig. 12: Architecture of the embedder. Here we do not use normalization layers. First, we downsample the input images and stickmen to 8×8 resolution. After that, we obtain embeddings for each of the blocks in the texture and the inference generators. Each embedding is a feature map and has the same number of channels as the corresponding block in the texture generator. Therefore, we reduce the number of channels in the final blocks from the maximum of 512 to the minimum of 64 at the end. In the blocks operating at the same resolution, we insert a convolution into the skip connection only when the input and output numbers of channels differ.

Fig. 13: Architecture of the discriminator. We use 5 downsampling blocks and one block operating at the final 8×8 resolution. Additionally, in each block we output features after the second nonlinearity; these features are later used in the feature matching loss. For downsampling, we use average pooling. The architecture of the final block, operating at the same resolution, is similar to the one in the embedder: it is without a convolution in the skip connection, but with batch normalization layers.

Fig. 18: Comparison of our method with a closed-source product, which is representative of the state of the art in real-time one-shot avatar creation based on explicit 3D modelling. The first row represents reenactment results, since the frontal image was used for the initialization of both methods. We can see that our model does a much better job of modelling the face shape and the hair.

Fig. 20: Comparison of our method with the MarioNETte system in a one-shot self-reenactment task.
The results for MarioNETte are taken from the respective paper, as no source code is available. The evaluation of the computational complexity of this system was also beyond our reach, since it would require a re-implementation from scratch. However, since it utilizes an encoder-decoder architecture with a large number of channels, it can be assumed to have a complexity similar to the largest variant of FOMM. For our method, we use a medium-sized model. Lastly, the evaluation for MarioNETte is done on the same videos as training (on hold-out frames), while our method is applied without any fine-tuning.

Fig. 21: Results for cross-person reenactment. While our method preserves the texture of the original image, leakage of the driving identity remains noticeable.
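As a final implementation note, the "standing" batch-normalization statistics mentioned in Sec. A.1 (accumulated over 500 mini-batches before testing) can be obtained, for example, by switching PyTorch's BatchNorm layers to cumulative averaging, as in the sketch below; this is our illustration rather than the authors' code.

```python
# Sketch: accumulate "standing" BatchNorm statistics before evaluation.
import torch
import torch.nn as nn

@torch.no_grad()
def compute_standing_stats(model: nn.Module, loader, num_batches: int = 500):
    # Switch every BatchNorm layer to cumulative averaging and reset its buffers.
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.reset_running_stats()
            m.momentum = None          # None => cumulative moving average in PyTorch
    model.train()                      # BN uses batch stats and updates its buffers
    for i, batch in enumerate(loader):
        if i >= num_batches:
            break
        model(batch)
    model.eval()                       # inference now uses the accumulated statistics
```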
Some of Madoff's investors are now being investigated. The Bernard Madoff investigation is expanding as investigators consider whether there was any wrongdoing by any of the jailed Ponzi king's investors who should have questioned his improbable returns, according to a report. Federal officials named three of at least eight investors now under investigation, sources told the Wall Street Journal. Jeffry Picower, Stanley Chais and Carl Shapiro are among those whose financial records are being probed. Picower and Chais are philanthropists named in a civil complaint filed by the trustee overseeing the liquidation of Madoff's assets, and Shapiro, an entrepreneur, is close friends with Madoff, the article said. The complaint alleges that all three men must have known or at least should have known they were being paid out with fraudulent money. Five others named in the complaint have not been publicly named. Representatives for Chais and Shapiro denied having any knowledge of the fraud, and an attorney for Picower told the Journal that his client took no part in the scam and lost billions of dollars. Madoff, 71, bilked thousands of investors out of $65 billion by using money from new clients to pay out existing ones in the classic Ponzi scheme fashion. Only about $1 billion has been found, and it could take years for the rest of it to be located. Madoff pleaded guilty in March and was ordered directly to jail to await next month's sentencing. He could get 150 years in prison.
Control of Bagworms on Chinese Junipers, 1989 Bagworm control was evaluated on Chinese junipers. The plants were 4 to 5 ft high and 10 to 15 ft wide and were located in Stillwater, OK. Spray applications were made on 29 Jul and were replicated 4 times in a randomized complete block design. The plants were sprayed until runoff with a Hardi RY-15 backpack sprayer at 60 psi. The temperature and wind velocity were 86°F and 8-10 mph, respectively. Mortality was determined by examining 10 larvae from each replication at 3, 7 and 14 DAT. The larval cases ranged from 1 to 1.5 inches in length.
Birth rates of different types of neutron star and possible evolutions of these objects

We estimate the spatial densities of different types of neutron star near the Sun. It is shown that the distances of dim isolated thermal neutron stars must be on average about 300-400 pc. The combined birth rate of these sources together with radio pulsars and dim radio quiet neutron stars can be a little more than the supernova rate, as some of the dim isolated thermal neutron stars can be formed from dim radio quiet neutron stars and radio pulsars. Some of these sources must have relations with anomalous X-ray pulsars and soft gamma repeaters. In order to understand the locations of different types of neutron star on the P-Ṗ diagram, it is also necessary to take into account the differences in the masses and the structures of neutron stars.

Introduction
The existence of single neutron stars with different physical properties is very well known, but the amount of data about types of neutron star other than radio pulsars is small. Actually, the change in the beaming factor, together with its value, and the change in the angle between the magnetic field and the rotation axes are not well known. It is also not clear how the relation between the characteristic time (τ) and the real age changes in time. Therefore, the birth rate, the initial periods and the values of the real magnetic field are not well known. So, it is difficult to understand what the evolutionary tracks of pulsars on the P-Ṗ diagram must be and where these tracks end. As there are not many available observational data for the other types of neutron star, and since they show some exotic phenomena such as gamma-ray bursts, it is difficult to understand their nature. Below, we analyse the birth rates of different types of neutron star, their locations on the P-Ṗ diagram and the possible evolutions of these objects.

2 Analysis of the data about the birth rates of DITNSs and the supernova rate around the Sun
According to Yakovlev et al. and Kaminker et al., neutron stars cool down to T = 4.6×10^5 K (or kT = 40 eV) in t ≤ 10^6 yr. According to Haberl (2003, and references therein), there are 7 X-ray dim isolated thermal neutron stars (DITNSs) located within 120 pc around the Sun, and only one DITNS, namely RX J1836.2+5925, is located at a distance of 400 pc. All of these 8 radio-quiet objects have T > 4.6×10^5 K. On the other hand, there are only 5 radio pulsars with characteristic times smaller than 10^7 yr in the cylindrical volume around the Sun with a radius of 400 pc and height 2|z| = 400 pc, where |z| is the distance from the Galactic plane. Among these 5 pulsars, Vela has τ ∼ 10^4 yr and the other 4 pulsars have 3×10^6 < τ < 5×10^6 yr. The real ages of 1-2 of these 4 radio pulsars may be smaller than 10^6 yr. We have included the Geminga pulsar (B0633+1748) in Table 1 to compare it with different types of neutron star, since this pulsar has properties in between those of DITNSs and of young radio pulsars. In order to make a reliable estimation of the number density of the sources in each considered volume, it is necessary to consider how these objects move in the Galaxy. Practically, for pulsars with such ages the space velocity must not decrease in the gravitational field of the Galaxy. In order to demonstrate this in a simple and reliable way, we can use the observational data about the scale height and the average peculiar velocity of old stars near the Sun.
The average values of |z| and |V z | for population II stars which belong to the old disk, the intermediate and the halo are, respectively, 400 pc and 16 km/s, 700 pc and 25 km/s, and 2000 pc and 75 km/s (Allen 1991). Ages of these stars are close to 10 10 years. They are in dynamical equilibrium and their ages are large enough to make many oscillations in the gravitational field of the Galaxy. Let us consider, in the first approximation, their oscillations in a homogeneous field in the direction perpendicular to the Galactic plane. In this case and the period of oscillation is where A is a constant which is related to the gravitational field intensity near the Sun. If we use the average values of |z| and |V z | for different subgroups of population II stars as written above, then we find values of A very close to each other showing the reliability of the approximation. Therefore, we can adopt a value of A=5.210 −5 cm 1/2 s −1, which is the average value for the 3 subclasses of population II objects, for all objects located up to about 2000 pc away from the Galactic plane and we can also use this value for pulsars. If we put the average value of |V z |=170-200 km/s for pulsars in eqn., then we find T=(1.2-1.7)10 9 yr. As we see the age values of young radio pulsars are very small compared to the average period of pulsar oscillation. So, we can adopt that pulsars with ages 10 6 -10 7 yr move practically with constant velocity which they gain at birth. Since the progenitors of neutron stars have scale height of about 60 pc, most of the neutron stars are born close to the Galactic plane. Therefore, they can go 200 pc away from the plane in about 10 6 yr. As the Sun is located very close to the Galactic plane, considering the value of z of the Sun in the analysis does not change the results and so can be neglected. There are 23 supernova remnants (SNRs) within about 3.2 kpc around the Sun with surface brightness >10 −21 Wm −2 Hz −1 sr −1 and the ages of these SNRs are not greater than 10 4 yr (Green 2001;). So, the number of neutron stars which were born in the last 10 6 yr within the region of 400 pc around the Sun must be about 11. Only 2 or 3 of these 11 neutron stars can go more than 200 pc away from the Galactic plane. Therefore, the considered region may contain about 8-9 neutron stars. But the birth rate of radio pulsars is up to 3-4 times smaller than the supernova explosion rate in our galaxy (). So, we can expect only 3 radio pulsars with an age up to 10 6 yr located in this region and this is in accordance with the data mentioned above. According to (see also the references therein), surface temperatures of DITNSs are 40-95 eV and their ages must not be more than 10 6 yr ) as they are in the cooling stage. Guseinov et al. (2003a) include most of these sources (for which there exist considerably more information) in the list of isolated radio quiet neutron stars which were observed in X-ray band. We have included in Table 1 only a few very important data taken from the tables given in Guseinov et al. (2003a) together with some new data. In some cases, the distance values of DITNSs are about 2-3 times larger than the ones given in and in some other papers (see also the references in a and. Is it acceptable to adopt such smaller values of distance as given by for these objects? The statistics about the SNRs and the radio pulsars in the considered region are very poor, but the number density of DITNSs for the case of small distances ) is very large so that there exists a contradiction. 
If the data about the distances and ages of DITNSs were reliable, then their birth rate would turn out to be about 4 times more than the supernova explosion rate. Since the measured space velocities of all types of neutron star are very large compared to the space velocities of O and B-type stars, there is no doubt about the origin of neutron star formation which is due to the collapse of the progenitor star together with supernova explosion. For example, RX J0720.4-3125 at a distance of 200 pc (which may actually be ∼300 pc) has a tangential velocity of about 100 km/s calculated from the proper motion =97±12 mas/yr. PSR J0538+2817 has proper motion =67 mas/yr and has a transverse velocity in the interval 255-645 km/s at a distance of 1.2 kpc (). Therefore, the birth rate of neutron stars can not exceed the rate of supernova explosion. How can we explain the contradiction between the neutron star birth rate and the supernova explosion rate? Note that does not assume all the DITNSs to be only cooling neutron stars but also discusses other possibilities, for example the accretion from interstellar gas. But as a rule, today the cooling origin of the X-ray radiation is considered and this is reliable. In order to solve the problem about the difference in the birth rates it is necessary to adopt either larger distance values or longer lifetimes for DITNSs. The distances of DITNSs Since the theory of cooling of neutron stars has been well developed and all the DITNSs have T>4.610 5 K, we can say that these neutron stars have ages less than 10 6 yr (a,b;a,b;Pavlov & Zavlin 2003;). As the ages of these objects are small, their number density must be small. Therefore, it is better to adopt larger values of distance for these neutron stars. Before beginning to discuss how to adopt distance values, it is necessary to analyse the data of DITNSs. These data vary a lot from one observation to another. The most recent data about DITNSs are given in and Guseinov et al. (2003a). For all DITNSs there exist data about their temperatures and it is known that they have approximately blackbody radiation. In this work, we have used the temperature values given in Pavlov et al. (2002b) and which are more in accordance with the other data that they are more reliable. The cooling curves of neutron stars, which are obtained from the fit of the data of different pulsars on the surface temperature versus age diagram, are given in various articles (see for example ). Since the general form of the cooling curves given by different authors is approximately the same in all the works, we will use the data given in Yakovlev et al.. DITNS RX J1605.3+3249 ) is located in a region of sky (l=53 o, b=48 o ) where it is easier to determine the temperature with small uncertainty. There are 2 different distance estimations for this source, 0.1 kpc ) and 0.3 kpc ). We may adopt reliable distance values for any source which has a spectrum close to the blackbody using luminosity values. The temperatures of DITNS RX J1605.3+3249 and of radio pulsars J1057-5226 and J0659+1414 which have similar soft X-ray spectrums are 0.092, 0.070 and ≤0.092 keV, respectively. Therefore, the luminosity of RX J1605.3+3249 must not be smaller than the luminosity of these radio pulsars (Becker & Aschenbach 2002;b), i.e. it must not be less than about 10 33 erg/s. The luminosity of RX J1605.3+3249 at 0.1 kpc is 1.110 31 erg/s, so that, its luminosity at 0.3 kpc must be about 10 32 erg/s. 
On the other hand, Geminga pulsar (B0633+17), which has temperature of about 0.045 keV, has L x =1.0510 31 erg/s in 0.1-2.4 keV band (Becker & Trumper 1997). This also suggests to adopt a larger L x value for RX J1605.3+3249. So, we can adopt a distance value of 0.3 kpc or even a larger value which is more reliable. Following the same path, we can adopt distance values for other DITNSs; the distances of J0420.0-5022, J0720.4-3125, J0806.4-4123, J1308.8+2127, and J214303.7+065419 must be about 3-4 times larger than the distance values given in. All the new distance values and other reliable data are represented in Table 1. 4 Where must DITNSs be located on the P-P diagram? As seen from Table 2, the X-ray luminosities of all the 8 DITNSs and Geminga are in the interval 10 30 -1.710 32 erg/s. Among these sources, Geminga has the lowest luminosity (T ef f value of Geminga is also very small, see Table 1) and the value of Geminga is 3.510 5 yr. Taking these data into consideration, the ages of DITNSs can be adopted as 10 5 -10 6 yr in accordance with the age values calculated from the cooling models. Moreover, these sources are nearby objects and there is not any pulsar wind nebula around them nor any SNR to which they are connected; this also shows that the ages of these objects must be greater than 10 5 yr. On the other hand, pulsar wind nebula is present around the neutron stars with the rate of rotational energy loss>510 35 erg/s and with L x (2-10 keV)>510 32 erg/s (b). Naturally, it may be possible to observe pulsars with smaller values of and L x (2-10 keV) which are located closer to the Sun. Taking these facts into account, we can assume that values of DITNSs must be less than about 310 35 erg/s (constant=310 35 erg/s line is shown on the P- diagram, see Figure 1). It is well known that the difference between and the real age can be significant for very young pulsars (Lyne & Graham-Smith 1998). On the other hand, for single-born old pulsars must be approximately equal to the real age if the evolution takes place under the condition B=constant. But none of the DITNSs is connected to a SNR, so that, DITNSs must be located in the belt between =310 4 yr and =10 6 yr lines on the P- diagram if the condition B=constant is satisfied. For 5 of the 9 DITNSs (including Geminga) represented in Table 1, the spin periods (P) have been measured. Four of these objects have spin periods greater than 8 s and Geminga pulsar has P=0.237 s. As known, the period values of anomalous X-ray pulsars (AXPs) and soft gamma repeaters (SGRs) are greater than 5 s (see for example Mereghetti 2001;a and the references therein). The small value of=5.410 −13 s/s belongs to AXP 1E2259+586 which has P=6.98 s. Therefore, all single neutron stars with P>10 s must be related to AXPs and SGRs. The P=10 s line is displayed in Figure 1 to show the two separate regions in which DITNSs can be located. The DITNSs with <10 6 yr may have values of P>10 s (see Table 1) and this is possible if the values of are very large. Therefore, these objects may be the evolutionary continuations of SGRs and AXPs. Naturally, their birth rates should be in agreement with each other for this assumption to be true. The locations of pulsars Geminga, RX J0720.4-3125 and RX J1308.8+2127 on the P- diagram are shown in Figure 1. From the position of Geminga pulsar it is seen that this pulsar evolves similar to the radio pulsars with B=10 12 -10 13 G. 
The position of pulsar RX J0720.4-3125 is not within our chosen interval of =10 4 -10 6 yr, but as this pulsar has a high value of kT ef f (see Table 1) its real age (according to the cooling models) must be smaller than its value. So, there may be magnetic field decay or some other reason for this pulsar (i.e. n>3, where n is the braking index). Pulsar RX J1308.8+2127 is located in the SGR/AXP region on the P- diagram so that this pulsar seems to have a relation with the SGR/AXP class of neutron stars. Pulsars RX J0806.4-4123 and RX J0420.0-5022 with ages <10 6 yr have large values of P. Although their values are not known, they must be located on the upper part of the P- diagram and their positions must not be lower than the position of J0720.4-3125, if the P values are correct. Birth rates of different types of neutron star near the Sun In section 2, we have mentioned that the birth rate for all the types of neutron star, in other words the supernova rate, must be about 11 in 10 6 yr in the region up to 400 pc from the Sun. In the same region, the birth rate of radio pulsars can be about 3-4 in 10 6 yr. In Figure 1, we have plotted all the 9 radio pulsars with ≤410 4 yr which are connected to SNRs and located at distances up to 3.5 kpc (c). There are also 2 other radio pulsars, J1048-5832 and J1837-0604, in this region with such values of but without any connection with SNRs (b). From these data it follows that the birth rate of radio pulsars is 3.6 in 10 6 yr in the region up to 400 pc. It is necessary to take into consideration that for such young radio pulsars the beaming factor is close to 1 and the influence of the luminosity function on the estimations is small. Note that the searches of pulsars near the Sun in the central regions of SNR shells and the searches of pulsar wind nebulae are considerably better than the searches of pulsars under the surveys. Also note that in some cases pulsars have been found after observing point X-ray sources in the central parts of SNRs. Therefore, we can adopt that the birth rate in the region up to 400 pc from the Sun is about 3-4 radio pulsars in 10 6 yr. It is necessary to take into account that in the region with distance up to 3.5 kpc from the Sun, there are also 6 dim radio quiet neutron stars (DRQNSs) which are connected to SNRs (Table 1). The locations of 2 of these objects, 1E1207.4-5209 and RXJ0002+6246, are shown in Figure 1. These objects have considerably large values of compared to the ages of the SNRs in which they are located, so that, they have different evolutionary tracks compared to other radio pulsars which are connected to SNRs. If we assume that all DRQNSs are also radio pulsars with 10 12 <B<10 14 G, then they must have low radio luminosities and/or the direction of their radio radiation does not pass through the line of sight. On the other hand, all of them may have large P and values and significantly different evolutionary tracks compared to other pulsars with the same magnetic field, because there exist some other important differences between DRQNSs and most of the radio pulsars. Practically, all the radio pulsars which are connected to SNRs and which have similar ages as DRQNSs have pulsar wind nebula (b). None of the DRQNSs has such property. Therefore, these DRQNSs have<310 35 erg/s. 
The positions of 1E1207.4-5209 and RX J0002+6246 on the P- diagram require magnetic field decay or some additional ideas for the pulsar models (in Fig.1 the location of RX J0002+6246 has been found using the condition = age of the SNR). Therefore, the number of SNRs which contain ordinary and other types of pulsars with similar properties for d≤3.5 kpc is 17. By this approach, we give the upper limit for pulsar birth rate in the region up to 400 pc which is 5.5 in 10 6 yr. In section 3, we have adopted up to 3-4 times larger distance values for DITNSs and this gives us the possibility to estimate the birth rate of this type of neutron star as 9 in 10 6 yr in the same region with a radius of 400 pc. These sources have P>0.1 s,<310 35 erg/s and, according to the cooling theories, ages between 310 4 -10 6 yr that they must be in later stages of the evolution of single neutron stars with initial magnetic field B>10 12 G (see the locations of these sources in Figure 1). But how can we explain such a large birth rate for these sources which is comparable with the supernova rate? First, note that the statistical data are poor. Second, the actual ages of the SNRs may be not up to 410 4 yr but up to 310 4 yr (see Table 1). On the other hand, the rate of supernova must be a little more if we take into account the SNRs which have low surface brightness during their evolution. In this case, the birth rate of DITNSs must roughly be equal to the birth rate of radio pulsars and DRQNSs together. As seen from Figure 1, there exist 5 single radio pulsars which have been detected in X-ray band and have 10 5 < <610 5 yr and 510 11 <B<1.110 12 G. They are located in the region up to 2 kpc from the Sun and 2 of them, J0659+1414 and J0538+2817, are most probably connected to S type SNRs (;b and the references therein). If we also consider that 2 such pulsars may go far away from the Galactic plane that they can be missed in the surveys, the total number of the pulsars in the considered volume turns out to be 7. Therefore, the birth rate of this type of pulsar in the region with d≤400 pc is not more than 0.6 in 10 6 yr. From the estimations done in this section, we see that approximately 60% of the SNRs with surface brightness >10 −21 Wm −2 Hz −1 sr −1 are connected to normal pulsars and DRQNSs. The rate of birth for DITNSs is also approximately equal to 60% of the rate of supernova explosion. Therefore, the neutron stars with ages approximately <510 5 yr, which show themselves as radio pulsar or DRQNS in SNRs, may mainly transform to DITNSs. The numbers of radio pulsars with effective values of magnetic field B≥10 13 G and B≥310 12 G which have <10 6 yr and d≤3.5 kpc are 5 and 32, respectively (ATNF pulsar catalogue 2003; ). From these data, the birth rates of radio pulsars with B≥10 13 G and B≥310 12 G located up to 400 pc from the Sun must be considerably more than 0.06 and 0.4, respectively, in 10 6 yr, because the values may be several times larger than the real ages. In the region up to 8 kpc from the Sun, there are 4 AXPs and one SGR and the ages of these objects must not be larger than 510 4 yr (Mereghetti 2001;a). Therefore, the birth rate of AXPs and SGRs in the considered cylindrical volume with the radius of 400 pc in 10 6 yr is not less than 0.15. This is about 60 times smaller than the supernova rate in the same volume and the time interval, but it approximately coincides with the birth rate of radio pulsars with effective value of B≥10 13 G. 
As seen from Figure 1, only 3 pulsars, J1740-3015, J1918+1444 and J1913+0446 (in the order of increasing value of P), are located in the volume with a radius of 3.5 kpc and with ≤10 5 yr. Discussion and Conclusions It is known that, different types of neutron star which have different properties are born as a result of supernova explosion: radio pulsars, DRQNSs, DITNSs, AXPs and SGRs. There exist PWN and often SNR shell around very young single radio and X-ray pulsars which have>510 35 erg/s (Guseinov et al 2003b). There may exist only the shell around pulsars wit E<310 35 erg/s, DRQNSs and AXPs the ages of which are less than 10 5 yr. No shell nor PWN has been found around some of the very young pulsars (e.g. J1702-4310 and J1048-5832) and SGRs. Absence of the shell or PWN around DITNSs must be considered normal because they have considerably large ages. In section 4, we have adopted ∼3-4 times larger distance values, compared to the values usually adopted, for most of the DITNSs and this gives the possibility to decrease their birth rate down to the sum of the birth rates of radio pulsars and DRQNSs. Birth rate of these 3 types of neutron star together is approximately equal to the rate of supernova. The combined birth rate of these 3 types of neutron star may be more than the supernova rate, because some of the DITNSs may be formed as a result of the evolution of DRQNSs and radio pulsars. Birth rate of AXPs and SGRs, which belong to the same class of objects, is about 60 times smaller than the supernova rate and it is about same as the birth rate of radio pulsars with effective values of B≥10 13 G. As seen from Table 1, the period values of 4 DITNSs are very large, though their ages are smaller than 10 6 yr. From this situation, there arises a possibility of a relation between some of the DITNSs and AXPs/SGRs. Naturally, we must take into account the position of each DITNS on the P- diagram to show the relation between some of these objects and AXPs/SGRs. Existence of radio pulsars with n<3 and with real ages smaller than (for young pulsars) show that the condition B=constant is not satisfactory in all cases and this is well seen in Figure 1. Most of the pulsars with 10 5 < <10 7 yr are not in the belt B=10 12 -10 13 G where most pulsars are born in; often, there occurs magnetic field decay. But the evolutions of AXPs/SGRs and some of the DITNSs according to the field decay approach (the magnetar model) lead to bimodality in the number of neutron stars versus the magnetic field distribution. On the other hand, the time scale of the magnetic field decay must be very short. This shows that the large effective B values of these objects and the shape of their evolutionary tracks must be related mainly to the masses of and the density distributions in the neutron stars and also to the activity of the neutron star. This is also necessary to understand the different positions of radio pulsars which are connected to SNRs and of DRQNSs on the P- diagram despite the fact that they have similar ages. If the evolution under the condition B=constant were true, then they would be located along =constant belt, but not along the constant magnetic field belt. We think that there is a possibility to get rid of these difficulties and to understand the large X-ray luminosities and also the bursts of AXPs/SGRs. It is necessary to assume the birth of neutron stars with masses about half of the maximum mass values found from the given equations of state and rotational moment. 
In principle, it must be easy to identify such smaller mass neutron stars as they are far from hydrodynamical equilibrium. They must have an ellipsoidal shape due to rotation and possibly they do not rotate as a rigid body. In this case, the young pulsar may especially demonstrate itself when the angle between the magnetic field and the rotation axes is close to 90°. In such a case, a considerably larger effective value of the magnetic field can be produced as compared to the real magnetic field.
Figure 1: Period versus period derivative diagram for different types of pulsar. The '+' signs denote the radio pulsars with d ≤ 3.5 kpc which are connected to SNRs. The 'X' signs show the positions of the radio pulsars with d ≤ 3.5 kpc and 10^5 < τ < 2×10^7 yr which have been detected in X-rays. The locations of 3 radio pulsars which have d ≤ 3.5 kpc and τ < 10^5 yr are shown with 'circles' to make a comparison between the birth rates (see text). DITNSs are represented with 'stars' and DRQNSs are displayed with 'empty squares'. The 'filled squares' show the positions of all AXPs/SGRs in the Galaxy. Names of DITNSs, DRQNSs, 2 of the AXPs, and some of the radio pulsars are written. Constant lines of B = 10^11–10^15 G, τ = 10^3–10^9 yr, and Ė = 10^29, 10^32, 10^35, 3×10^35 and 10^38 erg/s are shown. The P = 10 s line is also included (see text).
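For reference, the constant-B, constant-τ, and constant-Ė lines in Figure 1 follow from the standard magnetic-dipole spin-down relations. The sketch below uses the conventional assumptions (moment of inertia I = 10^45 g cm² and the usual vacuum-dipole field estimate); it is not taken from the paper itself.

```python
# Sketch of the standard spin-down relations that define the constant lines in
# the P-Pdot diagram (characteristic age tau, dipole field B, spin-down power Edot).
# Conventional assumptions: I = 1e45 g cm^2 and B ~ 3.2e19 * sqrt(P * Pdot) G.
import math

I_MOMENT = 1e45          # g cm^2
SEC_PER_YR = 3.156e7

def characteristic_age_yr(p, pdot):
    return p / (2.0 * pdot) / SEC_PER_YR

def dipole_field_gauss(p, pdot):
    return 3.2e19 * math.sqrt(p * pdot)

def spin_down_power(p, pdot):
    return 4.0 * math.pi**2 * I_MOMENT * pdot / p**3   # erg/s

# Example: a Crab-like pulsar with P = 0.033 s and Pdot = 4.2e-13 s/s
p, pdot = 0.033, 4.2e-13
print(characteristic_age_yr(p, pdot))   # ~1.2e3 yr
print(dipole_field_gauss(p, pdot))      # ~3.8e12 G
print(spin_down_power(p, pdot))         # ~4.6e38 erg/s
```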
The risk of disease transmission resulting from the incorrect use or disposal of syringes is a serious problem around the world. Diseases such as HIV (Human Immunodeficiency Virus) and Hepatitis B, along with other blood-borne diseases, are readily transmitted between persons when a contaminated needle tip comes into contact with and penetrates the skin of a third party. Protocols exist in hospitals and medical facilities which dictate that a used syringe must be disposed of in a sharps waste unit immediately after an injection has been given. However, the risk still exists of a medical practitioner or other person being injured by the needle tip in a needle stick injury during the disposal of the syringe. The problem of correct syringe disposal is particularly prevalent amongst intravenous drug users, who commonly dispose of syringes without paying heed to standard disposal protocols. The discarding of syringes in public places puts the population at risk of needle stick injuries. The spread of blood-borne diseases through the re-use of syringes is a significant problem amongst intravenous drug users. When a syringe is refilled after an injection and subsequently used by another person without adequate sterilisation, there is a serious risk of any diseases carried by the first person being transmitted to the second person. A retractable syringe has been proposed in which a metallic clip is affixed to the head of the syringe plunger. As the plunger is pushed within the barrel during an injection stroke, the metallic clip frictionally engages with an internal wall of a needle hub, causing the head of the plunger to lock within the needle hub. A subsequent withdrawal of the plunger causes the needle hub, and the needle tip, to retract within the body of the barrel, preventing a user or other person from accidentally coming into contact with the needle tip. However, a problem associated with such retractable syringes is that they are typically more expensive to manufacture than conventional, non-retractable syringes. A further problem is that the metallic clip is known to occasionally not adequately engage with the needle hub. Accordingly, the retraction mechanism may not operate in all syringes from a given batch. Clearly this is unfavourable in medical applications, where syringe malfunction is not acceptable and places the user at risk.
Sourcing in China: The International Purchasing Office Solution China possesses one of the most dynamic economies in the world. Many companies, enticed by the opportunities offered (not only cost-related ones), have decided to source from this market. In China's social, cultural, and legal context, so different from Western ones, they are likely to encounter numerous obstacles when creating and managing a supply flow from China. Creating an International Purchasing Office (IPO) is becoming one of the solutions most frequently adopted by Western companies to manage international sourcing activities. Despite the increasing importance of this solution, there are still few studies published on this topic. The present study attempts to fill the gap in the scientific literature, defining the solutions for the creation and management of an International Purchasing Office (IPO) in China. Moreover, the study attempts to describe the reasons for the creation of an IPO in China, the main activities that can be delegated to it, and adoptable organizational solutions.
Location-based apps have been booming lately, but Google's My Maps Editor is one of the best. A free app that allows users to create and share personalized maps, My Maps Editor is intuitive, smooth, and useful. Upon opening the app, you see several options that allow you to create a new map, give a map description, or have your map show up in search results. A map of your general location appears next, with a plus icon and a list icon on the right. Tapping the plus icon gives options to mark your location or to add a photo, address, marker, line, or shape. I had little trouble navigating the app, but I needed a bit of time adjusting to the Create a Line and Create a Shape features. When creating a map for a fictitious pub crawl through San Francisco, I found it easy to add markers at the starting point, to draw lines from location to location, and to add details for each marker. I could see this app as being particularly useful for road-trip planning, as it allows you to add information (such as the details for late-night restaurants, for instance) along a route. Or, you could map out local attractions for visiting friends. You can even add a photo, though I should note that the one difficulty I encountered with this app was when I tried to add a photo to a marker--none of mine would load. Other aspects of this app help to make it indispensable: My Maps Editor syncs automatically with the My Maps tab on Google Maps, which makes it easy to take your desktop maps along with you. The app also syncs automatically to your Web version of My Maps so that you can update your maps along your route.
Optimal motion for obstacle stepping in a hybrid wheeled-legged hexapod This paper presents a methodology for obtaining optimal motions for the leg of a hybrid wheeled-legged hexapod. The aim of the approach is to compare motion strategies for a leg when it has to step over an obstacle. The study is numerically solved by referring to a prototype of a hexapod built at LARM in the Cassino University. The motion planning procedure is a combination of a quick random search algorithm (Rapidly Exploring Random Trees) together with an optimisation method (Genetic Algorithm).
Immigration of Ethiopians with typhoid fever to Israel: apparent lack of influence on the local population. The epidemiology of typhoid fever in Western countries may be affected by immigration from developing countries. We studied the immigration of Ethiopian Jews to Israel to find the effects of an influx of many individuals infected with typhoid into an area with a low incidence of the disease. Typhoid fever affected 204 Israelis and 121 (1.1%) of 10,654 Ethiopian immigrants during the period of 1984-1985. Of those Ethiopian cases, 107 occurred during a 3-month period. During the 5 months following that 3-month period, there was no increase in the number of cases of typhoid among Israelis. Although after that time there was a local waterborne outbreak of typhoid that affected 83 Israelis, no Ethiopians resided in the area where the outbreak occurred; therefore, we concluded that these 83 cases of typhoid fever were not related to the immigration of Ethiopians into Israel. In fact, if those 83 cases were excluded from the statistical analysis, there was no increase in the occurrence of typhoid during the 2-year period studied. Therefore, the immigration of many people with typhoid into an area of low incidence does not necessarily confer a risk of infection to the local population.
This well presented, spacious and light five bedroom detached period house (built circa 1920) on popular Burdon Lane in Cheam measures almost 3,300 sq ft and comes with a separate detached one bedroom annex. The main house offers superb living space with several reception rooms including a grand entrance hall with wood panelling which is used as the dining room, a sitting room and a family room. Also on the ground floor is a study, kitchen, breakfast room and a wet room. The bedrooms and bathrooms are all located on the upper floor, with the master suite containing a dressing room and lovely ensuite bathroom with separate shower. On the ground floor, several sets of doors open onto a patio with steps down onto a stunning south-west-facing mature garden with a substantial lawn, perfect for alfresco dining, entertaining and enjoying the sun throughout the day and evening. The garden is incredibly private and secluded, and to the right of it is the annex measuring 920 sq ft, which has a kitchen, utility room, sitting room and conservatory/sun room on the ground floor, and a bedroom and bathroom on the upper floor. With its own courtyard and room to park outside, the annex would make the ideal residence for an additional family member, a teenager who wants their own space, or a guest. The house has lots of charm, a bright, neutral décor, and tremendous scope for extending at the back and into the loft. The current owner has previously had planning permission to make these changes (now lapsed but which he's happy to show and discuss), making the house the ideal long term family home for any discerning buyer wanting to put down roots in the area. Cheam is an historic and charming village with the site of the Tudor Nonsuch Palace and Nonsuch Mansion at its heart. Epsom, Banstead, Kingston and Wimbledon are all within easy driving distance. The village itself is beside the extensive Nonsuch Park and has a number of local shops, restaurants and coffee houses as well as a mainline station serving London Victoria (approx. 30 mins.) and London Bridge during peak times (approx. 37 mins). There are a number of highly sought-after schools in the area, most notably Nonsuch High School, Sutton Grammar, Wallington County Grammar, Wilson's Grammar, Wallington Girls Grammar and independent schools including Whitgift, Epsom College, Aberdour School and Chinthurst in nearby Tadworth. Excellent primary schools are also nearby including Avenue Primary School, St Dunstan's and Cuddington Croft. Also nearby are both Banstead Down and Cuddington Park golf clubs, perfect for the avid golfer.
Tech-world icon Steve Wozniak is no stranger to seeing Silicon Valley go through changes big and small. It's why the Apple cofounder, who's been around the valley for 41 years, doesn't try to think too much about the current political state in America. However, Wozniak does have one key, direct message for Donald Trump: don't ruin the future and stifle technological innovations. As the person behind Silicon Valley Comic-Con, where this year's theme was "The Future of Humanity: Where Will Humanity Be in 2075?", Wozniak himself believes our future success is dependent on technology. When it comes to Trump's popular campaign slogan, "Make America Great Again," Wozniak told POPSUGAR he hopes Trump realizes one aspect that fulfills that statement: technology. "But our technologies and everything are really what puts the United States prominent in the worldview — computers being a large part of that," Wozniak said. "So, don't spoil the future. That's going to be important, be it self-driving cars, be it energy-efficient cars, be it electric-operated cars. Don't spoil that." He also wants Trump to realize we shouldn't give up our lead in technology simply because "we're already wealthy." The Woz also takes issue with which tech companies Trump's talking to and who's on his advisory team. He believes that only talking to existing companies will lead to those companies asking for tax breaks. Instead, Wozniak thinks Trump should listen to younger entrepreneurs and companies. "He should have advisors from college professors, and engineering, and entrepreneurs, and young people who want to start companies, not existing companies," said Wozniak. Regardless of who Trump is talking to, Wozniak is optimistic those conversations will help him realize they're seriously invested in the future. "I hope that's the outcome of his being open to talk from companies around here," he said. "Because we've always been into 'let's change the future and improve it.'"
Wool felt: Characterization, comparison with other materials, and investigation of its use in hospital accessories This article presents groundbreaking research on wool felt for use in hospital accessories. The results of mechanical, scanning electron microscopy (SEM), chemical, flammability, and microbiological tests are presented, as well as research on the acceptability of three wool felt hospital accessories (i.e. sheet cover, pillowcase cover, and insole). An innovative approach was utilized to compare the mechanical properties of unwashed wool felt samples, and of samples washed in three different washing machines, with textiles commonly used in hospitals (i.e. nonwoven polyester felt, woven 100% cotton, and a woven blend of 67% cotton and 33% polyester). The mechanical tests showed that the wool felt had tensile resistance similar to that of polyester felt, elongation superior to the 100% cotton and the blend, inferior tearing stress, lower resistance to slippage, and good pilling resistance. After washing, the wool felt samples washed with the extractor washer and the dry washer increased their tensile strength by 33% and 19%, respectively; the tear strength did not change; the slippage decreased; and the samples washed with the dry washer showed 14% less pilling than those not washed. The SEM tests showed differences in the appearance of the fibers after the washing processes. Chemical tests revealed that 0.11% of lanolin was retained in the wool felt after washing the samples with the dry process. The flammability tests showed the dependence of carbonization length on the wool felt washing process. The volunteers showed a good acceptance of the wool felt accessories, emphasizing the feeling of freshness, relief of pain, and reduction in sweating and unpleasant odors. Microbiological tests showed growth in the insoles of the bacterium Staphylococcus aureus and the fungus Candida albicans, commonly found in the hospital environment.
Experimental characterization of a non-local convertor for quantum photonic networks We experimentally characterize a quantum photonic gate that is capable of converting multiqubit entangled states while acting only on two qubits. It is an important tool in large quantum networks, where it can be used for re-wiring of multipartite entangled states or for generating various entangled states required for specific tasks. The gate can also be used to generate quantum information processing resources, such as entanglement and discord. In our experimental demonstration, we converted a linear four-qubit cluster state into different entangled states, including GHZ and Dicke states. The high quality of the experimental results shows that the gate has the potential of being a flexible component in distributed quantum photonic networks. Introduction.-Quantum networks consisting of multipartite entangled states shared between many nodes provide a setting for a wide variety of quantum computing and quantum communication tasks. Recent works have experimentally realized some of the basic features of distributed quantum computation and quantum communication schemes, including quantum secret sharing, open-destination teleportation and multiparty quantum key distribution. These experiments employed networks of small-sized entangled resources and showed the potential of distributed quantum information processing in realistic scenarios. Individual photons serve as a viable platform for implementation of quantum networks, since they can be easily transmitted over free-space or fiber links in order to distribute the necessary resources. A common problem in quantum networks is that once the entangled resource is shared among the nodes it is fixed and can only be used for a given set of quantum tasks. A different task then requires conversion of the available multipartite entangled state into another state. When the separation between the nodes is large, such conversion can employ only local operations and classical communication, which severely limits the class of potentially available states. Fortunately, in some cases two nodes of the network may be close enough for application of a non-local operation between them. This relaxes the constraint and opens an interesting question: what types of entangled states are convertible in this scenario? Recently, a non-local conversion gate was proposed for exactly this setting of two nodes in close proximity. It was shown that one can employ a single probabilistic two-qubit gate to convert a four-qubit linear cluster state into many other forms of four-qubit entangled states that are inequivalent to each other under local operations and classical communication. The gate therefore enables one to convert between different states so that different tasks can be performed. For instance, the four-qubit linear cluster state can be used for a variety of quantum protocols, such as blind quantum computation and quantum algorithms. On the other hand, a four-qubit GHZ state can be used for open-destination teleportation and multiparty quantum key distribution, and a four-qubit Dicke state can be used for telecloning and quantum secret sharing. In this work, we experimentally realize the non-local conversion gate of Ref. with single photons using a linear optical setup and characterize its performance using quantum process tomography.
We find that the conversion gate operates with high quality under realistic conditions and show its potential for converting a four-qubit linear cluster state into a GHZ state, a Dicke state, and a product of two Bell states. The conversion gate can also be used to generate quantum correlations that are not associated with entanglement, but whose presence is captured by the notion of discord. The generated states with discord may also be used as resources in distributed quantum tasks. Furthermore, the conversion gate can be used for 're-wiring' the entanglement connections in a larger graph state network. The experimental results match the theory expectations well and highlight the suitability of the conversion gate as a flexible component in photonic-based quantum networks. Theoretical background.-The non-local conversion gate for polarization-encoded photonic qubits is depicted in Fig. 1. The gate operation is based on postselection, where one photon is detected at each of the output ports. The gate itself is based on a Mach-Zehnder interferometer and is created from two polarizing beam splitters (PBSs) and four half-wave plates (HWPs). Two of the HWPs, labeled HWP(45°), are rotated to a fixed angle of 45°; the other two HWPs, labeled HWP1(θ1) and HWP2(θ2), are used to adjust the gate to a particular setting. The total operator G(θ1, θ2) characterizing the action of the gate in the computational basis of horizontally (|H⟩) and vertically (|V⟩) polarized photons is given in terms of α_k = cos²(2θ_k) and β_k = sin²(2θ_k) (k = 1, 2), and γ_1 = cos(2θ_1)cos(2θ_2) and γ_2 = sin(2θ_1)sin(2θ_2). The input modes are labelled in = 1, 2 and the output modes are labelled out = 1, 2. The key feature of the gate is its ability to convert quantum states from one type to another, even though such a conversion is impossible with local operations and classical communication. To demonstrate the gate's capabilities, consider a four-qubit linear cluster state |C4⟩ as an input. Applying the gate to the second and third qubits yields a new four-qubit state whose form depends on the gate setting. The angles θ1 and θ2 can be tuned in order to achieve the conversion of the cluster state to a specific kind of state. Notable examples are the four-qubit GHZ and Dicke states, as well as a pair of maximally entangled bipartite states. The angle settings and success probabilities for these example conversions can be found in Tab. I. The non-local nature of the gate can also be effectively utilized to generate classical or nonclassical correlations in a pair of initially separable states. For example, the gate G(3π/8, π/8) transforms a pair of factorized pure states into a maximally entangled state, while the gate G(π/3, 0) transforms a mixed factorized state into a state with quantum correlations but no entanglement. Experimental setup.-We experimentally demonstrated and characterized the photonic non-local conversion gate using the linear optical setup shown in Fig. 2. Here, orthogonally polarized, time-correlated photon pairs with a central wavelength of 810 nm were generated in the process of degenerate spontaneous parametric down-conversion in a BBO crystal pumped by a continuous-wave laser diode and fed into single mode optical fibers guiding the photons to the signal and idler input ports of the linear optical setup.
Table I. Gate settings (θ1, θ2) and success probabilities p_s for the converted states:
Cluster state: (0, 0) and (π/2, π/2), each with p_s = 1.
GHZ state: (0, π/4), (π/2, π/4), (π/4, 0) and (π/4, π/2), each with p_s = 1/2.
Dicke state: (θ+, θ−) and (θ−, θ+), each with p_s = 3/10.
Two Bell states: (3π/8, π/8) and (π/8, 3π/8), each with p_s = 1/4.
The linearly polarized signal and idler photons were decoupled into free space and directed into polarization qubit state preparation blocks (dotted boxes), each consisting of a quarter-wave plate (QWP) and a half-wave plate (HWP). In contrast to the theoretical proposal of Ref. shown in Fig. 1, the experimental conversion gate was implemented using a displaced Sagnac interferometer and a single polarizing beam splitter (PBS), where the interferometric phase was controlled by tilting one of the glass plates (GP). This construction provides passive stabilization of the Mach-Zehnder interferometer. HWP1(θ1) and HWP2(θ2) were used to configure the conversion gate for its different settings. Outputs from the conversion gate were analyzed using the detection blocks (DB), which consist of a HWP, a QWP, and a PBS followed by an avalanche photodiode (APD). The scheme operated in the coincidence basis and the operation succeeded upon detecting a two-photon coincidence at the output ports. By measuring the coincidences we were able to carry out complete quantum process tomography of the non-local conversion gate for all the settings in Tab. I. Each input qubit was prepared in six states {|H⟩, |V⟩, |+⟩, |−⟩, |R⟩, |L⟩}, and each output qubit was measured in the three bases {|H⟩, |V⟩}, {|+⟩, |−⟩}, and {|R⟩, |L⟩}. Two-photon coincidences corresponding to the measurement in any chosen product of two-qubit bases were recorded sequentially and the measurement time of each basis was set to 10 s. Using the measured coincidence counts as the mean values of Poisson distributions from the down-conversion, we numerically generated 1000 samples in order to estimate the uncertainty of the experimental results. The process matrices of the quantum process were reconstructed from this data using a Maximum Likelihood estimation algorithm. The quality of the different non-local gate operations can be evaluated with the help of the process fidelity F = (Tr√(√χ_th χ √χ_th))², which is the overlap between the reconstructed process matrix, χ, and the process matrix for the ideal theoretical operation, χ_th. The process matrix represents the completely positive map that fully characterizes the conversion gate operation. Using the Jamiolkowski-Choi isomorphism, the matrix χ is defined on the tensor product of the input and output Hilbert spaces H_in and H_out, which are each four-dimensional Hilbert spaces spanning the polarization states of the two photons. Therefore χ is a 16×16 matrix and the two-qubit input state ρ_in transforms to the two-qubit output state ρ_out according to the relation ρ_out = Tr_in[χ (ρ_in^T ⊗ 𝟙)], where T denotes transposition. We also use the process purity P = Tr[χ²] to quantify the quality of the operation. For the ideal theoretical case the matrix χ_th corresponds to a pure density matrix, χ_th = (𝟙 ⊗ G)|Φ+⟩⟨Φ+|(𝟙 ⊗ G)†, where |Φ+⟩ = Σ_{a,b=H}^{V} |ab⟩_in |ab⟩_out denotes a maximally entangled state on two copies of a two-qubit Hilbert space, and therefore P = 1. A possible source of reduction in the process fidelity is the introduction of phase shifts experienced by one or more modes in the setup, which are caused by the imperfect nature of the realistic experimental components. These can be compensated for by suitable phase corrections. To reflect this we calculated two kinds of process fidelity for each scenario. The first is the raw fidelity, which was calculated directly from the reconstructed process matrix.
The second is the optimized fidelity, which was calculated from the process matrix subject to four phase shifts, one in each of the two input and two output modes. The four phases were optimized over and ultimately chosen in such a way that the resulting fidelity is maximal. The relevant process purity and process fidelity for all the considered scenarios are given in Tab. II, while the process matrices are shown in Fig. 3. We also analyzed the conversion gate and its performance from a different angle. The gate is non-local and as such it should be able to transform a two-qubit factorized state into a state with non-zero entanglement. To see this entanglement generation, the input state was set to |−−⟩_in and fed into the conversion gate with parameters θ1 = 3π/8 and θ2 = π/8, which in the ideal case transforms it into the entangled Bell state |Φ+⟩ = (1/√2)(|HH⟩ + |VV⟩). Using the non-local conversion gate we generated the maximally entangled Bell state with purity P = 0.946 and fidelity F = 0.966. The number in brackets represents one standard deviation at the final decimal place. We can also use the conversion gate to prepare a separable state, i.e. a state with no entanglement but with non-zero quantum correlations that can be measured by the discord. For this we started with a mixed factorized state ρ_in = (1/2) 𝟙 ⊗ |+⟩⟨+| and fed it into the conversion gate with parameters θ1 = π/3 and θ2 = 0. The experimental realization was similar to the previous case, only the totally mixed state was prepared by using an electronically driven fiber polarization controller. The polarization controller applied mechanical stress on the input single mode optical fiber in three orthogonal axes using three co-prime frequencies. This randomized the polarization state on a time scale of tens of ms, which is two orders of magnitude shorter than the projection-acquisition time of 1 s, thus effectively resulting in a partially mixed state. This preparation method led to an output state with zero entanglement and non-zero discord. The output state was again determined by using full two-mode quantum state tomography, followed by a maximum likelihood estimation algorithm. Separability of a realistic reconstructed state is difficult to prove, but both entanglement measures we employed (logarithmic negativity LN = 0.019 and concurrence C = 0.015) show values separated from zero by less than one standard deviation. This points to a high probability that the state is indeed separable. On the other hand, the discord of the state is D = 0.066, which is significantly positive. The confidence intervals were obtained by using a Monte Carlo method based on the measured data. As the final step of our analysis we looked at how the conversion gate might perform in a realistic scenario. For this, we employed the reconstructed process matrices from Fig. 3 and numerically simulated the effect of the conversion gate on a realistic version of a four-qubit linear cluster state generated in a four-qubit linear-optical quantum logic circuit. The cluster state |C4⟩⟨C4|, whose density matrix was reconstructed with the help of a maximum likelihood algorithm, is shown in Fig. 4. Summary and discussion.-We have experimentally realized the non-local photonic conversion gate proposed in Ref. and tested its performance for all four of its basic conversion settings. We performed quantum process tomography and characterized the individual conversion gate operations by their process matrices.
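To make the Jamiolkowski-Choi bookkeeping above concrete, the following numpy sketch builds the Choi matrix of an ideal two-qubit unitary, applies it to an input state, and evaluates the process fidelity and purity. It uses a trace-one normalization of the Choi matrix (which introduces the explicit factor d in the channel relation) and a CZ gate as a stand-in for the conversion gate, since the full form of G(θ1, θ2) is not reproduced in this excerpt; the maximum-likelihood reconstruction of χ from coincidence counts is not shown.

```python
import numpy as np

d = 4  # two polarization qubits

def choi_of_unitary(U):
    """Trace-one Choi matrix of an ideal unitary: (I ⊗ U)|Phi+><Phi+|(I ⊗ U)^dagger."""
    phi = np.eye(d).reshape(d * d) / np.sqrt(d)   # |Phi+> = (1/sqrt(d)) sum_k |k>_in |k>_out
    psi = np.kron(np.eye(d), U) @ phi
    return np.outer(psi, psi.conj())

def apply_channel(chi, rho_in):
    """rho_out = d * Tr_in[(rho_in^T ⊗ I) chi]; the factor d comes from the trace-one convention."""
    M = (np.kron(rho_in.T, np.eye(d)) @ chi).reshape(d, d, d, d)  # (in_row, out_row, in_col, out_col)
    return d * np.trace(M, axis1=0, axis2=2)

def process_fidelity(chi, chi_ideal):
    """Overlap Tr[chi chi_ideal]; equals the Uhlmann fidelity when the ideal process is pure."""
    return np.real(np.trace(chi @ chi_ideal))

def process_purity(chi):
    return np.real(np.trace(chi @ chi))

# Example with a CZ gate standing in for a generic two-qubit operation:
CZ = np.diag([1, 1, 1, -1]).astype(complex)
chi_ideal = choi_of_unitary(CZ)
print(process_fidelity(chi_ideal, chi_ideal))   # 1.0 for the ideal process
print(process_purity(chi_ideal))                # 1.0 for a pure process matrix
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho_in = np.kron(np.outer(plus, plus.conj()), np.outer(plus, plus.conj()))
rho_out = apply_channel(chi_ideal, rho_in)      # CZ|++> is a maximally entangled state
```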
We also directly tested the ability of the conversion gate to generate quantum correlations without entanglement between a pair of separable qubits, finding no entanglement but non-zero discord present. Finally, we tested the limits of the setup by simulating its action on a realistic experimental four-qubit linear cluster state. In all the tests the conversion gate performed close to the theoretical predictions, with fidelities generally surpassing 90%. These experimental results are very promising with regard to the potential future applications of the conversion gate, which include converting between different small-sized multipartite entangled states. This is an important problem in quantum networks, where different quantum protocols require different resource states and the conversion gate can be used to prepare them. The conversion gate can also be used to rewire the entanglement connections in larger multipartite entangled states in the form of extended graph states, and so it would provide a useful way to reconfigure a network for a given distributed quantum protocol.
Can you believe it’s February already? Sub-zero temperatures and exchanging cheap teddies hastily bought in petrol stations. For me it means just one thing - renewing my car insurance. You may remember that I recently purchased a new car. This all happened over the phone with a dealership. I secured a good price for old Colin as I "forgot" to mention the cigarette burns and damage to his bumper. Sixteen pounds! I couldn’t afford that! I have to budget VERY carefully each month. I’m still paying off an electric bill from nine years ago at £2 a month. I’m so bad with money I have the direct number to the managing director of every collection agency within a 240-mile radius. I did a quick meerkat search seeking a cheaper quote but my current provider came out on top. Damn you, Aleksandr! So I had to find an extra £16 a month. Should I cut down on the smoking? No, I need those for my nerves. Buy less wine? Ah, the nerves. Sell the kids? I don’t think anyone would part with that much money for my two. I caught them washing their hands in the toilet the other day then donning helmets and head-butting the fridge, but that’s a story for another time. I tried to convince myself that ‘one day’ I would lose enough weight to squeeze my ever-expanding feet back into those flatforms I bought in 2001, but deep down I knew my Spice Girl days were over. I needed around £200 so begrudgingly listed a few pairs of my finest Principal, Topshop and George specials. Pretty soon I had an offer for a pair of Souliers. £250?! I couldn’t believe it! Did I even have some of those? I had Miss Selfridge and Select, yeah but those? I inspected the advert and in my error I’d clicked the wrong manufacturer - they were actually a pair of black, suede brogues purchased in the sale at B&M. It wasn’t long before I received a message: "These are not Souliers!" I’ve forwarded a pair of George slipper socks to say sorry. I hear they’re so comfy to drive in.
The Recombination of Iodine Atoms Generated by a C. W. Argon Ion Laser The rate constants of iodine-atom recombination in various foreign gases were determined by measuring the relative concentrations of iodine atoms in the photostationary state, through irradiation by a c.w. argon-ion laser at 4880 Å. The relative concentration has been obtained from the absorption intensity for the emission line from an iodine discharge lamp at 1830 Å. The rate constants have been given for the diffusion of iodine atoms and for the second-order recombination of iodine atoms. The values of the diffusion rate constants in various foreign gases have been compared with the calculated values. The logarithmic second-order rate constants in various gases were plotted against their ionization potentials. The plot yielded a straight line, indicating that the formation of the charge-transfer complex between iodine atoms and foreign gas molecules is important in the recombination process, as has been suggested by Porter et al.
A SERVER-SIDE FRAMEWORK FOR THE EXECUTION OF PROCEDURALLY GENERATED QUESTS IN AN MMORPG We describe a framework for executing procedurally generated quests implemented in the MMORPG Everquest using the Open Source EQEmu Everquest server. Quests play out at run-time using a collection of triggers, which consist of a testable game state condition and a script that is to be run when the condition is satisfied. We describe the interface between the quest generator and the server which enables the seamless integration of the procedurally generated quests within the existing server architecture. To demonstrate how this process takes place in real time, we analyze a nontrivial procedurally generated quest and describe the key server-controlled actions that derive from it.
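As a rough illustration of the trigger abstraction described in the abstract, the sketch below pairs a testable game-state condition with a script that runs once the condition holds. The class and field names are hypothetical and do not reflect the actual EQEmu or quest-generator interface.

```python
# Hedged sketch of a trigger-based quest: each trigger pairs a game-state
# predicate with a script to run when the predicate becomes true.
# All names here are illustrative, not the real EQEmu interface.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

GameState = Dict[str, Any]

@dataclass
class Trigger:
    condition: Callable[[GameState], bool]   # testable game-state condition
    script: Callable[[GameState], None]      # action run when the condition is satisfied
    fired: bool = False                      # fire-once semantics assumed for simplicity

@dataclass
class Quest:
    name: str
    triggers: List[Trigger] = field(default_factory=list)

    def update(self, state: GameState) -> None:
        """Called from the server loop; runs scripts whose conditions have become true."""
        for trig in self.triggers:
            if not trig.fired and trig.condition(state):
                trig.script(state)
                trig.fired = True

# Example: a generated quest that reacts when the player enters a zone and clears targets.
quest = Quest("delivery_quest", [
    Trigger(lambda s: s.get("zone") == "qeynos",
            lambda s: print("NPC hands the player a parcel")),
    Trigger(lambda s: s.get("kills", {}).get("gnoll", 0) >= 5,
            lambda s: print("Quest stage advances: gnolls cleared")),
])
quest.update({"zone": "qeynos", "kills": {"gnoll": 0}})
quest.update({"zone": "qeynos", "kills": {"gnoll": 5}})
```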
Telephone Analysis: Compromised Treatment or an Interesting Opportunity? Under the pressure of societal changes, today many analysts agree to conduct parts of an analysis over the telephone. However, little has been written about particular ways in which use of the phone affects the psychoanalytic process. The author focuses on the impact of the phone on psychoanalytic treatment and particularly on one of its potential advantages, i.e., the combination of a continuity that intensifies the treatment and physical distance between analyst and patient, making this intensity less threatening. Two detailed case reports illustrate how this combination facilitated the growth of affective tolerance and enabled these two patients to bring their emotional experiences from phone sessions into the consulting room.
Two Wilmington teenagers are now suspects in a Target robbery that happened Monday and ended with the thieves escaping by using pepper spray on an employee, police say. State police said 18-year-old Diamond Negron and a 17-year-old girl are wanted on robbery and conspiracy charges. State police said the teens walked into the Target at 1050 Brandywine Parkway in the Brandywine Hundred around 7:45 p.m. and loaded shopping carts with merchandise. They left the store without paying, state police said. An employee confronted them at the doors and the suspects used pepper spray on him to escape, state police said. Earlier this month, two men and a woman loaded shopping carts in the Stanton Home Depot and rushed out the door without paying, state police said, then pepper-sprayed an employee who confronted them. State police do not believe the cases are connected. Anyone with information can call state police at (302) 761-6677.
Formulation And In-Vitro Evaluation Of Pravastatin Solid Lipid Nanoparticles Solid lipid nanoparticles (SLNs) are typically spherical with an average diameter between 1 and 1000 nm. They are an alternative carrier system to traditional colloidal carriers, such as emulsions, liposomes, and polymeric micro- and nanoparticles. Recently, solid lipid nanoparticles have received much attention from researchers owing to their biodegradability, biocompatibility, and ability to deliver a wide range of drugs. The reason for better treatment with SLNs might be their significant cellular uptake due to their smaller size and lipidic nature. Pravastatin sodium is a cholesterol-lowering agent used in the treatment of hyperlipidemia. It is administered through the oral route and the dose is 10 mg. The oral bioavailability is 17% and the half-life is 1-3 hrs. It is rapidly excreted through the renal route. To increase the bioavailability of pravastatin sodium, solid lipid nanoparticles of pravastatin sodium were prepared by the hot homogenization technique using lipids (trimyristin, Compritol, and glyceryl monostearate) with soy lecithin as surfactant and poloxamer 188 as stabilizer. The prepared formulations were evaluated for entrapment efficiency, drug content, in-vitro drug release, particle size, Fourier transform-infrared spectra, and stability. The optimization was based upon particle size, zeta potential, and drug release studies. The nanoparticles possess a negative surface charge of sufficient magnitude for stable preparations. In-vitro drug release studies in phosphate buffer of pH 7.4 exhibited an initial burst effect followed by a sustained release of pravastatin. A solid lipid nanoparticle formulation containing the drug pravastatin sodium and the lipid Compritol, stabilized with poloxamer 188 as surfactant, showed prolonged drug release and a smaller particle size as compared to formulations with the other lipids. Introduction Heart disease refers to various types of conditions that can affect heart function. These types include coronary artery (atherosclerotic) heart disease, which affects the arteries to the heart, and valvular heart disease, which affects how the valves function to regulate blood flow in and out of the heart. Coronary heart disease is initially diagnosed by patient history and physical examination. EKG, blood tests, and tests to image the arteries and heart muscle confirm the diagnosis. Treatment for coronary heart disease depends upon its severity. Many times lifestyle changes such as eating a heart-healthy diet, exercising regularly, stopping smoking and controlling high blood pressure, high cholesterol and diabetes may limit the artery narrowing.1 Hypertension, myocardial infarction, atherosclerosis, arrhythmias and valvular heart disease, coagulopathies and stroke are collectively known as cardiovascular diseases (CVDs)2. Hyperlipidemia refers to increased levels of lipids (fats) in the blood, including cholesterol and triglycerides. Although hyperlipidemia does not cause symptoms, it can significantly increase your risk of developing cardiovascular disease, including disease of blood vessels supplying the heart (coronary artery disease), brain (cerebrovascular disease), and limbs (peripheral vascular disease). These conditions can in turn lead to chest pain, heart attacks, strokes, and other problems3.
Hyperlipidemia is a common risk factor for CVD, with 53.4 percent of adults in the United States having abnormal cholesterol values and 32 percent having elevated low-density lipoprotein (LDL) cholesterol levels4. Hyperlipidemia is a medical condition characterized by an increase in one or more of the plasma lipids, including triglycerides, cholesterol, cholesterol esters, phospholipids and/or plasma lipoproteins, including very low-density lipoprotein and low-density lipoprotein, along with reduced high-density lipoprotein levels. This elevation of plasma lipids is among the leading risk factors associated with cardiovascular diseases5. Cholesterol does not travel freely through the bloodstream. Instead, it is attached to a protein and the two together are called a lipoprotein (lipo=fat). There are three types of lipoproteins that are categorized based upon how much protein there is in relation to the amount of cholesterol. Low-density lipoproteins (LDL) contain a higher ratio of cholesterol to protein and are thought of as the "bad" cholesterol. Elevated levels of LDL lipoprotein increase the risk of heart disease, stroke, and peripheral artery disease by helping form cholesterol plaque along the inside of artery walls. Over time, as plaque buildup increases, the artery narrows (atherosclerosis) and blood flow decreases. If the plaque ruptures, it can cause a blood clot to form that prevents any blood flow. This clot is the cause of a heart attack or myocardial infarction if the clot occurs in one of the coronary arteries in the heart.8 Hypercholesterolemia is a common disorder and is of major interest since it is one of the risk factors for ischaemic heart disease. For the management of hypercholesterolemia and dyslipidaemias, statins are the preferred drugs of choice, having proved to be the most potent therapies for treating elevated Low Density Lipoprotein-Cholesterol (LDL-C) and congestive heart disease. The widely prescribed statins possess low bioavailability, which limits their application in clinical use.6 High-density lipoproteins (HDL) are made up of a higher level of protein and a lower level of cholesterol. These tend to be thought of as "good" cholesterol because they can extract cholesterol from artery walls and dispose of it in the liver. The higher the HDL to LDL ratio, the better it is for the individual, because such ratios can potentially be protective against heart disease, stroke, and peripheral artery disease. Very low-density lipoproteins (VLDL) contain even less protein than LDL. Total cholesterol is the sum of HDL, LDL, and VLDL.8 Pravastatin is the generic form of the brand-name drug Pravachol, which is used to lower cholesterol levels. Pravastatin reduces levels of "bad" cholesterol, which is called low-density lipoprotein or LDL. It also raises levels of "good" cholesterol, which is called high-density lipoprotein or HDL, and it lowers levels of harmful triglycerides in the blood. Lowering cholesterol and fats in the blood with pravastatin may prevent heart disease, chest pain, strokes, and heart attacks. Pravastatin is in a group of drugs known as statins, which work by blocking an enzyme that the body needs to make cholesterol7. Pravastatin sodium is a cholesterol-lowering agent which is used in the treatment of hyperlipidemia. Its absolute bioavailability is 17% and its average total absorption is 34%.
Pravastatin sodium is one of the lower-potency statins and produces its lipid-lowering effect in two ways. First, as a consequence of its reversible inhibition of HMG-CoA reductase activity, it effects modest reductions in intracellular pools of cholesterol. Second, pravastatin inhibits LDL production by inhibiting hepatic synthesis of VLDL, the LDL precursor. The aim of this study is to improve the bioavailability of the drug: pravastatin sodium-loaded solid lipid nanoparticles were prepared by the hot homogenisation technique, using trimyristin, Compritol and glyceryl monostearate as the lipid matrices and soy lecithin and poloxamer 188 as stabilizers, with a view to improving the bioavailability, which would increase the biological activity. Preparation of pravastatin solid lipid nanoparticles 10,11 Solid lipid nanoparticles were prepared by using a lipid (trimyristin / Compritol / glyceryl monostearate) which was first melted by heating, after which the lecithin (soy lecithin) was added in a boiling tube; the drug was then incorporated into the lipid-lecithin melt, which was heated to a temperature 5 °C above the melting point of the lipid. Simultaneously, in another beaker, poloxamer 188 was dissolved in water and heated to a temperature equal to that of the lipid phase, and this aqueous phase was then transferred to the lipid phase. The mixture was homogenized at 20,000 rpm for 3 min and then immediately placed in a probe ultrasonicator at 75% amplitude for 20 min. Blank nanoparticles were prepared in a similar manner, omitting the pravastatin. Evaluation of pravastatin-loaded solid lipid nanoparticles 10,11 1) Particle size analysis: The particle size was determined by dynamic light scattering, using a Malvern system, with vertically polarized light supplied by an argon-ion laser (Cyonics) operated at 40 mW. Experiments were performed at a temperature of 25.0 ± 0.1 °C at a measuring angle of 90° to the incident beam. The zeta potential of the nanoparticles was measured in distilled water using a Malvern Zetasizer. The technique of laser diffraction is based around the principle that particles passing through a laser beam will scatter light at an angle that is directly related to their size. As the particle size decreases, the observed scattering angle increases logarithmically. The observed scattering intensity is also dependent on particle size and diminishes, to a good approximation, in relation to the particle's cross-sectional area. Large particles therefore scatter light at narrow angles with high intensity, whereas small particles scatter at wider angles but with low intensity. 2) Zeta potential: Zeta potential analysis was performed to estimate the stability of the nanoparticles. Zeta potential is a measure of the effect of electrostatic charges, the basic force that causes repulsion between adjacent particles; the net result is attraction or repulsion, depending upon the magnitude of both forces. As a rule of thumb, the magnitude of the zeta potential indicates the expected colloidal stability of the nanoparticles. 3) Percentage drug entrapment efficiency (% DEE): About 1 ml of the solid lipid nanoparticle dispersion loaded with pravastatin was placed in the outer chamber of the Centrisart device and the sample recovery chamber was placed on top of the sample. The unit was centrifuged at 5000 rpm for 15 min. The solid lipid nanoparticles, along with the encapsulated drug, remained in the outer chamber and the aqueous phase moved into the sample recovery chamber through the filter membrane (molecular weight cutoff 2...); the free drug in this filtrate was used to calculate the drug entrapment and drug loading efficiencies.
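For clarity, the standard calculations implied by this ultrafiltration step are sketched below: the free drug measured in the aqueous filtrate is subtracted from the total drug added. The numeric values are placeholders, not measured data from this study.

```python
# Illustrative calculation of entrapment efficiency (EE%) and drug loading (DL%)
# from the free-drug concentration in the filtrate. Numbers are placeholders.

def entrapment_efficiency(total_drug_mg, free_drug_mg):
    """EE% = (total drug - free drug) / total drug * 100."""
    return (total_drug_mg - free_drug_mg) / total_drug_mg * 100.0

def drug_loading(total_drug_mg, free_drug_mg, lipid_mg):
    """DL% = entrapped drug / (entrapped drug + lipid) * 100."""
    entrapped = total_drug_mg - free_drug_mg
    return entrapped / (entrapped + lipid_mg) * 100.0

print(entrapment_efficiency(10.0, 0.5))   # 95.0 % EE for 10 mg drug with 0.5 mg free drug
print(drug_loading(10.0, 0.5, 100.0))     # ~8.7 % DL assuming 100 mg of lipid
```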
4) Fourier-transform infrared spectroscopy (FT-IR): Drug-excipient interactions were studied by FT-IR spectroscopy. The pure drug and the excipients were subjected to FT-IR studies. Physical mixtures were also analyzed, and the spectra were recorded by scanning over the wavenumber range of 500-4000 cm-1 in an FT-IR spectrophotometer. 5) In vitro drug release: In vitro drug release studies were carried out in a Franz diffusion cell. 1 ml of the nanoparticle dispersion was used for the diffusion study. Nanoparticles containing drug were placed in the donor chamber, while the receiver chamber consisted of 22 ml of diffusion medium of pH 7.4 (separated by a presoaked semi-permeable membrane) maintained at 37 ± 2 °C in the Franz diffusion cell. The speed of the magnetic bead was maintained at 50 rpm. 1 ml aliquots were withdrawn at predetermined intervals. The solution was analysed for drug content spectrophotometrically at 260 nm against a blank. An equal volume of the diffusion medium was replaced in the vessel after each withdrawal to maintain sink conditions. Similarly, diffusion of the blank formulation was also carried out to correct for interference. Three trials were carried out for all formulations. From the data obtained, the percentage drug release was calculated and plotted as a function of time to study the pattern of drug release. 6) Stability studies: Stability studies were carried out by placing the formulations in glass vials, closing them tightly using rubber closures and aluminium caps, and keeping them at room temperature for 90 days. At the end of the study, samples were analyzed for particle size. Results And Discussion Entrapment efficiency: This is an important parameter for characterizing solid lipid nanoparticles. In order to attain optimal efficiency, several factors were varied, including the type and concentration of the lipid and surfactant material used. The entrapment efficiency of all the prepared SLN formulations is shown in Table 2. The entrapment efficiency of the SLN dispersions was found to be in the range of 91.75% to 99.98%. In vitro drug release: The in vitro drug release profile of pravastatin from the Compritol SLN formulations is shown in Figure 1. FT-IR studies: Infrared studies were carried out to confirm the compatibility between the lipid, the drug, and the selected SLN formulation. From the spectra it was observed that there was no major shifting or loss of functional peaks between the spectra of the drug, the lipid, and the drug-loaded SLN. This indicated no interaction between the drug and the lipid. Stability studies: After the 90-day stability period at room temperature, the formulations were examined under a microscope; the absence of particles in the micron range indicated that the formulations were stable. Conclusion It was observed that the hot homogenization and ultrasound dispersion method was suitable for the successful incorporation of the drug pravastatin with high entrapment efficiency. Furthermore, it could be presumed that if nanometer-range particles are obtained, the bioavailability might be increased. Hence, we can conclude that solid lipid nanoparticles provide controlled release of the drug and that these systems can be used as carriers for lipophilic, poorly water-soluble drugs to enhance their bioavailability.
Enolase 1 regulates stem cell-like properties in gastric cancer cells by stimulating glycolysis Recent studies have demonstrated that gastric cancer stem cells (CSCs) are a rare sub-group of gastric cancer (GC) cells and have an important role in promoting the tumor growth and progression of GC. In the present study, we demonstrated that the glycolytic enzyme Enolase 1 (ENO1) was involved in the regulation of the stem cell-like characteristics of GC cells: compared to the parental cell lines PAMC-82 and SNU16, the expression of ENO1 in spheroids was markedly increased. We then observed that ENO1 could enhance stem cell-like characteristics, including self-renewal capacity, cell invasion and migration, chemoresistance, and even the tumorigenicity of GC cells. ENO1 is known as an enzyme that is involved in glycolysis, and our results showed that ENO1 could markedly promote the glycolytic activity of cells. Furthermore, inhibiting glycolysis activity using 2-deoxy-d-glucose treatment significantly reduced the stemness of GC cells. Therefore, ENO1 could improve the stemness of CSCs by enhancing the cells' glycolysis. Subsequently, to further confirm our results, we found that the inhibition of ENO1 using AP-III-a4 (ENOblock) could reduce the stemness of GC cells to a similar extent as the knockdown of ENO1 by shRNA. Finally, increased expression of ENO1 was related to poor prognosis in GC patients. Taken together, our results demonstrated that ENO1 is a significant biomarker associated with the stemness of GC cells. Introduction Gastric cancer (GC) is the fifth most prevalent malignant neoplasm and the third most deadly carcinoma worldwide, based on WHO GLOBOCAN reporting 1. An estimated one million new GC cases and nearly 600,000 deaths due to GC are reported each year 2,3. The five-year survival rate of GC patients is <30% because of tumor aggressiveness, metastasis, chemotherapy resistance, and relapse. Cancer stem cells (CSCs) are characterized by their self-renewing ability and demonstrated pluripotent differentiation ability, which have been verified to contribute to cancer drug resistance, metastasis, and recurrence 7. Numerous researchers have proved that CSCs are present in many types of tumors, such as breast cancer, brain tumors, and gastric cancer. In 2009, Takaishi et al. 12 first isolated and identified gastric cancer stem cells (GCSCs) from gastric carcinoma cell lines. The source of GCSCs may be related to gastric epithelial cells 13. With the characteristics of self-regeneration and pluripotent differentiation, GCSCs are associated with the occurrence and development of GC 14. Furthermore, numerous signaling pathways and functions have been investigated, and results show that GCSCs are the primary causes of invasiveness, drug resistance, and metastasis in GC. Known as the Warburg effect, aerobic glycolysis is both a hallmark of cancer cells and the basis of various biological characteristics of cancer cells 18. In many types of tumors, the Warburg effect leads to a rise in total glycolysis not only in normal oxygen conditions but also in hypoxic conditions 18,19. Thus, the Warburg effect may create a positive environment for cancer cells to divert nutrients 20 for proliferation, metastasis, and drug resistance. Recent studies have demonstrated that glycolytic enzymes such as Enolases have a critical role in glycolysis in cancer cells 21. ENO1, one of four types of Enolase isozymes, has been detected in almost all mature tissues 22,23.
ENO1 is now considered to function both as a plasminogen receptor, which can promote inflammatory responses in several tumors 24, and as a glycolytic enzyme, which catalyzes the penultimate step in glycolysis 23. As a glycolysis enzyme, ENO1 can be overexpressed and activated by several glucose transporters and glycolytic enzymes that participate in the Warburg effect in cancer cells 25. Moreover, ENO1 is thought to be related to aerobic glycolysis levels in tumor cells and to malignant tumor development 26. Recent studies have shown that ENO1 has a pivotal role in different tumor tissues, such as head and neck cancers, Non-Hodgkin's lymphoma, breast cancer, cholangiocarcinoma, glioma, and GC. For example, overexpression of ENO1 can promote tumor growth in hepatocellular carcinoma and head and neck cancers, and functions as a potential oncogenic factor 29,30. Furthermore, ENO1 was shown to influence proliferation, metastasis, and drug resistance in cancer cells by participating in the Warburg effect 31,32. These studies indicated that ENO1 functioned as a potential oncogenic factor in endometrial carcinoma by inducing glycolysis 33. It was also demonstrated that ENO1 was the center of a protein-protein interaction network composed of 74 GC-associated proteins and that inhibition of ENO1 led to the growth inhibition of GCs 34. Moreover, many studies have demonstrated that the overexpression of ENO1 contributes to the occurrence and development of GC. For example, ENO1 is related to the proliferation and metastasis of GCs 35. In addition, overexpression of ENO1 can promote cisplatin resistance by enhancing glycolysis in GCs, while in contrast, inhibition of ENO1 can increase the sensitivity of GCs to chemotherapy by repressing glycolysis 36. Importantly, although ENO1 has been shown to be associated with the occurrence and progression of GC and to take part in glycolysis in GCs, far less is known about the role of ENO1 in GCSCs. We, therefore, investigated the relationship between ENO1 and the stem cell-like characteristics of GC cells. We found that the expression of ENO1 was significantly increased in spheroids of GC cells. In addition, we discovered that ENO1 could promote the stemness of GC cells by enhancing glycolysis levels. Thus, ENO1 is a possible biomarker of GCSCs, and targeting ENO1 could be a valuable tool for improving the prognosis of GC patients. Cell culture and clinical samples The human GC cell lines PAMC-82 and SNU16 were obtained from the Chinese Academy of Sciences. The PAMC-82 cell line was cultured in Dulbecco's modified Eagle's medium (DMEM, Invitrogen, Carlsbad, CA, USA) containing 10% fetal bovine serum. The SNU16 cell line was cultured in RPMI-1640 medium. All of the cell lines were confirmed to be free of mycoplasma contamination after testing with the kit from Shanghai Yise Medical Technology (MD001). The commercial tissue microarrays were constructed by Shanghai Biochip Co. Ltd. The study was approved by the medical ethics committee of Cancer Hospital, Chinese Academy of Medical Sciences (Beijing, China) (Ethical approval number: NCC1999 G-003). Self-renewal assay We used spheroid-formation experiments to explore the self-renewal capacity of the cells. The cells were seeded in 24-well ultra-low attachment plates (Corning) at a density of 500 cells/well and cultured in SFM that was supplemented with 0.8% methylcellulose (Sigma), 20 ng/mL EGF, B27 (1:50), 10 ng/mL LIF, and 20 ng/mL bFGF.
The cells were cultured at 37°C in 5% CO2 for 7-13 days, and then the quantity of spheroids was counted using a microscope. Transwell™ invasion assay To evaluate the invasive activity, a total of 2×10^4 serum-starved cells were resuspended in 200 μL SFM and plated in the top of a Transwell™ chamber (24-well insert; pore size, 8 μm; Corning) that was coated with diluted Matrigel (BD Biosciences). After 24 h, the number of infiltrating cells was counted using a light microscope, and the invasion of the cells was analyzed quantitatively. Chemosensitivity assay Cells were seeded in 96-well plates (4000 cells/well) and cultured for 24 h. Then the cells were treated with different concentrations of cisplatin for 72 h. A Cell Counting Kit-8 (CCK8) was used to evaluate the number of viable cells, and the absorbance at 450 nm was measured using a microplate reader (Bio-Rad, USA). Tumorigenicity in BALB/c nude mice BALB/c nude mice (4-5 weeks old) were obtained from HFK Bioscience Company (Beijing, China). For tumorigenesis assays, 2.4×10^6 cells were subcutaneously injected into the back of nude mice (5 mice/group). The tumor size was recorded every week. All mice were then sacrificed on day 30 after inoculation and the tumor weight of each mouse was measured. Glucose consumption We seeded cells in six-well plates for 24 h and then replaced the medium with 3 mL of fresh medium. After a fixed time, we collected the supernatant and measured glucose consumption using a Glucose and Sucrose Assay Kit (Sigma-Aldrich, MAK013). The number of cells was counted three times. The glucose consumption was normalized to μmol/10^6 cells. Lactic acid measurement Cells were collected after culturing for the same length of time as indicated above, and then the lactic acid production of the cells was measured by colorimetry according to the instructions of a Lactate Colorimetric Assay Kit II (Biovision, K627-100). The remaining cells were counted and the lactic acid production was normalized to μmol/10^6 cells. Glycolysis level analysis Cells were plated in a Seahorse XF96 plate at a density of 15,000 cells per well, and the compounds, which included glucose, oligomycin, and 2-deoxy-D-glucose (2-DG), were loaded into the appropriate ports of a hydrated sensor cartridge. Finally, the cells' glycolysis stress was tested using the Seahorse XFe/XF Analyzer. Statistical analysis All data are shown as the mean ± standard deviation (SD) derived from at least three independent experiments. Statistical significance was calculated by unpaired Student's t-tests and results were considered significant if P < 0.05. SPSS 13.0 and GraphPad Prism 5.0 were used to perform all analyses. ENO1 is related to the stemness of GCs Considering that CSCs only account for a small fraction of the heterogeneous GC cell lines PAMC-82 and SNU16, we enriched GCSCs by performing a spheroid-forming culture of both PAMC-82 and SNU16 cells. After 7-10 days, both cell lines could form non-adherent spheres containing between 40 and 100 cells, which we called "spheroids". These spheroids could be continuously passaged, and third-passage spherical cells were used in all relevant experiments. To determine if spheroids can be considered as CSCs, we measured the important characteristics of CSCs in spheroids compared with parental cells. A self-renewal assay showed that the capacity for self-renewal in spheroids was superior to that of parental cells, as the spheroids formed a markedly increased number of colonies when compared to parental cells (Fig. 1A).
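To illustrate the analysis conventions stated in the methods (per-cell normalization of metabolite measurements and unpaired t-tests with P < 0.05 as the significance threshold), a brief sketch follows. The measurements are placeholders, not data from the study, and scipy is used here in place of SPSS/GraphPad.

```python
# Sketch of the stated conventions: metabolite amounts normalized to umol per
# 10^6 cells and group comparison by an unpaired Student's t-test. Placeholder data.
import numpy as np
from scipy import stats

def normalize_per_million_cells(amount_umol, cell_count):
    return amount_umol / (cell_count / 1e6)

# e.g. 0.8 umol of glucose consumed by 4e5 cells -> 2.0 umol / 10^6 cells
print(normalize_per_million_cells(0.8, 4e5))

control = np.array([2.0, 2.1, 1.9])              # triplicate measurements (placeholder)
eno1_overexpression = np.array([3.1, 3.3, 2.9])  # triplicate measurements (placeholder)
t_stat, p_value = stats.ttest_ind(eno1_overexpression, control)
print(p_value < 0.05)                            # significance threshold used in the study
```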
Then, we assessed the tumorigenicity of these spheroids and parental cells (PAMC-82 and SNU16) using a xenograft model. The results indicated that the same number of spheroid cells possessed a stronger tumorigenic ability than parental cells; thus, spheroids had a higher tumorigenic potential (Fig. 1B). Taken together, these results demonstrated that spheroids are CSC-like cells. To verify whether ENO1 could be related to the stemness of GC cells, we investigated ENO1 expression in spheroids as compared to parental cells (PAMC-82 and SNU16) using western blotting. The results demonstrated that the expression of ENO1 in spheroids was significantly higher than that in parental cells (Fig. 1C). In short, these findings indicated that ENO1 could be related to the stem cell-like properties of GC cells.

[Fig. 1 legend] A Analysis of the self-renewal abilities of PAMC-82 and SNU16 parental cells and spheroids using a methylcellulose spheroid-formation assay. Scale bar, 100 µm. B Tumorigenicity assay in PAMC-82 and SNU16 parental cells and spheroids. C Western blot analysis of ENO1 expression in PAMC-82 and SNU16 parental cells and spheroids. Spheroids: GCSCs enriched from parental cells cultured under spheroid-formation conditions and passaged to the third passage. Results are from representative experiments in triplicate and shown as the mean ± standard deviation (SD). *P < 0.05, **P < 0.01, ***P < 0.001.

ENO1 promotes the stem-like characteristics of GCs
To explore the impact of ENO1 on stem-like characteristics, we first used retroviral transduction to stably knock down and overexpress ENO1 in PAMC-82 and SNU16 cells, and confirmed these perturbations by western blot (Fig. 2A). Next, we used these stable cell lines to determine the role of ENO1 in stem cell-like characteristics. First, we investigated the impact of ENO1 on self-renewal capacity. Upon ENO1 overexpression (pLenti-ENO1), the self-renewal ability of PAMC-82 and SNU16 cells was significantly increased (Fig. 2B). In contrast, the self-renewal capacity of ENO1-knockdown (shENO1) cells was markedly decreased as compared to control cells (Fig. 2B). To extend our in vitro observations, we explored whether ENO1 could affect the tumorigenicity of GC cells in vivo. pLenti-ENO1, shENO1, and corresponding control cells (pLenti-NC, shcon) were injected subcutaneously into nude mice. We found that tumors derived from pLenti-ENO1 cells grew faster and weighed more, whereas tumors derived from shENO1 cells grew more slowly and weighed less than those originating from the corresponding control cells (Fig. 2C). In addition, IHC experiments confirmed the pattern of ENO1 expression in the above-mentioned tumors (Fig. 2D). These results demonstrated that pLenti-ENO1 cells possessed much stronger, and shENO1 cells much weaker, tumorigenic potential. Moreover, we found that the expression of the stem cell markers CD44, Nanog, Oct4, and Sox2 increased in pLenti-ENO1 cells, while these markers all decreased in shENO1 cells (Fig. 2E). Taken together, these results suggested that ENO1 could enhance the CSC-like characteristics of GC cells.

ENO1 promotes characteristics associated with stemness in GCs
Several studies have demonstrated that high metastatic potential and drug resistance may be important characteristics of stem-like cancer cells. Thus, we performed Transwell™ assays to determine the invasion and migration potentials of pLenti-ENO1 and shENO1 cells.
Compared with the control groups (pLenti-NC and shcon), the migration and invasion rates of pLenti-ENO1 cells were higher, while those of shENO1 cells were significantly lower (Fig. 3A, B). We then determined the effect of ENO1 on cisplatin resistance. Our results indicated that overexpression of ENO1 significantly decreased the cisplatin sensitivity of PAMC-82 and SNU16 cells (Fig. 3C). Conversely, knockdown of ENO1 resulted in a marked increase in cisplatin sensitivity (Fig. 3C).

ENO1 increases the stemness of GC cells through the promotion of glycolysis
Because ENO1 is an important enzyme that catalyzes the conversion of 2-phosphoglycerate to phosphoenolpyruvate in the glycolysis pathway, we wondered whether ENO1 could affect the stemness of cells by enhancing glycolysis. We explored the changes in glycolysis in overexpression and knockdown cells compared with their corresponding control cells. The results showed that glucose consumption and lactic acid production were increased in ENO1-overexpressing cells (Fig. 4A). On the contrary, after stable silencing of ENO1, glucose consumption and lactic acid production were both markedly decreased (Fig. 4A). To further confirm that ENO1 could influence glycolytic metabolism, we determined the extracellular acidification rate (ECAR) of these stable cell lines. Consistent with our hypothesis, overexpression of ENO1 increased ECAR levels (Fig. 4B). Meanwhile, decreased ECAR levels were observed in ENO1-knockdown cells (Fig. 4B).

The glycolysis level is significantly related to the stemness of GCs
To determine whether the glycolysis level could affect the CSC-like characteristics of GC cells, we treated PAMC-82 and SNU16 cells with the glycolytic inhibitor 2-DG and tested whether glycolysis was inhibited in these cells. Our results demonstrated that treatment with 2-DG (10 or 20 mM) markedly inhibited glycolysis, as glucose consumption and the production of lactic acid were decreased by 2-DG treatment (Fig. 5A). Moreover, we found that 2-DG treatment (10 or 20 mM) significantly decreased the ECAR levels of these cells (Fig. 5B). We then studied the stem cell-like characteristics of cells treated with 2-DG as compared with the corresponding untreated (basal) cells. First, our results demonstrated that 2-DG treatment at 10 or 20 mM markedly decreased the self-renewal capacity of both cell lines (Fig. 5C). We then tested the effect of 2-DG on cell migration and invasion and found that treatment at either 10 or 20 mM markedly inhibited migration and invasion rates (Fig. 5D). Finally, treatment with 2-DG (10 or 20 mM) strongly increased cisplatin sensitivity in PAMC-82 and SNU16 cells (Fig. 5E). Together, these experiments suggested that the level of glycolysis is closely related to the stemness of GCs. Taken together, these results demonstrated that overexpression of ENO1 could enhance glycolysis to promote the stemness of cells, while knockdown of ENO1 could inhibit glycolysis to reduce the stemness of cells. Thus, ENO1 can regulate glycolysis levels to influence the stem cell-like characteristics of GCs.

ENO1 inhibitor (ENOblock) inhibits the stemness of GC cells
AP-III-a4 (ENOblock) is a well-known inhibitor of ENO1. To extend the observations above, we used ENOblock to inhibit the activity of ENO1 in PAMC-82 and SNU16 cells and then investigated the resulting changes in stemness.
We found that treatment with ENOblock (10 or 20 µM) reduced the glycolysis level; that is, ENOblock decreased glucose consumption and lactic acid production (Fig. 6A). Moreover, ENOblock treatment (10 or 20 µM) significantly decreased the ECAR levels of these cells (Fig. 6B). Furthermore, ENOblock treatment at 10 or 20 µM significantly inhibited the self-renewal capacity of GCs (Fig. 6C). We then explored the effect of ENOblock on cell migration and invasion. Our results indicated that treatment with ENOblock (10 or 20 µM) strongly reduced the cells' migration and invasion rates (Fig. 6D). Moreover, the cells' cisplatin sensitivity was markedly increased by treatment with ENOblock at 10 or 20 µM (Fig. 6E). These results suggested that the effect of ENO1 inhibition by ENOblock was consistent with that of ENO1 knockdown.

ENO1 is a predictor of poor prognosis in clinical cases of GC
To explore the clinical significance of ENO1 in the development of GC, we determined the expression of ENO1 in GC tissues and their adjacent non-tumorous tissues by IHC. Our results showed that ENO1 expression was positive in 59/83 primary tumors (71.1%), but weak or absent in adjacent normal tissues (Fig. 7A). Table 1 summarizes the relationship between ENO1 expression level and clinicopathological characteristics in patients with GC. Interestingly, our analysis demonstrated that high levels of cytoplasmic ENO1 were markedly correlated with infiltration depth (P = 0.038). We also found that the level of nuclear ENO1 expression was markedly correlated with tumor stage (P = 0.023). Nevertheless, there were no statistically significant correlations between ENO1 expression and other clinicopathologic features (Table 1). Kaplan-Meier analysis was used to test whether ENO1 expression was related to the survival of GC patients. This analysis indicated that the overall survival of GC patients with high levels of ENO1 in the cytoplasm or nucleus was significantly shorter than that of patients with low or no ENO1 expression (Fig. 7B). In summary, these observations showed that the level of ENO1 might have an important role in GC progression.

[Fig. 4 legend (see figure on previous page)] Enolase 1 (ENO1) increases the stemness of gastric cancer (GC) cells via glycolysis promotion. A PAMC-82 and SNU16 cells stably expressing pLenti-NC, pLenti-ENO1, shcon, or shENO1 were cultured for 36 h; the levels of glucose consumption and lactic acid production were then measured and normalized to cell numbers (µmol/10⁶ cells). B The extracellular acidification rate (ECAR) was measured with the Seahorse XF analyzer in PAMC-82 and SNU16 cells stably expressing pLenti-NC, pLenti-ENO1, shcon, or shENO1. ECAR curves are from cells treated with glucose, oligomycin, and 2-DG; black arrows indicate the time points of treatment. Results are from representative experiments in triplicate and shown as the mean ± standard deviation (SD). *P < 0.05, **P < 0.01, ***P < 0.001.

Discussion
In recent years, an increasing number of reports have confirmed the existence and importance of CSCs in GC 37,38. CSCs are a small population of tumor cells characterized by self-renewal capacity, higher tumorigenicity, multilineage differentiation, and drug resistance. Stem cell markers such as CD44, Oct4, Lgr5, CD24, and CD133 are also overexpressed in CSCs 12. These cells are linked with tumor hierarchy, initiation, heterogeneity, and propagation 38. Spheroid culture is a well-established technique for generating stem cell-like cells 9.
CSCs in GC tissues and cell lines have been sorted successfully using this method 39. In this study, we obtained GCSCs (spheroids) from the GC cell lines PAMC-82 and SNU16, and we found that these spheroids were characterized by enhanced self-renewal capacity and tumorigenicity compared with their respective parental cell lines. Interestingly, we found that ENO1 was upregulated in spheroids compared with parental cells, suggesting that ENO1 is possibly associated with these cells' stem-like characteristics. Enolases have three isoenzyme forms, namely alpha-enolase, beta-enolase, and gamma-enolase 42. Alpha-enolase (ENO1) is present in almost all adult tissues. ENO1 is not only an important enzyme in the glycolysis pathway, catalyzing the dehydration of 2-phospho-D-glycerate to form phosphoenolpyruvate, but also a plasminogen receptor on the surface of various cells 43,44. In this study, however, we focused only on its enzymatic role and function. Recently, it has been shown that ENO1 expression is abnormal in many human cancers, including glioma, colorectal cancer, pancreatic cancer, lung cancer, and head and neck cancers 28,29,31,45,46. Furthermore, previous studies have demonstrated that ENO1 is overexpressed in GC tissues and is related to the progression and prognosis of GC 35,36. In this study, we further demonstrated that ENO1 expression was significantly associated with the overall survival of GC patients, implying important functions of ENO1 in GC progression. Studies focusing on the relationship of ENO1 to CSCs, including GCSCs, are scarce. In the present study, we addressed whether ENO1 was associated with GC cells' stem cell-like characteristics. We found that overexpression of ENO1 could increase GC cells' stem cell-like characteristics, including their self-renewal capacity, migration and invasion rates, tumorigenicity, and drug resistance. Moreover, the levels of stem cell markers such as CD44, Oct4, Sox2, and Nanog were enhanced in these cells. On the contrary, silencing of ENO1 by shRNA inhibited GC cells' stemness and decreased the levels of these markers. Furthermore, we confirmed these results using the ENO1 inhibitor ENOblock. These results indicated that inhibition of ENO1 by ENOblock could also inhibit the stem-like characteristics of GC cells, to a similar degree as silencing of ENO1 by shRNA. Taken together, ENO1 markedly regulates GC cells' stemness. ENO1 is considered an important enzyme in the glycolytic pathway, although it is not the rate-determining enzyme of glycolysis. To further evaluate the effect of ENO1 on the glycolysis pathway in GC cells, we analyzed the glycolysis changes caused by ENO1. Our analysis of glucose consumption and lactic acid production in the stable GC cell lines showed that overexpression of ENO1 significantly enhanced the cells' capacity for glycolysis. We also demonstrated that silencing of ENO1 decreased the glycolytic capacity of GC cells. These results showed that ENO1 could increase the stemness of GC cells by enhancing their glycolytic capacity. The phenomenon of an increased glycolysis rate in tumor cells is called the Warburg effect 47. The significance of glycolysis has been increasingly demonstrated in many diverse cancers, including GC.
Recent studies have revealed that increased glycolysis levels possibly contribute to the development of cancer cells. For example, ENO1 enhances the level of glycolysis to promote GC cells' resistance to chemotherapy 27. Moreover, accelerated glycolysis increases the proliferation and invasion of non-small cell lung cancer 31. Previous studies have demonstrated that the enhancement of aerobic glycolysis markedly promotes cancer cell growth and development 32,53,54. However, studies focused on the association between glycolysis levels and the stem-like characteristics of GC cells are scarce to nonexistent. In this study, we inhibited the glycolytic capacity of GC cells using 2-DG and then explored the changes in their stemness. Our analysis of glucose consumption and lactic acid production confirmed that treatment with 2-DG significantly inhibited glycolysis in GC cells. We also found that inhibiting glycolysis decreased their capacity for self-renewal, invasion, and resistance to chemotherapy. In summary, inhibition of glycolysis markedly reduced the stemness of GC cells. Taken together, these results indicated that ENO1 could increase the stemness of GC cells by enhancing their glycolytic capacity. In conclusion, our study showed that ENO1 was upregulated in GC spheroid cells, which were characterized by increased stemness compared with parental cells, and that its upregulation was associated with poor prognosis in GC patients. Functionally, ENO1 promoted the stem-like characteristics of GC cells by markedly regulating tumor glycolysis. Our data demonstrated that ENO1 is associated with the stemness of GC cells and could be used as a predictive biomarker for GCSCs. Future work should determine whether ENO1 can be used for prognosis and as a therapeutic target in GC.
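As a closing technical note on the Seahorse measurements used throughout this study: a glycolysis stress test records ECAR before and after sequential injections of glucose, oligomycin, and 2-DG, and summary parameters are commonly derived from those phases as sketched below. This is a generic illustration with placeholder numbers, not the authors' analysis script or data.

```python
# Minimal sketch of how glycolysis parameters are commonly derived from an
# extracellular acidification rate (ECAR) trace in a glucose/oligomycin/2-DG
# stress test. The trace below is an illustrative placeholder.
import numpy as np

# ECAR readings (mpH/min), three measurements per phase:
# basal -> after glucose -> after oligomycin -> after 2-DG
ecar = np.array([12, 13, 12, 45, 47, 46, 70, 72, 71, 15, 14, 13], dtype=float)
basal, post_glucose, post_oligo, post_2dg = (
    ecar[0:3], ecar[3:6], ecar[6:9], ecar[9:12])

glycolysis = post_glucose.max() - basal[-1]            # glucose-driven acidification
glycolytic_capacity = post_oligo.max() - basal[-1]     # maximum with ATP synthase blocked
glycolytic_reserve = glycolytic_capacity - glycolysis  # headroom between the two
non_glycolytic = post_2dg.mean()                       # acidification left once glycolysis is blocked

print(f"glycolysis={glycolysis:.1f}  capacity={glycolytic_capacity:.1f}  "
      f"reserve={glycolytic_reserve:.1f}  non-glycolytic={non_glycolytic:.1f}")
```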
Birth prevalence and initial treatment of Robin sequence in Germany: a prospective epidemiologic study

Background
We conducted a monthly epidemiological survey to determine the birth prevalence of Robin sequence (RS) and the use of various therapeutic approaches for it.

Methods
Between August 2011 and July 2012, every pediatric department in Germany was asked to report new admissions of infants with RS to the Surveillance Unit for Rare Pediatric Diseases in Germany. RS was defined as retro- or micrognathia and at least one of the following: clinically evident upper airway obstruction including recessions, snoring or hypoxemia; glossoptosis; feeding difficulties; failure to thrive; cleft palate or RS-associated syndrome. Hospitals reporting a case were asked to return an anonymized questionnaire and discharge letter.

Results
Of 96 cases reported, we received detailed information on 91. Of these, 82 were included; seven were duplicates and two erroneous reports. Given 662,712 live births in Germany in 2011, the birth prevalence was 12.4 per 100,000 live births. Therapeutic approaches applied included prone positioning in 50 infants, followed by functional therapy in 47. Conventional feeding plates were used in 34 infants and the preepiglottic baton plate (PEBP) in 19. Surgical therapy such as mandibular traction was applied in 2 infants, tracheotomy in 3.

Conclusion
Compared to other cohort studies on RS, surgical procedures were relatively rarely used as an initial therapy for RS in Germany. This may be due to differences in phenotype or an underrecognition of upper airway obstruction in these infants.

Introduction
In 1911, Shukowsky was the first to identify a small mandible as responsible for dyspnea and cyanosis in newborns. In 1923, the French stomatologist Pierre Robin described infants with a hypoplastic mandible and glossoptosis resulting in upper airway obstruction with or without a cleft palate; he later became the eponym for this condition. In about 50% of affected children, RS is not isolated but associated with other, mostly syndromic anomalies. Incidence figures are scant, ranging from 1:8500 for the Liverpool area to 1:14,000 for Denmark; there are no such data for Germany. One reason for the rareness of birth prevalence data for RS is probably that no consensus on diagnostic criteria exists for this disorder. RS can lead to complications such as failure to thrive, hypoxemia or cor pulmonale. Although less distinctive forms may only become apparent some weeks after birth with snoring or obstructive sleep apnea (OSA), mandibular hypoplasia appears to be the main problem in RS, particularly if it is associated with the tongue being shifted backwards (glossoptosis), so that its base compresses the epiglottis. Glossoptosis may result in a narrow pharynx and life-threatening respiratory distress. At present there is no consensus about either diagnosis or treatment, and procedures applied seem heterogeneous, but epidemiologic data about their use are missing. We performed a prospective epidemiologic study in Germany to determine 1) the birth prevalence of RS, 2) the distribution of various treatments applied and 3) their perceived effect on airway patency and growth.

Methods and patients
As part of the Surveillance Unit for Rare Pediatric Conditions in Germany (ESPED), all pediatric departments (459 contact persons) received monthly reporting cards asking them about new admissions of infants with RS between August 2011 and July 2012.
Reports on the mailing card prompted immediate mailing of an anonymized three-page questionnaire. Electronic reminders were sent to all non-responders of the full questionnaire. If the completed full questionnaire was still not returned, additional telephone requests were made to the local person responsible for the ESPED collaboration (for details see ). We captured all infants receiving inpatient health care in a pediatric unit during their first year of life, independent of the indication leading to hospitalization. Infants with mild expressions of RS never admitted to a pediatric unit during this period were not included. Inclusion criteria were retro-/micrognathia in patients between 0 and 12 months of age, as suspected by the attending physician, plus at least one of the following additional criteria: upper airway obstruction, including sub-/intercostal retractions, snoring or hypoxemia; glossoptosis; feeding difficulties; weight below the 3rd percentile at admission; cleft palate; or an RS-associated syndrome. Regular analyses show that ESPED's return rate for completed questionnaires consistently exceeds 93%. Hospitals reporting a case received an anonymized three-page questionnaire from the ESPED study center, designed by our group and asking for basic demographic data and clinical symptoms related to RS, occurrence of craniofacial disorders in the family, time of diagnosis, diagnostic procedures and treatment received, complications, nutrition and growth (online supplement). An anonymized medical report was also requested. Having collected these data, we excluded cases that were reported in duplicate or did not meet the inclusion criteria. To determine the birth prevalence of RS, we used data from the National Bureau of Statistics on the number of births in Germany. As this is an explorative study without a primary hypothesis, no sample size calculations were done. Descriptive statistics were applied to characterize the study population. For evaluating weight gain, the standard deviation score (SDS) for weight was computed using the Microsoft Excel add-in LMS Growth (version 2.14; www.healthforallchildren.com/?product=lmsgrowth) and compared using non-parametric tests. The reference population for this program is the British 1990 growth reference fitted by maximum penalized likelihood. A p-value < 0.05 was considered statistically significant. Analyses were done with statistical software (SPSS, release 21.0 for Mac; Chicago, Illinois, USA). The study protocol, including a parental consent waiver, was approved by the ethics committee of Tuebingen University Hospital.

Results
Between August 2011 and July 2012, a total of 96 patients with RS were reported to the study center; detailed information was supplied via returned questionnaires in 91, yielding a response rate of 95%. Of these 91 cases, 82 could be verified; two were erroneous notifications with a diagnosis other than RS, and 7 were duplicate cases. Given 662,712 live births in Germany in 2011, this corresponds to a birth prevalence of 12.4 per 100,000 live births (a brief code sketch of this calculation, and of the weight SDS computation described above, is given below). Patient characteristics are shown in Table 1. Unfortunately, several respondents did not provide answers to all items of the questionnaire. Response rates for single items therefore ranged from 70 to 100%. Of the 82 infants with RS, information about associated syndromes was provided in 56: 28 were isolated RS and 28 had additional syndromic features (Table 2).
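As an aside on the two simple computations referred to above, the sketch below shows how the birth prevalence figure and a weight SDS via Cole's LMS formula can be derived. The case and birth counts are those reported in this study; the L, M, and S values and the example weight are hypothetical placeholders, not values from the British 1990 reference.

```python
# Minimal sketch: (1) birth prevalence from case and birth counts, and
# (2) a weight SDS (z-score) via the standard LMS transformation used by
# growth-reference tools. Placeholder LMS values only.
from math import log

def birth_prevalence(cases: int, live_births: int) -> tuple[float, float]:
    """Return prevalence per 100,000 live births and the 1:n ratio."""
    return cases / live_births * 100_000, live_births / cases

def lms_z_score(x: float, L: float, M: float, S: float) -> float:
    """Cole's LMS formula: z = ((x/M)**L - 1) / (L*S), or ln(x/M)/S if L == 0."""
    return log(x / M) / S if L == 0 else ((x / M) ** L - 1) / (L * S)

per_100k, one_in_n = birth_prevalence(82, 662_712)
print(f"{per_100k:.1f} per 100,000 live births (about 1:{one_in_n:.0f})")  # ~12.4, ~1:8082

# Hypothetical LMS parameters for a given age/sex; x is the infant's weight in kg.
print(f"weight SDS = {lms_z_score(x=3.1, L=0.25, M=3.5, S=0.14):.2f}")
```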
An underlying genetic disorder was diagnosed either by the attending pediatrician or a geneticist, but only 50% of infants with RS were referred to a geneticist. In 13 children, the malformation was observed prenatally; in 7 pregnancies polyhydramnios had been noted; and a family history of a craniofacial condition was found in 8 patients. A diagnosis of RS was made on the day of birth in 58 infants, in 9 during the first week of life, in five during the first month, and in one case after the first month of age. Forty-two children (55% of N = 76 with information supplied) experienced respiratory difficulties necessitating respiratory support. In the first week after birth, 40 patients received nasogastric tube feeding, 16 were fed via a Habermann feeder, and in 10 conventional bottle feeding was used. Other procedures performed are shown in Table 3 and the therapeutic interventions implemented in Table 4. Nutritional supply at discharge occurred in 26 children via nasogastric tube, in 21 via feeding with a regular nipple and in 12 with the Habermann feeder. Fifty-six children were able to feed independently at discharge.

Discussion
This is the first epidemiological study on Robin sequence from Germany. The 82 cases of infants with RS reported here correspond to a birth prevalence of about 1:8080, which is in the upper range of that reported by others. Nevertheless, our data might still underestimate the birth prevalence, as some mild cases of RS were potentially missed if the infants were never hospitalized in pediatric units. Another study found a birth prevalence of 1:2500 for infants with nonsyndromic cleft palates, with about one quarter of these representing infants with RS. Further incidence data on the disorder are sparse, as many other studies consist of (large) case series. Prenatal detection of RS was rare. In only 71% of recorded infants was a suspicion of RS raised on the day of birth, suggesting that in a substantial proportion of infants the initial diagnosis, and the implementation of adequate diagnostic evaluation, could have been missed or at least delayed. This might place affected babies at a significantly increased risk of severe, underestimated breathing difficulties due to unrecognized upper airway obstruction (UAO) in their first days of life. Hence, improvement of the prenatal detection rate of RS and prenatal referral to a reference center is a substantial concern, at least in Germany. Additionally, greater awareness among midwives and obstetricians of the characteristic symptoms and potential consequences of UAO related to the disorder could contribute to earlier recognition of the leading symptoms of RS and might allow for more timely surveillance and treatment of affected neonates. This could potentially lead to an alleviation of the long-term consequences resulting from unrecognized hypoxia. When designing a survey such as this, applying a correct definition is paramount. In RS, this is hampered by the fact that there is considerable variability regarding its definition. To minimize potential underreporting, we used a rather broad definition (retrognathia) plus one of the other symptoms/components listed in a recent survey on diagnostic criteria for RS. Reassuringly, however, 89% of infants were reported to have respiratory distress and 85% a cleft palate, i.e. the classic hallmarks of RS. Glossoptosis, however, was reported in only 71%. In our experience as a referral center for RS, glossoptosis may not always be detected by pediatricians not familiar with it. The distribution of syndromic vs.
non-syndromic RS is in line with other studies, although some syndromic forms of RS may have been missed as only 50% were referred to a geneticist. Stickler syndrome in particular may have been missed, as its clinical features are often not immediately apparent after birth. When comparing the diagnostic procedures applied, we realized that only 34% of respondents had used polysomnography, the gold standard for detecting OSA in children with RS. This may have led to an underrecognition of sleep-related upper airway obstruction, thus misinforming subsequent therapy. Regarding treatment, we noted that surgical interventions, frequently reported by others, are apparently only rarely performed in Germany. Only 3 children received tracheostomy and 2 were treated with mandibular traction. The treatments most commonly applied were prone positioning in 50/82 infants, followed by functional therapy through a speech therapist (e.g. Castillo Morales); for details see Table 4. Castillo-Morales therapy, originally developed for children with Down syndrome, involves stimulation of the orofacial musculature to help relieve UAO, but has never been systematically studied in RS. Prone positioning has been described in some case series as a sufficient treatment modality in 50 to 80% of RS infants, but its effectiveness has also not been proven objectively (e.g., by polysomnography). Furthermore, it is concerning that studies report a more than 10-fold increase in the risk of sudden infant death syndrome in healthy infants placed prone for sleep, making it questionable whether parents can safely be advised to place their baby with RS prone for sleep. In our survey, the use of palatal plates was reported in 53 infants. These have been shown to resolve glossoptosis and airway obstruction; however, feeding problems may persist. Use of the preepiglottic baton plate (PEBP) was reported in 19 infants (by 10 centers); it is as yet the only intervention for RS whose effectiveness has been tested in a randomized controlled trial. At the time of admission, 40 infants were tube fed; this number was reduced to 26 by the time of hospital discharge. SDS for weight allows weight gain to be assessed objectively. When comparing SDS for weight at admission and discharge, we saw a decrease of 0.7 standard deviations. Thus, although many infants were discharged without nasogastric tube feeding, they apparently did not gain weight appropriately. Unfortunately, the underlying reasons remain unclear. Considering the fact that UAO in the majority of infants was treated with prone positioning as the sole intervention, one may speculate that high energy expenditure due to insufficiently treated breathing difficulties based on UAO could be an important source of faltering growth. Poor feeding resulting from swallowing and breathing difficulties might be another reason. In order to elucidate the influence of different factors leading to faltering growth, such as enhanced energy expenditure due to undertreated UAO, feeding difficulties or a potentially underlying genetic disorder, growth data during infancy under different treatment modalities should be evaluated. This could lead to a better appreciation of growth problems in infants with RS and, in the long term, to a reduction of poor growth and its long-term consequences in this population. We received information on cases only some weeks or months after admission. Our data may thus be subject to recall bias.
As questionnaires were not always completed in full, information is lacking for several items, which may also bias our results. Also, we have no way to ascertain how many cases were missed. Notably, outpatients who never required pediatric inpatient treatment, or infants who died prior to transport to a monitoring unit (i.e., a pediatric department), might not have been captured. Thus, patients with mild characteristics of RS might have been missed. This is a limitation of our study. Unfortunately, there are no guidelines in Germany as to whether an infant with suspected RS has to be admitted to a hospital as an inpatient for further evaluation and initial treatment. On the other hand, it is reassuring that our birth prevalence data are in the upper range of what has been reported by others, and that response rates with the ESPED surveillance system are high. Finally, our definition of RS differed from that used by others, leading to uncertainty when comparing our results with published data on birth prevalence rates for RS.

Conclusion
The birth prevalence of RS in Germany is about 1:8000 live births, and surgical therapeutic options, which dominate the international literature, do not seem to play a major role. Nonetheless, our data confirm that achieving normal weight gain remains a challenge, and we have no data to ascertain whether UAO, frequently encountered in these infants, was resolved by the therapies applied.
The multimillion-dollar developer at the centre of the storm over sunset clawbacks of off-the-plan apartments has been revealed to have just $83.41 in his company’s bank account. Ash Samadi, the man behind the dispute-plagued East Central apartment building in Surry Hills, and now the soon-to-be-built Botanik next door, has lost his NSW Supreme Court bid to delay his company’s financial affairs being scrutinised by the builders of the first block, SX Projects. They’ve successfully sued Samadi Developments Pty Ltd for $1.4 million but haven’t yet received the money. In his court judgment, Justice Francois Kunc​ ordered that Mr Samadi produce all details of the company’s bank accounts and financial documentation, following the dispute over the construction contract. Mr Samadi had previously given a security interest over all his company’s assets to his sister. It later came to light that its bank balance was virtually nil. Justice Kunc said these “might be thought to be some odd features” and, in addition, noted that the company “somewhat unusually” did not even own the land on which East Central was built. Instead, that belonged to two other companies, one owned by his father and sister, and the other owned by a separate company operated by Mr Samadi. Mr Samadi had also asked that he defer having to appear to be grilled about the company’s finances as he was busy with his Porsche racing car team, Garth Walden Racing, at the Bathurst 1000 and then the upcoming Gold Coast 600 event next week. He was ordered to attend court on the originally scheduled date. Samadi Developments, in which Mr Samadi is the sole director and shareholder, is now also facing legal action from six of the seven buyers of two-bedroom apartments in East Central who had their contracts rescinded – in an unusual but quite legal move – because of works that went over the due sunset clause completion date. Some apartments have subsequently been put back on the market by the developer for price tags up to 50 per cent more. Mr Samadi did not return Domain calls. One of those buyers taking action against him is retired oncologist John Stewart, who’d bought a two-bedroom unit for $830,000 in 2012 as his retirement nest egg. Stewart believes the unit is now worth about $500,000 more. “We’ve all gone in together to take legal action against him, but we were shocked to learn there’s only $83 in the account,” says Stewart, 65, who’d bought the apartment through his superannuation fund. “How can that be? That wouldn’t even buy enough fuel for a single lap with a racing car.” The group of people affected by the sunset clawbacks have all donated money to start legal proceedings, asking for documents from all parties that have been involved with the seven-level, 42-unit East Central development on Elizabeth Street. They’re now preparing to lodge a discovery motion at the NSW Supreme Court. Stewart has also put a caveat on the apartment he put the deposit on off the plan, to prevent it from being resold, and says he’s been told that if he doesn’t remove it, he may be sued. “I feel very disenchanted that I bought an apartment three years ago and now I may have nothing to show for that,” says Mr Stewart, the father of four children. “This has cost me a tax-free capital gain of $500,000. “I’m very disappointed, and bemused, that this could ever happen.”
1. Field of the Invention
The present invention relates to storage subsystems and in particular to methods and associated apparatus which provide shared access to common storage devices within the storage subsystem by multiple storage controllers.

2. Discussion of Related Art
Modern mass storage subsystems are continuing to provide increasing storage capacities to fulfill user demands from host computer system applications. Due to this critical reliance on large capacity mass storage, demands for enhanced reliability are also high. Various storage device configurations and geometries are commonly applied to meet the demands for higher storage capacity while maintaining or enhancing reliability of the mass storage subsystems. One solution to these mass storage demands for increased capacity and reliability is the use of multiple smaller storage modules configured in geometries that permit redundancy of stored data to assure data integrity in case of various failures. In many such redundant subsystems, recovery from many common failures can be automated within the storage subsystem itself due to the use of data redundancy, error correction codes, and so-called "hot spares" (extra storage modules which may be activated to replace a failed, previously active storage module). These subsystems are typically referred to as redundant arrays of inexpensive (or independent) disks (or more commonly by the acronym RAID). The 1987 publication by David A. Patterson, et al., from the University of California at Berkeley, entitled A Case for Redundant Arrays of Inexpensive Disks (RAID), reviews the fundamental concepts of RAID technology. There are five "levels" of standard geometries defined in the Patterson publication. The simplest array, a RAID level 1 system, comprises one or more disks for storing data and an equal number of additional "mirror" disks for storing copies of the information written to the data disks. The remaining RAID levels, identified as RAID level 2, 3, 4 and 5 systems, segment the data into portions for storage across several data disks. One or more additional disks are utilized to store error check or parity information (a brief illustrative example of such parity is given below). RAID storage subsystems typically utilize a control module that shields the user or host system from the details of managing the redundant array. The controller makes the subsystem appear to the host computer as a single, highly reliable, high capacity disk drive. In fact, the RAID controller may distribute the host computer system supplied data across a plurality of the small independent drives with redundancy and error checking information so as to improve subsystem reliability. Frequently RAID subsystems provide large cache memory structures to further improve the performance of the RAID subsystem. The cache memory is associated with the control module such that the storage blocks on the disk array are mapped to blocks in the cache. This mapping is also transparent to the host system. The host system simply requests blocks of data to be read or written and the RAID controller manipulates the disk array and cache memory as required. To further improve reliability, it is known in the art to provide redundant control modules to reduce the failure rate of the subsystem due to control electronics failures. In some redundant architectures, pairs of control modules are configured such that they control the same physical array of disk drives.
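As a brief aside on the parity-based RAID geometries described above (background art, not the claimed invention), the following sketch illustrates how XOR parity lets a stripe survive the loss of a single data disk. The block contents are arbitrary placeholders.

```python
# Minimal sketch: block-level XOR parity of the kind used by RAID levels 3-5,
# showing reconstruction of a lost data block from the survivors plus parity.
def xor_blocks(blocks: list[bytes]) -> bytes:
    """Byte-wise XOR of equally sized blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # one stripe spread across three data disks
parity = xor_blocks(data)            # stored on the parity disk

# Simulate losing disk 1 and rebuilding its block from the survivors + parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```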
A cache memory module is associated with each of the redundant pair of control modules. The redundant control modules communicate with one another to assure that the cache modules are synchronized. When one of the redundant pair of control modules fails, the other stands ready to assume control to carry on operations on behalf of I/O requests. However, it is common in the art to require host intervention to coordinate failover operations among the controllers. It is also known that such redundancy methods and structures may be extended to more than two control modules. Theoretically, any number of control modules may participate in the redundant processing to further enhance the reliability of the subsystem. However, when all redundant control modules are operable, a significant portion of the processing power of the redundant control modules is wasted. One controller, often referred to as a master or the active controller, essentially processes all I/O requests for the RAID subsystem. The other redundant controllers, often referred to as slaves or passive controllers, are simply operable to maintain a consistent mirrored status by communicating with the active controller. As taught in the prior art, for any particular RAID logical unit (LUN, a group of disk drives configured to be managed as a RAID array), there is a single active controller responsible for processing of all I/O requests directed thereto. The passive controllers do not concurrently manipulate data on the same LUN. It is known in the prior art to permit each passive controller to be deemed the active controller with respect to other LUNs within the RAID subsystem. So long as there is but a single active controller with respect to any particular LUN, the prior art teaches that there may be a plurality of active controllers associated with a RAID subsystem. In other words, the prior art teaches that each active controller of a plurality of controllers is provided with coordinated shared access to a subset of the disk drives. The prior art therefore does not teach or suggest that multiple controllers may be concurrently active processing different I/O requests directed to the same LUN. In view of the above it is clear that a need exists for an improved RAID control module architecture that permits scaling of RAID subsystem performance through improved connectivity of multiple controllers to shared storage modules. In addition, it is desirable to remove the host dependency for failover coordination. More generally, a need exists for an improved storage controller architecture for improved scalability by shared access to storage devices to thereby enable parallel processing of multiple I/O requests. The present invention solves the above and other problems, and thereby advances the useful arts, by providing methods and associated apparatus which permit all of a plurality of storage controllers to share access to common storage devices of a storage subsystem. In particular, the present invention provides for concurrent processing by a plurality of RAID controllers simultaneously processing I/O requests. Methods and associated apparatus of the present invention serve to coordinate the shared access so as to prevent deadlock conditions and interference of one controller with the I/O operations of another controller. Notably, the present invention provides inter-controller communications to obviate the need for host system intervention to coordinate failover operations among the controllers.
Rather, a plurality of controllers share access to common storage modules and communicate among themselves to permit continued operations in case of failures. As presented herein, the invention is discussed primarily in terms of RAID controllers sharing access to a logical unit (LUN) in the disk array of a RAID subsystem. One of ordinary skill will recognize that the methods and associated apparatus of the present invention are equally applicable to a cluster of controllers commonly attached to shared storage devices. In other words, RAID control management techniques are not required for application of the present invention. Rather, RAID subsystems are a common environment in which the present invention may be advantageously applied. Therefore, as used herein, a LUN (a RAID logical unit) is to be interpreted as equivalent to a plurality of storage devices or a portion of one or more storage devices. Likewise, RAID controller or RAID control module is to be interpreted as equivalent to a storage controller or storage control module. For simplicity of this presentation, RAID terminology will be primarily utilized to describe the invention but should not be construed to limit application of the present invention only to storage subsystems employing RAID techniques. More specifically, the methods of the present invention utilize communication between a plurality of RAID controlling elements (controllers) all attached to a common region on a set of disk drives (a LUN) in the RAID subsystem. The methods of the present invention transfer messages among the plurality of RAID controllers to coordinate concurrent, shared access to common subsets of disk drives in the RAID subsystem. The messages exchanged between the plurality of RAID controllers include access coordination messages such as stripe lock semaphore information to coordinate shared access to a particular stripe of a particular LUN of the RAID subsystem. In addition, the messages exchanged between the plurality of controllers include cache coherency messages such as cache data and cache meta-data to assure consistency (coherency) between the caches of each of the plurality of controllers. In particular, one of the plurality of RAID controllers is designated as the primary controller with respect to each of the LUNs (disk drive subsets) of the RAID subsystem. The primary controller is responsible for fairly sharing access to the common disk drives of the LUN among all requesting RAID controllers. A controller desiring access to the shared disk drives of the LUN sends a message to the primary controller requesting an exclusive temporary lock of the relevant stripes of the LUN. The primary controller returns a grant of the requested lock in due course when such exclusivity is permissible. The requesting controller then performs any required I/O operations on the shared devices and transmits a lock release to the primary controller when the operations have completed. The primary controller manages the lock requests and releases using a pool of semaphores for all controllers accessing the shared LUNs in the subsystem (an illustrative sketch of this request/grant/release exchange is given below). One of ordinary skill in the art will readily recognize that the primary/secondary architecture described above may be equivalently implemented in a peer-to-peer or broadcast architecture. As used herein, exclusive, or temporary exclusive access, refers to access by one controller which excludes incompatible access by other controllers.
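The sketch below illustrates the request/grant/release exchange just described. It is a minimal, single-process illustration and not the patented implementation: message transport, fairness policy, multi-stripe locking, and failover are all omitted, and the class and method names are invented for the example.

```python
# Minimal sketch: a primary controller granting temporary exclusive stripe locks
# to requesting controllers from a pool of per-stripe semaphores.
import threading
from collections import defaultdict

class PrimaryController:
    """Holds a pool of per-stripe semaphores for the LUNs it is primary for."""
    def __init__(self):
        self._locks = defaultdict(threading.Lock)   # (lun, stripe) -> semaphore

    def request_lock(self, controller_id: str, lun: int, stripe: int) -> None:
        # Blocks until exclusive access to (lun, stripe) can be granted.
        self._locks[(lun, stripe)].acquire()
        print(f"granted ({lun},{stripe}) to {controller_id}")

    def release_lock(self, controller_id: str, lun: int, stripe: int) -> None:
        self._locks[(lun, stripe)].release()
        print(f"released ({lun},{stripe}) by {controller_id}")

primary = PrimaryController()
primary.request_lock("ctrl-B", lun=0, stripe=17)   # requesting controller asks the primary
# ... ctrl-B performs its read/modify/write on stripe 17 of LUN 0 ...
primary.release_lock("ctrl-B", lun=0, stripe=17)
```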
One of ordinary skill will recognize that the degree of exclusivity among controllers depends upon the type of access required. For example, exclusive read/write access by one controller may preclude all other controller activity, exclusive write access by one controller may permit read access by other controllers, and similarly, exclusive append access by one controller may permit read and write access to other controllers for unaffected portions of the shared storage area. It is therefore to be understood that the terms "exclusive" and "temporary exclusive access" refer to all such configurations. Such exclusivity is also referred to herein as "coordinated shared access." Since most RAID controllers rely heavily on cache memory subsystems to improve performance, cache data and cache meta-data are also exchanged among the plurality of controllers to assure coherency of the caches on the plurality of controllers which share access to the common LUN. Each controller which updates its cache memory in response to processing an I/O request (or other management related I/O operation) exchanges cache coherency messages to that effect with a designated primary controller for the associated LUN. The primary controller, as noted above, carries the primary burden of coordinating activity relating to the associated LUN. In addition to the exclusive access lock structures and methods noted above, the primary controller also serves as the distributed cache manager (DCM) to coordinate the state of cache memories among all controllers which manipulate data on the associated LUN. In particular, a secondary controller (non-primary with respect to a particular LUN) wishing to update its cache data in response to an I/O request must first request permission of the primary controller (the DCM for the associated LUN) for the intended update. The primary controller then invalidates any other copies of the same cache data (now obsolete) within any other cache memory of the plurality of controllers. Once all other copies of the cache data are invalidated, the primary controller grants permission to the secondary controller which requested the update. The secondary controller may then complete the associated I/O request and update the cache as required. The primary controller (the DCM) thereby maintains data structures which map the contents of all cache memories in the plurality of controllers which contain cache data relating to the associated LUN. The semaphore lock request and release information and the cache data and meta-data are exchanged between the plurality of shared controllers through any of several communication mediums. A dedicated communication bus interconnecting all RAID controllers may be preferred for performance criteria, but may present cost and complexity problems. Another preferred approach is one in which the information is exchanged via the communication bus which connects the plurality of controllers to the common subset of disk drives in the common LUN. This communication bus may be any of several industry standard connections, including, for example, SCSI, Fibre Channel, IPI, SSA, PCI, etc. Similarly, the host connection bus which connects the plurality of RAID controllers to one or more host computer systems may be utilized as the shared communication medium.
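Similarly, the distributed cache manager role described above can be pictured with the following minimal sketch. It is an illustration only, not the patented implementation: the messaging layer and the actual cache stores are stubbed out, and all names are invented for the example.

```python
# Minimal sketch: the primary controller as distributed cache manager (DCM),
# invalidating stale copies of a cache block on other controllers before
# granting a secondary controller permission to update it.
from collections import defaultdict

class DistributedCacheManager:
    def __init__(self):
        # (lun, block) -> set of controller ids currently caching that block
        self._holders = defaultdict(set)

    def request_update(self, requester: str, lun: int, block: int) -> bool:
        """Grant an update after invalidating every other cached copy of the block."""
        for ctrl in self._holders[(lun, block)] - {requester}:
            self._send_invalidate(ctrl, lun, block)
        self._holders[(lun, block)] = {requester}   # requester now holds the only valid copy
        return True                                 # permission granted

    def _send_invalidate(self, ctrl: str, lun: int, block: int) -> None:
        print(f"invalidate ({lun},{block}) on {ctrl}")  # stand-in for a real coherency message

dcm = DistributedCacheManager()
dcm._holders[(0, 42)] = {"ctrl-A", "ctrl-C"}
dcm.request_update("ctrl-B", lun=0, block=42)       # invalidates A and C, then grants B
```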
In addition, the communication medium may be a shared memory architecture in which the plurality of controllers share access to a common, multiported memory subsystem (such as the cache memory subsystem of each controller). As used herein, controller (or RAID controller, or control module) includes any device which applies RAID techniques to an attached array of storage devices (disk drives). Examples of such controllers are RAID controllers embedded within a RAID storage subsystem, RAID controllers embedded within an attached host computer system, RAID control techniques constructed as software components within a computer system, etc. The methods of the present invention are similarly applicable to all such controller architectures. Another aspect of the present invention is the capability to achieve N-way connectivity wherein any number of controllers may share access to any number of LUNs within a RAID storage subsystem. A RAID storage subsystem may include any number of control modules. When operated in accordance with the present invention to provide temporary exclusive access to LUNs within commonly attached storage devices, such a RAID subsystem provides redundant paths to all data stored within the subsystem. These redundant paths serve to enhance reliability of the subsystem while, in accordance with the present invention, enhancing performance of the subsystem by performing multiple operations concurrently on common shared LUNs within the storage subsystem. The configuration flexibility enabled by the present invention permits a storage subsystem to be configured for any control module to access any data within the subsystem, potentially in parallel with other access to the same data by another control module. Whereas the prior art generally utilized two controllers only for purposes of paired redundancy, the present invention permits the addition of controllers for added performance as well as added redundancy. Cache mirroring techniques of the present invention are easily extended to permit (but not require) any number of mirrored cached controllers. By allowing any number of interfaces (i.e., FC-AL loops) on each controller, various sharing geometries may be achieved in which certain storage devices are shared by one subset of controllers but not another. Virtually any mixture of connections may be achieved in RAID architectures under the methods of the present invention which permit any number of controllers to share access to any number of common shared LUNs within the storage devices. Furthermore, each particular connection of a controller or group of controllers to a particular LUN or group of LUNs may be configured for a different level of access (i.e., read-only, read-write, append only, etc.). Any controller within a group of commonly connected controllers may configure the geometry of all controllers and LUNs in the storage subsystem and communicate the resultant configuration to all controllers of the subsystem. In a preferred embodiment of the present invention, a master controller is designated and is responsible for all configuration of the subsystem geometry. The present invention therefore improves the scalability of a RAID storage subsystem such that control modules can be easily added and configured for parallel access to common shared LUNs. Likewise, additional storage devices can be added and utilized by any subset of the controllers attached thereto within the RAID storage subsystem.
A RAID subsystem operable in accordance with the present invention therefore enhances the scalability of the subsystem to improve performance and/or redundancy through the N-way connectivity of controllers and storage devices. It is therefore an object of the present invention to provide methods and associated apparatus for concurrent processing of I/O requests by RAID controllers on a shared LUN. It is a further object of the present invention to provide methods and associated apparatus for concurrent access by a plurality of RAID controllers to a common LUN. It is still a further object of the present invention to provide methods and associated apparatus for coordinating shared access by a plurality of RAID controllers to a common LUN. It is yet another object of the present invention to provide methods and associated apparatus for managing semaphores to coordinate shared access by a plurality of RAID controllers to a common LUN. It is still another object of the present invention to provide methods and associated apparatus for managing cache data to coordinate shared access by a plurality of RAID controllers to a common LUN. It is further an object of the present invention to provide methods and associated apparatus for managing cache meta-data to coordinate shared access by a plurality of RAID controllers to a common LUN. It is still further an object of the present invention to provide methods and associated apparatus for exchanging messages via a communication medium between a plurality of RAID controllers to coordinate shared access by a plurality of RAID controllers to a common LUN. It is another object of the present invention to provide methods and associated apparatus which enable N-way redundant connectivity within the RAID storage subsystem. It is still another object of the present invention to provide methods and associated apparatus which improve scalability of a RAID storage subsystem for performance. The above and other objects, aspects, features, and advantages of the present invention will become apparent from the following description and the attached drawing.
Some thoughts on ancient civilizations' trinity of philosophy, religion and economics
Here are some loud thoughts that reflect upon the relationship that long existed among philosophy, religion, and economics in the so-called grand civilizations (which existed from 3100 BC to the beginning of the Christian era). Historically, the visions of intellectuals, rulers, men of faith, and business people helped drive these civilizations to their zenith. The philosophies, religions, and economics of the time were deeply involved in this process of development, and seem to have acted in unison. Here is an attempt to provoke some fresh thinking on the subject by re-examining this triad relationship among the fundamental spheres of human life. The logic of this paper attempts to raise doubts as to whether the relationship was ideal and based on ethical and moral values, as was proclaimed by the philosophers, pontiffs, politicians, and business leaders of the time.
Effect of coke bed on the electrical performance of HVDC ground electrode
To prevent the high voltage direct current (HVDC) ground electrode from eroding, it is usually wrapped in a coke bed, which may affect the electrical performance of the electrode, such as its current distribution and step voltage. In this paper, the effect is analyzed by a numerical method. The method couples the moment method with circuit theory by regarding the coke bed as several resistors connecting the ground electrode with the soil, which is simple and efficient. The current distribution, ground resistance and step voltage of several typical electrodes with and without coke beds are calculated, and the effect of the coke bed on these parameters is summarized.
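For readers who want a feel for the circuit side of this coupling, the sketch below computes one plausible lumped resistor for a coke-bed segment, namely the radial resistance of a cylindrical coke shell, R = rho * ln(r2/r1) / (2 * pi * L). The geometry and resistivity values are hypothetical placeholders, and the moment-method soil model of the paper is not reproduced here.

```python
# Minimal sketch, assuming a simplified geometry: split the coke bed around a
# buried electrode into segments and replace each segment by the radial
# resistance of a cylindrical coke shell between the conductor and the soil.
from math import log, pi

def coke_shell_resistance(rho_coke: float, r_inner: float, r_outer: float, seg_len: float) -> float:
    """Radial resistance (ohms) of one cylindrical coke-bed segment."""
    return rho_coke * log(r_outer / r_inner) / (2 * pi * seg_len)

# Illustrative values: a 100 m electrode in 20 segments, 0.03 m conductor radius,
# 0.3 m coke-bed radius, coke resistivity 0.5 ohm*m.
n_seg, seg_len = 20, 100.0 / 20
r_seg = coke_shell_resistance(rho_coke=0.5, r_inner=0.03, r_outer=0.3, seg_len=seg_len)
print(f"per-segment coke resistance: {r_seg * 1000:.1f} mOhm")
```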
SAN DIEGO -- The amphibious transport dock ship USS Anchorage is scheduled to open for public tours Wednesday afternoon at Broadway Pier in downtown San Diego. Tours will be available today from 1-4 p.m., and Thursday from 8:30 a.m. to 4 p.m., the Navy said. The 684-foot-long ship, commissioned in her namesake city three years ago, was built to embark, transport, and land elements of a Marine Corps force in a variety of expeditionary and special operations missions. The Anchorage and other San Antonio class vessels can also deliver expeditionary fighting vehicles, landing craft, air cushion boats and tilt-rotor MV-22 Ospreys. The Navy said visitors will board on a first-come, first-served basis, and may have to wait in line during peak hours. All visitors in line by 4 p.m. will be accommodated for a tour. Guests should also expect security screenings prior to boarding. When touring the ship, visitors are encouraged to bring as few items as possible, and flat-heeled, closed-toe shoes are recommended. High-heel shoes, flip-flops and inappropriate attire like bathing suits are not permitted. The ship is not handicapped accessible and visitors must be in good physical condition to walk safely about the ship, and move up and down steep ladders, according to the Navy. Touring the ship may not be appropriate for small children or those with medical conditions that impede mobility. Adult guests will be required to show a valid U.S. state or federal government issued photo identification card. Minors should be accompanied by an adult with valid ID. Small hand-carried items such as handbags, clear bottles of water, small cameras or diaper bags are permitted, but guests and bags will be screened. Large bags or purses, including backpacks or large camera bags, are not permitted.
WASHINGTON (Reuters) - The commander of U.S. Central Command, General Joseph Votel, on Friday rejected claims by Turkish officials that he supported a failed coup attempt earlier this month. “Any reporting that I had anything to do with the recent unsuccessful coup attempt in Turkey is unfortunate and completely inaccurate,” Votel said, according to the statement from U.S. Central Command. Turkey has undertaken purges of its military and other state institutions following the failed coup, targeting the supporters of U.S.-based Muslim cleric Fethullah Gulen, accused by Ankara of masterminding the July 15-16 coup attempt. Turkey’s Western allies condemned the coup attempt, in which at least 246 people were killed and more than 2,000 injured, but they have been rattled by the scale of the crackdown. Votel issued his statement after Turkish President Tayyip Erdogan condemned Votel’s earlier remarks that some military figures the United States had worked with were in jail as a result of the purge. On Thursday, Votel said at a public forum that he was worried about “longer-term” impacts from the failed coup on counter-terrorism operations and the United States’ relationship with the Turkish military. Those comments drew a condemnation on Friday from Erdogan. “Instead of thanking this country which repelled a coup attempt, you take the side of the coup plotters. The putschist is in your country already,” Erdogan said, referring to Gulen, who has denied any involvement in the coup attempt. Turkey’s cooperation in the fight against Islamic State is of paramount importance to Washington. It is a central part of the U.S.-led military operation against Islamic State, hosting U.S. troops and warplanes at Incirlik Air Base, from which the United States flies sorties against Islamic State militants in Iraq and Syria. Those air operations were temporarily halted following the coup attempt. “Turkey has been an extraordinary and vital partner in the region for many years,” Votel said in his statement. “We appreciate Turkey’s continuing cooperation and look forward to our future partnership in the counter-ISIL fight,” he said, using an acronym for Islamic State.
The 36th annual California Strawberry Festival is seeking corporate sponsors and exhibitors. This year’s festival will be May 18 and 19 at Strawberry Meadows of College Park in Oxnard. Sponsors are included in marketing, public relations and social media campaigns leading up to the festival. A limited number of commercial exhibitor spaces are also available. Money raised through sponsorships and corporate exhibitors helps the festival fund regional charitable organizations and post-secondary education scholarships. Since the festival began, more than $4.5 million has gone to these efforts. For sponsorship and commercial exhibitor options, visit http://castrawberryfestival.org and click on "Sponsors & Partners." Direct any questions to Marty Lieberman at 818-512-5892, marty@liebermanconsulting.net.
Michael Booth, who died on Jan. 18, had been hospitalized since the crash on Nov. 6. A Spencerport man has died from injuries he sustained in a two-vehicle crash last November. A vehicle driven by Michael Booth, 23, was struck by an oncoming driver on Buffalo Road in Gates on Nov. 6, according to the Monroe County Sheriff's Office. The driver of the other vehicle, 88-year-old Waldo Comfort, was issued two tickets at the time of the incident, but was not criminally charged. Deputies did not observe any signs of impairment by drugs or alcohol, according to an MCSO statement. According to the family's GoFundMe page, Booth suffered a traumatic brain injury in the crash. He was on his way to work at Wegmans when the accident occurred on Buffalo Road near Interstate 490. Comfort, a Henrietta resident, was ticketed for unreasonable speed and failing to maintain his lane. Deputies also submitted a driver referral form, which prompts the New York State Department of Motor Vehicles to determine whether a person is qualified to be driving.
Yes, wind power is “green,” but it didn’t become a force on the energy landscape until it also became cheap. Over the past decade, that has begun to happen, thanks to a combination of improvements in technology and federal and state tax incentives. As Stephen Gandel and Katie Fehrenbacher report this week in Fortune, the average cost of wind energy dropped by about a third between 2008 and 2013; in some parts of the country, it’s the cheapest electricity source available. Not coincidentally, as the chart below shows, wind’s share of renewable-energy output has soared. The Department of Energy expects wind to generate 10% of America’s electricity by 2020, up from about 7% today. (By comparison, coal and natural gas today each account for about a third.) There’s no guarantee that the wind empire will keep growing. Climate conditions and population density keep it from being cost-effective in some parts of the country. Perhaps more important, the regulatory climate could become less friendly. The current federal tax credits for wind power will begin to phase out in 2017, and President-elect Donald Trump has said he doesn’t want the government to continue subsidizing the industry. But wind will continue to have at least one powerful backer: Warren Buffett, whose Berkshire Hathaway Energy division is on track to become the country’s largest producer of wind power. For more on Buffett’s effort to put more turbines in the skies, read Fortune’s feature here. The charts above appear in “Warren Buffett’s All-In-Clean-Energy Bet,” part of the 2017 Investor’s Guide in the December 15, 2016 issue of Fortune.
Used private jets are selling for millions less than they were just a few years ago, when demand from buyers in Brazil, Russia, and other emerging luxury markets overwhelmed suppliers. These days a Gulfstream GV sells for just $10 million, down from the $18 million it would have fetched in 2014. If you’re looking to upgrade from economy, now’s the time to pounce.
Ilhan Omar’s tweet came in response to accusations that a tweet she wrote in 2012, accusing Israel of “evil doings,” amounts to anti-Semitism. In an interview with ABC News for a segment titled “Progressive Democrats increasingly criticize Israel, and could reap political rewards,” Omar rejected accusations of anti-Semitism leveled by conservative critics. Minnesota’s primary election is Aug. 14. The ABC News segment noted the recent upset primary victory of Alexandria Ocasio-Cortez in New York, calling her one of several progressives whose willingness to criticize Israel’s actions has paid off politically.
Post-Soviet Russia: The New Hybrid
Rethinking Class in Russia portrays a painfully forming, new hybrid of post-Soviet society in Russia. Drastic changes in the post-Soviet space over the last quarter century have transformed the socio-economic structure of the society. Thus, the need has emerged to revise the existing socio-cultural and anthropological foundations of class and society in post-Soviet Russia. A dramatic increase in inequality, unemployment, poverty, homelessness, health problems, alcoholism, drug addiction, and crime added to already existing problems inherited from, and indeed aggravated since, the Soviet era. All these changes presented a challenge not only for the society but for scholars as well: research and explanations were needed. To address this challenge, Suvi Salmenniemi assembled a strong team of scholars who are acclaimed experts in a range of social sciences. The market transition of the 1990s was signified by the transformation from a socialist to a capitalist society, with all the accompanying changes in class structure taking place swiftly, within just a few years. Changes in the class structure were accompanied by worsening socio-economic conditions. As Salmenniemi points out in Chapter 3, "Post-Soviet Khoziain: Class, Self and Morality in Russian Self-Help Literature": "The explosive growth of income disparities and the concomitant social differentiation have been amongst the most palpable and significant consequences of Russia's social transformation. It has disrupted the sense of human worth, redefined conceptions of morality and social justice, and created gaping chasms between people who not so long ago shared the same social milieu and ideological universe." In these circumstances, a classical interpretation of class becomes an excellent example of bounded rationality. The book argues that class today crosses national boundaries and absorbs layers of social strata. The symbolic dimensions of class are signified by perceptions and ascriptions of characteristics and identities, both negative and positive. The authors present a highly nuanced look at specific issues in public discourses and practices of class.