On 10 November 2009, a local news crew was at a roadside site investigating a previous rock slide. Strange popping sounds were heard on the cliff face as small fragments of rock started to bounce down the slope. Luckily, there was a geologist on site who recognised this as a sign that another landslide might happen, and she moved everybody out of the way. The cameraman simply stood with his camera and recorded this remarkable footage of a rock slide event. The rock here is made up of thin layers that dip towards the road that has been cut through the mountainside. As water seeps through weak spots (bedding or cleavage planes), the rock loses strength and the layers slide.

On 15 February 2010, the side of a hillslope slipped past people while they stood and watched. A landslide had happened here before, and geologists had seen signs that it would move again, so people were evacuated and no one was hurt. Gravity constantly tugs downward on a slope, but only when gravity's pull exceeds the strength of the rocks, soils and sediments making up the slope does land begin to slide downhill. Heavy rainfall in the Maierato region is likely to have started this slide.

On 18 June 1972, near 14 Po Shan Road, approximately 40 000 m3 of debris travelled some 270 m downslope, resulting in 67 deaths, 20 injuries, two buildings destroyed and one building severely damaged. A construction site above the major part of the landslide was being redeveloped at the time, and two landslides had already occurred there in late 1971. This landslide developed over a few days. Work on the construction site above the road, together with exceptionally heavy rainfall in early 1972, caused the failure. About 1400 mm of rainfall was recorded between May and June 1972, and more than 650 mm fell from 16 to 18 June 1972, when the main landslide happened.

On 29 April 1978, a landslide wiped away an area of about 330 000 m2 (about the same as 47 football pitches), including 13 farms, two homes and the local community centre. The slide contained about 5 to 6 million m3 of material (about 2400 Olympic swimming pools) and was the biggest slide in Norway in the 20th century. Of the 40 people caught in the slide area, only one person died. In this case a farmer had dug a pit on his land and put the excavated material on the edge of the lake. This extra weight was too much for the clay to bear, and the landslide began. The slide started at the lake shoreline and developed backwards and landwards, taking with it people, farms and homes. This type of landslide is rare and was caused by the special make-up of the clay material. Quick clay was laid down under the sea; over time, the salt has been leached out by water passing through it, leaving a clay crust with the salt-free marine clay underlying it. When too much weight is loaded onto the clay, its strength fails and it collapses. It then becomes 'remoulded' and behaves like a liquid. Not only did the landslide travel backwards from the lake, it also caused great damage to the community of Leira when, as a result of the clay sliding into the lake, a three-metre-high flood wave reached the opposite bank of Lake Botnen shortly after the main slide.

This huge landslide is thought to have occurred in prehistoric times, perhaps 10 300 years ago; an earthquake is the likely cause. It is a good example of how a landslide can be studied and mapped through modern satellite imagery.
It is known as the Saidmareh landslide but is also sometimes called the Saidmarreh, Seymareh or Kabir Kuh landslide. It is 5 km wide, and the landslide material covered an area of 64 square miles (165 km2). It is thought that about 20 km3 of material, some 50 billion tonnes of rock, was moved in a single event, enough to fill eight million Olympic swimming pools. The landslide blocked and dammed two rivers, so that a pair of lakes formed. These have since drained, leaving good soil for farming. The modern river has found a new channel through the landslide deposit. The landslide carried rocks a long way, and large blocks have been found as far as nine miles (14 km) from their starting point. The type of deposit left behind and the long distance these blocks travelled suggest that this was a landslide that happened at high speed, also known as a rock avalanche or sturzstrom.
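The swimming-pool comparisons quoted above for the Norwegian slide and for Saidmareh are easy to sanity-check. The short Python sketch below does so, assuming a standard 50 m x 25 m x 2 m Olympic pool (2 500 m3); the pool dimensions are an illustrative assumption, not part of the original text.

```python
# Back-of-the-envelope check of the swimming-pool comparisons above.
# Assumes a standard Olympic pool of 50 m x 25 m x 2 m = 2,500 m^3 (illustrative assumption).
OLYMPIC_POOL_M3 = 50 * 25 * 2

def pools(volume_m3: float) -> float:
    """Number of Olympic pools a given volume of material would fill."""
    return volume_m3 / OLYMPIC_POOL_M3

print(f"Norwegian slide (~5.5 million m^3): about {pools(5.5e6):,.0f} pools")    # ~2,200
print(f"Saidmareh slide (~20 km^3 = 2e10 m^3): about {pools(2e10):,.0f} pools")  # ~8,000,000
```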
The use of technology in the classroom is a hot topic right now. Terms such as m-learning, e-learning, blended learning and the flipped classroom are used daily. These buzzwords get tossed around on Twitter and LinkedIn as revolutionary new answers to challenges in education, but what teachers are really interested in is what they have always been interested in: improving their students' understanding and retention of the material being taught. While technology is not considered the foundation of a great learning environment, times are changing as schools invest in technology like never before. This interest has led to products and services tailor-made for classrooms, with the end goal of using technology to enhance the learning environment rather than distract from it. So how can Elvin siew chun wai use technology to meet the age-old challenge of increasing student engagement? We have put together a couple of tips drawn from internal and outside sources.

Shorten lesson start-up time. 'Capitalize on class time' is a general mantra for teachers, and rightly so. In practice this means starting class on time and setting the expectation that once the bell rings, it is time to get to work. However, as technology becomes more prevalent in the classroom, getting started takes on another dimension that can be challenging. In a recent survey, we found that delays caused by technical difficulties are a major roadblock to productivity in the workplace, and we think the same is true of technology in the classroom. So when evaluating classroom technology, make sure the solution supports quick start-up and ease of use, so you can capture students' attention quickly and keep it throughout the class.

Make use of all those distracting phones and gadgets. Mobile technology is here to stay, and your students are going to bring their phones into the classroom. That can pose a challenge for teachers with a class full of digital natives who spend literally hours on their phones and tablets every day. These devices can easily become a distraction, but they don't have to be. According to Elvin siew chun wai, using mobile phones for teaching in the classroom can produce very positive results. In one study, more than 65% of students responded that using their phones for academic purposes increased communication with faculty and with other students. In reality, students are strongly drawn to their phones, and that is unlikely to change. In the classroom, phones can either be a distraction from the lesson content or, used appropriately, a way to build student engagement with the lesson material. Long story short: don't fight a losing battle against all those tablets and phones; instead, put them to work in the classroom as tools for broader engagement.
What are Diamond Sizes? Do diamond size and diamond carat mean the same thing? Diamond size and diamond carat do not mean the same thing; they are related, but they are different measurements. Diamond size refers to the physical dimensions of a diamond, such as its length, width and depth; it is the way the diamond looks to the eye. Diamond carat, on the other hand, refers to the weight of the diamond. One carat is equal to 0.2 grams, or 200 milligrams. So a 1-carat diamond weighs 200 milligrams, a 2-carat diamond weighs 400 milligrams, and so on. Diamonds of the same carat weight can have different sizes depending on the cut and on the depth and table percentages: a diamond cut with a deeper depth will appear smaller than one with a shallower depth. Therefore, it is important to consider both the carat weight and the physical dimensions of a diamond when evaluating its size.

Does the carat weight affect the price of the diamond? The carat weight is one of the factors that can affect the price of a diamond. Generally, as the carat weight of a diamond increases, so does its price, because larger diamonds are rarer and therefore more valuable. However, carat weight is not the only factor that affects the price of a diamond. Other factors such as cut, color and clarity also play a role in determining a diamond's value; a diamond with a high carat weight but poor cut, color or clarity may not be as valuable as a diamond with a lower carat weight but better overall quality. The shape of the diamond also plays a role: fancy-shape diamonds are typically cheaper than round ones at the same carat weight. The location of the diamond mine and market demand for the diamond also affect the price. It is important to evaluate all of these factors together when determining the price of a diamond, and to consider the overall value and beauty of the stone rather than just its carat weight.

What diamond carat weight is best for an engagement ring? The best carat weight for an engagement ring is a matter of personal preference and budget. Some people prefer larger diamonds and are willing to pay more for a higher carat weight, while others prefer smaller diamonds or are working with a limited budget. A 1-carat diamond is a popular choice for an engagement ring, as it is large enough to make a statement but not so large that it becomes too expensive. A 0.5-carat diamond is also a good choice: it is smaller, but it can be less expensive and may offer better quality than a higher-carat stone at the same price. Ultimately, the best carat weight for an engagement ring is the one that fits your personal style, budget and preferences. It is important to consider not just the carat weight but also the diamond's cut, color and clarity when making your selection.
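As a minimal illustration of the carat arithmetic described above (1 carat = 0.2 g = 200 mg), the following Python snippet converts carat weights to milligrams; it is a toy example and does not attempt to model price, cut or apparent size.

```python
# Carat-to-weight conversion: 1 carat = 0.2 g = 200 mg.
def carat_to_milligrams(carats: float) -> float:
    """Convert a diamond's carat weight to milligrams."""
    return carats * 200.0

for c in (0.5, 1.0, 2.0):
    print(f"A {c}-carat diamond weighs {carat_to_milligrams(c):.0f} mg")
```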
Definition of IX (Internet Exchange)
It is called an IX (Internet Exchange), IXP (Internet Exchange Point) or DIX (Domestic Internet Exchange). An IX refers to the exchange of Internet traffic between domestic ISPs. The IX network is shared Internet infrastructure that is available to ISPs, international NSPs, mobile operators, CATV operators and Internet content providers so that they can exchange their Internet traffic more neutrally, equally, efficiently and economically.

Preventing money from flowing out of Cambodia to overseas providers:
- Without an IXP, Internet traffic between local ISPs must first go overseas and then return to the domestic network. This causes many problems, such as wasted spending on overseas links, delayed communication and low stability. An IXP can solve these problems.
- An ISP can share the IXP's overseas links instead of maintaining its own independent overseas connection, saving costs remarkably and removing sources of national waste.
- The traffic bandwidth between local ISPs can be greatly improved, helping to develop Internet content businesses, the next-generation Internet and telecommunication businesses in Cambodia.

Saving integration costs between ISPs:
Without an IXP, multiple connections between operators and between ISPs are needed to exchange traffic. With an IXP, a single connection to the exchange is enough, saving not only line costs but also administrative costs (see the sketch after this section).
- Not only subscriber-oriented high-speed Internet ISPs but also content-oriented ISPs can exchange traffic directly with each other, minimising the hop counts introduced by transit.
- A minimal hop count reduces the latency that occurs when Internet users download content.
- Internet customers of each ISP experience higher Internet speeds owing to the lower latency.

Will promote a new generation of Internet businesses:
- The simpler Internet access path provided by an IX lets Internet users use the Internet more efficiently and helps develop Internet-related content businesses such as portals, file sharing and e-commerce.
- These days, large Internet portal service providers have become powerful traffic sources, creating more than 30 Gbps of traffic. This large portal traffic no longer relies on an IDC; the portals have built independent BGP networks to connect to the IXP directly.
- Unnecessary international Internet traffic is decreased.

Flexible Internet business:
- Domestic Internet exchange speeds are moving to ultra-high bandwidth.
- An international NSP can connect to many ISPs immediately, without new cable installation.
- An ISP can immediately choose or change the international NSP that offers the most competitive price.

Expanding into new business areas:
- A mobile operator can become a new wired Internet provider using the shared infrastructure of the IX.
- An Internet content provider can deliver next-generation Internet services such as VoD, IPTV and audio/video conferencing with ultra-high bandwidth.

IX operating principles:
- No filtering, no packet drops.
- No priority is applied to transit traffic.
- Transit speed between IX peers must be equal.
- No Internet connection service is provided to end/home/SMB/enterprise customers.
- Open to any domestic ISP or international NSP.
- Fiber-optic cable has to be run by the IX itself.

Efficiency and reliability:
- Minimum transit speed is 100 Mbps; the maximum is up to 10 Gbps.
- L2 and L3 IX connectivity.
- 24x365 availability, with primary and backup IX systems.
- The IX is a critical nationwide resource.
- IX infrastructure such as fiber-optic cable and routing/switching equipment must be shared with every peer free of charge for their own business.
- All infrastructure is prepared for commercial operation.
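To make the integration-cost argument concrete: without an exchange, n ISPs that all want to exchange traffic directly need n(n-1)/2 bilateral links, whereas with an IX each ISP needs only one connection. The Python sketch below illustrates this; it is an explanatory aid, not part of the original material.

```python
# Interconnection cost illustration: direct bilateral peering vs a single IX port.
def direct_links(n_isps: int) -> int:
    """Bilateral links needed for a full mesh of n ISPs."""
    return n_isps * (n_isps - 1) // 2

def ix_links(n_isps: int) -> int:
    """Connections needed when every ISP simply connects to the IX."""
    return n_isps

for n in (5, 10, 20):
    print(f"{n} ISPs: {direct_links(n)} direct links vs {ix_links(n)} links via an IX")
```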
Commercial IX:
- The operator invests all of the initial money to build the commercial IX.
- The operator is the owner of the commercial IX.
- The operator runs the cable to all of the ISPs.
For ISPs and others:
- No initial investment is needed.
- No need to run cable to connect to the IX.
- The monthly payment is a little higher than for a non-commercial IX.

Non-commercial IX:
- ISPs feel the need for an IX.
- ISPs form an association to build the IX.
- ISPs invest the initial money to build the non-commercial IX.
- ISPs are the owners of the IX.
- The investment is high.
- ISPs run the cable themselves, so cabling costs are high.
- ISPs pay EDC for leased poles.
- ISPs pay a small fee to the IX for maintenance.

- HT Networks (HTN-IX)
- CIDC: 155 Internet end users (FY2010)

- High-speed transit between domestic ISPs.
- Reduced cabling.
- Internet transit can be made available to other ISPs over the HT-IX cable.
- The international NSP can be chosen or changed immediately using the IX infrastructure.
- Internet traffic services can be offered to other ISPs immediately using the IX infrastructure.
- International Internet traffic is saved, reducing cost.
- Every ISP is connected at high speed.
- Freedom in choosing the international NSP.
- Other business becomes possible: spare bandwidth can be used to provide service to other ISPs over the IX infrastructure, effectively becoming a new Internet provider.
- An international NSP has no need to run cable to each ISP, which makes service installation much faster.
- A single cable to the IX provides connections to many ISPs.
- Service costs to ISPs are reduced, allowing more competitive prices than other international NSPs.
- Many customers can be gained immediately.
- High speed to every ISP.
- Peers can stay connected to each other indefinitely, because the IX does not go away.
- Connections can be changed immediately.
- Fault tolerance.

How to get HTN-IX service?
- Sign an MOU with each other.
- Establish a physical network connection with a minimum of 100 Mbps L2 Ethernet.
- Establish a logical L3 BGP connection.
- You can then connect to other ISPs, international NSPs and other content providers as needed, freely, using the HTN-IX infrastructure.
- Office: +855-23-880-526
- Fax: +855-23-880-562
- E-mail: firstname.lastname@example.org
- Operation Manager: Ms. Vuth Banaka
- E-mail: email@example.com
- Mobile: +855-12-841-624
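The "save international traffic, reduce cost" argument above can be sketched with some rough arithmetic. All figures in the snippet below (domestic traffic share, transit price, IX port fee) are hypothetical placeholders, not actual HTN-IX or Cambodian market numbers.

```python
# Hypothetical estimate of monthly savings from offloading domestic traffic to an IX.
# Every constant below is an illustrative assumption.
DOMESTIC_SHARE = 0.30           # assumed fraction of an ISP's traffic that stays domestic
TRANSIT_USD_PER_MBPS = 40.0     # assumed monthly price of international transit per Mbps
IX_MONTHLY_FEE_USD = 500.0      # assumed flat IX port/maintenance fee

def monthly_saving_usd(total_mbps: float) -> float:
    """Transit cost avoided by exchanging domestic traffic at the IX, minus the IX fee."""
    offloaded_mbps = total_mbps * DOMESTIC_SHARE
    return offloaded_mbps * TRANSIT_USD_PER_MBPS - IX_MONTHLY_FEE_USD

print(f"A 1 Gbps ISP would save roughly ${monthly_saving_usd(1000):,.0f} per month")
```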
Discovering Timber-framed Buildings by Richard Harris. Half-timbered houses, cottages and barns are a familiar feature of the landscape, but only rarely do we have an opportunity to see below the surface and understand how they were planned and constructed. Timber-framed buildings catch the imagination of those who work with them because of their beauty, their strength and the quality of the material of which they were made: English oak. Many thousands of buildings of all ages still remain to remind us of the strength of the tradition. This book looks behind the common image of 'black and white' houses, showing how timber buildings were built and how they vary from region to region.
Hoya is often called wax plant because its fragrant flower clusters are very waxy. It is a vining plant with thick leaves. Grow it on a trellis or topiary form, or allow it to cascade from a hanging basket. Hoya needs bright light to bloom well but will grow in medium light. It likes temperatures of 55-75 degrees F. Allow the soil to dry between waterings. Plants rebloom on the same flower stalks, so avoid pruning them off.
More varieties of Hoya: Golden wax plant (Hoya carnosa 'Variegata') has leaves variegated with creamy-yellow centers. Hindu rope plant (Hoya carnosa 'Crispa') has tightly packed, contorted leaves surrounding the stem, bearing a resemblance to a braided rope.
Denoting or referring to a particular but unspecified one out of a set of items. ‘not all instances fall neatly into one or another of these categories’
- ‘Sometimes one or the other of you will even find yourself considering giving up on the whole thing and ending the relationship.’
- ‘Each block is labeled with the name of one or another of the characters.’
- ‘That way, guests will move around more and not congregate next to one or the other in a big group.’
- ‘There's only one explanation, and that is that one or the other of them went through our garbage and stole from it.’
- ‘We always create in layers, using many elements, so that if you took away one or another of the elements there would still be a song there.’
- ‘If one or the other of them gets control of that, they get control of the business.’
- ‘Instead I regularly travel to one or another of the more politically enlightened towns mentioned earlier to spend my pension.’
- ‘He was sure that one or another of his brothers would accompany me to the peak.’
- ‘So, at any one time, one or another of us is going through some sort of turmoil, giving rise to unhappiness.’
- ‘Every foreign policy action tends to reinforce one or the other of these approaches.’
Studies have shown significant value in moist wound healing as opposed to treatment of wounds in a dry environment, and clinical evidence has supported this view for many years. Moist wound healing has been shown to promote re-epithelialization and can result in a reduction of scar formation because a moist environment keeps new skin cells alive and promotes cell regrowth.1 Treatment of wounds in a moist environment additionally shows promise for the creation of a microenvironment conducive to regenerative healing without scar formation.2 For these reasons, clinicians often select dressings that will create and manage a moist wound environment. A decreased inflammatory response creates the possibility for a related decrease in scarring when wounds heal. Studies have shown that chronic wounds generally,3 and diabetic foot ulcers specifically,4 respond especially well to moist healing environments. In the case of chronic wounds, moist healing significantly enhances the rate of wound contraction, with regular assessments demonstrating the size of the wound area reducing over the course of 12 days. When a controlled, moist microenvironment is created, the wound surface acts as a highly permeable membrane. This microenvironment allows for several benefits over dry wound treatment. These benefits include: In addition to more precise monitoring of the foregoing factors, moist treatment has been clinically shown to: A final and important note is that the use and application of moist healing dressings reduce the economic impact on health care systems and home health providers because dressing change frequency per patient is significantly reduced compared with patients with traditional gauze or wet-to-dry wound dressings.5 Precise control and monitoring are essential for a successful moist wound healing process. Recommendations6 for creating a moist wound environment include the application of a moisture-retentive dressing for up to three days. These dressings may include one or more of the following: For the study of wet wound healing, as opposed to simple moist healing environments, other applications include the use of small, saline-filled clear vinyl chambers over the wound sites to create a wet wound environment. Technological advances in moist wound dressings are ongoing. Technical and industry observers report that leading companies are working to develop moist dressings with different properties of absorption, hydration, and antibacterial activity. Current work in the field focuses on improving the condition of the wound bed tissue and providing products that repair and regenerate damaged tissue.7 Watch for promising developments in the areas of keratin-based wound management and keratin-based dressings that can enhance cell growth when paired with moist wound healing technology. 1. Junker JP, Kamel RA, Caterson EJ, Eriksson E. clinical impact upon wound healing and inflammation in moist, wet, and dry environments. Adv Wound Care (New Rochelle). 2013;2(7):348-356. doi:10.1089/wound.2012.0412 2. Cavaliere C. Should you bandage a cut or sore, or let it air out?. Cleveland Clinic. 2017. https://health.clevelandclinic.org/cover-wound-air/. Accessed April 14, 2020. 3. Souliotis K, Kalemikerakis I, Saridi M, Papageorgiou M, Kalokerinou A. A cost and clinical effectiveness analysis among moist wound healing dressings versus traditional methods in home care patients with pressure ulcers. Wound Repair Regen. 2016;24(3):596-601. 4. Ravari H, Modaghegh M-HS, Kazemzadeh GH, et al. 
Comparison of vacuum-assisted closure and moist wound dressing in the treatment of diabetic foot ulcers. J Cutan Aesthet Surg. 2013;6(1):17-20. 5. Sood A, Granick MS, Tomaselli NL. Wound dressings and comparative effectiveness data. Adv Wound Care (New Rochelle). 2014;3(8):511-529. doi:10.1089/wound.2012.0401 6. London Health Sciences Centre (LHSC). Wound care management. London, Ontario, Canada: LHSC; n.d. https://www.lhsc.on.ca/wound-care-management/is-the-wound-wet-or-dry. Accessed April 15, 2020. 7. Technavio Research. Global Moist Wound Dressings Market 2018-2022 | Advances in Moist Wound Dressing Technologies to Boost Growth | Technavio. Business Wire (English). 2018. https://www.businesswire.com/news/home/20180608005818/en/Global-Moist-W…. Accessed July 6, 2020. The views and opinions expressed in this blog are solely those of the author, and do not represent the views of WoundSource, HMP Global, its affiliates, or subsidiary companies.
How to help you and your loved ones cope with tragedy April 16, 2013 When we are alerted to or involved in community tragedy, our world can suddenly feel unsafe and our place in it uncertain. We are often surprised by the emotional, physical and cognitive impacts that we experience as a result of the trauma event, particularly when the event takes place many miles from our own communities. However, experiencing symptoms of distress following a community tragedy is very normal despite its proximity. This is particularly true when the trauma is intentional and strikes in a context that we have deemed safe and iconic. Such events seem to threaten the very core of our way of life. The Boston Marathon is a symbol of vigor, endurance, aspiration, achievement—America itself. Many view the marathon, which takes place on Patriots’ Day each year, as evidence that anyone with high goals and a strong work ethic can achieve their dreams. It is no wonder that our sense of well-being was acutely disrupted when the finish line was fraught with tragedy. To help minimize the negative psychological consequences of encountering community trauma, individuals can engage in healthy measures that have been demonstrated in the research to mitigate distress symptoms. The following tips are intended to assist you and those you love cope with the emotional and spiritual aftermath of community trauma events. Normalize your experience: Remind yourself and those around you that experiencing psychological stress following such events is normal. It is not unusual to have a range of emotions after community trauma incidents. This emotional distress can be similar in its impact on the body as a physical injury. By normalizing your reactions, you can actually help reduce your body’s release of stress hormones, thus restoring a sense of inner calm and healthy physical and cognitive functioning. Set boundaries on media exposure. Tuning into the event through the media can help us feel a sense of connectedness to those in the broader community; however, over-exposure to the trauma can reawaken feelings of stress. Be sensitive to your need to balance your life with other events not related to the trauma incident. Talk to others. Receive support from others by talking about it. Discussing your feelings with those who are aware of the tragedy can be comforting and reassuring. Help others. Find opportunities to reach out in service to others to remind you that there is a purpose to life, and that we do have control over some things such as doing good in our communities. Take care of yourself. Community trauma events can take a toll on our physical and mental health. Establishing healthy routines, such as exercise and regular mealtimes can help reduce stress symptoms. Engaging in relaxation actives such as taking a bubble bath, lighting a scented candle, or reading a good novel can also help restore balance and lessen feelings of distress. Draw upon your spiritual resources. Traumatic events can disrupt our spiritual worldviews. Engaging in spiritual practices such as prayer, meditation, scripture study, and walking in nature can help us make meaning of the event and find solace in our connection to a higher power. Many families also feel concern about how to help their children cope with such events. The following are suggestions for helping children cope with community trauma events. Model calm. Children turn to adult for emotional cues about how to respond to tragic events. 
Communicating about the event in a calm manner helps them feel safer than when they perceive us as anxious. Be honest and developmentally appropriate in your explanations. Children can sense when we are distorting the truth, and this can increase their levels of stress. Share the facts of the events with minimal embellishment and in an age-appropriate manner. Encourage discussion. Help your child feel safe to discuss his or her feelings. Assure your child that these feelings are normal and that they will subside with time. Reassure your child that they are safe. Spend extra time playing. Play games and read to your child to help create a positive environment and to distract your child from the event. Maintain a normal routine. Engage in the daily routines that you established prior to the event to help generate feelings of normalcy and security. Also, be flexible and allow your child to express special needs or minor alterations to the family routine (e.g. keeping a light on in the bedroom at night for a few nights). Be affectionate. Extra affection and attention can help children feel cared for and safe. Hug or hold your child and express love. Affectionate touch and communication reduce physiological stress reactions and remind both the child and parent that they have each other. Engage your child in spiritual practices. Praying with your child for victims of the tragedy can help your child feel that he or she is supporting the victims. Praying for and engaging in other religious rituals on behalf of the victims can be empowering for children and can create a healthy sense of community.
Commentary by Linda Saxon The purpose of the Municipal Freedom of Information and Protection of Privacy Act, 1991, is: a) To provide a right of access to information under the control of government organizations in accordance with the following principles: information should be available to the public; exemptions to the right of access should be limited and specific; decisions on the disclosure of government information may be reviewed by the Information and Privacy Commissioner. b) To protect personal information held by government organizations and to provide individuals with a right of access to their own personal information. Given the number of appeals and breaches of information I have endured, I would like to hear from other residents. How open is government in Amherstburg? How are informal requests for information handled? What about Freedom of Information requests – are they routinely denied?
A person who plays the organ.
- ‘He is a first-class pianist and organist and has composed music and written and published his own poems.’
- ‘As the cathedral organist, he lived just beside it in a lovely old Georgian house.’
- ‘The concert ended with a Mass by Bellini in which the organist, choir and soloists gave a magnificent performance.’
- ‘Thomas Heywood enjoys an outstanding reputation as one of the world's finest concert organists.’
- ‘The choir will sing from the High Altar in the Church and will be accompanied by their own organist.’
- ‘Soloists, organists and all musicians are reminded that their primary role is one of service to the liturgy.’
- ‘Mr Ramsay was one of the more famous cinema organists and was musical director for Union Cinemas, which operated the Ritz and Regal theatre chain.’
- ‘Perhaps the biggest disappointment - certainly for organists - is the organ writing, and particularly Debussy's limited use of the pedal-board.’
- ‘Bach's ‘Clavierubung III’ is one of the monuments of organ literature and has received a number of fine recordings by established organists over the years.’
- ‘Some were town musicians (fiddlers or pipers), others organists, Kantors, or Kapellmeister, and not a few were composers of at least local distinction.’
- ‘Many think organists are just accompanists or just play in church.’
- ‘Countless local churchgoers also knew him as their regular organist and choirmaster.’
- ‘He was an accomplished organist, owning two impressive electronic instruments.’
- ‘Most of the successful film composers began as theater organists or conductors.’
- ‘Students were allowed to play a piece on the organ, and then the organist performed for them.’
- ‘Sebastian had helped train several organists and pianists during a career that spanned over 40 years.’
- ‘I had been invited to sing in a church choir by my friend, the organist's daughter.’
- ‘Famous organists drew crowds in the thousands, sometimes more than ten thousand, to the churches, concert halls, pavilions and even department stores with grand organs.’
- ‘In common with organists, drummers, pianists, heavy metal guitarists, and Indian style violinists - he uses his feet to make music.’
- ‘The Caput ensemble (nine brass musicians, two organists, and a bassist along with various bells and electronics) performs the work.’
According to Wikipedia, SPEAKING is a sociolinguistic model (represented as a mnemonic) developed by Dell Hymes. It is a tool to assist in identifying and labelling components of interactional linguistics, driven by his view that, in order to speak a language correctly, one needs to learn not only its vocabulary and grammar but also the context in which words are used. EnglishClub, in turn, defines speaking as the delivery of language through the mouth. To speak, we create sounds using many parts of our body, including the lungs, vocal tract, vocal cords, tongue, teeth and lips. This vocalized form of language usually requires at least one listener. When two or more people speak or talk to each other, the conversation is called a "dialogue". Speech can flow naturally from one person to another in the form of dialogue. It can also be planned and rehearsed, as in the delivery of a speech or presentation. Of course, some people talk to themselves! In fact, some English learners practise speaking standing alone in front of a mirror. Speaking can be formal or informal:
- Informal speaking is typically used with family and friends, or people you know well.
- Formal speaking occurs in business or academic situations, or when meeting people for the first time.
Speaking is probably the language skill that most language learners wish to perfect as soon as possible. It used to be the only language skill that was difficult to practise online. This is no longer the case. English learners can practise speaking online using voice or video chat and services like Skype. They can also record and upload their voice for other people to listen to.
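For readers unfamiliar with the model, the letters of Hymes's SPEAKING mnemonic conventionally stand for the components listed in the short sketch below; this list is supplementary background, since the passage above does not spell the letters out.

```python
# Components conventionally associated with Dell Hymes's SPEAKING mnemonic.
SPEAKING = {
    "S": "Setting and scene",
    "P": "Participants",
    "E": "Ends (purposes and outcomes)",
    "A": "Act sequence",
    "K": "Key (tone or manner)",
    "I": "Instrumentalities (channel and code)",
    "N": "Norms of interaction and interpretation",
    "G": "Genre",
}

for letter, component in SPEAKING.items():
    print(f"{letter}: {component}")
```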
King Solomon loved many women who were not from Israel. He loved the daughter of the king of Egypt, as well as women of the Moabites, Ammonites, Edomites, Sidonians, and Hittites. The Lord had told the Israelites, "You must not marry people of other nations. If you do, they will cause you to follow their gods." But Solomon fell in love with these women. He had seven hundred wives who were from royal families and three hundred slave women who gave birth to his children. His wives caused him to turn away from God. As Solomon grew old, his wives caused him to follow other gods. He did not follow the Lord completely as his father David had done. Solomon worshiped Ashtoreth, the goddess of the people of Sidon, and Molech, the hated god of the Ammonites. So Solomon did what the Lord said was wrong and did not follow the Lord completely as his father David had done. On a hill east of Jerusalem, Solomon built two places for worship. One was a place to worship Chemosh, the hated god of the Moabites, and the other was a place to worship Molech, the hated god of the Ammonites. Solomon did the same thing for all his foreign wives so they could burn incense and offer sacrifices to their gods. The Lord had appeared to Solomon twice, but the king turned away from following the Lord, the God of Israel. The Lord was angry with Solomon, because he had commanded Solomon not to follow other gods. But Solomon did not obey the Lord's command. So the Lord said to Solomon, "Because you have chosen to break your agreement with me and have not obeyed my commands, I will tear your kingdom away from you and give it to one of your officers. But I will not take it away while you are alive because of my love for your father David. I will tear it away from your son when he becomes king. I will not tear away all the kingdom from him, but I will leave him one tribe to rule. I will do this because of David, my servant, and because of Jerusalem, the city I have chosen." The Lord caused Hadad the Edomite, a member of the family of the king of Edom, to become Solomon's enemy. Earlier, David had defeated Edom. When Joab, the commander of David's army, went into Edom to bury the dead, he killed all the males. Joab and all the Israelites stayed in Edom for six months and killed every male in Edom. At that time Hadad was only a young boy, so he ran away to Egypt with some of his father's officers. They left Midian and went to Paran, where they were joined by other men. Then they all went to Egypt to see the king, who gave Hadad a house, some food, and some land. The king liked Hadad so much he gave Hadad a wife -- the sister of Tahpenes, the king's wife. They had a son named Genubath. Queen Tahpenes brought him up in the royal palace with the king's own children. While he was in Egypt, Hadad heard that David had died and that Joab, the commander of the army, was dead also. So Hadad said to the king, "Let me go; I will return to my own country." "Why do you want to go back to your own country?" the king asked. "What haven't I given you here?" "Nothing," Hadad answered, "but please, let me go." God also caused another man to be Solomon's enemy -- Rezon son of Eliada. Rezon had run away from his master, Hadadezer king of Zobah. After David defeated the army of Zobah, Rezon gathered some men and became the leader of a small army. They went to Damascus and settled there, and Rezon became king of Damascus. Rezon ruled Aram, and he hated Israel. So he was an enemy of Israel all the time Solomon was alive. 
Both Rezon and Hadad made trouble for Israel. Jeroboam son of Nebat was one of Solomon's officers. He was an Ephraimite from the town of Zeredah, and he was the son of a widow named Zeruah. Jeroboam turned against the king. This is the story of how Jeroboam turned against the king. Solomon was filling in the land and repairing the wall of Jerusalem, the city of David, his father. Jeroboam was a capable man, and Solomon saw that this young man was a good worker. So Solomon put him over all the workers from the tribes of Ephraim and Manasseh. One day as Jeroboam was leaving Jerusalem, Ahijah, the prophet from Shiloh, who was wearing a new coat, met him on the road. The two men were alone out in the country. Ahijah took his new coat and tore it into twelve pieces. Then he said to Jeroboam, "Take ten pieces of this coat for yourself. The Lord, the God of Israel, says: 'I will tear the kingdom away from Solomon and give you ten tribes. But I will allow him to control one tribe. I will do this for the sake of my servant David and for Jerusalem, the city I have chosen from all the tribes of Israel. I will do this because Solomon has stopped following me and has worshiped the Sidonian god Ashtoreth, the Moabite god Chemosh, and the Ammonite god Molech. Solomon has not obeyed me by doing what I said is right and obeying my laws and commands, as his father David did. "'But I will not take all the kingdom away from Solomon. I will let him rule all his life because of my servant David, whom I chose, who obeyed all my commands and laws. But I will take the kingdom away from his son, and I will allow you to rule over the ten tribes. I will allow Solomon's son to continue to rule over one tribe so that there will always be a descendant of David, my servant, in Jerusalem, the city where I chose to be worshiped. But I will make you rule over everything you want. You will rule over all of Israel, and I will always be with you if you do what I say is right. You must obey all my commands. If you obey my laws and commands as David did, I will be with you. I will make your family a lasting family of kings, as I did for David, and give Israel to you. I will punish David's children because of this, but I will not punish them forever.'" Solomon tried to kill Jeroboam, but he ran away to Egypt, to Shishak king of Egypt, where he stayed until Solomon died. Everything else King Solomon did, and the wisdom he showed, is written in the book of the history of Solomon. Solomon ruled in Jerusalem over all Israel for forty years. Then he died and was buried in Jerusalem, the city of David, his father. And his son Rehoboam became king in his place.
The November 11 issue of Science reports the sequencing of the sea urchin genome. Every wader in a tide pool knows this spiky creature. What most people don't know is that we are more closely related to sea urchins than we are to worms or flies. Vertebrates and urchins share a common ancestor, 500 million years ago. Now we have a complete readout of the 814 million base pairs (compared to the human 3 billion bases) that are the four-letter code for making an urchin, encoding approximately 23,300 genes. Sea urchins have been standard laboratory animals for over a hundred years, a sort of marine white rat. A century ago Theodor Boveri demonstrated in a famous experiment that a complete set of chromosomes must be present in every cell of a sea urchin for embryonic development to occur normally. The same, of course, applies to us. Expect now to see even more rapid progress in understanding basics of embryonic development, immunology, speciation -- and a more complete understanding of our place among the myriad creatures of Earth. Who would have guessed that a history of life over hundreds of millions of years is written in every cell of our bodies, linking us across the eons with creatures that begin their larval lives as tiny bells of transparent jelly afloat in the sea. This is what the naturalist Donald Culross Peattie called the "most unutterable thing" in evolution, "the terrible continuity and fluidity of protoplasm, the inexpressible forces of reproduction -- not mystical human love, but the cold batrachian jelly by which we vertebrates are linked to things that creep and writhe and are blind yet breed and have being."
Demography of the ecosystem engineer Crassostrea gigas, related to vertical reef accretion and reef persistence
Walles, B.; Mann, R.; Ysebaert, T.; Herman, P.M.J.; Smaal, A.C. (2015). Demography of the ecosystem engineer Crassostrea gigas, related to vertical reef accretion and reef persistence. Estuarine, Coastal and Shelf Science 154: 224-233. dx.doi.org/10.1016/j.ecss.2015.01.006. ISSN 0272-7714; e-ISSN 1096-0015.
Keywords: biogenic carbonate; ecosystem engineer; population structure; accretion; persistence; Oosterschelde estuary
Marine species characterized as structure building, autogenic ecosystem engineers are recognized worldwide as potential tools for coastal adaptation efforts in the face of sea level rise. Successful employment of ecosystem engineers in coastal protection largely depends on long-term persistence of their structure, which is in turn dependent on the population dynamics of the individual species. Oysters, such as the Pacific oyster (Crassostrea gigas), are recognized as ecosystem engineers with potential for use in coastal protection. Persistence of oyster reefs is strongly determined by recruitment and shell production (growth), processes facilitated by gregarious settlement on extant shell substrate. Although the Pacific oyster has been introduced world-wide, and has formed dense reefs in the receiving coastal waters, the population biology of live oysters and the quantitative mechanisms maintaining these reefs has rarely been studied, hence the aim of the present work. This study had two objectives: (1) to describe the demographics of extant C. gigas reefs, and (2) to estimate vertical reef accretion rates and carbonate production in these oyster reefs. Three long-living oyster reefs (>30 years old), which have not been exploited since their first occurrence, were examined in the Oosterschelde estuary in the Netherlands. A positive reef accretion rate (7.0–16.9 mm year-1 shell material) was observed, consistent with self-maintenance and persistent structure. We provide a framework to predict reef accretion and population persistence under varying recruitment, growth and mortality scenarios.
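One way to read the reported accretion rates is to compare them with a rate of relative sea-level rise. The Python sketch below does this for the abstract's 7.0-16.9 mm per year range; the sea-level figure used is an illustrative assumption, not a value taken from the paper.

```python
# Compare reported reef accretion rates with an assumed rate of sea-level rise.
ACCRETION_MM_PER_YEAR = (7.0, 16.9)   # range reported in the abstract
ASSUMED_SLR_MM_PER_YEAR = 4.0         # hypothetical local relative sea-level rise

for rate in ACCRETION_MM_PER_YEAR:
    verdict = "keeps pace with" if rate >= ASSUMED_SLR_MM_PER_YEAR else "falls behind"
    print(f"Accretion of {rate} mm/yr {verdict} an assumed SLR of {ASSUMED_SLR_MM_PER_YEAR} mm/yr")
```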
Triggering, with a laser pulse, the insertion reaction of a calcium atom into a CH3F molecule adsorbed on a rare-gas aggregate (LIDYL image). A chemical reaction depends not only on the atoms and molecules involved but also on their short-range environment. Understanding a chemical reaction therefore demands a fundamental approach that takes into account both its temporal and spatial features. To this end, IRAMIS uses lasers to implement time-resolved spectroscopies spanning the femtosecond to the millisecond range, in order to study the dynamics of molecular systems such as DNA biomolecules or chromophore molecules for photovoltaics. The various conformations that biomolecules adopt by virtue of their flexibility are studied through a combined experiment-theory approach, using quantum chemical simulations. More complex, out-of-equilibrium systems, including isolated gas-phase species and molecular systems bound to aggregates, are also studied in order to identify and model the forces that drive their reaction dynamics.
Our opinion: Getting natural gas out of the ground turns out to be pretty dirty business. Energy shouldn’t come at the price of drinkable water and clean air. The natural gas industry likes to portray its product as abundant, domestic and clean. Perhaps it thinks two out of three isn’t bad. We don’t. Nor should Congress and the government agencies entrusted with protecting our drinking water and environment. Time and again lately, we’ve received fresh warnings that mining this source of energy is far from a clean process, despite the industry’s often artfully parsed claim that the method of choice — horizontal hydraulic fracturing — is safe. The process involves drilling deep underground, down and horizontally, and pumping in millions of gallons of water, sand and chemicals under high pressure to crack the rock and release trapped gas. The industry is using fracking to tap portions of the Marcellus Shale, a gas-rich rock formation that lies under six states, including New York. The state has yet to issue permits while it drafts regulations. Among the latest rebuttals to the industry’s claim of safety: Thousands of gallons of chemical-laced water spewed into a stream last week from a well in Bradford County, Pa. Homeowners and farmers don’t know if their water is safe now for people and animals. This follows well contaminations elsewhere in the state, which embraced the rush to drill. While the industry likes to note that chemicals are only a tiny fraction of the fracking mixture, a congressional investigation found that it added up to 866 million gallons, including hazardous and carcinogenic compounds, pumped into wells in at least 13 states from 2005 to 2009. And while underground, the water can become radioactive. The U.S. Environmental Protection Agency has told Pennsylvania to test drinking water for radium. Pennsylvania officials have halted the disposal of drilling wastewater through treatment plants that discharge into rivers and streams that provide drinking water for hundreds of thousands of people. The plants, it turns out, aren’t equipped to remove the pollutants. A Cornell University study concluded that fracking contributes to global warming even more than coal or oil burning by releasing methane, a more potent greenhouse gas than carbon dioxide. The industry — which has sought to block release of methane emission data — dismissed the peer-reviewed study as lacking credibility. New York has prudently held off issuing drilling permits at least until regulations are finished this summer. We urge the state, once again, to continue its moratorium until the EPA finishes a study into the safety of hydraulic fracturing, most likely next year. That study, focusing on water, should be expanded to air quality in light of the Cornell report. Likewise, the interstate Delaware River Basin Commission, which controls a watershed that spans New York, New Jersey, Pennsylvania and Delaware, and supplies water for millions including New York City residents, should wait for the EPA study, too, before issuing its own drilling regulations. New York Attorney General Eric Schneiderman should follow through on his threat to sue the commission if it doesn’t take the time for a proper environmental study. Finally, Congress needs to fix mistakes it made in 2005 to exempt hydrofracking from federal clean air and water standards. Lawmakers need to let the agency charged with protecting the environment do its job in regulating an industry that has proven to be anything but clean.
1. Educational Worksheets For Children: Worksheets for Reading, Handwriting, Spelling, Grammar, Math, Science and much more. www.WondrousWorksheets.com
2. Free, online games, interactive activities and charming animated stories for very young children, all set to calming, gentle music.
3. Chem1 Virtual Textbook: A free, online resource for General Chemistry aimed mainly at the first-year university level. It offers a more comprehensive, organized, and measured approach than is found in most standard textbooks. It should also be accessible to advanced high-school students.
4. Discovering Ancient Egypt: All about ancient Egypt, pyramids, temple reconstructions and the pharaohs. Free screen savers, and hieroglyphics so you can write your name in the ancient script.
5. English Grammar and Vocabulary Worksheets: Download, view and print English Grammar and Vocabulary Worksheets.
6. White Group Mathematics: Caters to A level H2 maths learning; sections include detailed advice/recommendations and a question locker vault with fully worked problems. Higher-level, early-college material is also available.
7. Free printable ESL EFL flashcards and worksheets.
8. Ancient Wisdom: Researching the Frontiers of Prehistory: The ancient-wisdom website is an ever-growing reference and research medium for people interested in the areas of prehistory and ancient wisdom.
9. Mrs. Jackson's Classroom-Pre-K-12 Units-Lessons-Activities-I: Teacher classroom website of resources to plan themes, lessons, holidays, projects, activities, parties, and to give the best links, ideas, and info for students, teachers, parents, and homeschoolers. Great ideas for learning! Tons of themes and fun.
10. Finite Mathematics & Applied Calculus Resource Page: Tutorials, game tutorials, and tons of resources for topics in applied calculus, finite mathematics, and probability.
- Lifestyle choices Subsequent blogs describe these factors and their impact on our body, health and wellbeing in more detail. 1. Lifestyle Choices Many experts (ref 1) now acknowledge that degenerative diseases are mainly caused by our lifestyle choices. In recent years, products and services which promote a quick, easy convenience lifestyle rule the day. Consumption of processed foods, denatured dairy products, refined sugars and unhealthy fats have increased dramatically. Simultaneously we are eating less fresh fruit, vegetables, naturally raised and toxin free animals and fish and minimally cooked foods. Lifestyles are becoming increasingly stressful and frantic. Chronic i.e. prolonged stress (ref 2) has become ‘normal’ leading to likely imbalances in the endocrine and hormonal systems. This may prompt constantly elevated cortisol and insulin levels leading to impaired weight control and muscle building functions. Diabetes, obesity, and other metabolic disorders may develop. Your food, drink, medications and air are all saturated with microscopic chemical debris – environmental toxins and Persistent Organic Pollutants (POPs). You can’t see, feel or smell them, but they are real and over time, very dangerous to your health and wellbeing. It appears that the result may be compromised digestive, metabolic and immune systems struggling to cope with toxic overload. Toxicity may lead to all sorts of metabolic disorders like IBS and GERD. In fact, it is now becoming increasingly recognised that gut health is not only vital in itself. But also it appears that there is a very closely regulated gut-brain connection: many experts now view the gut as our second brain (ref 3). It is able to significantly influence mind, mood and behaviour. For example, the greatest concentration of serotonin, which is involved in mood control, depression and aggression, is found in the intestines, not in the brain. Think about ‘butterflies in your stomach’! Researchers are also finding that depression and a wide variety of behavioural problems appear to stem from nutritional deficiencies and/or an imbalance of bacteria in the gut. To summarize, the words of our Founder, Warren Matthews are pertinent: “Our modern diet will make you fat, sap your energy, lower your immune system, destroy your vital digestive system, and make you disease susceptible. In turn this will increase your risk of cancer and brain disease, age you prematurely and literally 'take away your life'.” In the following blogs, we’ll see how exactly this ‘modern lifestyle’ may accelerate aging. - Effects of stress http://www.webmd.com/mental-health/effects-of-stress-on-your-body - Gut-brain connection http://www.health.harvard.edu/healthbeat/the-gut-brain-connection
Technology allows for advances in cosmetic industry Waverly Long February 28, 2018 Science/Tech Throughout history, the tradition of altering one’s appearance with cosmetics has played a role in nearly every culture, tracing back 6,000 years. While the ancient form of cosmetics consisted of rubbing red mineral pigments on people’s skin, our modern technology allows for much more advanced methods of appearance enhancement, some of which even possess medical purposes or health benefits. These methods include, but aren’t limited to, the use of plastic surgery, clear aligners, corneal refractive therapy (CRT) lenses and technologically advanced makeup. Many Paly students have had experience with these various ways of changing one’s appearance. Cosmetic Surgery One of the more permanent methods is cosmetic surgery. Last winter break, junior Jess Weiss underwent a rhinoplasty procedure, a plastic surgery operation commonly known as a nose job. Weiss began meeting with her surgeon almost a year before the surgery. According to Weiss, it is the responsibility of a surgeon to ensure that their patient undergoes the surgery with a healthy mindset and a body in proper condition for the procedure. After meeting with her surgeon for one year, Weiss finally made it to the day of her operation. After the surgery, Weiss researched the procedure to learn about what she experienced. “They cut between the nostrils and peel back your skin,” Weiss said. “Then, depending on what you’re having done, they either break your nose or just shave down. For me, because they had to straighten it, they broke it. Essentially, they cut down little pieces of bone and cartilage. At the end, [they put in] these little tubes that went far up into my nasal passages to help keep it straight during the healing process.” After the procedure, Weiss experienced a lot of pain and took painkillers throughout her recovery period. In addition to her swollen eyes and face, Weiss had to avoid getting her cast wet; she wore gauze beneath her nose to help with the bleeding. She had to restrict her diet to only liquid foods for five days and clean her nose with water and saline solution in order to avoid infection. Though Weiss said her nose will not settle into its final shape until next year, she began to appreciate the long-term effects after she underwent the most painful part of the recovery process and the swelling reduced. She now likes her nose more and feels that it suits her better. According to Weiss, another long-term effect is an increase in self-confidence; however, Weiss gained more confidence from her honesty about the surgery than her appearance. “I probably am more honest now because I was worried if people would be weird or not accepting about [the surgery],” Weiss said. “I had a debate with myself about if I was going to tell people … and it seemed like a really silly thing to not be honest about. So I’d say I probably gained some confidence in that way. I’m very not shy, clearly, about telling people about it and I don’t think it’s something to hide.” Although Weiss was happy with her appearance as a whole, she was never satisfied with her nose, leading her to undergo rhinoplasty. She also knew many people who underwent plastic surgery and had a positive experience. “My parents were super supportive,” Weiss said. 
“My dad has had a lot of nose surgeries for unrelated, not cosmetic reasons … and they’ve always been very understanding and supportive of me and they weren’t scared about it because my dad had that experience with the surgeries. And also, knowing other people who have done it was a big thing for them in being comfortable with it.” Weiss does not believe the common misconception that those who undergo cosmetic surgery are fake. According to Weiss, altering appearance through cosmetic surgery is not much different from putting on makeup or wearing clothes that make an individual look a certain way. Weiss also emphasizes the importance of refusing to subscribe to the misconception that plastic surgery will be life-changing. “I would not tell people that cosmetic surgery is something you should do because you think it will really do something to change your life — I don’t think that’s accurate,” Weiss said. “You’re still the same person, you’re not really going to be that different.” CRT Lenses While Weiss’ plastic surgery was solely for cosmetic purposes, improvements in technology have also created procedures that affect appearances and have medical benefits. For example CRT lenses, commonly known as nighttime contact lenses, make perfect vision more convenient for those who opt against glasses. Senior Andrew Shieh has been using CRT lenses since middle school and likes not having to worry about contact lenses or glasses during the day. “[CRT lenses] have become more popular in recent years, especially among athletes,” Shieh said. “Since I only need to wear my contacts when I go to sleep, I don’t have to worry about losing them at school or when I work out.” Individuals who use CRT lenses put the contacts on every night before bed. The lenses reshape the cornea of the individual’s eyes while they sleep, so when they wake up, they have perfect vision. The eye gradually returns to its normal shape throughout the day, so the user must wear the contacts every night in order to maintain their vision. Overall, Shieh said he has had a positive experience with his lenses. “Night contacts have been effective for me. My vision has pretty much stabilized and I don’t have to wear glasses anymore.” Andrew Shieh Clear Aligners Clear aligners and CRT lenses are similar in that they both affect one’s appearance and have medical purposes, but while CRT lenses have short-term effects, clear aligners are a permanent solution. They are plastic retainers that adjust an individual’s teeth over a long period of time. People often refer to clear aligners as Invisalign, a popular clear aligner product. Clear aligners are essentially like braces, but are clear and fit over the individual’s teeth rather than attaching to them. According to junior Courtney Kernick, clear aligners can take a significant amount of time to reshape teeth due to the gradual process of adjusting one’s teeth through slight changes to the aligners. “I got a mold of my teeth at the beginning, then they sent it to Invisalign labs and came up with two week increments of sets of Invisalign that would eventually lead to [my] straight teeth and a correction of my overbite,” Kernick said. “Every two weeks, I would get a new set [that] would have a couple millimeters of changes to different teeth and parts of my mouth and that would [eventually created a long-lasting] change over the course of two years.” After wearing both clear aligners and braces, Kernick said she preferred clear aligners due to their conveniency. 
“Invisalign was a lot easier and more convenient. It was a lot less noticeable than braces, and the metal of braces can rub against your gums and the inside of your mouth and cause abrasions.” — Courtney Kernick

Makeup

Though clear aligners, contact lenses and plastic surgery change an individual’s appearance, the most common form of cosmetic enhancement is makeup. Improvements in technology have created makeup with health benefits, typically seen in skincare products. These include tinted moisturizers with SPF, foundation with acne medication and eyelash serums that condition lashes to be longer and thicker. Senior Kendra Wu uses moisturizers with SPF over 30 and sunscreens with SPF over 50 to help remedy acne scars.

“I buy a facial moisturizer with more than 30 SPF,” Wu said. “Wearing more sunscreen causes the pigmentation spots from acne to fade.”

Technology has also made makeup more practical for all types of weather and activities. Makeup has undergone significant improvements that have made it more waterproof and longer lasting. Blinc, a brand of mascara, utilizes its own tube technology that forms tiny water-resistant tubes around one’s eyelashes, as opposed to most mascaras, which only paint the lashes. While regular mascara can be removed with makeup remover or soap and water, Blinc mascara can only be removed by using water and applying light pressure to gently slide the tubes off one’s lashes. This means that unlike regular mascara, Blinc cannot be smudged accidentally.

These methods of enhancing one’s appearance have become increasingly common throughout the centuries as improvements in technology have made them more accessible and more effective. However, Weiss emphasizes that at the end of the day, it is what’s on the inside that is of paramount importance.

“Just know that the most important thing is to be confident in who you are,” Weiss said. “If you are confident in who you are, no matter what you do you are going to be great and have a good life and no one needs to ‘fix’ anything.”
Reference Guide to Strength Training: An In-Depth Look -- By Jen Mueller and Nicole Nichols, Fitness Experts

SparkPeople’s Exercise Reference Guides offer an in-depth look at the principles of fitness. Every movement we make—from walking to driving—involves our muscles. Muscles are unique. They have the ability to relax, contract, and produce force. They are metabolically active, meaning that the more muscle you have, the more calories your body uses at rest and during exercise. Your muscles are highly responsive to strength training, which helps them to become larger and stronger. But if you don’t know anything about strength training, where do you start? Right here! This guide will tell you everything you need to know to begin and even offer a few tips for experienced exercisers as well.

What is Strength Training?

Strength training is the process of exercising with progressively heavier resistance for the purpose of strengthening the musculoskeletal system. It is also referred to as weight lifting, weight training, body sculpting, toning, body building, and resistance training.

What are the Benefits of Strength Training?

Regular strength training increases the size and strength of the muscle fibers. It also strengthens the tendons, ligaments, and bones. All of these changes have a positive impact on your physical fitness, appearance, and metabolism, while reducing the risk of injury and decreasing joint and muscle pain.

Muscle is metabolically-active tissue. This means that the more muscle you have, the faster your metabolism is while at rest. So, strength training is an important component of weight loss and weight maintenance.

Without consistent strength training, muscle size and strength decline with age. An inactive person loses half a pound of muscle every year after age 20. After age 60, this rate of loss doubles. But, muscle loss is not inevitable. With regular strength training, muscle mass can be preserved throughout the lifespan, and the muscle lost can also be rebuilt.

4 Principles of Strength Training

The four principles of strength training are guidelines that will help you strength train safely and effectively to reach your goals.

1. The Tension Principle: The key to developing strength is creating tension within a muscle (or group of muscles). Tension is created by resistance. Resistance can come from weights (like dumbbells), specially-designed strength training machines, resistance bands, or the weight of your own body. There are three methods of resistance:

- Calisthenics (your own body weight): You can use the weight of your own body to develop muscle, but using body weight alone is less effective for developing larger muscles and greater strength. However, calisthenics adequately improve general muscular fitness and are sufficient to improve muscle tone and maintain one’s current level of muscular strength. Examples include: pushups, crunches, dips, pull ups, lunges, and squats, just to name a few.
- Fixed Resistance: This method of resistance provides a constant amount of resistance throughout the full range of motion (ROM) of a strength training exercise. This means that the amount of resistance/weight you are lifting does not change during the movement. For example, during a 10-pound curl, you are lifting 10 pounds throughout the motion. Fixed resistance helps to strengthen all the major muscle groups in the body. Examples include: exercises that use dumbbells (free weights), resistance bands and tubes, and some machines.
- Variable Resistance: During exercises with variable resistance, the amount of resistance changes as you move through the range of motion. This creates a more consistent level of exertion throughout the entire exercise. For example, when lifting weights, it is harder to lift up against gravity and easier to lower the weight down with gravity. Specially-designed machines (like Nautilus and Hammer Strength brands) take the angle, movement, and gravity into account so that the release of a biceps curl feels just as hard as the lifting phase of the curl.

Resistance exercises also involve two types of muscle contraction:

- Isometric means “same length.” This is a high-intensity contraction of the muscle with no change in the length of the muscle. In other words, your muscles are working hard but the muscle itself remains static. Isometric exercises are good for variety and some strength maintenance, but they don’t challenge your body enough to build much strength. Learn more about isometric exercise here.
- Isotonic means “same tension.” When you lift weights or use resistance bands, your muscles are shortening and lengthening against the resistance. This challenges your muscles throughout the entire range of motion. However, the amount of force the muscle generates will change throughout the movement (force is greater at full contraction/shortening of the muscle). Unlike isometric exercises, this type of contraction does help build strength.

4. The Detraining Principle: After consistent strength training stops, you will eventually lose the strength that you built up. Without overload or maintenance, muscles will weaken in two weeks or less! This is largely why individuals lose muscle mass as they age—they are detraining by exercising less frequently.

How Much Strength Training Should You Do?

As with aerobic exercise guidelines, keep the FITT principles in mind (Frequency, Intensity, Time and Type).

Frequency: Number of strength training sessions per week

Aim to train each muscle group at least two times per week, and up to three if you have the time or are more advanced. One day per week may help you maintain your current level of strength, but in most cases, it will not be enough to build muscle. It is important to rest 1-2 days in between working the same muscle(s) again. Rest days give the muscles time to repair themselves from small tears that occur during strength training, and this is how you get stronger. For example, if you do a full body routine on Monday, do not lift again until Wednesday or Thursday (1-2 days). If you decide to split up your strength training (due to time, schedule or personal preference), and do upper body exercises on Monday and lower body exercises on Tuesday, it’s okay to lift two days in a row—because you are working different muscles. You wouldn’t lift upper body again until Wednesday or Thursday, or lower body again until Thursday or Friday.

Intensity: How much weight or resistance you should lift

This is a tricky one—and if you’re new to exercise, it will take some trial and error. The intensity of the resistance you lift should challenge you. It should be high enough that as you approach your last repetition, you feel muscle exhaustion. Exhaustion means your muscle is so tired that you can’t do another full repetition in good form. Many people do not lift to exhaustion, mostly because they don’t know that they are supposed to. They tend to just lift the number of reps they set out to do and stop.
For example, if you are going to do 10 reps of biceps curls, don’t merely stop on that 10th rep if you haven’t reached muscle exhaustion. You could either continue doing reps until you do reach exhaustion, or take this as a sign that the weight you are lifting is too light. Increase your weight until you do feel exhausted on the 10th rep. How much weight/resistance you lift will work hand in hand with the number of reps you do (see Time below).

Time: Number of reps and sets you should do

Going from the starting position, through the action and back to the starting position counts as one rep. Most people lift somewhere between 8 and 15 reps, which equals one set. Most people do 1-3 sets with rest in between each set.

How many reps should you do? Most experts recommend between 8 and 15 reps per set. If your goal is to build strength and muscle size, then aim for fewer reps (like 8-10). Because you are doing fewer reps, you will need a heavier weight to reach muscle exhaustion in each set, so that’s where the words “heavy weight, low reps” come from. If your goal is general fitness or endurance, then aim for more reps (like 10-15). Because you are doing so many, you’ll need a lighter weight. No matter what your goal, be sure to lift resistance that is heavy enough to exhaust you at the end of your set. So, while you may be able to curl 20 pounds and feel exhaustion in 8 reps, you may only be able to lift 12 or 15 pounds if you are doing 15 reps.

The ideal number of sets has been debated for years. A good rule of thumb is 1-3 sets. Research studies have shown that performing 2 sets is not significantly better than one, and performing 3 sets is not significantly better than doing 2. The only significant difference is between 1 and 3 sets. As long as you are working to the point of exhaustion, you can maintain and even build strength by doing only 1 set. But unless you are crunched for time, most beginners start with 2 sets of each exercise. Make sure you rest 30-90 seconds between sets. You can use this time to stretch the muscle you are working and catch your breath or get a drink of water. The longer you rest, the more strength you will have to finish out your next set just as strongly as the previous one—which will aid in your strength development.

Type: Activities that count as strength training

Perform exercises to target every major muscle group when strength training: your arms (biceps and triceps), shoulders, chest, back, core (abs, obliques and lower back), and legs (quads, hamstrings, glutes and calves). Make sure you work opposing muscles, not just the ones you see when you look in the mirror (biceps, chest, abs, quads). The opposing muscles are the ones that work in opposition to those (in this case, the triceps, back, lower back, and hamstrings). Also be sure to work the sides of your body: obliques, hips, abductors and adductors (outer and inner thigh). The idea is to achieve balance. The same goes for the upper and lower body. Don’t neglect one or the other just because one is more important to you. This can create imbalance and set you up for injury and pain.

Strength training can be done with a variety of equipment such as resistance bands, stability balls, hand weights, machines, or body weight. The Fitness Resource Center has numerous examples of exercises and workouts for you to choose from.

These tips will help you get started on the right foot!
- Check with your doctor before starting an exercise program. Read more exercise safety tips for beginners.
- Always warm up for at least 5-10 minutes before strength training. - Proper form is essential for safety and effectiveness. Start with light weights as you perfect your form and get accustomed to strength training. Gradually increase the amount of weight you lift over time, by no more than 10% each week. - Always cool down at least 5-10 minutes at the end of your workout. - Vary your exercise program to avoid boredom and plateaus. Changing your routine every 6-8 weeks is crucial to keeping your body/muscles surprised and constantly adapting. They'll have to work harder, you'll be challenged, and you'll burn more calories and build more lean muscle in the process. Learn how to change your exercise routine to avoid plateaus. Drink plenty of water before, during and after exercise to stay hydrated. Machines are best for beginners. They usually have detailed instructions and a picture on them, plus they show which muscles you are working. They are set up to put your body in proper form and isolate the right muscles. They are usually grouped together (upper body, chest, arms, legs, etc) in a weight room, so that you can easily move through them and target every major muscle group. Free weights are more advanced. After you’ve had a good foundation with machines (or body weight exercises) you can move into free weights. When using free weights, form becomes even more important because there is nothing to support you or make you do it properly. Lift in front of a mirror and use the proper benches for support. Always watch the alignment of the joints and their relationships: shoulders, hips, knees, and ankles should be aligned. Your back should remain flat and your abs should be contracted to help support the lower back. Have a trainer assist you and have someone there to spot you if you are lifting heavy weights. Use tools such as the Exercise Demos to help you achieve proper form. - Don’t hold your breath, which can be dangerous (it increases blood pressure and can cause lightheadedness, for example). Exhale fully and forcefully on the exertion phase—usually the phase where you are lifting the weight. Inhale deeply on the easier phase—usually when returning to the starting position. Try to keep this rhythm throughout every set. In the beginning, it will take some concentration, but after a while, it will become habit.
Can melons grow on the ground?
Fure-Chyi Chen furechen at UNIX2CC.NPPI.EDU.TW
Mon Jun 10 21:42:24 EST 1996

You can put some rice or other cereal straws beneath the fruit to prevent rotting.

On Fri, 7 Jun 1996, Dean McCrae wrote:
> I have just planted honey melons (Sugar Baby) and water melons in my garden. The
> plants are lying on the ground, but I have never tried this before, so I am not
> sure if that is the right thing to do. Can anyone help me with this??
> Also, if the melon plants go on the ground, what about the actual melons. Can
> they just lie on the soil without rotting or getting damaged?
> Thanks for helping.
Acne and its Effect on Mental Health

People of all ages are susceptible to the skin disorder known as acne. It is defined by the presence of pimples, blackheads, and whiteheads. Although acne does not pose a life-threatening hazard, it can significantly affect a person’s mental health.

Acne is the most prevalent skin ailment in the US, according to the American Academy of Dermatology. An estimated 50 million Americans are impacted by it annually. Although it can affect adults, acne is most prevalent in teens. Several variables contribute to acne, including:
- Excess sebaceous gland activity
- Pore-clogging dead skin cells

There are several ways to treat bacterial acne, including:
- Topical medications
- Oral medications

Although acne can be treated, it can also be a chronic, cyclical problem. This can result in a variety of mental health issues, such as:
- A low sense of self
- Social exclusion

What Effects Does Acne Have on Mental Health?

Acne may significantly impact an individual’s mental health. It may result in social isolation, low self-esteem, anxiety, and/or depression.

Anxiety is more common in acne sufferers than in acne-free people. Acne’s outward appearance, as well as the associated social stigma, can both contribute to anxiety. Additionally, compared to those without acne, those with acne are more prone to develop depression. The same causes that contribute to anxiety can also create depression, along with the discomfort and physical pain that acne can bring about.

Low self-esteem can also be a result of acne. Acne sufferers may be self-conscious about how they look, which can lead them to avoid social situations; social isolation and loneliness may result.

How Can Someone With Acne Be Helped?

There are a few things you may do to support a friend who is dealing with acne:
- Be encouraging. Let them know you are there for them and that you are aware of what they are going through.
- Encourage them to look for expert assistance. They might wish to consult a dermatologist or mental health specialist if their acne is really distressing them.
- Help them find strategies to manage their acne. People who have acne have a variety of options for managing their problem, including topical treatments, oral drugs, and acne surgery.
- Encourage them to remain upbeat. It helps for acne sufferers to keep a positive attitude and concentrate on their positive attributes.

Acne may significantly impact an individual’s mental health, so it’s crucial to get professional assistance if you have acne problems. You can take a variety of actions to control your acne and enhance your mental well-being.
For Sale: Baby Shoes, Never Worn

This is Ernest Hemingway's famous six-word short story, supposedly written after a barroom bet with a group of his contemporaries. It’s also one of the finest examples ever conceived of what I like to call “iceberg storytelling,” where the writer gives the audience a sparse, stripped-down story with just enough information to fill in the details with their imagination.

In the case of this particular story, it forces the audience to imagine the circumstances under which this advertisement was written. It forces them to imagine a miscarriage, a stillbirth, a failed delivery. It forces them to imagine the pain of a grieving mother, selling the shoes because they’re a constant reminder of the child that never was. All of that from six simple words — For Sale: Baby Shoes, Never Worn

A powerful metaphor to shape your stories

The storytelling metaphor here is simple. With an iceberg, only a small portion of its total mass is visible to the naked eye. The remainder of that mass — the vast majority of it, even — lies beneath the surface of the water. With the example above, the tip of the iceberg is what we’re given. It’s the six-word story. The additional pieces — all of those painful details and themes that we’re forced to imagine — lie beneath the surface. Yet all of this subtext forms the heart of the story, the thing that makes it so compelling, even though we weren’t explicitly told about any of it.

This is what makes iceberg storytelling so powerful. The human imagination is capable of incredible things, largely because we tend to use our own experiences and emotions to fill in those blanks. As storytellers, this presents us with an incredible opportunity. By giving audiences simple story cues that are meant to stoke their imagination, then consciously stripping away other story elements and pieces of context, we can potentially create a more powerful experience than if we had simply told the audience everything.

Here’s a quote from Hemingway himself that poignantly summarizes the theory:

If a writer of prose knows enough of what he is writing about he may omit things that he knows and the reader, if the writer is writing truly enough, will have a feeling of those things as strongly as though the writer had stated them. The dignity of movement of an ice-berg is due to only one-eighth of it being above water. A writer who omits things because he does not know them only makes hollow places in his writing.

Using iceberg storytelling in short films

Now that you’ve got a basic understanding of how iceberg storytelling works, let’s take a look at an example of how it can be applied to short films. This film comes from Kassim Norris, a director, cinematographer, and colorist based out of Indianapolis. It’s a short adaptation of a feature film that he’s currently working on, and is a fantastic example of how a short film can tell a much larger story than its screen time would indicate.

Diving into the script of It Eats You Up

In order to help show you what iceberg storytelling looks like in screenwriting, I’ve included a portion of the script for It Eats You Up below. Pay close attention to the pieces of dialogue that reveal something about the larger story. When you see one of these story elements, stop for a moment and imagine to yourself what the larger implications are. Are you ready? Let’s do it.

INT. STATE CORRECTIONAL FACILITY – AFTERNOON

TARA sits alone inside a grim, dimly lit visitors' lounge. She hears footsteps approaching.
A tall, dark, middle-aged man (DARYL) wearing inmate's clothing walks into the room and heads towards Tara. Tara stands up to greet Daryl as he approaches her. She quickly notices that Daryl seems a bit tense. They take a seat across from one another. Tara slides Daryl a carton of cigarettes. Daryl stares at the box of cigarettes for a moment, then hands them back to Tara. He leans in close.

Why do you keep coming here?

What do you mean?

Daryl pauses for a brief moment.

You know when you first told me I was your father? I didn’t think it was possible. Then I started to remember how wild I was. And as bad as I wanted it, it just didn’t add up.

Tara stares deeply into Daryl’s eyes.

You said you found me through the newspaper… OK. The man I killed, thirteen years ago, what’s his name?

Tara turns her head away from Daryl and avoids eye contact. Daryl squints his eyes and moves in closer.

Why can’t you say his name, Tara?

Daryl leans back in his seat and puts his head down while exhaling slowly.

Look! You ain’t gotta say it, I know why you been coming here. Why you made up all this shit about me being your dad!

Tara’s eyes grow big, shocked by Daryl’s statement.

The minute I wanted you to be my daughter was the minute I knew you wasn’t. The man I killed… is your father, isn’t it?

Tara’s eyes begin to well with tears as she looks away from Daryl in an attempt to fight back her emotions.

What you did? I get it. You wanna know why I did it?

Daryl takes a deep breath and puts his head down.

Let’s just say, the shit eats me up. And if you keep coming here it’ll do the same to you.

From this single scene, which is essentially a well-crafted monologue, we can gather so much information about the larger story at play in this film. First and foremost, we can piece together a narrative about Tara seeking out her father’s killer, finding him in prison, then convincing him that she’s his daughter, all in a wayward attempt to gain emotional closure of some sort. Though we actually don’t see any of this, it’s easy to imagine thanks to the smart dialogue. Beyond this story, which is compelling in and of itself, we get to feel the pain of these two people.

- A young girl who had her father taken away at a young age, and who wants closure, maybe even revenge.
- A prisoner who regrets every day the thing that put him behind bars, made even more painful by the fact that his victim’s daughter is sitting right across from him.

These are emotionally-complex characters in an emotionally-complex situation, and despite the fact that we spend less than 5 minutes with them, we can feel the full weight of that complexity. It’s borderline overbearing.

An interview with Kassim Norris, writer and director of It Eats You Up

I was able to chat with Kassim about the process of making this film, and get his thoughts on iceberg storytelling in general. Here’s that interview in its entirety.

What inspired you to make It Eats You Up, and how did you go about turning that inspiration into fully fleshed-out characters and the overarching story of this film?

'It Eats You Up' is actually a short adaptation of my feature film, 'Adore the Wolf'. In the feature, we follow a 13-year-old boy who runs away from home to avenge his father's death. In the short, we use different characters and settings but still capture that same uncomfortable atmosphere of someone facing their father’s killer.
When writing, did you start with that monologue already in mind, or did you start with the larger story of a girl visiting her father’s murderer in prison, and then work your way to that climactic moment? Either way, walk us through the process of getting the script written and the story fine-tuned.

I started with the scene, but since the feature was already written, I pretty much knew the outcome I wanted. It was never about the dialogue but rather the uncomfortable tension between two people in such circumstances. My goal was to write the silence and just allow the dialogue to fall between the moments of tension. In short, I knew the dialogue would be great if I kept it minimal but mastered the silence.

Why are you personally drawn to this idea of iceberg storytelling?

I think maybe because it aligns with my view of life. I strongly believe in the idea "less is more". By giving people too much, they won't have a reason to appreciate it, but giving them only enough will certainly fuel their imagination and curiosity. Also, I believe that in reality people only share small fragments of their life while holding back the many unattractive layers that would reveal who they really are. The iceberg approach is only a mirror of the human experience.

You mentioned Japanese cinema in your email a while back. Tell us about some of the influences on your writing and directing style.

As far as influence, storytellers that aren't afraid to be bold, blunt, and honest rather than embellishing. In a majority of movies, people talk too much. In life, there is more silence than verbal communication. In my opinion, mainstream films seem to miss out on the key essence of authentic communication. Artists like Tarkovsky, Nicolas Winding Refn, and Rembrandt are all different but masters of communication without dialogue. My goal is to create the most sincere human experience through an alluring aesthetic.

Talk a little bit about why this technique is so much more powerful and compelling than just spoon-feeding the story to the audience. What emotional impact do you think this has for people watching?

It goes back to the "less is more" theory. I truly believe that people do not want everything handed to them. I think people are naturally attracted to the mystique. By giving no explanations, people will connect with the characters in a way that is very harmonious. I am creating an unfinished sentence but allowing people to fill their own names and memories in the blanks, which in turn allows them to see themselves in the story rather than a predesigned template that forces them to accept a world that they cannot identify with.

What advice would you give to someone who's setting out to make an "iceberg" short film?

While writing your script, stop watching films and watch people. Also, whether you are shooting on film or digital, rehearsing is key. This is where you will learn that what is great in the script may not be good in frame. Be influenced by yourself (and your story) and not that "great film" you saw recently.

Where can people learn more about you and stay up to date with your latest films?

All in all, iceberg storytelling is an incredibly useful tool to keep in your filmmaker's toolbox. It's an effective way not only to craft a compelling short film, but also to structure a feature film, or individual scenes within a feature.
And when you combine iceberg storytelling with good writing, acting, cinematography, editing, and sound, you're well on your way to creating an incredible experience for the audience, one that they won't soon forget.
Four forces an aircraft must deal with: thrust, drag, lift, and weight

Just as familiarity with a car’s engine and gear system helps a driver adjust their driving style and improve their car’s mileage, knowing how an aircraft operates and the forces that affect its flight will help you understand the strategies airlines use to improve their aircraft’s fuel efficiency. The four forces that influence an aircraft’s flight and its fuel consumption are thrust, drag, lift, and weight.

How these forces influence fuel consumption

At a given speed, an increase in the aircraft’s weight affects the drag and the cruising altitude. A heavier aircraft flies at a lower altitude, as it requires higher air density to provide the necessary lift. Higher air density increases the drag. Since the engines must generate enough thrust to overcome that drag, fuel consumption increases. Takeoff fuel consumption data at different altitudes and gross weights show the same pattern: fuel consumption increases as weight and altitude increase.

In the next article of this series, we’ll see how United (UAL) lowers fuel consumption through weight reduction, engine modification, and winglets that reduce drag. All of United’s peers, including Delta (DAL), American Airlines (AAL), Southwest (LUV), and JetBlue (JBLU), use standard winglets for fuel efficiency, but United is the first company to use the new Split Scimitar winglets.

© 2013 Market Realist, Inc.
Bharat Matā (Hindi, from Sanskrit Bhāratāmbā भारताम्बा; अम्बा ambā means ‘mother’) is the national personification of India as a mother goddess. She is an amalgam of all the goddesses of Indian culture and, more significantly, of the goddess Durga. She is usually depicted as a woman clad in a saffron sari holding the Indian national flag, and sometimes accompanied by a lion.

The image of Bhāratmātā formed with the Indian independence movement of the late 19th century. A play by Kiran Chandra Bannerjee, Bhārat Mātā, was first performed in 1873. The play, set during the Bengal famine of 1770, depicted a woman and her husband who go to the forest and encounter rebels. A priest takes them to a temple where they are shown Bharat Mata. Thus inspired, they lead a rebellion that results in the defeat of the British. A story in Manushi magazine traces the origin to a satirical work, Unabimsa Purana or The Nineteenth Purana, by Bhudeb Mukhopadhyay, which was first published anonymously in 1866. Bankim Chandra Chattopadhyay in 1882 wrote the novel Anandamath and introduced the hymn “Vande Mātaram”, which soon became the song of the emerging freedom movement in India.

As the British Raj created the cartographic shape of India through the Geological Survey of India, Indian nationalists developed it into an icon of nationalism. In the 1920s, it became a more political image, sometimes including images of Mahatma Gandhi and Bhagat Singh. The Tiranga flag also started being included during this period. In the 1930s, the image entered religious practice. The Bharat Mata temple was built in Benaras in 1936 by Shiv Prashad Gupt and was inaugurated by Mahatma Gandhi. This temple does not have any statuary but only a marble relief of the map of India.

Bipin Chandra Pal elaborated its meaning in idealizing and idealist terms, along with Hindu philosophical traditions and devotional practices. It represented an archaic spiritual essence, a transcendental idea of the Universe, as well as an expression of Universal Hinduism and nationhood. Abanindranath Tagore portrayed Bhārat Mātā as a four-armed Hindu goddess wearing saffron-colored robes, holding manuscripts, sheaves of rice, a mala, and a white cloth. The image of Bharat Mata was an icon to create nationalist feeling in Indians during the freedom struggle. Sister Nivedita, an admirer of the painting, opined that the picture was refined and imaginative, with Bharat Mata standing on green earth with blue sky behind her; four lotuses at her feet; four arms signifying divine power; a white halo and sincere eyes; offering the gifts of Shiksha-Diksha-Anna-Bastra (education, initiation, food and clothing) of the motherland to her children.

Indian independence activist Subramania Bharati saw Bharat Mata as the land of Ganga. He identified Bharat Mata as Parashakti. He also said that he received the darśana of Bharat Mata during a visit to his guru, Sister Nivedita.

In the book Everyday Nationalism: Women of the Hindu Right in India, Kalyani Devaki Menon argues that “the vision of India as Bharat Mata has profound implications for the politics of Hindu nationalism” and that the depiction of India as a Hindu goddess implies that it is not just the patriotic but also the religious duty of all Hindus to participate in the nationalist struggle to defend the nation. This association with Hinduism has caused controversy with India’s religious minorities, especially its Muslim population.

Bharat Mata temples

The temple in Benaras, a gift from the nationalists Shiv Prasad Gupta and Durga Prasad Khatri, was inaugurated by Mahatma Gandhi in 1936.
Mahatma Gandhi said, “I hope this temple, which will serve as a cosmopolitan platform for people of all religions, castes, and creeds including Harijans, will go a great way in promoting religious unity, peace, and love in the country.”

Another Bharat Mata temple was founded by Swami Satyamitranand Giri on the banks of the Ganges in Haridwar. It has 8 storeys and is 180 feet tall. It was inaugurated by Indira Gandhi in 1983. Its floors are dedicated to mythological legends, religious deities, freedom fighters and leaders.

A third temple is located in Michael Nagar on Jessore Road, barely 2 km from the Kolkata Airport. Here, Bharat Mata (the Mother Land) is portrayed through the image of ‘Jagattarini Durga’. This was inaugurated on October 19, 2015 (Mahashashti Day of Durga Puja that year) by Shri Keshari Nath Tripathi, the Governor of West Bengal. The initiative to build the temple, which has been named ‘Jatiya Shaktipeeth’, was taken by the Spiritual Society of India in order to mark the 140th anniversary of ‘Vande Mataram’, the hymn to the Mother Land.

References
- “History lesson: How ‘Bharat Mata’ became the code word for a theocratic Hindu state”.
- Visualizing Space in Banaras: Images, Maps, and the Practice of Representation, Martin Gaenszle, Jörg Gengnagel, illustrated, Otto Harrassowitz Verlag, 2006, ISBN 978-3-447-05187-3.
- “Far from being eternal, Bharat Mata is only a little more than 100 years old”.
- Roche, Elizabeth (17 March 2016). “The origins of Bharat Mata”. livemint.com. Retrieved 22 March 2017.
- “A Mother’s worship: Why some Muslims find it difficult to say ‘Bharat Mata ki jai’”.
- Kinsley, David. Hindu Goddesses: Vision of the Divine Feminine in the Hindu Religious Traditions. Motilal Banarsidass, New Delhi, India. ISBN 81-208-0379-5. pp. 181-182.
- Producing India, Manu Goswami, Orient Blackswan, 2004, ISBN 978-81-7824-107-4.
- Specters of Mother India: The Global Restructuring of an Empire, Mrinalini Sinha, Zubaan, 2006, ISBN 978-81-89884-00-0.
- The Goddess and the Nation: Mapping Mother India, Sumathi Ramaswamy, Duke University Press, 2010, ISBN 978-0-8223-4610-4.
- “Archived copy”. Archived from the original on 2016-03-10. Retrieved 2016-03-09.
- Kalyani Devaki Menon, Everyday Nationalism: Women of the Hindu Right in India: The Ethnography of Political Violence, University of Pennsylvania Press, 2009, ISBN 978-0-8122-4196-9, p. 89f.
- “Patriotism in India: Oh mother: A nationalist slogan sends sectarian sparks”. The Economist. 9 April 2016. Retrieved 9 April 2016.
- Vinay Kumar (2 October 2012). “It is Jai Hind for Army personnel”. The Hindu. Chennai, India. Retrieved 8 October 2012.
- Important Temples of Varanasi, varanasi.nic.in
- Eck, Diana L. (27 March 2012), India: A Sacred Geography, Potter/TenSpeed/Harmony, pp. 100 ff., ISBN 978-0-385-53191-7.
- Bharat Mata Temple, mapsofindia.com
- Bharat Mata Mandir
- Media related to Bharat Mata at Wikimedia Commons
- Patriotic fervour, The Hindu, August 17, 2003.
- The life and times of Bharat Mata, Sadan Jha, Manushi, Issue 142.
- Bharat Mata Images, Prof. Pritchett, Columbia University.
- The Idea Of Bharat Mata Is Ancient And Originally Indian
Word Embeddings in Python with Spacy and Gensim

Word embeddings are vector representations of words, which can then be used to train models for machine learning. One method to represent words in vector form is to use one-hot encoding to map each word to a one-hot vector. However, one-hot encoding of words does not capture relationships between words, and results in huge sparse matrix representations which consume memory and space. n-grams can be used to capture relationships between words, but do not resolve the size of the input feature space, which grows exponentially with the number of n-grams. Using n-grams can also lead to increasing data sparsity, which means more data is needed in order to successfully train statistical models.

Word2vec embeddings remedy these two problems. They represent words in a continuous N-dimensional vector space (where N refers to the dimensions of the vector) such that words that share common contexts and semantics are located in close proximity to one another in the space. For instance, the words "doctor", "physician" and "radiologist" share similar contexts and meanings and therefore share a similar vector representation. Word2vec trains a neural network with a single hidden layer with the objective of maximizing the probability of a word's context words given the word (skip-gram), or of a word given its context (CBOW). The network is not used for the task it has been trained on. The rows of the hidden layer weight matrix are used instead as the word embeddings. For a hidden layer with N=300 neurons, the weight matrix W has size V x N, where V is the size of the vocabulary set. Each row in W corresponds to the embedding of a word in the vocabulary and has size N=300, resulting in a much smaller and less sparse vector representation than one-hot encodings (where the dimension of the embedding is of the same order as the vocabulary size).

This tutorial explains
- how to use a pretrained word2vec model with Gensim and with Spacy, two Python libraries for NLP,
- how to train your own word2vec model with Gensim,
- and how to use your customized word2vec model with Spacy.

Spacy is a natural language processing library for Python designed to have fast performance, and with word embedding models built in. Gensim is a topic modelling library for Python that provides modules for training Word2Vec and other word embedding algorithms, and allows using pre-trained models. This tutorial works with Python3. First make sure you have the libraries Gensim and Spacy. You can install them with pip3 via pip3 install spacy gensim in your terminal.

Word Vectors With Spacy

Spacy provides a number of pretrained models in different languages and with different sizes. I choose to work with the model trained on written text (blogs, news, comments) in English. A list of these models can be found here: https://spacy.io/models. The similarity to other words, the vector of each processed token, and the mean vector for the entire sentence are all useful attributes that can be used for NLP. Predicting similarity is useful for building recommendation systems or flagging duplicates, for instance. In the example below, the words software, computer and mail are all present in the vocabulary the model was trained on, and their vectors can be accessed. The token hjhdgs is out-of-vocabulary and its vector representation consists of a zero vector with dimension of 300.
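A minimal sketch of this Spacy usage follows. It assumes the medium English model en_core_web_md (installed with python -m spacy download en_core_web_md), which ships 300-dimensional vectors; the token names are just the ones discussed above.

```python
import spacy

# Load a pretrained English model with word vectors (assumed: en_core_web_md).
nlp = spacy.load("en_core_web_md")

doc = nlp("software computer mail hjhdgs")

for token in doc:
    # has_vector is False for out-of-vocabulary tokens such as "hjhdgs",
    # whose vector is an all-zero array of the model's dimensionality (300).
    print(token.text, token.has_vector, token.vector.shape)

# Pairwise similarity between in-vocabulary tokens.
software, computer, mail = doc[0], doc[1], doc[2]
print(software.similarity(computer))
print(software.similarity(mail))

# The mean vector for the whole sentence is also available.
print(doc.vector.shape)
```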
Word Vectors With Gensim

Gensim does not provide pretrained models for word2vec embeddings. There are models available online which you can use with Gensim. One option is to use the Google News model, which provides pre-trained vectors trained on part of the Google News dataset (about 100 billion words). The model contains 300-dimensional vectors for 3 million words and phrases. The archive is available here: GoogleNews-vectors-negative300.bin.gz from https://code.google.com/archive/p/word2vec/. The raw output vectors can be accessed via gensim_model['computer'], and can be used for your NLP task.

Creating your own word2vec model with Gensim

A pretrained word embeddings model might not capture the specificities of the language of a specific domain. For instance, a model trained on Wikipedia articles might not have exposure to words and other aspects of domains such as medicine or law. Out-of-vocabulary words might be another issue with a pretrained model. Training your own word2vec model might lead to better results for your application. It is possible to train your own word2vec model with Gensim. This section covers the necessary steps.

To test the impact of having a customized word2vec model, I downloaded the 2017-10-30 Sample dataset (10k papers, 10MB) from Open Corpus https://api.semanticscholar.org/corpus/download/, which includes abstracts from 10k published research papers in the neuroscience and biomedical fields. I expect the customized model will provide word vectors that are more accurate for words in the neuroscience and biomedical fields than the Google News pretrained model. We limit the training to 10k research papers for demonstration purposes. However, a larger dataset might be better suited to a real application.

Input For Training

Gensim’s word2vec expects a sequence of sentences as its input. Each sentence is a list of words. Sentences can be a generator, reading input data from disk on-the-fly, without loading the entire corpus into RAM. Instead of keeping an in-memory list of sentences, which can use up a lot of RAM when the input is large, we build the class IterableSentences, where each file in the corpus is processed line by line. We use regex to preprocess the text. Every sentence is converted to lowercase and all the digits, special characters, and extra spaces are removed from the text. After preprocessing, the generator returns a list of lowercase words. Additional preprocessing can be added in IterableSentences.__iter__. IterableSentences looks at text files within a folder. In my case, all the abstracts are loaded in one text file in a directory called dataset.

Word2vec accepts several parameters that affect both training speed and quality. We focus here on a few:
- min_count to ignore words that do not appear above a certain level and for which there isn't enough data to learn useful representations.
- size to set the size of the hidden layer. A larger number of neurons in the hidden layer means a larger vector representation, which can lead to more accurate models. However, larger numbers require more data for training.
The other parameters that control training can be found here.

Exploring the Customised Model

To explore the benefits of having a customized model, we look at some specific examples. In the example below, the customized model excludes brain from the list ['woman', 'ovarian', 'brain'], which is more accurate than excluding woman when working in a biomedical domain. When looking at the most similar words to author, the customised model provides a list that is more specific to research papers, whereas the pretrained model provides a list that is more specific to literary books.
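The sketch below pulls these pieces together: loading the pretrained Google News vectors, the IterableSentences generator, training on the abstracts in the dataset folder, and the two comparisons just discussed. The min_count value of 5 and the output names are assumptions for illustration; the size parameter follows the Gensim 3.x API used in this tutorial (newer Gensim versions rename it to vector_size).

```python
import os
import re

from gensim.models import KeyedVectors, Word2Vec


class IterableSentences:
    """Stream preprocessed sentences from every text file in a folder, line by line."""

    def __init__(self, folder):
        self.folder = folder

    def __iter__(self):
        for fname in os.listdir(self.folder):
            with open(os.path.join(self.folder, fname), encoding="utf-8") as f:
                for line in f:
                    line = line.lower()
                    line = re.sub(r"[^a-z\s]", " ", line)     # drop digits and special characters
                    line = re.sub(r"\s+", " ", line).strip()  # remove extra spaces
                    if line:
                        yield line.split()                    # a sentence as a list of lowercase words


# Pretrained Google News vectors (path assumed to point at the downloaded archive).
gensim_model = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin.gz", binary=True)
print(gensim_model["computer"][:5])  # raw 300-dimensional vector

# Customized model trained on the abstracts in the 'dataset' folder.
sentences = IterableSentences("dataset")
model = Word2Vec(sentences, min_count=5, size=300)  # size -> vector_size in Gensim 4.x

# Comparing the two models on the examples discussed above.
print(gensim_model.doesnt_match(["woman", "ovarian", "brain"]))
print(model.wv.doesnt_match(["woman", "ovarian", "brain"]))
print(gensim_model.most_similar("author", topn=5))
print(model.wv.most_similar("author", topn=5))
```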
For the word hydroxylasepositive, the pretrained model fails since the word is out of vocabulary (OOV), which is not the case for the customised model.

Using The Customised Model With Spacy

It is possible to use the model we trained with Spacy, taking advantage of the tools that Spacy provides. Here is a summary of the steps to use the customized model with Spacy: save your model in plain-text word2vec format, gzip the text file (which produces a word2vec.txt.gz file), convert the gzipped vectors into a Spacy model directory with Spacy's command-line vector initialization tool, and then load the resulting model in Spacy. A hedged sketch of these steps is included at the end of this article.

The word2vec model accuracy can be improved by using different parameters for training, different corpus sizes or a different model architecture. Increasing the context window size, the vector dimensions, and the size of the training dataset can improve the accuracy of the word2vec model, however at the cost of increased computational complexity. Training speed and performance can be improved by removing very frequent words that provide little information, such as a, the and and. Frequently occurring bigrams and trigrams can be detected with Gensim Phraser, which can improve the accuracy and usefulness of the embeddings. For example, the model can be trained to produce a vector for new_york, instead of training vectors for new and york.

References
- Word2vec Tutorial, Radim Řehůřek, https://rare-technologies.com/word2vec-tutorial/
- In spacy, how to use your own word2vec model created in gensim?, https://stackoverflow.com/questions/50466643/in-spacy-how-to-use-your-own-word2vec-model-created-in-gensim
- Word Vectors and Semantic Similarity, https://spacy.io/usage/vectors-similarity
- models.word2vec – Word2vec embeddings, https://radimrehurek.com/gensim/models/word2vec.html

By Chris Tegho

Chris is currently working as a Machine Learning Engineer for a startup, Calipsa. His interests include Bayesian neural networks, reinforcement learning, meta reinforcement learning, variational inference and computer vision. He completed his Masters in machine learning at the University of Cambridge in August 2017. For his thesis, he worked on improving uncertainty estimates in deep reinforcement learning, through Bayes By Backprop, for the application of dialogue systems. In the past, he's worked as a cloud developer and consultant for an ERP consulting company in Montreal, and did his Bachelors in Electrical Engineering at McGill.
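As promised above, here is a minimal sketch of exporting the customized Gensim model for use with Spacy. It assumes the Word2Vec model from the earlier sketch is called model; the output directory name is arbitrary, and the exact init command depends on the Spacy version you have installed, so treat the commented commands as assumptions rather than exact recipes.

```python
# A sketch of exporting the customized Gensim model for use with Spacy.
# Assumes `model` is the Gensim Word2Vec model trained earlier.
import spacy

# 1. Save the vectors in plain-text word2vec format.
model.wv.save_word2vec_format("word2vec.txt", binary=False)

# 2. Gzip the text file (shell): gzip word2vec.txt   -> produces word2vec.txt.gz

# 3. Convert the vectors into a loadable Spacy model directory.
#    The exact subcommand depends on your Spacy version, for example:
#      Spacy v3: python -m spacy init vectors en word2vec.txt.gz ./spacy_word2vec
#      Spacy v2: python -m spacy init-model en ./spacy_word2vec --vectors-loc word2vec.txt.gz

# 4. Load the resulting model in Spacy and use the custom vectors.
nlp = spacy.load("./spacy_word2vec")
print(nlp("hydroxylase").vector[:5])
```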
“In the beginning was the Logos…and the Logos was made flesh.” (Jn. 1:1,14) The Gospel of John starts out by telling us that the Logos became the person of Jesus Christ. But was the Logos a person before that? In the beginning, was the Logos a person, a “word”, or something else? According to Trinitarian theology, the Logos was a person, and then changed dramatically at the incarnation, to become another person, a God/man. This may come as a surprise, but for the first 150 years of the church’s existence the Logos of John 1:1 was not considered to be a person. If the earliest church didn’t consider the Logos to be a person, then what on earth (or in heaven) was it, in their minds? Did John ever intend for the Logos to be what it later became in Trinitarian theology – a divine person who was part of an eternal godhead consisting of three persons? Just what does this word actually mean? What sources are we going to use to help us drill down on what the author meant by “Logos”? Should we look to a Greek understanding under the influence of Plato, or the Stoics, or the Gnostics? A Greek and Roman point of view as influenced by the Jewish philosopher Philo? A strictly Jewish understanding since the author is Jewish and he wrote to a Jewish audience? Or a contemporary theological approach that insists on a Trinitarian understanding? These are the typical approaches but I submit the answer is much more simple and accessible to us all, even to non-academics. How about we just look the word up in a dictionary? Seriously Kirby? It’s really that easy? Yes, it’s really that easy. Here it is: What Does Logos Mean? The second part of a dictionary definition of Logos (from http://www.biblestudytools.net. – the same can be found in Strong’s Concordance): - Its use as respect to the MIND alone: - reason, the mental faculty of thinking, meditating, reasoning, calculating - account, i.e. regard, consideration - account, i.e. reckoning, scored. - account, i.e. answer or explanation in reference to judgment - relation, i.e. with whom as judge we stand in relation - reason would - reason, cause, ground Does that sound like a person, or does that sound like a thought, something in someone’s mind? This definition of logos as something in one’s mind can be clearly seen in the following usages in other places in the New Testament where logos is translated into the underlined words in all capital letters: Ac 18:14 – Gallio said unto the Jews, “If it were a matter of wrong or wicked lewdness, O you Jews, REASON would that I should bear with you.” Ac 10:29 – Therefore came I unto you without gainsaying, as soon as I was sent for: I ask therefore for what INTENT you have sent for me? Ac 8:21 – You have neither part nor lot in this MATTER: for your heart is not right in the sight of God. Ac 15:6 – And the apostles and elders came together to consider this MATTER. Mt 5:32 – But I say to you, that whosoever shall put away his wife, saving for the CAUSE of fornication, causes her to commit adultery. Mt 18:23 – The kingdom of heaven is likened unto a certain king, which would take ACCOUNT of his servants. Lu 16:2 – And he called him, and said unto him, How is it that I hear this of you? Give an ACCOUNT of your stewardship; for you can no longer steward. 
Encyclopedia Britannica includes “plan” as one of the English equivalents for Logos, along with “word” and “reason.” Logos as “thinking” can also be seen in our English word logic which came from the Greek word logos as well as the words biology, from bios, Greek for life, and logos, Greek for thinking, hence, the study of life, as well as anthropology, the study of man, and psychology, the study of the soul, and many other academic disciplines. Only the scriptures that translate logos as something other than “word” are listed above to illustrate that though this is the second, less frequent use of logos, this usage was common to the Biblical writers and to their audience. If I were to travel back in time to 1st century Palestine with a drawing for a house plan, buy a lot and start building that house and showed that plan to the neighbor next door he might say I had shown him a nice logos, because he would know that the plan on paper is the plan in my head. He might also tell his relatives that the guy who owns the lot next door is making “all things according to it (the logos) and it’s going to be a fine house.” The Logos, the Plan of God Seeing the Logos as a plan rather than a word or a person helps to explain a number of other concepts in the bible. From the beginning of creation God created all things with a plan in mind. That shouldn’t surprise us. What should surprise us is if God created all things without a plan but rather just haphazardly threw the whole thing together on a whim. Like some of the omelettes I’ve made when my dear wife was not around to rescue me from my own cooking. The plan that God had in mind when he created all things John calls a Logos at the beginning of his Gospel. This Logos was the blueprint for creation, and Jesus Christ is central to that plan. To paraphrase John, he is saying Jesus is God’s plan. Everything that God did before Jesus was done with Jesus in mind. That’s what is meant by “through him all things were made” two verses later and “through whom he made the worlds” in Heb. 1:2. What was in God’s mind was finally revealed when that plan became flesh, when Jesus was born, walked among us, was crucified and rose again. That’s how a first century Greek speaking Christian of Jewish descent would read John. He wouldn’t be thinking of the Logos as a person. That kind of thinking came later – in the second century. One way to understand John chapter 1 is to think in terms of predestination. Not the silly kind of predestination that Calvinists teach, but the ability to plan at one point in time and then being able to follow up at a future point in time. For example, if I buy green paint for my house, it’s not because I’m predicting that it will end up green, it’s because I am going to make sure it will end up green. When someone asks what color my house will be in a year (assuming I don’t get lazy and put it off for two years) I can predestinate what color it will be because I have a certain ability to make it happen. My logos is to have a green house. It’s through this logos I go to the paint store and buy the paint. If they try to sell me on a special for blue paint I’m going to say, “No thanks, that’s not according to my logos.” Being human, however (lazy in other words), I might not follow through with my plans, so to say I can predestinate is a bit of a stretch, but with God, that’s not a problem. His plans happen. His Logos is certain. 
An example in scripture of a predestined plan with regard to the person of Jesus is when it says in Revelation 13:8 that Jesus was the “Lamb slain from the foundation of the world.” Are we to believe that Jesus was actually slain in the Garden of Eden, or during creation? Of course not. This is a statement about God’s intention, how God had it in his mind from the beginning, in his Logos, that He would provide His Son to die on the cross for us. When God has a plan to do something, it’s as good as done so John speaks of it as a done deal from the beginning. He was “slain at the foundation of the world.” The beauty of the Gospel is that God’s plan as manifest in the person of Jesus involves much more than salvation. That is only the first step of faith in our walk with him. John 1 is just one of many scriptures that speak of a Logos, or plan beyond our salvation. God’s plan is also that we would become like Christ. We are continually being transformed into his likeness, being raised from one stage of glory to another by the power of God (2 Cor. 3:18). That is what the New Covenant is all about (Jer. 31:33). Though salvation is a glorious benefit, God’s plan is not that we all get saved. God’s plan is that we all move away from wickedness and toward Christlikeness. “Through him (the Logos) all things were made. Without him nothing was made that has been made,” (Jn. 1:3) is saying the same thing Paul said in Col. 1:16: “For in him were all things created, that are in heaven, that are in earth, visible and invisible, whether they be thrones, or dominions, or principalities, or powers: all things were created through him, and into him.” The principalities mentioned here refer to angelic and demonic powers and yes, both were created as part of God’s plan to form Christ-like character. The angels are like the carrot drawing us in that direction while the demons are like the hound chasing us in that direction. That’s a different picture than what we get from the idea that there is this cosmic war going on between the forces of evil and the forces of good – which in my mind is a silly idea if taken literally. There simply is no contest between an omnipotent God and anybody or anything else. It makes much more spiritual sense to think of demonic powers as helpless pawns in God’s grand scheme of forming godly character in his children. These teachings reinforce the idea that Jesus is the Logos, the Plan of God, and all things are made with that plan in mind. That Plan doesn’t stop with creation or the person of Jesus Christ. It doesn’t stop with our salvation. His Plan includes the formation of godly character in mankind in much the same way godly character was formed in that child born of Mary in Bethlehem. This character was “imprinted” into Jesus’ human character, according to Heb. 1:3, where Jesus’ human nature is likened to soft clay and God’s divine nature is likened to the hard stamp that imprints that clay in a wax seal. The result is seen in Rom. 8:29, “For whom he did foreknow (to submit to his ways), he also did predestinate to be conformed to the image of his Son, that he might be the firstborn (read that, forerunner) among many brethren.” That is a beautiful promise, often obscured by the Calvinist interpretation which centers on who was chosen instead of to what certain people were chosen. The verse says nothing about God picking who will be saved, it is about God determining the future of those who are saved, and it is a glorious future we enjoy every day today. 
The Logos as a Person Originated in the Second Century How do we know John’s conception of Logos was that of a Plan that became a person, rather than a person who became a person of a different nature, aside from the fact that the common, dictionary meaning was that of something in someone’s mind? We know this because the idea of the Logos being a pre-incarnate person did not exist at the time of John’s writing. From where did this idea of the Word in John 1:1 being a person originate? To answer that we need to find the first person in history to consider a Logos to be a person rather than a thought because commonly held philosophical and theological ideas generally originate with one influential thinker. Justin Martyr, writing about 150 AD, would be that guy to introduce this way of thinking into the Christological discussions of the early church. Justin didn’t call the Logos a person, per se – that wording would come about 50 years later with Tertullian, and even then it wasn’t a person as we think of a person but more a modalistic manifestation of the one God – but he did think of the Logos as “numerically distinct from the Father” rather than being an aspect of the Father. He wrote “through the Word, God has made everything”. He also believed the Logos was “born of the very substance of the Father” and places the genesis of the Logos as a voluntary act of the Father at the beginning of creation. This of course would put him at odds with the later 4th century councils that adopted the Trinitarian idea of an eternal, un-created Son of God. That would make him to be a heretic with ideas closer to the arch-heretic Arius who promoted the idea of a created Son of God at the Council of Nicaea in 325 AD. Arius and his followers would of course end up getting exiled and banished to Illyricum for this kind of thinking because it didn’t line up with Emperor Constantine’s theology and his quest to get the church to align with it. The Logos as a Plan Originated in the 6th Century BC Prior to Justin Martyr in the mid-2nd century the only way people thought of the Logos was as a plan in God’s mind. There were no other options. This way of thinking was established by the Greek philosopher Heraclitus, who, according to Bible Study Tools on the Web, “first used the term Logos around 600 B.C to designate the divine reason or plan which coordinates a changing universe. This word was well suited to John’s purpose in John 1.” Philo, a Hellenistic Jewish philosopher and a contemporary of John who may have influenced John, took Heraclitus a step farther and defined a Logos as God’s creative principle or governing plan. These concepts had become so ingrained in the collective thinking of the first century Greek speaking world that the word logos could be defined as something in one’s mind only, or a word along with the thought behind the word. By the first century your average Greek or Greek speaking Jewish Christian would have this understanding of the word logos and would read John 1:1 this way: “In the beginning was the PLAN OF GOD, and the PLAN OF GOD was toward God, and the PLAN OF GOD was divine. This (plan) was in the beginning with God. All things came into existence through it (the plan), and apart from it (the plan), nothing came into existence. (skipping to verse 14) So the PLAN OF GOD became flesh and resided among us, and we beheld his glory, the glory as of the uniquely-begotten of the Father, full of grace and truth.” – (For the anarthrous use of theos meaning “divine” instead of “God” see W. E. 
Vine, Dana & Mantey, and Moffatt's translation – four Trinitarians who don't let their own theological conclusions interfere with their scholarship.) A logos is something in someone's mind, and is translated in our Bibles as "intent," "reason," or "cause," which is a bit removed from Heraclitus' original use as a plan in God's mind, but John is applying the term to God instead of men, so naturally your average Christian of that time would think of the Logos in John 1 as "Divine Intent" or a "Plan of God." For John's audience to think of the Logos as a divine person before he was a human/divine person, they would have to be convinced by John himself, given the lack of contemporary influences. When you get past the English wording and drill into the Greek, John isn't very convincing, except for those unwilling to accept any alternative meaning of Logos other than a "person." And yes, there is a fair amount of bias against non-Trinitarian interpretations. There's no question about that. John's audience would not be thinking of the Logos as a person in a godhead. That idea came later, long after John died. John's Gospel was written at a time when Jewish Christians dominated, even to a fault, the Christian church, as can be seen by the problems caused by those who taught that the Gentile Christians should be circumcised. Though they had issues with the Law, these earliest Jewish Christians were solid in their monotheism and would not have entertained the possibility that God was anything other than ONE. When they twice daily recited the Shema, "Hear, O Israel, the Lord our God is One" (Deut. 6:4), they never once thought of God as "3 in 1". Had someone suggested it at the time, they would have considered it at best an interesting adaptation of pagan polytheism and at worst a strange perversion. Nobody was suggesting it at the time. There is no historical evidence that any first century Christian even thought of the Logos of the Gospel of John as a separate person apart from the Father, yet that is the common understanding of John 1:1 taught today. According to The Jewish Encyclopedia, "Judaism has always been rigorously unitarian." Unitarian theology does not recognize a plurality in the godhead. The Encyclopedia of Religion states: "Theologians today are in agreement that the Hebrew Bible does not contain a doctrine of the Trinity. In the immediate post New Testament period of the Apostolic Fathers no attempt was made to work out the God-Christ (Father-Son) relationship in ontological terms." They didn't have anything to work out because they weren't thinking of God as being more than one person. Jesus was not in the godhead per later Trinitarian theology; rather, the godhead was in Jesus, according to Col. 2:9: "For in him dwells all the fullness of the Godhead bodily." Regarding New Testament times, the New Catholic Encyclopedia says: "The formulation 'one God in three Persons' was not solidly established, certainly not fully assimilated into Christian life and its profession of faith, prior to the end of the 4th century. But it is precisely this formulation that has first claim to the title THE TRINITARIAN DOGMA. Among the Apostolic Fathers, there had been nothing even remotely approaching such a mentality or perspective." The Apostolic Fathers were those who immediately followed the Apostles between 90 and 130 AD. Some of them were trained by the Apostles. Jesus and all of the apostles were strict Monotheists, and were so before Jesus began his ministry.
After Jesus established a following among Jews, and after he was crucified, they were still considered to be a Jewish sect. They continued to be strict Jewish Monotheists – who followed Christ. When they recited the Shema, they didn’t think in their minds, “The Lord thy God is 3 in 1.” God was not 3 until much later. He was just one. Period. In order to get to a point in the 4th century when people thought of God as three persons they would first need a developed Logos Christology and think of Jesus as an eternal person of the godhead. According to the Catholic Encyclopedia, “The Apostolic Fathers do not touch on the theology of the logos; a short notice occurs in St. Ignatius only (Ad Magn. viii, 2).” Later, by 200 AD, instead of Shema-reciting Jewish Christians dominating the church as was the case for the first 150 years of the church it was non-Jewish Christians dominating who had been quite accustomed to Greek and Roman polytheism. Someone like Tertullian could introduce modalistic ideas of God manifesting himself with three faces, without much objection. He would have faced a lot of objections in the first century. Later, his idea of three faces combined with 2nd century ideas of the Logos being a separate person subordinate to God. That morphed into God being three persons in the sense that we understand “person” today, someone having his own will, and not just three different manifestations of the one God as conceived by Tertullian. It’s interesting to note that Tertullian, the first Christian to describe God with the number 3 around 200 AD and thus popularized the nomenclature for the doctrine of the Trinity, would not be accepted as a bona-fide Trinitarian after Nicaea and is actually considered a heretic by the Catholic Church for some of his other teachings. It was up to the church councils like the Council of Nicaea in 325 AD to fine tune the implications of a more developed Logos Christology by eliminating the widely accepted Subordinationism of Arius in favor of three co-equal, co-eternal beings, a rather novel concept at the time. Nobody was writing about this in the first or second centuries. This of course would not have been a necessary exercise in the fourth century had they stuck with an earlier, Unitarian conception of God who had a PLAN that was manifest with the advent of his Son. Once you go down that path of elevating the Son to a pre-incarnate person with absolute equality with the Father you then have to have several more councils to hammer out positions regarding the Son being both God and man, positions that are invariably self-contradictory and ultimately contrary to scripture, yet must be accepted if pre-incarnate existence and co-equality are accepted. By the way, the Council of Nicaea in 325 AD was not trying to come up with a biblical position on the godhead. That shouldn’t be surprising considering there was no Bible at the time. It was just trying to come up with a doctrinal position to please a Roman emperor who demanded one. All the bishops and Christian thinkers of the first few centuries had their own personal theology and were free to draw from any source to teach and authenticate their positions, and be at odds with anything in our current Bibles with impunity, since there was no canon of scripture until long after Nicaea. The bishops at Nicaea were not Evangelicals trying to figure out what positions were “biblical” long before there was a Bible. How could they? 
As a side note, many people have a wrong understanding of "Greek thinking," as if it is always "bad" while Jewish thinking is "good," because the Greeks were pagans and God was trying to move these pagans away from paganism and into truth, and because of what Paul says in Colossians 2:8: "Beware lest any man spoil you through philosophy and vain deceit, after the tradition of men, after the rudiments of the world, and not after Christ." But the fact is that even Jewish followers of Jesus, such as his very own apostles, borrowed concepts from the pagans if a concept was useful for explaining the one true God to a people wanting to understand more about Him, especially if the audience was pagan. Think of a helpful Greek philosophical concept as just one of many tools a teacher could pull out of his tool bag of ways to understand God. There were certainly Greek ideas Paul wanted his parishioners to avoid, such as the many "isms" like Hedonism, Asceticism, and Gnosticism, but we err in thinking Paul was laying out a general prohibition of all Greek philosophy, including the ideas of Heraclitus 600 years earlier. Brad Jersak does a great job of digging into this a bit further in his article Pushing Back: ‘Greek Thinking’ vs. ‘Jewish Thinking’ is a Dualistic Error. In addition, this blog post lists a number of times when the Apostle Paul used Plato, Aristotle, Seneca, and others in his teachings: Paul and His Use of Greek Philosophy. I also explain how Jesus made use of fables to teach truth in Jesus Used Fiction to Teach Truth. Jesus and the apostles did not have a problem with borrowing concepts from their secular, pagan culture. It's similar to how we might use a scene from Star Wars to make a spiritual point in a sermon, as one of our local megachurch pastors likes to do. It is, however, a method that lends itself to people from a different clime and time misunderstanding intentions. It's easy to grab a fourth century interpretation, read it into a first century text, and miss the point of the author of that text. Fortunately for us, we don't really need to be swayed by or dependent on fourth century theologians and church councils, or first century Jewish philosophers, or sixth century BC Greek philosophers. We can just look up the meaning of Logos in the dictionary and see that a Logos is a plan. We can then read John's Gospel and know that PLAN became a concrete reality in Bethlehem, and even more so as God made him both Lord and Christ and the forerunner of all who would become like him. The 4th Century bishops that Emperor Constantine had corralled at his summer palace in Nicaea were either Trinitarians (or better, pre-Trinitarians), led by Alexander the Bishop of Alexandria, or they were Arians, led by Bishop Eusebius of Nicomedia, who agreed with the elder from Alexandria named Arius. They should have stuck to one question only: Was the Logos a Plan or a person prior to the incarnation? But that question wasn't even on the table, because at that time both parties assumed the Logos was a person prior to the incarnation. The question at hand, then, was (among other questions): Was the Logos a created person or an uncreated person? What we know now as the dictionary definition of Logos, thanks to Greek scholars like James Strong, was lost on the 4th century bishops. Imagine this: a group of lawyers from several different countries, representing international businessmen, co-authoring a legal document to establish a business in America. These lawyers have only a cursory understanding of English.
Now imagine them trying to do this without translators or dictionaries explaining the meaning of the English terms they are using. Who would even attempt such a thing? They would all know they need to have the same understanding of those English terms in that legal document, or else the proverbial you-know-what will eventually hit the fan and they will be seeing each other in court. Such was the case at the Council of Nicaea in 325 AD. Greek was an international trade language, but a first language to only a few. In the Christian Latin West, Greek became associated with "paganism" and regarded as an uncouth foreign language. Translators into Greek were available at the council for some of those who wanted to speak, but to further complicate matters some of the key Greek terms used in whatever biblical and non-biblical writings they were using to validate their positions, such as the Greek words for essence, substance, nature, and person, bore a variety of meanings drawn from pre-Christian philosophers. That could only lead to misunderstandings. Then they coined a new term not used in scripture, homoousia, which literally means "same substance", and could never fully agree on what they meant when they said Jesus was of the same substance as the Father. I don't think anyone even knows today. If you're thinking it was all a big theological mess, you would be correct, and we are only today barely beginning to recognize that everybody had their reasons for what they believed. These folks should have just told the Emperor, "We're going to love each other, allow for disagreements and freedom of speech and freedom of conscience, and when we get this stuff figured out and can come to an agreement we'll get back to you. But don't hold your breath. In the meantime we'll pay our taxes and pacify the subjects of your glorious kingdom, but we would appreciate it if you would just keep your unsophisticated nose out of our theological business, if you don't mind, sir." But no, the bishops saw in this Emperor an opportunity to further their own political agendas by harnessing the powers of the state at the expense of others, and we are still picking up the pieces of the forced unity of their creeds and their anathemas (damnations to Hell), with Christians still at each other's throats over differences of opinion about disputable matters.
Catholic Activity: Elementary Parent Pedagogy: Two Homes, Heaven and Earth — Building up Family Unity and Security
Life here is a preparation for heaven. It is the parents' duty to teach our child to be aware of God's presence always. With families breaking up and weakening, we must take steps to build up family unity and security within the home. The author provides some solid suggestions.
We have two homes — one here on earth and one in heaven with God. Life here is a preparation for heaven. The supernatural life we live here is a beginning of the life hereafter. That is why good people die without fear. They know that they pass to a fuller life with God. Can you make your home so lovely that it will seem to be like a little taste of heaven here on earth below? Say often to yourself that your destiny and your child's destiny is to dwell with God in perfect happiness. And say to yourself that your duty is to begin here and now to teach your child to live in God's presence. When things go wrong in the house ask yourself: "Do we act as if God is in this house? Is this house a home, a garden enclosed, where children feel safe under the protection of loving parents, where one family dwells in peace and unity?"
The Weakening Family
The family is the God-made unit of society, an institution so powerful that it has lasted since the beginning of the world. Today it is weakening. People marry on an impulse, never thinking that marriage means founding a family. Then they divorce each other almost as readily; the family is split, and the children suffer. Or else parents rush off on trips and leave children to servants or neighbors or even alone. Parents become bridge and movie addicts and desert their families night after night.
Offset by Catholic Family
Against this type of family stands the Catholic model, founded on a sacrament established by Christ. Marriage for life with no divorce — on this rock is built the Catholic family. That family can be a garden of the Lord, a place of joy and delight for father, mother, children. Since marriage cannot be terminated by man, even Catholic parents who have ceased to be enchanted with each other realize that they have a big job to make a success of the family, and that they can do so by drawing on the grace which God stands ready to give for the asking, grace in the form of courage and serenity and persistence in keeping the family happy and united. The family must possess solidarity. It must be a unit. Each child must learn to speak with pride of his home and parents and sisters and brothers. The child lucky enough to find himself surrounded by a loving family has a sense of security which psychologists agree is necessary for a happy childhood. How can we build up this security? How can we make the child feel himself one of the family group? A simple way is by doing things together. How many things can parents and children do together?
- Say family prayers.
- Go to Mass together.
- Receive communion together as a family.
- Have family reading.
- Have family music and singing.
- Go on trips and picnics in a group.
- Clear off the snow, or rake up the leaves or weed the garden together (two or three of the family at least).
- Play games together.
We suggest that each family add to this list occupations in which the family can unite.
Family Unity — Through a Family Interest
To keep the children interested in home affairs is a sure way of preserving the unity of the family. We should aim to discover interests that parents and children can share.
One of these interests is pictures. A whole family may educate itself in art and learn the stories of the Old and New Testament by the simple and fascinating way of pictures. A family interest of this kind which can be shared by adults and children is invaluable in maintaining the close contact of earlier years. Recall our Purpose — Living in Two Worlds With the school term well begun we ought to make a fresh start on our big job, — creating a Catholic spirit in the home. It is wise to think a little of our aim. What shall we recall? We know that man lives in two worlds; his body walks the earth, while his soul in the fraction of a second can commune with God in another world, the supernatural world. There are people who live almost wholly in the natural world. There are many who morning and night might pass over in prayer to the other world. But parents should live in God's supernatural world a great deal more than they do. If they are close to God they will take their children with them. Have you not known people who are always aware of God's presence, of His care and protection, or who in trouble will at once say, "Yes, dear Lord, I bear this sorrow joyfully with You"? They are always seeing God's hand in the beautiful world. They look at the smiling faces of their children and see God there. They are miles and miles removed from those whose attention is absorbed in money, society, dress and all the rest that makes up "the world." A home with the supernatural attitude toward life is what we want for all Catholic families. A million such homes could set the country on fire with the desire for a better Christian life; a million supernatural-minded parents could clean up the movies, the radio, the magazines. The home where children breathe a supernatural atmosphere! How can we create it? Let us remind ourselves of a few points we have considered before: - Parents must live close to God. - They must set a good example in every word they speak and in every act they perform in their children's sight. - They would do well to remember all the suggestions made since January, which help to keep religion living and vital in the home, suggestions about: a. Prayers. b. Conversation. c. Family parties. d. Family altars. - They would profit by going through all of the preceding pages and writing out a list of things mentioned under the heading, "Things to do." - In particular, parents should remember that it is not enough to say, "Don't do this; don't do that." Say "Read this"; not "Don't read that." Say "Go to this movie," as well as "Don't go to that." Say "Suppose we go to the beach today," instead of "Where shall we go?" Time — Patience — Intelligence are required. If you have a job in a factory, shop or school you may not loaf on the job. At home, in the supreme job of bringing up the children, you may not loaf. Activity Source: Religion in the Home: Monthly Aids for the Parents of Elementary School Children by Katherine Delmonico Byles, Paulist Press, 1938
How Composting Toilets work. - Composting toilets use nature's decomposition process to reduce waste by 90% and convert it into nutrient rich compost. - They do not require water hook ups either which is great for our already stressed water supply. In short, composting toilets are a way to allow waste to decompose safely and without odors. - Composting toilets use oxygen loving bacteria that is naturally present in human waste to do all the work. - Bugs, worms, and other critters have absolutely NO role in BioLet's composting process. - You just use a BioLet like you would a regular toilet, toilet tissue and all. The main difference is you just toss in compost mix after each fecal use instead of flushing. The air flow inside the toilet pulls all odors up the 'chimney' and out of your home. - Composting reduces waste volume by 90%; the majority of the material inside the toilet is mulch and not waste. You do not even have to see it with the way BioLet is designed. Yes, you do have to empty the lower compost tray periodically, depending on how many people are using the toilet, but it is only compost, soil. There is no waste mixed in the tray. How Electric Biolets work When you raise the upper seat on the toilet you will see a "clam shell" cover called the compost cover. This hides any waste from view and automatically opens when the seat is depressed allowing the waste to be safely deposited into the compost chamber below. Inside the compost chamber is BioLet Starter Mix with a base of peat moss and pine wood shaving. After use, a small amount of BioLet Starter Mix is added to the toilet and the upper seat is lowered. A mixer automatically starts to turn mixing the waste with the BioLet Starter Mix and aerating the material already inside the chamber. This process creates a mixture with all the components required to induce aerobic decomposition (carbon, nitrogen, oxygen and moisture). An electric fan located in the back of the toilet continuously draws air into the toilet, through the seat, preventing any odors from escaping. This air is circulated throughout the composting chamber by BioLet's patented air recirculation process, allowing for maximum absorption of oxygen by the material inside the toilet. The more oxygen that is supplied to the aerobic bacteria, the more thoroughly and faster the waste will be transformed into humus. As the air is circulated through the unit, it is heated by a thermostatically controlled heater. The heated air evaporates the liquids in the toilet keeping the material inside at the proper moisture / carbon / nitrogen ratio for optimum decomposition. The moisture latent air is then vented to the outside through a vent pipe running from the toilet to the outside of the structure where the toilet is housed. As usage continues the "material" will continue to build up inside the composting chamber. When it reaches the upper leveling arm, it's time to empty the tray in the bottom of the unit (once every 2 months to a year, on average, depending on usage). Simply remove the thumbscrews from the door on the front of the unit, slide the tray out, and empty the dry, nutrient rich, ODORLESS humus. The humus can be leached into the soil around decorative plants or disposed of in an ordinary trash receptacle. How Biolet NE works If reliable power supply is not an option, the BioLet 30 NE is the way to go. Using a process called batch composting, the BioLet 30 NE can handle a high volume of usage. 
Batch composting is a process where you make a batch of material and then remove it from service so the material can compost without anything new being added to it. The BioLet 30 NE has a compost cover and ventilation pipes as described in the models above, but it takes a very different approach. A large compost bin is housed inside the lower portion of the toilet. The toilet is used until the bin is approximately 3/4 full. The top of the toilet is then removed and the bin is taken out of the unit and moved outside for further composting. A secondary bin is then placed into the unit for continued operation. Air circulation through the unit occurs due to the natural convection of air through the ventilation pipe. This is induced by the heat produced by the composting process, temperature differentials between the inside and outside of the structure, and the natural chimney effect created by the vent pipe. Installing an optional 12VDC fan into the vent pipe can induce additional circulation of air and increase the capacity of the unit.
In 1888, Sir Henry Parkes opened Centennial Park, bringing to fruition his vision of a green space for the people of Sydney and establishing an enduring asset for the community. Over the subsequent 130 years, Centennial Parklands has grown to encompass almost 360 hectares of land in the heart of a growing cosmopolitan city, providing the community with room to move, fresh air and a space to play. The Parklands has a planted population of approximately 15,000 trees, comprising 234 species, including natives and exotics.
The challenge of managing trees in the Parklands
The effects of drought, old age and urban impacts have taken their toll on many of these trees. Reports by independent arborists and the Centennial Park and Moore Park Trust (Trust) estimate that around 60% of these trees will need to be replaced over the next 40 years due to their terminal decline. To combat the inevitable tree loss, we are aiming to plant 3,000 trees over the next 10 years. In 2017, 141 trees were planted throughout the Parklands, and similar numbers are targeted each year under our Annual Tree Planting program. Trees identified for replacement will have reached the end of their Useful Life Expectancy (ULE), meaning there is nothing the Parklands' arborists can do to restore them to full health or retain them safely. The Parklands takes a planned approach to managing its trees. It has developed a strong Tree Asset Management System complemented by a comprehensive Tree Replacement Program, based on its Tree Master Plan (Plan), among many other considerations, to maintain and add to our diverse tree collection. The Plan sets out strategies for conserving the existing tree population and provides a framework for sensitively integrating new plantings into the Parklands' historic fabric. The Plan has identified that up to 200 new trees need to be planted each year over the next 10-20 years to maintain the tree populations in the Parklands. In the next 40 years, a large percentage of our tree population, especially those trees that are already mature or over-mature, will need to be replaced due to their terminal decline. Gradual and priority-based removal of the affected trees occurs year-round and is constantly reassessed as our dynamic site changes – with those identified as being in the worst condition (and/or hazardous to park visitors) being removed first. Refer to our blog, ‘The circle of (a tree) life’, to find out more about trees and the way we manage them at Centennial Parklands.
At the Keystone Center, societal issues are not viewed as problems, but challenges to overcome. A nonprofit organization headquartered in Keystone, the center focuses on bringing people together to discuss important national and local issues. The Keystone Dialogues identify timely policy issues and then bring in a panel of stakeholders and experts for discussion. The goal of the dialogues is to build a consensus among the participants in a non-confrontational and informative manner. The issues to be discussed come from the national government, the state government or groups of concerned or interested parties. Occasionally, the center itself identifies an important topic, and works to contact those involved. Though many of the dialogues revolve around highly technical discussions and are closed to the public, the center also puts on public engagement nights, in which anyone is invited to attend and listen to the discussion at hand. "We're proud of the results-oriented, problem-solving approaches we can bring to big societal issues," said Robyn Brewer, director of marketing communications at the Keystone Center. "Our goal is to shift the focus from oppositional approaches to shared decision-making processes." In addition to the policy work, the center offers many other services, particularly those dedicated to facilitating discussion. Its facilitation and mediation service was utilized in the early stages of Breckenridge's Peak 6 discussions, for example. It also assists companies in forming advisory boards, offers leadership training and development courses and provides joint fact-finding services to stakeholders. Education also plays an important role at the Keystone Center, which founded the Keystone Science School. The school offers camp programs during the summer, varying from daylong to weeklong schedules for both young children and teenagers. "We believe every child deserves camp," the Keystone Science School website states. "We bring in the added element of outdoor education, combining science, adventure and fun to create life-changing experiences for every child we serve." Throughout the year, school programs give students the opportunity to learn more about the environment, and methods of scientific inquiry. The goal is to teach students to develop critical thinking skills in a natural outdoor setting. Various community programs offer similar experiences to adults, both guests and residents of Summit County. Learning also never stops for teachers, and they can do so through the center's educator programs. There, teachers learn how to create unique and interesting science curricula for their students. "We don't just regurgitate textbooks," Brewer said. "We try to focus on issues that are current or relevant." The educator programs revolve around science, engineering, technology and math skills, with emphasis on such skills as critical thinking, teamwork and problem- solving. The trainings are free of cost to the teachers.
Consider a two-allele system (A1 and A2) and let p and q represent the frequencies of A1 and A2, respectively. Let wij represent the fitness of genotype AiAj (assume wij = wji for i ≠ j). Therefore, for this system w11 is the fitness of genotype A1A1, w12 is the fitness of genotype A1A2, and w22 is the fitness of genotype A2A2. We can express the new frequency of A1 after one generation of selection using the rational function p' = (p² w11 + p q w12) / w̄, where w̄ = p² w11 + 2 p q w12 + q² w22 is the mean fitness of the population. We are interested in finding equilibrium values of p, in other words values of p such that p' = p, indicating no change in allele frequency in the next generation. Setting p' = p gives p² w11 + p q w12 = p w̄. Assuming p ≠ 0 we can cancel p on both sides of the above equation to get p w11 + q w12 = w̄. Bringing all terms to the right-hand side of the equation and substituting q = 1 − p gives the polynomial (1 − p) [ p (w11 − 2 w12 + w22) + (w12 − w22) ] = 0. Therefore, we have deduced that solutions to the above equation are equilibrium values of p (we are only concerned with biologically reasonable equilibria): p = 1, and the internal equilibrium p = (w12 − w22) / (2 w12 − w11 − w22) whenever that value lies between 0 and 1. Using fitness values expressed in terms of a constant c > 0 (the specific values are not reproduced here), answer the following questions.
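As a quick numerical sanity check on the algebra above (this is my own illustration, not part of the original exercise, and the fitness values are made up because the originals were not preserved here), the recursion and the analytic equilibrium can be compared with a short Python script:

def next_p(p, w11, w12, w22):
    # one generation of viability selection at a two-allele locus
    q = 1.0 - p
    w_bar = p * p * w11 + 2 * p * q * w12 + q * q * w22   # mean fitness
    return (p * p * w11 + p * q * w12) / w_bar             # new frequency of A1

def internal_equilibrium(w11, w12, w22):
    # analytic equilibrium p* = (w12 - w22) / (2*w12 - w11 - w22)
    return (w12 - w22) / (2 * w12 - w11 - w22)

# assumed (hypothetical) overdominant fitnesses, chosen only for illustration
w11, w12, w22 = 0.8, 1.0, 0.6
p = 0.05
for _ in range(200):            # iterate selection until p settles down
    p = next_p(p, w11, w12, w22)
print(round(p, 4), round(internal_equilibrium(w11, w12, w22), 4))   # both about 0.6667

With overdominance (w12 greater than both homozygote fitnesses) the interior equilibrium is stable, so the iteration converges to it from any starting frequency strictly between 0 and 1.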
A concept conceived by Don Collins Ever since its inception, the field of paranormal investigation has sought the respect and acceptance of the scientific community. In the beginning, methodology was sorely lacking in this field. Investigating ghosts or other paranormal activity consisted mainly of snapping photographs or consulting mediums. The belief in the other side led to a great interest in the Spiritualist movement. This movement served mainly to line the pockets of conmen claiming to have a special connection to the other side but did nothing to further the field of paranormal research. With the emergence of Harry Houdini onto the scene as a debunker of spirit phenomena things finally began to take a turn for the better. With the decline of Spiritualism, serious investigation of paranormal activity would take center stage.Perhaps the one man who is most responsible for many of the modern methods of ghost hunting in use today is Harry Price. Although he did not possess an advanced education, Mr. Price brought scientific methodology to the paranormal arena in a most successful fashion. He came up with the first so-called “ghost hunter’s kit” which included such things as still cameras, motion picture cameras, fingerprinting kits, and portable communication devices for use among investigators on a case. Harry Price’s methodology and instruments are still in use in some form or another to this day. In the 1950s, Duke University began its successful parapsychology program that also used scientific methodology to examine claims of the paranormal. Up to the present day, for the most part, paranormal research continues to use science to collect data in the quest to explain the paranormal. However, one thing is still lacking, a means of classifying or rating events as it specifically relates to so-called hauntings. This is where the “Collins Paranormal Index” comes in. Most reports on any investigation of an alleged haunted location will contain a lot of information. Such information includes weather data, environmental data, historical data, and empirical data. The purpose of such data is to aid in spotting patterns relating to locations experiencing activity. For example, if a researcher finds that a certain location always has an upsurge in activity during a full moon then that could be a significant pattern. If two or more different locations report similar occurrences and environmental factors then this may also be a significant pattern. The problem then becomes sorting through reports and finding similarities. For example, assume that a researcher is researching EVP phenomena. Up until this point, the researcher has had to comb through countless investigation reports searching for locations that meet his criteria. The Collins Paranormal Index or “CPI” greatly relieves the burden of the researcher in his task. By assigning an Index to each investigation summary the burden of sifting through and classifying reports is greatly eased for the researcher. The EVP researcher in this example merely needs to glance at the reports and select those which rate either a CPI 2 or CPI 3. One could take this a step further and mark all locations on a map according to their Index. If one finds that certain areas contain an abundance of a certain Index level, that may be a significant pattern. The Collins Paranormal Index is a scale composed of eight levels. An explanation of the workings of the index follows. CPI 0- no data or activity of any sort is experienced at a location. 
CPI 1 Environmental Anomalies – any type of minor fluctuation or manipulation of the immediate area. For example: flickering lights, EMF surges, power fluctuations, cold/hot spots, etc.
CPI 2 General EVPs – the location produces EVPs of a general or random nature which are NOT in response to any questions asked.
CPI 3 Direct EVPs – the location produces EVPs which DO respond specifically to questions asked.
CPI 4 Primary Grouping – any combination of two or three of CPI 1, CPI 2, or CPI 3. A location that presents the required combination automatically becomes at least a CPI 4.
CPI 5 Intelligent Environmental Manipulation – the location presents evidence of an intelligent presence upon request. For example: a knock or rapping is produced at the request of the investigator, or a sensation of touch at one’s request. Any activity must be upon request to fit this index level. Activity NOT occurring at one’s request would most likely be debunked or relegated to CPI 1. However, activity NOT occurring at one’s request MAY be placed at this level if it is of a significantly “violent” or noticeable nature, or is such that natural explanations are extremely unlikely. Such activity must be fully documented.
CPI 6 Visual Evidence – any type of visual evidence such as video, still photos, or personal sightings witnessed by more than one person.
CPI 7 Physical Manipulation – the obvious movement of objects in the environment by intelligent design. For example: objects are thrown, items levitate, electrical appliances operate of their own accord. A request for this activity on the part of the researcher is NOT required. Strange odors and aromas could also be placed in this category. If the researcher is unsure as to whether movement is due to intelligent design, the activity may be relegated to CPI 6.
CPI 8 Secondary Grouping – any combination of one or more Primary Group levels (CPI 1, CPI 2, or CPI 3) AND any one or more of CPI 5, CPI 6, or CPI 7, OR any two or more of CPI 5, CPI 6, or CPI 7.
Explaining the scale
The scale above excludes events that have been explained by natural causes. The index DOES include events which remain unexplained. All events that cannot be explained by natural causes are then placed into the appropriate category. The scale is set up so that as one progresses from CPI 1 up to CPI 8, phenomena should become less typical or less apt to be natural in cause. Most investigators may have experienced some sort of environmental anomaly (CPI 1). Fewer researchers will have discovered EVPs in direct response to questions posed at an investigation (CPI 3). Fewer investigators yet will have evidence on film or video which cannot be explained (CPI 6). Even fewer investigations will produce Direct EVPs, visual evidence AND physical manipulation (CPI 8).
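To make the grouping rules above concrete, here is a small illustrative sketch of how an investigation summary might be scored automatically. This is my own rough interpretation of the index, not part of Don Collins' original write-up, and the flag sets are invented shorthand:

PRIMARY = {1, 2, 3}      # CPI 1-3: environmental anomalies, general EVPs, direct EVPs
SECONDARY = {5, 6, 7}    # CPI 5-7: manipulation on request, visual evidence, physical manipulation

def cpi_level(evidence):
    # 'evidence' is the set of individual CPI levels documented at a location
    primary = evidence & PRIMARY
    secondary = evidence & SECONDARY
    if (primary and secondary) or len(secondary) >= 2:
        return 8                      # CPI 8: secondary grouping
    if secondary:
        return max(secondary)         # a single CPI 5, 6 or 7 stands on its own
    if len(primary) >= 2:
        return 4                      # CPI 4: two or three of the primary levels
    if primary:
        return max(primary)
    return 0                          # CPI 0: no unexplained activity

print(cpi_level({3, 6}))   # direct EVPs plus unexplained visual evidence -> 8
print(cpi_level({1, 2}))   # anomalies plus general EVPs -> 4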
Event-oriented programming is often said to produce spaghetti code that is difficult to debug. I believe these are skill-related issues, and the event-oriented programming model needs to be mastered like every other programming model. For example, signal processing diagrams can help to document programs following the event-oriented model. Special debugging tools can assign every callback a complete stack trace so that one can trace back the source of a given problem if needed. Non-blocking or asynchronous I/O allows the computation to continue while the input/output operation is running. So it is, for example, possible to have multiple database queries running at a time. Input and output operations on the network or on files can be extremely slow compared to the processing of data in memory. Most I/O operations involve disk access in some form, and a spinning disk is orders of magnitude slower than read/write operations on an electronic circuit. There are different forms of non-blocking I/O, such as polling, signals, and callbacks. Greenlets (or eventlets) are an abstraction on top of an event loop. When using greenlets, one is programming in a blocking way, and the framework is using the event loop and a scheduler under the hood. The programming model is more like threads, but without the resource consumption. A callback is a function that is passed as an argument to an I/O operation. Once the I/O operation is completed, the part of the program which deals with the results of this operation is invoked. It is possible to use anonymous callbacks, but this is considered bad practice since it makes programs difficult to read. Therefore named callbacks are to be preferred over anonymous callbacks. The event loop, also called a message dispatcher, is a programming construct that waits for and dispatches events or messages in a program. It works by polling some internal or external “event provider”, which generally blocks until an event has arrived, and then calls the relevant event handler (“dispatches the event”). Ted Faison’s book “Event-Based Programming: Taking Events to the Limit” proved very helpful to me in understanding the ins and outs of event-oriented programming. The book follows a structured approach and is very well written. According to Faison, the event programming model is by far the superior programming model, especially for large software systems. Dependencies between software parts make large software systems complex and difficult to maintain. Dependencies have the biggest impact on software quality since the most complex parts also have the most dependencies. The event-oriented programming model helps you gain better modularization, so development, testing, and maintenance of parts of the system are more comfortable. Faison makes the point that by following the event-oriented programming model, complexity grows linearly with the size of the system, while you will experience exponential growth when following other models. If this proves to be true, it will be the biggest discovery in software engineering of the last decade.
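As a minimal, self-contained sketch of the callback-plus-event-loop pattern described above (purely illustrative; it is not taken from Faison's book or from any particular framework, and real event loops block on an OS-level event provider such as epoll rather than polling an in-memory queue):

import queue

events = queue.Queue()   # a toy "event provider"
handlers = {}            # event name -> named callback

def on(event_name, callback):
    # register a named callback; named functions are easier to trace than lambdas
    handlers[event_name] = callback

def emit(event_name, payload=None):
    events.put((event_name, payload))

def on_query_done(payload):
    print("query finished:", payload)

def run_loop():
    # the event loop: poll the provider and dispatch each event to its handler
    while not events.empty():
        name, payload = events.get()
        handler = handlers.get(name)
        if handler is not None:
            handler(payload)

on("query_done", on_query_done)
emit("query_done", {"rows": 42})   # in practice the I/O layer would fire this
run_loop()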
Vitamin D could help the body fight infections of deadly tuberculosis (TB), according to doctors in London. A study in Proceedings of the National Academy of Sciences showed patients recovered more quickly when given both the vitamin and antibiotics. The idea of using vitamin D to treat TB harks back to some of the earliest treatments for the lung infection. Before antibiotics were discovered, TB patients were prescribed “forced sunbathing,” known as heliotherapy, which increased vitamin D production. However, the treatment disappeared when antibiotics proved successful at treating the disease. This study on 95 patients, conducted at hospitals across London, combined antibiotics with vitamin D pills. It showed that recovery was almost two weeks faster when vitamin D was added. Patients who stuck to the regimen cleared the infection in 23 days on average, while it took patients 36 days if they were given antibiotics and a placebo.
8 March 2012 Yesterday I posted a photo of the Tumbi Quarry site before the landslide. Today I thought I’d look at this image in a little more detail. The photograph was collected on 27th January 2010, i.e. almost two years before the landslide: For reference, lets compare this with the post-landslide image: So lets start with the quarry, and home in on the section of the new photo that shows the landslide source area (see image below). It is clear that the entirety of the workings in the pre-landslide photograph was destroyed in the landslide. The section of the quarry that survived (see the second photo) was upslope and across from the original workings – I have marked this as point a on the image below: It is quite helpful to take a look at the Exxon-Mobil plans for the development of the project, as highlighted in an earlier post: The dark grey area is the original quarry (as per the pre-landslide photo), the yellow is the new haul road up to the higher quarry section, and the light grey, light blue and green hatched areas are the new quarry, or lands to be cleared for quarrying. The plans appear to be more-or-less consistent with the configurations in the two photos. Note that the landslide appears to have destroyed most of the haul road, and it is somewhat unclear as to how much the new quarrying had expanded into the light grey area. A key question remains as to where the haul road was located, and where the quarry spoil was being dumped. Going back to the new photo, in the older (weathered) section there is clearly a stream issuing from the quarry face (marked as b on the image) – this water course is also shown on the map above. Let’s take a look at the land-use. The photo shows that most of the area that slipped was forested with mature trees, which agrees with the map above. There are some cultivated areas near to the main road, but these are for the most part not in the landslide area. The source area is densely forested, which suggests that it is unlikely that deforestation was the cause. I have previously noted that in such a deep-seated landslide, land-use change is unlikely to be a primary factor. There are some features in the landscape that are almost certainly small (but certainly not trivial) landslides. Point c on the image above is almost certainly a slip, and point d is probably another. Note though that in both cases these appear to be slips in soil or regolith, not in bedrock as per the main landslide. The most intriguing features remain these linear structures in the slope above the main quarry (point e). Superficially these look like either tension cracks or footpaths – from an image like this it is impossible to discriminate. I am erring slightly in favour of them being footpaths simply because it is hard to imagine a quarry being operated in an area with such tension cracks. So what does this tell us? Well, we can in effect rule out land use change as being a major factor in this landslide unless there was catastrophic felling between the image being collected and the landslide (and even then I do not believe that it would be a major factor in such a deep landslide). The presence of the stream suggests that the limestone was well-drained, but of course blockage of the source might have serious implications for the slope. The landslide has removed most of the quarry plus the associated infrastructure. 
It is impossible to say that the landslide was caused by the quarry, but it is also clear that there is nothing in the image that would definitely indicate that the landslide was not associated with it.
The need for a proper independent inquiry
Of course all of this indicates that there is a need for a proper, independent assessment of this landslide. I know that there are now moves by some to either try to get a court order to undertake such an investigation, or to commission such a process independently. Clearly either route would be expensive, so those involved are trying to raise the funds to support these efforts. We must remember that at least 25 people died in this event, and maybe many more. Personally, I would have thought that it is in the interests of all parties, including the quarry operators, to understand what has happened here. It could well be that those responsible for the quarry would be completely exonerated by such an investigation.
What happened to my Mandevilla? Thanks to Marie for this great question! Her Mandevilla is in a large pot, getting sun until about 2 in the afternoon. The leaves are yellowing and have some splotches, and some leaves are falling off. Well, Marie, there are definitely a few issues here that we can help with. First, the amount of sunlight. Mandevilla are tropical species, and although they do need more sunlight than many tropicals, they still can’t take the searing intensity of our sun here in Central Texas, and more importantly, they struggle in the moisture-sucking heat that comes with it. We have our Mandevilla in very bright shade in our Extension demonstration garden, and they perform very well. Leaves grown in shade tend to be darker green, so give your plant a good shearing, to remove the yellowing leaves and struggling growth, after you’ve moved it to a shadier spot. Then watch for the new growth, which should be slightly darker. Some of the leaf damage here is sunburn, but most appears to be photooxidation. When sunlight is very intense, it can burn sensitive leaves, causing brown spots, or sunburn. But before the leaf completely burns, you may notice yellowing leaves, which is a sign that the heat of the sun has denatured the chlorophyll. And since chlorophyll is a green pigment, less chlorophyll means less green. The smaller brown splotches here are likely secondary issues, possibly fungal, which move in once the plant is stressed and vulnerable. So Marie, move those containers to where they won’t get direct sun any later than mid-morning, and shear the plant to about 6 inches to force it to produce new, healthier growth. You’ll see improvement in no time.
A Mediterranean diet, high in fruits, vegetables, and legumes and low in processed foods, red meat, and sugar, was found to significantly reduce symptoms of depression in young men. Overall, the diet shift led to a reduction of 20.6 points on the depression scale. Depression is a common mental health disorder that affects about 350 million people worldwide. In Australia, where the study was carried out, about one million adults experience depression in any given year. Depression can present differently in each individual and can trigger a number of different symptoms; in general, however, it includes feelings of unhappiness and loneliness, hopelessness, and low self-esteem. Depression can also have physical symptoms and can alter cognitive function. Standard treatment of major depressive disorder includes psychotherapies such as cognitive-behavioral therapy and antidepressant medications. However, roughly 30% of patients fail to respond adequately to such medications, and the effectiveness of antidepressants in general is hotly debated. Recently, researchers have started looking at the effect of lifestyle changes (especially dietary patterns) to see what effect they can have on patients’ mental health. The diet with the most evidence of having a positive effect on depressive symptoms is the Mediterranean diet. While observational evidence shows that following a Mediterranean diet can reduce the risk of developing depression, only a few experimental trials have been done and they have all focused on older adults. With this in mind, researchers at the University of Technology Sydney in Australia wanted to determine whether nutritional counseling could improve the diet quality, depressive symptoms, and overall quality of life of young adults with depression. This turned out to be the case. “The primary focus was increasing diet quality with fresh whole foods while reducing the intake of ‘fast’ foods,” lead researcher Jessica Bayes said in a statement. “Medical doctors and psychologists should consider referring depressed young men to a nutritionist or dietitian as an important component of treating clinical depression.”
Diets and depression
Study participants were recruited from Australia over an 18-month period. They were randomized to receive either dietary support or befriending. Participants in both groups did assessments at the start of the study, in the middle (week six), and at program completion, which the researchers used to reach overall conclusions. The group shifting to the Mediterranean diet experienced a mean reduction of 20.6 points on the depression scale at the end of the study. The researchers also found that 36% of the participants shifting diets reported low to minimal depressive symptoms. Improvements to physical quality of life were also reported in the same group. “There are lots of reasons why scientifically we think food affects mood. For example, around 90% of serotonin, a chemical that helps us feel happy, is made in our gut by our gut microbes. There is emerging evidence that these microbes can communicate to the brain via the vagus nerve, in what is called the gut-brain axis,” Bayes said in a statement. While the results are promising, the researchers warned that dietary change usually comes with many challenges, and compliance over the long term poses significant difficulties. For example, previous studies have shown that men rate healthy behaviors as less important than women do, leading to difficulties in engaging them in dietary shifts.
In addition, for people experiencing severe depression symptoms, adhering to a specific diet can be a daunting and very difficult task, and any such interventions will require careful planning. The study was published in the American Journal of Clinical Nutrition. If you are experiencing feelings of depression, please contact your national health service and/or seek a helpline.
Here’s a brief jewelry vocabulary guide, which I would like to eventually expand with the help of our awesome readers!
Bezel – The metal around a stone that keeps it in place, e.g. a sterling silver bezel.
Bezel set – A stone held in place by a surrounding metal rim rather than by prongs.
Cut – The type of shape the gem is ‘cut’ into. The cut is either faceted or non-faceted, e.g. cabochon cut. Here’s a brief guide to cuts, more to come later. Cut is graded into excellent, very good, good, fair, and poor.
- Brilliant Cut – A faceted cut that ensures that when light reflects, it gives a unique burst of brightness, almost like radiating fire.
- Cabochon – A stone that is flat at the bottom but rounded on top; smooth without facets, like a pebble.
- Fancy Cut – Several possible shapes, such as kite-shaped, lozenge-shaped, or triangular.
- Mixed Cut – Usually rounded in outline, cut as brilliants with step-cut pavilions. Rubies and sapphires are the easiest to shape into a mixed cut.
- Step Cut – Step cuts come in a variety of shapes: oval, square, octagons, baguettes, and general table cuts. The step cut is also known as the ‘emerald cut’. This cut intensifies the hue of a color.
Carat – A measurement of gem weight.
Clarity – A gemstone grading; a lower clarity grade signifies a stone full of inclusions and less pleasing to look at.
| Grade | Description | Clarity Scale |
| FL | Flawless | 0 |
| IF | Internally Flawless | 0 |
| VVS1, VVS2 | Very Very Slightly Included | 1, 2 |
| VS1, VS2 | Very Slightly Included | 3, 4 |
| SI1, SI2 | Slightly Included | 5, 6 |
| I1, I2, I3 | Included | 7, 8, 9, 10 |
Cavities – Inclusions formed during the initial stages of gem growth, filled with liquids, gases, or solids.
Faceted – Having jeweler-cut sides; the polished planes of a gemstone.
Gauntlet – A bracelet that is oval and firmly set, with an opening in the back.
Gem – A polished, cut precious stone used in jewelry.
Gem shape – Somewhat like a cut, but referring to the shape of the stone, e.g. pear, trillion, and cushion shapes.
Gemstone – A semiprecious or precious stone polished and cut for use as a gem.
Inclusions – Internal flaws or blemishes; often associated with clarity. Inclusions are also used to identify types of stones. Inclusions are divided into three categories: cavities, solids, and growth phenomena.
Growth Phenomena – Hollow cavities filled by iron components; examples: solid crystals, naturally occurring glass.
Jewels – Polished and cut precious stones; gems.
Karats – A measurement of gold purity.
Marcasites – Crystalline pyrites cut/shaped to look like diamonds; a popular kind of jewelry from the 1700s to the 1800s, until in the 1900s marcasites were cut from glass and metal.
Metal – Sterling silver, silver toned, silver plating, yellow gold, white gold.
Ring Size – The ring’s gauge in circumference; sizing varies by country.
Treatment – Done to change the shade, hue, or variance of a stone. Different treatments include oiling, heating, irradiation, dyeing, bleaching, coating, impregnation, filling, lasering, etc.
Wooden Legs Videos Focusing on students developing and strengthening CCSS Mathematical Practice: 1. Make sense of problems and persevere in solving them. [Mathematically proficient students start by explaining to themselves the meaning of a problem and looking for entry points to its solution.] Bluford Universal Charter School Mr. Reo’s Fifth Grade Class Charlie’s Gumballs Scenario Video Max Ray acts out what is happening in Charlie’s Gumballs, a problem at the Primary level from the Math Forum’s Problems of the Week (PoWs). We encourage teachers to use the “Notice/Wonder” activity with students as they watch the video and/or listen to or read the Scenario.
This week's blog is about mixed conditionals. We sometimes mix the structure of conditional sentences. Generally, this is done by mixing the second and third conditionals. Firstly, it is important to know the structure of these conditionals. The 2nd Conditional is: "If + Past Simple + Would Do". The 3rd Conditional is: "If + Past Perfect + Would Have Done". When we are thinking about an action in the past and the consequence of that action now, we use the following construction: If + Past Perfect + Would Do (Would Be Doing). This is a mixture of the 3rd Conditional and the 2nd Conditional. Here are some examples: "If I had gone to bed early last night, I would feel better now". This means that I didn't go to bed early last night and therefore I don't feel well now. The action is in the past but it has a consequence now. The result now is how I feel. "If they had eaten breakfast, they wouldn't be hungry now". This sentence means that they did not eat breakfast and therefore the result now is that they are feeling hungry. Again, the action happened in the past and the result is being felt now. "If you hadn't studied for the exam, you would not be able to take it today". In this example, it is clear that the person has studied and therefore they will be able to do the exam now. When we are thinking about a situation in the present and the consequence of that situation in the past, we use this construction: "If + Past Simple + Would Have Done (Would Have Been Doing)". This construction is one that most people find more difficult to understand. Usually, when we use this conditional, the present situation is a general one, meaning one that doesn't really change. For example, "If he wasn't the boss's son, he would have been sacked". Clearly he will always be the boss's son and therefore this present situation is a general one. In this example you can see the construction uses the past simple followed by 'would have' and a past participle. Here is another example: "If you weren't so busy these days, you would have been able to come to dinner last night". This means that the person is generally busy and this present situation has a result in the past; in this case the person was not able to attend dinner the night before because they are generally busy. One more example of this is: "If he was a politician, he would not have given us a straight answer". This means that he is not a politician and therefore did give us a straight answer. The present situation affected what happened in the past. Practise these constructions as they are often used.
Let's think together: Every week the World Bank team in Tanzania wants to stimulate your thinking by sharing data from recent official surveys in Tanzania and asking you a couple of questions. This post is also published in the Tanzanian newspaper The Citizen every Sunday. About 70 per cent of the world's 1.4 billion extreme poor rely on livestock to sustain their livelihood, according to the Food and Agricultural Organization (FAO, 2009). Not only does livestock provide meat and milk for consumption, it also helps increase agricultural productivity through manure, which is an organic fertilizer, and draft power. Because it can be readily marketed to generate income, livestock also reduces the vulnerability of poor households to external shocks. But this crucial resource is also susceptible to many risks including drought, disease, and theft. In Tanzania, as of October 2010, there were more than 17 million head of large livestock (bulls, cows, heifers, steers), more than 21 million head of medium-sized livestock (sheep and goats, or shoats), close to 2 million pigs, and over 50 million head of poultry. About 5 million Tanzanian households, or close to 58 per cent of all households, reported owning at least one kind of livestock, with a larger proportion in rural areas (three out of four households) than in urban areas (one out of four). Approximately 25 per cent of rural households owned large livestock compared to less than 4 per cent of urban households. Unfortunately, many Tanzanian households cannot fully benefit from their livestock because most of them are exposed to disease and theft. With less than 30 per cent of owners reporting having vaccinated their livestock over the previous 12 months, morbidity rates in 2010/11 were as high as 42, 29, 20 and 58 per cent for cattle, goats, pigs and poultry respectively. In 2010/11, the toll on livestock from disease and theft was staggering: - Diseases claimed more than 1.4 million cattle and 3.4 million shoats. - More than 80,000 head of cattle and half a million shoats were stolen. - Poultry were even more exposed to disease and theft, with a loss of more than 30 million birds. The total loss from disease and theft for all livestock was estimated at Sh649 billion, of which Sh572 billion was from disease alone. This amount is equivalent to about 2 per cent of GDP and 8.3 per cent of agricultural GDP. These two plagues hit poor rural households harder, with more than 55 per cent of them having experienced a loss, compared to 37 per cent of households in the richest quintile. Livestock theft and disease cost more than 6 per cent of average total household consumption. In addition, 8.5 per cent of the poor in 2010/11 were pushed into poverty by the loss of livestock, adding more than 800,000 people to these ranks. This raises a number of questions: - What prevents farmers from investing more in the protection of their livestock, especially through vaccination programs? Should government invest more in veterinary services to tackle livestock disease? - Should investment in new technologies, such as GPS, be considered to help track stolen livestock, as experimented with in Kenya? Or does this just require stronger law enforcement? - What role can insurance products play in strengthening the livestock sub-sector? - Should government subsidize livestock insurance or vaccination programs? Note: These statistics are extracted from the 2009 FAO State of Food and Agriculture report and the Tanzania National Panel Survey 2010/11.
Both are publicly available and can be readily replicated.
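As a quick back-of-envelope companion to the figures quoted above, the short sketch below recomputes the shares and the totals they imply (Sh649 billion total loss, about 2 per cent of GDP, 8.3 per cent of agricultural GDP). The implied GDP figures are derived from those ratios purely for illustration; they are not reported in the post itself.

```python
# Back-of-envelope check of the loss figures quoted above (Tanzanian shillings).
total_loss = 649e9          # total loss from disease and theft
disease_loss = 572e9        # loss from disease alone
share_of_gdp = 0.02         # "about 2 per cent of GDP"
share_of_ag_gdp = 0.083     # "8.3 per cent of agricultural GDP"

theft_share = (total_loss - disease_loss) / total_loss
implied_gdp = total_loss / share_of_gdp          # derived, not from the post
implied_ag_gdp = total_loss / share_of_ag_gdp    # derived, not from the post

print(f"Theft share of total loss:  {theft_share:.1%}")
print(f"Implied GDP:                Sh{implied_gdp / 1e12:.1f} trillion")
print(f"Implied agricultural GDP:   Sh{implied_ag_gdp / 1e12:.1f} trillion")
```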
From its inception in the 1970s, IVR technology has supported companies across the world in enhancing their communication with customers. Interactive voice response (IVR) is a system in which computers interact with humans, operated through a combination of software, telephony equipment and databases. The technology is used in multiple sectors such as banking, hospitals, retail, hospitality and restaurants.
Technology Used in IVR
Speech recognition and DTMF are the two basic technologies used to receive input into an IVR system. In the DTMF method, the button pressed by the user on the phone acts as a signal for the computer. These signals are interpreted by the computer using a telephony card. A suitable response is then generated by the computer, which is heard by the caller.
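As a rough illustration of the DTMF flow described above, here is a minimal, hypothetical sketch, not tied to any real telephony card or IVR product: it maps the digit a caller presses to a menu action and returns the prompt that would be played back. The menu options and handler names are invented for illustration only.

```python
# Minimal, hypothetical DTMF menu handler (illustrative only).
# In a real IVR, the digit would arrive from a telephony card/driver;
# here it is simulated with a plain function argument.

MENU = {
    "1": "balance",   # e.g. a bank balance enquiry
    "2": "agent",     # transfer to a human agent
    "9": "repeat",    # replay the menu
}

PROMPTS = {
    "balance": "Please enter your account number followed by the pound key.",
    "agent": "Transferring you to the next available agent.",
    "repeat": "Main menu: press 1 for balances, 2 for an agent, 9 to repeat.",
    "invalid": "Sorry, that is not a valid option.",
}

def handle_dtmf(digit: str) -> str:
    """Return the prompt to play for a single DTMF key press."""
    action = MENU.get(digit, "invalid")
    return PROMPTS[action]

if __name__ == "__main__":
    for key in ["1", "5", "2"]:
        print(f"Caller pressed {key!r} -> {handle_dtmf(key)}")
```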
Environmental, biological and anthropogenic effects on grizzly bear body size: temporal and spatial considerations
© Nielsen et al.; licensee BioMed Central Ltd. 2013. Received: 3 April 2013; Accepted: 6 September 2013; Published: 8 September 2013
Individual body growth is controlled in large part by the spatial and temporal heterogeneity of, and competition for, resources. Grizzly bears (Ursus arctos L.) are an excellent species for studying the effects of resource heterogeneity and maternal effects (i.e. silver spoon) on life history traits such as body size because their habitats are highly variable in space and time. Here, we evaluated influences on body size of grizzly bears in Alberta, Canada by testing six factors that accounted for spatial and temporal heterogeneity in environments during maternal, natal and 'capture' (recent) periods. After accounting for intrinsic biological factors (age, sex), we examined how body size, measured in mass, length and body condition, was influenced by: (a) population density; (b) regional habitat productivity; (c) inter-annual variability in productivity (including silver spoon effects); (d) local habitat quality; (e) human footprint (disturbances); and (f) landscape change. We found sex and age explained the most variance in body mass, condition and length (R2 from 0.48–0.64). Inter-annual variability in climate in the year before, and the year of, birth (silver spoon effects) had detectable effects on the three body-size metrics (R2 from 0.04–0.07); both maternal (year before birth) and natal (year of birth) effects of precipitation and temperature were related to body size. Local heterogeneity in habitat quality also explained variance in body mass and condition (R2 from 0.01–0.08), while annual rate of landscape change explained additional variance in body length (R2 of 0.03). Human footprint and population density had no observed effect on body size. These results illustrated that body size patterns of grizzly bears, while largely affected by basic biological characteristics (age and sex), were also influenced by regional environmental gradients and by conditions in the year before, and the year of, the individual's birth, thus illustrating silver spoon effects. The magnitude of the silver spoon effects was on par with the influence of contemporary regional habitat productivity, which showed that both temporal and spatial influences explain in part body size patterns in grizzly bears. Because smaller bears were found in colder and less-productive environments, we hypothesize that warming global temperatures may positively affect body mass of interior bears.
Keywords: Bear; Silver spoon; Environmental effects; GPS radiocollar; Temporal and spatial heterogeneity
Understanding how spatial and temporal heterogeneity of environments affect life-history traits and the growth of individuals has been a central theme in ecology and population biology [1–3]. Among other measures of phenotype, body size for many species is highly variable across different spatial and temporal scales, which illustrates the importance of environmental heterogeneity for the growth of individuals and populations. Understanding how these spatial and temporal dynamics affect phenotypes is critical to helping identify and prioritize management actions for many species of special concern, especially in today's rapidly changing world. There is little argument that spatial heterogeneity of environments shapes populations by affecting population density, fitness, dispersal and behaviour [4–6].
Indeed, such relationships are a cornerstone of landscape ecology [7, 8] and habitat selection theory [9, 10], and form the basis for natural resource management. Inter-annual variability in environments creates pulsed-resource dynamics that affect many animal populations [11–13] by affecting primary productivity [14–16] and the frequency and intensity of landscape disturbances [17, 18]. For example, climatic oscillations that impact plant productivity will in turn affect primary consumer populations [1, 19, 20] and thus other trophic levels dependent on primary consumers [21, 22]. For consumers that specialize on fruit (frugivores), whose food resources often exhibit supra-annual variation in productivity [23, 24], climate conditions can have an important effect on population dynamics and animal health. For example, masting events or mast failures are often signalled by climatic conditions [25–28]. On Barro Colorado Island in Panama, warm ENSO events stimulate fruit masting in tropical trees, resulting in population increases of frugivore species [14, 29]. Likewise, acorn production for many species of oaks in the USA and cones for spruce in Canada are known to mast synchronously across broad spatial scales [30–32], having profound effects on consumer populations [21, 33, 34]. Increasingly, it appears that such inter-annual variations have long-term effects on individuals, particularly for those experiencing boom or bust conditions during early life. In fact, conditions during in utero or natal periods can be as, or more, important than recent conditions for animal health and fitness [35–37]. This phenomenon is referred to as the "silver-spoon" effect as it emphasizes the importance of being born into "rich" environments. Since resource conditions vary among years for nearly all ecosystems, populations often exhibit cohort effects that structure population dynamics [1, 39]. For instance, cone production in white spruce during natal periods and temperature during in utero conditions had long-lasting effects on red squirrel reproductive success in the Yukon of Canada. Likewise, population growth of stoats in New Zealand beech forests is dependent on masting. One species that inhabits highly variable environments, with limited resources relative to its dietary needs and large body size, is the grizzly (brown) bear (Ursus arctos L.). All the calories necessary to survive and reproduce are acquired in the approximately seven months that they are active prior to about five months of fasting in a den. The importance of limiting resources and phenotypic plasticity is further emphasized by nearly a 10-fold difference in adult body mass across the species' range. Most often, grizzly bears rely on the seasonal or inter-annual pulsing of high-calorie resources, such as salmon in coastal ecosystems [42–44] or hard and soft mast in interior populations [45–47]. Not surprisingly, body size in bears varies accordingly [48, 49], having ramifications for both survival [43, 50, 51] and reproduction [48, 52, 53]. Given these resource demands and the existence of environmental uncertainty, grizzly bears have evolved a reproductive mechanism to compensate for these factors – the delayed facultative implantation of the fertilized egg dependent on autumn body condition [54–56]. Understanding body size-environment relations is therefore critical to understanding population processes in grizzly bears, particularly reproductive success and population growth.
Table 1. Environmental variables used to measure hypothesized environmental drivers of body size patterns in grizzly bears within Alberta, Canada (hypothesized environmental driver and measurement variables):
A. Regional habitat productivity – Temperature (Winter, Spring, Summer); Precipitation (Winter, Spring, Summer)
B. Inter-annual environments (deviations) – Temperature (Winter, Spring, Summer) for Bt-1, Bt0, Bt+1 and Ct-1, Ct0; Precipitation (Winter, Spring, Summer) for Bt-1, Bt0, Bt+1 and Ct-1, Ct0
C. Local habitat quality – Shrub habitat (quadratic); Canopy cover (quadratic); Variation in canopy cover; Deciduous canopy cover (quadratic); Forest age (quadratic); Forest age variation; Regenerating forest habitat (quadratic); Variation in regen. forest age; Soil wetness (quadratic)
D. Human footprint & activity – Safe harbour habitat; Linear feature density; Distance to human feature; Distance to active energy well
E. Landscape change – Annual rate of habitat change
Grizzly bear observations
We used three measures of body size to represent short- to long-term measures of growth: mass, length and body condition. Body condition was estimated using a body condition index where mass is measured relative to length. Although we had multiple capture events for some animals, we only used the most recent capture because it maximized the range of ages considered. All captures and handling were done on public lands with permits, and the capture and handling procedures were approved by the University of Saskatchewan's Committee on Animal Care and Supply (Permit Number: 20010016), following guidelines provided by the American Society of Mammalogists' Animal Care and Use Committee and the Canadian Council on Animal Care for the safe handling of wildlife. Age, sex and reproductive status (with or without offspring) of each animal were recorded. Number of times captured and local population density were also considered as predictors of the body size measures. The local population density was indexed as the number of genetically identified individuals surrounding a radiocollared bear [48, 53]. Each bear was assigned a single geographic centroid based on its GPS telemetry locations and a buffer around this centroid based on the radius of the average daily movement rate of that animal's sex-age class (4340 m to 10380 m radius). The number of detections of unique bears within each circular buffer was then estimated from DNA hair-snag information collected within 7 x 7 km grids in 2004 to 2008. These counts were divided by the proportion of the buffer overlapping the DNA survey grid, and by the probability of capture (derived from data on the closest observed distance of GPS-collared bears to known bait sites – see 67), which varied by the age, sex and reproductive status of the individual being detected, and by the DNA survey stratum. Regional environmental productivity was estimated for each bear at its home range centroid location based on monthly temperature and precipitation normals (or averages) from 1971–2000, estimated with the software ClimateAB. ClimateAB measures of climate normals are downscaled ANUSPLIN-interpolated monthly normal data (2.5 x 2.5 arcmin) using local weather-station data and an elevation lapse-rate adjustment. Monthly climate normals for precipitation and temperature were considered for four seasonal periods (winter, spring, summer and growing season) and for the two individual months of March and July, which represented late-winter conditions affecting snowpack at high altitudes and peak primary productivity, respectively (Table 1).
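The local density index described above (unique bears detected within a movement-based buffer, corrected for grid overlap and class-specific capture probability) can be sketched roughly as below. This is a simplified illustration with invented numbers, not the authors' code; the correction terms follow the wording of the text.

```python
def density_index(n_unique_detections, prop_buffer_on_grid, p_capture):
    """Corrected count of unique bears detected around a collared bear.

    The raw count is divided by the fraction of the buffer overlapping the
    DNA survey grid and by the class-specific probability of capture, as
    described in the text. (Whether the index is further normalised by
    buffer area is not stated here, so this sketch stops at the corrected
    count.)
    """
    return n_unique_detections / (prop_buffer_on_grid * p_capture)

# Example with made-up values: 3 unique genotypes detected, 80% of the
# buffer overlapping the survey grid, 60% capture probability.
print(density_index(3, 0.80, 0.60))   # -> 6.25
```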
We also considered ecosystem type (i.e., alpine, subalpine and foothills) as a surrogate of regional productivity based on habitat use (exposure at three possible zones of influence) measured from GPS radio-telemetry information. Zones of influence considered around each telemetry location included the local habitat-patch (HP) scale at the 30 m raster resolution, a flight-response (FR) scale of a 300 m radius representing exposure to direct human activity, and a landscape-encounter (LE) scale with a radius buffer equal to the average daily movement rate for that sex group. We measured inter-annual variations in environments using ClimateAB by estimating temperature and precipitation by month at each animal's home range centroid from the year prior to birth (Bt-1) to the year of capture (Ct0); due to missing data (i.e. locations prior to GPS collaring) and computational considerations, we assumed that home range centroids have not changed over time or, if they have changed, that local variation in climate is small (see Discussion). The inter-annual variation (anomalies) was estimated as the absolute deviation in temperature and precipitation from the 30-year (1971–2000) climate normals over the range of birth years observed in sampled bears, for the same home range centroids, again using ClimateAB. By using anomalies rather than actual climate observations, we separated effects associated with regional productivity (climate normals) from inter-annual fluctuations (anomalies). Inter-annual variability was measured for: (1) maternal conditions (one year prior to birth; Bt-1); (2) in-utero and natal conditions (birth year and yearling year; Bt0 and Bt+1); and (3) conditions during or prior to capture (Ct-1 and Ct0) (Table 1). Local habitat quality was measured as habitat use (GPS telemetry) at the three scales of exposure (HP, FR and LE) for nine different measures of habitat quality reflecting the association of grizzly bears with disturbed and productive environments [72–74]: canopy cover, variation in canopy cover, deciduous canopy cover, amount of shrub habitat, forest age, forest age variation, amount of regenerating forest, variation in regenerating forest age and terrain soil wetness (Table 1). Non-linear effects were considered for canopy cover, deciduous canopy cover, forest age, amount of regenerating forest used and terrain soil wetness, since intermediate amounts of these habitat conditions are normally preferred [72, 74, 75]. We used regional measures of human footprint and activity including the amount of habitat use associated with private lands (i.e., Alberta's whitezone), protected areas and high- or low-risk habitats based on mortality risk and safe harbour habitat models, density of linear-access features, and distance to the nearest human feature or recent energy well (Table 1). Since we did not expect body size to be affected by human features and recent energy wells beyond local effects (distances), we developed exponential decay functions for each distance variable using parameters of 300 m, 1 km and 3 km. A cost-weighted distance to roads was also considered, where cost was defined by terrain ruggedness (a continuous variable accounting for change in elevation), under the assumption that more rugged areas near roads would be less penetrable to humans and thus experience lower human activity.
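The exponential decay transform of the distance-to-feature variables can be sketched as below. The exact functional form is not given in the text, so the common form exp(-d/alpha), with the three decay parameters of 300 m, 1 km and 3 km, is an assumption made purely for illustration.

```python
import math

def decay(distance_m: float, alpha_m: float) -> float:
    """Exponential decay of a distance-to-feature variable.

    Returns a value near 1 at the feature and approaching 0 far away, so
    that only local effects are retained. The form exp(-d/alpha) is assumed;
    the text only states that decay parameters of 300 m, 1 km and 3 km
    were used.
    """
    return math.exp(-distance_m / alpha_m)

# Distance to the nearest human feature, transformed at the three
# assumed decay parameters.
for d in (0, 300, 1000, 3000):
    print(d, [round(decay(d, a), 3) for a in (300, 1000, 3000)])
```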
Annual rate of landscape change was measured as the annual change (%) in habitat composition using annual remote sensing of major habitat types and anthropogenic features, including roads, clear-cuts and energy well-pads. We used the HIREG module for the software STATA 11 to estimate hierarchical regressions of body size based on the six main hypothesized drivers of growth. This approach was taken in order to partition variances and test for differences among the main hypothesized factors, and to account for multiple measurement variables within each hypothesized factor (block) using a variable-blocking approach. The order of the hierarchical regression model considered was: (1) biology effects including density dependence; (2) regional habitat productivity; (3) inter-annual variation in environments in the form of maternal (year before birth), in utero (year of birth) and natal (year after birth) effects, and capture effects (year of and before capture); (4) local habitat quality; (5) human footprint; and (6) landscape change. This order reflects the need to first control for biology before examining residual variance due to environment. We chose more regional measures of environment before inclusion of local measures of environment in the hierarchical order of blocks. No interactions among blocks were considered. For each hierarchical category, we selected predictors (i.e. blocks of variables) based on a forward step-wise regression procedure of variable blocks using a p < 0.1 significance level. An F-test was used to determine whether changes in the coefficient of determination (R2) among the main hypothesized factors for each block were significant.
Table 2. Standardized regression coefficients and significance (p) of model variables describing body mass (log scale), straight-line length (log scale), and body condition measures of springtime grizzly bear captures in Alberta, Canada. Blocks (hypothesized categories) and measurement variables: 1) Biology and capture effects – adult females (AF), adult females with cubs (AFC), Male x Age, number of captures; 2) Regional habitat productivity – spring (May-Jun) temperature, alpine habitat use (HP); 3) Inter-annual climate variability – maternal effects (Bt-1): summer (Jul-Aug) temperature; natal effects (Bt0): spring (May-Jun) temperature, summer (May-Oct) temperature, winter (Dec-Mar) precipitation; capture effects (Ct); 4) Local habitat quality – canopy variation (HP), regenerating forest age variation (HP); 5) Human footprint; 6) Landscape change.
Biological and environmental factors explained 75.3% of the variation (R2; model F = 39.0, df = 7, 62, p < 0.001) in body length (Table 2, Figure 3). Similar to body mass, age (as a non-linear quadratic function) and sex explained a large amount (61.3%) of the variation in body length. Regional habitat productivity explained an additional 6.6% of the variation in body length (F = 13.3, df = 1, 65, p < 0.001) based on average springtime (May-June) temperatures. Bears associated with warmer spring temperatures were more likely to be longer. Inter-annual climate variability – based on maternal and natal effects – explained an additional 4.2% of the variance in body length (F = 4.7, df = 2, 63, p < 0.001). Body length was positively related to warmer summer (July-August) temperatures during maternal periods and warmer spring temperatures during the year of birth (Table 2).
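The blockwise variance partitioning described in the Methods above (entering blocks of predictors in a fixed order and F-testing the change in R2 each block adds) can be sketched generically as follows. This is not the authors' HIREG/Stata code; the data are simulated and the variable names invented, so it only illustrates the mechanics of the delta-R2 F-test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 120

# Simulated stand-ins: block 1 = biology (age, sex), block 2 = regional climate.
age = rng.uniform(2, 25, n)
sex = rng.integers(0, 2, n)
spring_temp = rng.normal(0, 1, n)
mass = 4.0 + 0.05 * age + 0.4 * sex + 0.1 * spring_temp + rng.normal(0, 0.2, n)

def r_squared(predictors, y):
    """R^2 and parameter count of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var(), X.shape[1]

# Base block, then base block plus the new block.
r2_base, k_base = r_squared([age, age**2, sex], mass)
r2_full, k_full = r_squared([age, age**2, sex, spring_temp], mass)

# F-test for the change in R^2 when the new block is added.
q = k_full - k_base                      # parameters added by the block
df2 = n - k_full                         # residual df of the fuller model
F = ((r2_full - r2_base) / q) / ((1 - r2_full) / df2)
p = stats.f.sf(F, q, df2)
print(f"delta R2 = {r2_full - r2_base:.3f}, F({q},{df2}) = {F:.2f}, p = {p:.4f}")
```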
Habitat quality and human footprint were not related to body length, but there was a positive association with landscape change (annual rate of change in habitats associated with human disturbances), adding an additional 3.2% of model variance explained (F = 8.1, df = 1, 62, p < 0.001). Density, number of captures and human footprint did not influence body length. Biological and environmental factors explained 60.0% of model variation (R2, F = 14.7, df = 7, 68, p < 0.001) in springtime body condition (Table 2, Figure 3). Although body condition represents mass standardized by the length of the animal, a non-linear (quadratic) age relationship with body condition was still apparent. Adult females were more likely to have a lower body condition than subadult or adult male bears, and this relationship was more pronounced if a female had cubs. Bears captured multiple times were in lower body condition than bears captured only once. Overall, the biological (including capture effects) base model accounted for 47.7% of the variance in body condition. Unlike the mass and length measures, regional productivity did not affect body condition. Effects of inter-annual climate variability were observed, with higher-than-normal July precipitation during the year of birth inversely related to body condition (Table 2); this accounted for an additional 4.5% of the remaining model variance (F = 6.5, df = 1, 69, p = 0.013). Local habitat quality, as measured by use of habitats containing greater variation in regenerating forest age, was positively related to observed body condition (Table 2) and explained an additional 7.8% of model variation (F = 13.3, df = 1, 68, p = 0.001). Density of bears, human footprint, and landscape change were not related to body condition.
Biological factors and body size
Measurements of body mass and length of grizzly bears in Alberta were strongly dependent on intrinsic biological factors: age (positive, non-linear relationship) and sex (males > females). Age, sex and offspring dependence were important factors affecting body condition, which is a short-term measure of growth. Adult females, and especially adult females with cubs of the year, were likely to be in poorer condition than male bears. A negative effect of capture history (number of captures) was also observed for body condition measures, which is consistent with previous observations. Although population density (density dependence) is known to inversely affect body-size patterns in animals [80–82], no density-dependent effects on body size patterns of grizzly bears were observed in our study. Grizzly bear populations in Alberta are likely to be below carrying capacity given locally high rates of human-caused mortality [83, 84], and were recently classified by the province as 'threatened' given the low observed population densities. This is in contrast to brown bears in Sweden, which are considered healthy, but where body sizes of adult female bears are inversely related to population density.
Temporal and spatial environmental heterogeneity
Environmental heterogeneity is an important mechanism by which animal populations are regulated. Here, we found that regional heterogeneity in habitat productivity was a moderate predictor of body size patterns of grizzly bears in Alberta. The smallest bears by mass and length occurred in the least-productive and coldest environments, as measured by alpine habitat use and home ranges occupying both cool average spring temperatures and high average March precipitation (snowfall).
In the Canadian Rocky Mountains, all three of these factors are associated with late timing of spring snowmelt and plant emergence, which are known to affect population dynamics of other alpine mammals. Since den emergence in grizzly bears in our area typically occurs in April to early May, the amount and timing of spring snowpack is likely a factor affecting the availability of early-season food resources such as roots, and generally might restrict access to early spring food resources. Inter-annual variations in climate during the years prior to, during and/or just following birth (maternal, in-utero and natal environments, respectively) also affected adult body size. Such silver-spoon effects, by which animals that are born into 'rich' conditions are favoured throughout life, are consistent with observations in other mammals including polar bears, Soay sheep, red squirrels and caribou. Common among these studies is the importance of winter and spring climate during (natal environments) or just prior to (maternal or in utero environments) the year of birth, which we also observed in this study. Winter and spring climate is related to summer drought conditions in the Canadian Rocky Mountains, which suggests that the effect of winter and spring climate may not necessarily be directly associated with the denning period, but rather with summer environments when water is limiting. We are unsure, however, how late summer precipitation affects cubs-of-the-year. It may be related to late summer food resources, such as fruit production, or affect food-resource abundance in the following year when bears are yearlings. Further, winter precipitation (December-March) anomalies during the natal birth year were positively related to body mass. We interpreted this as snow cover during winter denning providing energetic benefits (e.g. insulation) in the den for cubs of the year. During the year prior to birth, late summer (July-August) temperature anomalies were negatively associated with body mass but positively associated with body length in grizzly bears. This late-summer environment may have affected maternal body condition prior to denning and thus the subsequent condition of offspring [e.g. 53]; conversely, it may have affected the following year's food supply during the cub-of-year period, since lag effects in fruit production are caused by weather conditions favourable to flower primordia in the mid- to late-summer period the year prior to fruiting. Although we cannot be certain which factor is more important, the fact that body mass is negatively associated with late-summer temperature anomalies, whereas body length is positively associated with them, suggests to us that maternal condition is less likely (as we would expect similar responses in body mass and length if it were solely a maternal effect). Further investigation of the effects of mid- and late-summer weather on pulses in food resource abundance the following year is needed, especially in regard to the apparent opposite effects on bear mass and length. One important consideration regarding our purported silver spoon effect should be discussed: we have no information on our study animals prior to their first capture. This has two important implications: 1) we cannot account for litter size effects, and 2) the centroid data used to determine natal climatic conditions may not be reflective of the actual natal location. In regard to the former, not accounting for litter size should inflate the variance around our estimates.
For the centroid data, this would likely only influence dispersing males, as females are philopatric. For males, average dispersal distances in the province are under 50 kilometers, thus still largely reflective of the climate at the centroid of the current home range (differences in climates among bears are mainly regional in effect, not within populations). Further, for this limitation to bias our results, males would consistently have to disperse to poorer environments, again something we deem unlikely. Thus, we argue that the silver spoon pattern is unlikely to be altered by these factors in such a way that the statistical pattern would disappear. Human footprint did not directly relate to body size patterns of grizzly bears, but human activity indirectly affected body size by influencing habitats. The two most important measures of habitat quality were canopy closure and the age structure of forests. Bears that used habitats associated with higher canopy variability, such as forest/non-forest landscapes in the mountains or expanses of old growth forests with a recent, single-harvest sequence, had lower body masses. Conversely, bears that used forests with higher variability in regenerating forest age had higher body condition. Likewise, body length was positively related to annual landscape change. Taken together, these results suggest that human activities that fragment forests are positively associated with body size measures, although survival of bears in these environments is compromised due to high rates of human-caused mortality [57, 84]. Early successional and highly variable forests are therefore important indicators of improved habitat quality for bears, given the relationship to body size patterns reported here, habitat use studies and measures of food resource abundance [73, 74]. We hypothesize that positive associations between body size patterns and variability in regenerating forest age are due in part to local landscape patterns in protein availability. For instance, both ungulate and ant resource use in Alberta are associated with disturbed forests [46, 74]. While bear body size is largely dictated by age and sex, these factors only accounted for about 50% of the variation. More consideration of the spatial and temporal patterns of resource availability, including the conditions early in life, is needed to better understand individual performance of animals and population dynamics. For grizzly bears in Alberta, environmental effects on body size are most affected by regional environmental gradients (space) and the environmental conditions animals are born into (time). Local habitat heterogeneity (particularly young, patchily disturbed forests) and landscape dynamics also had a small influence on body size. It is important to emphasize that while patchily disturbed forests positively affected body size, these areas also have high rates of mortality, which could negate any positive population-level effect. Worldwide, relationships between carnivore body size and climate warming show ambiguous trends; however, polar bear body sizes have recently declined, which has been attributed primarily to loss of habitat (i.e., sea ice as a platform for hunting; [96, 97]). Despite the lack of unequivocal global patterns, a 50-year examination of regional studies showed that carnivore body sizes have generally increased over the past half century.
Given the short season associated with high-alpine environments, such as the Rocky Mountains in Alberta, we hypothesize that individuals with a limited growing season and temperature-limited ecosystems, such as interior grizzly bears, might actually benefit from increases in season length associated with climate change. This prediction is largely consistent with observed body size and seasonality patterns in grizzly bears across North America , but may be dependent on sufficient snow cover during the denning period. In conclusion, we have demonstrated a complex interplay of biological, spatial and temporal factors on body size that collectively explained between 60 and 84% of the variation seen in Alberta’s grizzly bears. We thank the Natural Sciences and Engineering Research Council (NSERC) of Canada, Alberta Innovates – Bio Solutions, and partners from the Foothills Research Institute Grizzly Bear Program for financial support. This manuscript was greatly improved from the comments of three anonymous reviewers. - Forchhammer MC, Clutton-Brock TH, Lindström J, Albon SD: Climate and population density induce long‒term cohort variation in a northern ungulate. J Anim Ecol. 2001, 70: 721-729. 10.1046/j.0021-8790.2001.00532.x.View Article - Krebs C, Boutin S, Boonstra R, Sinclair ARE, Smith JNM, Dale M, Turkington R: Impact of food and predation on the snowshoe hare cycle. Science. 1995, 269: 1112-1115. 10.1126/science.269.5227.1112.View ArticlePubMed - Elton C: Animal ecology. 1927, London: Macmillan - Dooley JL, Bowers MA: Demographic responses to habitat fragmentation: experimental tests at the landscape and patch scale. Ecology. 1998, 79: 969-980. 10.1890/0012-9658(1998)079[0969:DRTHFE]2.0.CO;2.View Article - Andren H: Corvid density and nest predation in relation to forest fragmentation: A landscape perspective. Ecology. 1992, 73: 794-10.2307/1940158.View Article - Flaspohler DJ, Temple SA, Rosenfield RN: Effects of forest edges on ovenbird demography in a managed forest landscape. Conserv Biol. 2001, 15: 173-183.View Article - Wiens JA, Chr N, Van Horne B, Ims RA: Ecological mechanisms and landscape ecology. 1993, New York: Oikos - Forman R: Some general principles of landscape and regional ecology. Landscape Ecol. 1995, 10: 133-142. 10.1007/BF00133027.View Article - Fretwell SD, Lucas JRHL: On territorial behavior and other factors influencing habitat distribution in birds I. Theoretical Development. Acta Biotheor. 1970, 19: 16-36.View Article - Pulliam HR, Danielson BJ: Sources, sinks, and habitat selection: a landscape perspective on population dynamics. Am Nat. 1991, 137: S50-S66. 10.1086/285139.View Article - Anderson WB, Wait DA, Stapp P: Resources from another place and time: responses to pulses in a spatially subsidized system. Ecology. 2008, 89: 660-670. 10.1890/07-0234.1.View ArticlePubMed - Schmidt KA, Ostfeld RS: Numerical and behavioral effects within a pulse-driven system: consequences for shared prey. Ecology. 2008, 89: 635-646. 10.1890/07-0199.1.View ArticlePubMed - Yang LH: Pulses of dead periodical cicadas increase herbivory of American bellflowers. Ecology. 2008, 89: 1497-1502. 10.1890/07-1853.1.View ArticlePubMed - Woodward FI, Lomas MR, Quaife T: Global responses of terrestrial productivity to contemporary climatic oscillations. Phil Trans R Soc B. 2008, 363: 2779-2785. 
10.1098/rstb.2008.0017.PubMed CentralView ArticlePubMed - Tian H, Melillo JM, Kicklighter DW, McGuire AD, Helfrich JVK, Moore B, Vörösmarty CJ: Effect of interannual climate variability on carbon storage in Amazonian ecosystems. Nature. 1998, 396: 664-667. 10.1038/25328.View Article - Black TA, Chen WJ, Barr AG, Arain MA, Chen Z, Nesic Z, Hogg EH, Neumann HH, Yang PC: Increased carbon sequestration by a boreal deciduous forest in years with a warm spring. Geophys Res Lett. 2012, 27: 1271-1274.View Article - Macias Fauria M, Johnson EA: Climate and wildfires in the North American boreal forest. Philos Trans of the R Soc B: Biol Sci. 2008, 363: 2315-2327. 10.1098/rstb.2007.2202.View Article - Macias Fauria M, Johnson EA: Large‒scale climatic patterns and area affected by mountain pine beetle in British Columbia, Canada. J of Geo Res: Biogeosci. 2009, 114: G01012-View Article - Post E, Stenseth NC: Climatic variability, plant phenology, and northern ungulates. Ecology. 1999, 80: 1322-1339. 10.1890/0012-9658(1999)080[1322:CVPPAN]2.0.CO;2.View Article - Sæther BE, Smith FM, Cooper EJ, Wookey PA: The Arctic Oscillation predicts effects of climate change in two trophic levels in a high‒arctic ecosystem. Ecol Lett. 2002, 5: 445-453. 10.1046/j.1461-0248.2002.00340.x.View Article - Haynes KJ, Liebhold AM, Fearer TM, Wang G, Norman GW, Johnson DM: Spatial synchrony propagates through a forest food web via consumer-resource interactions. Ecology. 2009, 90: 2974-2983. 10.1890/08-1709.1.View ArticlePubMed - Warne RW, Pershall AD, Wolf BO: Linking precipitation and C3-C4 plant production to resource dynamics in higher-trophic-level consumers. Ecology. 2010, 91: 1628-1638. 10.1890/08-1471.1.View ArticlePubMed - Herrera CM: Long-term dynamics of mediterranean frugivorousbirds and fleshy fruits: a 12-year study. Ecol Monogr. 1998, 68: 511-538. - Abrahamson WG, Layne JN: Long-term patterns of acorn production for five oak species in xeric florida uplands. Ecology. 2003, 84: 2476-2492. 10.1890/01-0707.View Article - Curran LM, Leighton M: Vertebrate responses to spatiotemporal variation in seed production of mast-fruiting dipterocarpaceae. Ecol Monogr. 2000, 70: 101-128. 10.1890/0012-9615(2000)070[0101:VRTSVI]2.0.CO;2.View Article - Harrison RD: Drought and the consequences of El Niño in Borneo: a case study of figs. Popul Ecol. 2001, 43: 63-75. 10.1007/PL00012017.View Article - Stenseth NC, Mysterud A, Ottersen G, Hurrell JW, Chan K-S, Lima M: Ecological effects of climate fluctuations. Science. 2002, 297: 1292-1296. 10.1126/science.1071281.View ArticlePubMed - Howe EJ, Obbard ME, Bowman J: Prior reproduction and weather affect berry crops in central Ontario, Canada. Popul Ecol. 2011, 54: 347-356.View Article - Wright SJ, Carrasco C, Calderón O, Paton S: The El Niño Southern Oscillation, variable fruit production, and famine in a tropical forest. Ecology. 1999, 80: 1632-1647. - Koenig WD, Knops J: Scale of mast-seeding and tree-ring growth. Nature. 1998, 396: 225-226.View Article - Schauber EM, Kelly D, Turchin P, Simon C, Lee WG: Masting by eighteen New Zealand plant species: the role of temperature as a synchronizing cue. Ecology. 2002, 83: 1214-1225. 10.1890/0012-9658(2002)083[1214:MBENZP]2.0.CO;2.View Article - Peters VS, Macdonald SE, Dale M: The interaction between masting and fire is key to white spruce regeneration. Ecology. 2005, 86: 1744-1750. 10.1890/03-0656.View Article - Kemp GA, Keith LB: Dynamics and regulation of red squirrel (Tamiasciurus hudsonicus) populations. Ecology. 1970, 51: 763-779. 
10.2307/1933969.View Article - Bieber C, Ruf T: Population dynamics in wild boar Sus scrofa: ecology, elasticity of growth rate and implications for the management of pulsed resource consumers. J Appl Ecol. 2005, 42: 1203-1213. 10.1111/j.1365-2664.2005.01094.x.View Article - Madsen T, Shine R: Silver spoons and snake body sizes: prey availability early in life influences long‒term growth rates of free‒ranging pythons. J Anim Ecol. 2000, 69: 952-958. 10.1046/j.1365-2656.2000.00477.x.View Article - Van de Pol M, Bruinzeel LW, Heg D, Van der Jeugd HP, Verhulst S: A silver spoon for a golden future: long-term effects of natal origin on fitness prospects of oystercatchers (Haematopus ostralegus). J Anim Ecol. 2006, 75: 616-626. 10.1111/j.1365-2656.2006.01079.x.View ArticlePubMed - Descamps S, Boutin S, Berteaux D, McAdam AG, Gaillard J-M: Cohort effects in red squirrels: the influence of density, food abundance and temperature on future survival and reproductive success. J Anim Ecol. 2008, 77: 305-314. 10.1111/j.1365-2656.2007.01340.x.View ArticlePubMed - Wilkin TA, Sheldon BC: Sex differences in the persistence of natal environmental effects on life histories. Curr Biol. 2009, 19: 1998-2002. 10.1016/j.cub.2009.09.065.View ArticlePubMed - Wittmer HU, Powell RA, King CM: Understanding contributions of cohort effects to growth rates of fluctuating populations. J Anim Ecol. 2007, 76: 946-956. 10.1111/j.1365-2656.2007.01274.x.View ArticlePubMed - Ferguson SH, McLoughlin PD: Effect of energy availability, seasonality, and geographic range on brown bear life history. Ecography. 2008, 23: 193-200.View Article - Nowak RM: Walker’s Mammals of the world. 1999, Baltimore, Maryland: The Johns Hopkins University Press, 6 - Gende SM, Quinn TP, Willson MF: Consumption choice by bears feeding on salmon. Oecologia. 2001, 127: 372-382. 10.1007/s004420000590.View Article - Boulanger J, Himmer S, Swan C: Monitoring of grizzly bear population trends and demography using DNA mark–recapture methods in the Owikeno Lake area of British Columbia. Can J Zool. 2004, 82: 1267-1277. 10.1139/z04-100.View Article - Mowat G, Heard DC: Major components of grizzly bear diet across North America. Can J Zool. 2006, 84: 473-489. 10.1139/z06-016.View Article - Mclellan BN, Hovey FW: The diet of grizzly bears in the Flathead River drainage of southeastern British Columbia. Can J Zool. 1995, 73: 704-712. 10.1139/z95-082.View Article - Munro RHM, Nielsen SE, Price MH, Stenhouse GB, Boyce MS: Seasonal and diel patterns of grizzly bear diet and activity in west-central Alberta. J Mamm. 2006, 87: 1112-1121. 10.1644/05-MAMM-A-410R3.1.View Article - Naves J, Fernández-Gil A, Rodríguez C, Delibes M: Brown bear food habits at the border of its range: a long-term study. J Mamm. 2006, 87: 899-908. 10.1644/05-MAMM-A-318R2.1.View Article - Zedrosser A, Dahle B, Swenson JE: Population density and food conditions determine adult female body size in brown bears. J Mamm. 2006, 87: 510-518. 10.1644/05-MAMM-A-218R1.1.View Article - Meiri S, Yom-Tov Y, Geffen E: What determines conformity to Bergmann’s rule?. Global Ecol Biogeogr. 2007, 16: 788-794. 10.1111/j.1466-8238.2007.00330.x.View Article - Mattson DJ, Blanchard BM, Knight RR: Yellowstone grizzly bear mortality, human habituation, and whitebark pine seed crops. J Wildl Manage. 1992, 56: 432-444. 10.2307/3808855.View Article - Gunther KA, Haroldson MA, Frey K, Cain SL, Copeland J, Schwartz CC: Grizzly bear–human conflicts in the Greater Yellowstone ecosystem, 1992–2000. Ursus. 2004, 15: 10-22. 
10.2192/1537-6176(2004)015<0010:GBCITG>2.0.CO;2.View Article - Hilderbrand GV, Schwartz CC, Robbins CT, Jacoby ME, Hanley TA, Arthur SM, Servheen C: The importance of meat, particularly salmon, to body size, population productivity, and conservation of North American brown bears. Can J Zool. 1999, 77: 132-138. 10.1139/z98-195.View Article - Zedrosser A, Bellemain E, Taberlet P, Swenson JE: Genetic estimates of annual reproductive success in male brown bears: the effects of body size, age, internal relatedness and population density. J Anim Ecol. 2007, 76: 368-375. 10.1111/j.1365-2656.2006.01203.x.View ArticlePubMed - Robbins CT, Ben-David M, Fortin JK, Nelson OL: Maternal condition determines birth date and growth of newborn bear cubs. J Mamm. 2012, 93: 540-546. 10.1644/11-MAMM-A-155.1.View Article - Hamlett G: Delayed implantation and discontinuous development in the mammals. Q Rev Biol. 1935, 10: 432-447. 10.1086/394493.View Article - Spady TJ, Lindburg DG, Durrant BS: Evolution of reproductive seasonality in bears. Mamm Rev. 2007, 37: 21-53. 10.1111/j.1365-2907.2007.00096.x.View Article - Nielsen SE, Stenhouse GB, Beyer HL, Huettmann F, Boyce MS: Can natural disturbance-based forestry rescue a declining population of grizzly bears?. Biol Conserv. 2008, 141: 2193-2207. 10.1016/j.biocon.2008.06.020.View Article - Nielsen SE, Stenhouse GB, Boyce MS: A habitat-based framework for grizzly bear conservation in Alberta. Biol Conserv. 2006, 130: 217-229. 10.1016/j.biocon.2005.12.016.View Article - Linke J, Franklin SE, Huettmann F, Stenhouse GB: Seismic cutlines, changing landscape metrics and grizzly bear landscape use in alberta. Landscape Ecol. 2005, 20: 811-826. 10.1007/s10980-005-0066-4.View Article - Cattet MR, Christison K, Caulkett NA, Stenhouse GB: Physiologic responses of grizzly bears to different methods of capture. J Wildlife Dis. 2003, 39: 649-654.View Article - Cattet M, Boulanger J, Stenhouse G, Powell RA, Reynolds-Hogland MJ: An evaluation of long-term capture effects in ursids: implications for wildlife welfare and research. J Mamm. 2008, 89: 973-990. 10.1644/08-MAMM-A-095.1.View Article - Cattet M, Stenhouse G, Bollinger T: Exertional myopathy in a grizzly bear (Ursus arctos) captured by leghold snare. J Wildlife Dis. 2008, 44: 973-978.View Article - Cattet M, Caulkett NA, Stenhouse GB: Anesthesia of grizzly bears using xylazine-zolazepam-tiletamine or zolazepam-tiletamine. Ursus. 2003, 14: 88-93. - Stoneberg RP, Jonkel CJ: Age determination of black bears by cementum layers. J Wildl Manage. 1966, 30: 411-414. 10.2307/3797828.View Article - Cattet MRL, Caulkett NA, Obbard ME, Stenhouse GB: A body-condition index for ursids. Can J Zool. 2002, 80: 1156-1161. 10.1139/z02-103.View Article - Gannon WL, Sikes RS: Guidelines of the American Society of Mammalogists for the use of wild mammals in research. J Mamm. 2007, 88: 809-823. 10.1644/06-MAMM-F-185R1.1.View Article - CANADIAN COUNCIL ON ANIMAL CARE: CCAC guidelines on: the care and use of wildlife. 2003, Ottawa, Ontario, Canada: Canadian Council on Animal Care - Boulanger J, Stenhouse G, Munro R: Sources of heterogeneity bias when dna mark-recapture sampling methods are applied to grizzly bear (Ursus arctos) populations. J Mamm. 2004, 85: 618-624. 10.1644/BRB-134.View Article - Wang T, Hamann A, Mbogga M: ClimateAB v3.21: a program to generate projected, normal, decadel, annual, seasonal and monthly interpolated climate data for Alberta. 
- Hamann A, Wang TL: Models of climatic normals for genecology and climate change studies in British Columbia. Agr Forest Meteorol. 2005, 128: 211-221. 10.1016/j.agrformet.2004.10.004.View Article - Archibald WR, Ellis R, Hamilton AN: Responses of grizzly bears to logging truck traffic in the Kimsquit River Valley, British Columbia. Bears: Their Biol and Manag. 1987, 7: 251-257. - Nielsen SE, Boyce MS, Stenhouse GB: Grizzly bears and forestry. For Ecol Manage. 2004, 199: 51-65. 10.1016/j.foreco.2004.04.014.View Article - Nielsen SE, Munro RHM, Bainbridge EL, Stenhouse GB, Boyce MS: Grizzly bears and forestry. For Ecol Manage. 2004, 199: 67-82. 10.1016/j.foreco.2004.04.015.View Article - Nielsen SE, McDermid G, Stenhouse GB, Boyce MS: Dynamic wildlife habitat models: Seasonal foods and mortality risk predict occupancy-abundance and habitat selection in grizzly bears. Biol Conserv. 2010, 143: 1623-1634. 10.1016/j.biocon.2010.04.007.View Article - Nielsen SE, Cranston J, Stenhouse GB: Identification of priority areas for grizzly bear conservation and recovery in Alberta, Canada. J of Conserv Plann. 2009, 5: 38-60. - Linke J, McDermid GJ, Laskin DN, McLane AJ, Pape A, Cranston J, Hall-Beyer M, Franklin SE: A disturbance-inventory framework for flexible and reliable landscape monitoring. Photogramm Eng Remote Sens. 2009, 75: 981-996.View Article - Bern PH: Statistical software components. HIREG: Stata module for hierarchial regression. 2005,http://EconPapers.repec.org/RePEc:boc:bocode:s432904, - Raudenbush B, Bryk AS: Hierarchical linear models: applications and data analysis methods. 2001, Thousand Oaks, California: Sage Publications, 2 - Hosmer DW, Lemeshow S: Applied logistic regression. 2000, New York, New York: John Wiley & Sons, LtdView Article - Wilbur HM: Density-dependent aspects of growth and metamorphosis in Bufo americanus. Ecology. 1977, 58: 196-200. 10.2307/1935122.View Article - Skogland T: The effects of density dependent resource limitation on size of wild reindeer. Oecologia. 1983, 60: 156-168. 10.1007/BF00379517.View Article - Mysterud A, Yoccoz NG, Stenseth NC, Langvatn R: Effects of age, sex and density on body weight of Norwegian red deer: evidence of density-dependent senescence. Proc Biol Sci. 2001, 268: 911-919. 10.1098/rspb.2001.1585.PubMed CentralView ArticlePubMed - Benn B, Herrero S: Grizzly bear mortality and human access in Banff and Yoho National Parks, 1971–98. Ursus. 2002, 13: 213-221. - Nielsen SE, Herrero S, Boyce MS, Mace RD, Benn B, Gibeau ML, Jevons S: Modelling the spatial distribution of human-caused grizzly bear mortalities in the Central Rockies ecosystem of Canada. Biol Conserv. 2004, 120: 101-113. 10.1016/j.biocon.2004.02.020.View Article - Zedrosser A, Dahle B, Swenson JE, Gerstl N: Status and management of the brown bear in Europe. Ursus. 2001, 12: 9-20. - Ozgul A, Childs DZ, Oli MK, Armitage KB, Blumstein DT, Olson LE, Tuljapurkar S, Coulson T: Coupled dynamics of body mass and population growth in response to environmental change. Nature. 2010, 466: 482-485. 10.1038/nature09210.View ArticlePubMed - Morrison SF, Hik DS: Demographic analysis of a declining pika Ochotona collaris population: linking survival to broad-scale climate patterns via spring snowmelt patterns. J Anim Ecol. 2007, 76: 899-907. 10.1111/j.1365-2656.2007.01276.x.View ArticlePubMed - Ciarniello LM, Boyce MS, Heard DC, Seip DR: Denning behavior and den site selection of grizzly bears along the Parsnip River, British Columbia, Canada. Ursus. 2005, 16: 47-58. 
10.2192/1537-6176(2005)016[0047:DBADSS]2.0.CO;2.View Article - Coogan S, Nielsen SE, Stenhouse GB: Spatial and temporal heterogeneity creates a “brown tide” in root phenology and nutrition. ISRN Ecology. 2012, 2012: 10-View Article - Atkinson SN, Stirling I, Ramsay MA: Growth in early life and relative body size among adult polar bears ( Ursus maritimus). J Zool. 1996, 239: 225-234. 10.1111/j.1469-7998.1996.tb05449.x.View Article - Hegel TM, Mysterud A, Ergon T, Loe LE, Huettmann F, Stenseth NC: Seasonal effects of Pacific-based climate on recruitment in a predator-limited large herbivore. J Anim Ecol. 2010, 79: 471-482. 10.1111/j.1365-2656.2009.01647.x.View ArticlePubMed - Meyn A, Taylor SW, Flannigan MD, Thonicke K, Cramer W: Relationship between fire, climate oscillations, and drought in British Columbia, Canada, 1920–2000. Glob Chang Biol. 2010, 16: 977-989. 10.1111/j.1365-2486.2009.02061.x.View Article - Krebs CJ, Boonstra R, Cowcill K, Kenney AJ: Climatic determinants of berry crops in the boreal forest of the southwestern Yukon. Botany. 2009, 87: 401-408. 10.1139/B09-013.View Article - Proctor MF, Mclellan BN, Strobeck C, Barclay RMR: Gender-specific dispersal distances of grizzly bears estimated by genetic analysis. Can J Zool. 2004, 82: 1108-1118. 10.1139/z04-077.View Article - Meiri S, Guy D, Dayan T, Simberloff D: Global change and carnivore body size: data are stasis. Global Ecol Biogeogr. 2009, 18: 240-247. 10.1111/j.1466-8238.2008.00437.x.View Article - Derocher AE, Stirling I: Temporal variation in reproduction and body mass of polar bears in western Hudson Bay. Can J Zool. 1995, 73: 1657-1665. 10.1139/z95-197.View Article - Rode KD, Amstrup SC, Regehr EV: Reduced body size and cub recruitment in polar bears associated with sea ice decline. Ecol Appl. 2010, 20: 768-782. 10.1890/08-1036.1.View ArticlePubMed - Yom-Tov Y: Body sizes of carnivores commensal with humans have increased over the past 50 years. Func Ecol. 2003, 17: 323-327. 10.1046/j.1365-2435.2003.00735.x.View Article This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Up in Arms About What Is a Term Math?
Finding out how to manage a linear equation provides you with an extremely straightforward comprehension of algebra, so that you will have the ability to handle more elaborate equations later. If you give up at general math, then you'll only make general money. Let's try to understand the above graph. The Math Test (No Calculator) part of the test makes it simpler to evaluate your fluency in math and your knowledge of some math concepts. Math and Memory: memory may have an important influence on thinking with numbers. Math has its very own fascinating language.
The What Is a Term Math Chronicles
The speed at which galaxies are moving apart from one another is called the Hubble constant, or H0. Countries with a high Gini Coefficient are more likely to become unstable, since there is a large mass of poor people who are resentful of the few rich people. In the majority of situations it's more natural to think about inequality of the composite measure. Based on the number of times you must multiply the same binomial (a value also called an exponent), the binomial coefficients for that specific exponent are always precisely the same. The correlation coefficient is positive when there is a direct relationship between the two values, and a negative value indicates that the two values move in opposite directions. It can also give you information about the graph. In functions of one variable, such as x, the degree of a term is just the exponent. When the object is on or close to the surface of the Earth, the force of gravity acting on the object is virtually constant, and the following equation may be used. Let's start with the structure of the equation. Sometimes, if you'd like to do a little research on what a specific fabric or leather is made of and how the texture would affect its durability, some unfamiliar terms may be used. The value of 2x depends on what value is assigned to x. A circle with a bigger diameter will have a bigger circumference.
What Is a Term Math
A testing set is critical to validate our results. When you hit okay, the reply will show up in the cell. If you're interested in some suggestions, comments, and elaborations, click the Comments. I share a typical sort of PSLE Math question below which most students are not able to do. One of the most important ideas you're likely to see in your entire study of algebra is the notion of slope. That observation leads to deep and arcane mathematical and philosophical questions, and some people make it their life's work to consider these matters. Students represent multiplication facts through the use of context. Kids everywhere must learn the important math vocabulary words to be able to be successful in school. When solving math problems, they usually are expected to do the right steps in a specific order to achieve the correct answer. It actually depends upon the child's effort and time spent. Homeschooling parents will find that when it comes to deciding on the curriculum for math, there's a tremendous variety available. For children to be successful in mathematics, several brain functions need to work together.
The Importance of What Is a Term Math
After the structure is done, give students a set price for each bit of material.
Even though a linear function can be used to model population growth with a constant increase or decrease in the number of people, an exponential function can be used to model population growth with a constant proportional change in population. In some instances, a constant is used for convenience if it represents a large or elaborate number. You should not presume that your mean will be one of your initial numbers. A classic exercise is to prove that there are infinitely many prime numbers. All plain numbers are considered constant terms.
A typical task in math is to compute what is called the absolute value of a specific number. The alternative, apart from the use of plain numbers, is that letters and similar symbols are variables. The expression in the brackets can then be factored using the decomposition procedure. In this example we were able to combine two of the terms to simplify the final answer. In computer graphics, for example, optimal transport may be used to transform or morph shapes by finding the optimal movement from every point on one shape to the other. The word "simple" in this context usually means that the problem is easy to state and the question is easy to understand. It is a good idea to let the reader know whether a given letter is a constant, so that there is no confusion. The further work of third grade is closely related to the major work of the standards. In general mathematical usage, however, "term" is not restricted to additive expressions.
Elliptic Geometry Drawing Tools
Elliptic geometry calculations using the disk model. Includes scripts for: finding the point antipodal to a given point; drawing the circle with given center through a given point; measuring the elliptic angle described by three points; measuring the elliptic distance between two points; drawing the elliptic line, the elliptic line segment, and the elliptic semi-line defined by two points; drawing the perpendicular to a line through a point off the line; and drawing the perpendicular to a line through a point on the line.
Levels: High School (9-12), College
Resource Types: Documents/Sketches/Galleries, Geometer's Sketchpad
Math Topics: Elliptic & Spherical Geometry
© 1994- Drexel University. All rights reserved. The Math Forum is a research and educational enterprise of the Drexel University School of Education.
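The scripts themselves are not reproduced here, but the elliptic distance measurement can be sketched numerically. The snippet below assumes one common convention for the disk model: a point (x, y) in the unit disk is lifted to the upper unit hemisphere, and the elliptic distance is the spherical angle between the lifted points with antipodal sphere points identified. The actual Sketchpad tools may use a different convention.

```python
import math

def lift(p):
    """Lift a disk point (x, y) to the upper unit hemisphere (assumed convention)."""
    x, y = p
    z2 = 1.0 - x * x - y * y
    if z2 < 0:
        raise ValueError("point lies outside the unit disk")
    return (x, y, math.sqrt(z2))

def elliptic_distance(p, q):
    """Spherical angle between the lifted points, with antipodes identified,
    so the result never exceeds pi/2."""
    a, b = lift(p), lift(q)
    dot = abs(sum(ai * bi for ai, bi in zip(a, b)))
    return math.acos(min(1.0, dot))

print(elliptic_distance((0.0, 0.0), (0.5, 0.0)))  # about 0.524 rad (30 degrees)
```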
Largemouth Bass (Micropterus salmoides) is the most sought-after sportfish in New Mexico. However, population abundance of Largemouth Bass in Elephant Butte Reservoir remains well below the statewide target objective of 20-40 fish/h. As such, Largemouth Bass fingerlings have been stocked into the reservoir to augment the bass population. We were tasked with retrospectively characterizing the contribution of stocked versus natal (hatched in the reservoir) Largemouth Bass to assess whether stocking practices were successful. We used non-lethal collection of dorsal spines in conjunction with strontium isotopes in the spines to characterize the origin of fish, and learned that stocking was successful. In addition, we identified that the rate of water-level change within the reservoir affected spawning of the bass population. We partnered with our State cooperator (New Mexico Department of Game and Fish) to conduct the research. Our results will be used to make informed management decisions on the appropriate timing of stocking fingerling Largemouth Bass. Our results will also be used by water managers to time the delivery of water throughout the middle Rio Grande Basin to increase fish habitat for spawning in Elephant Butte Reservoir.
Statics Mechanics of Materials, Third Edition in SI Units
This is a concise and well-illustrated introduction to the theory and application of statics and mechanics of materials used in many engineering disciplines. The author of this book is Russell C. Hibbeler. It boasts unique pedagogical features such as visualisation tools that help accelerate understanding and develop problem-solving skills in students. This four-coloured text in SI units is a combined, abridged version of two of Hibbeler's best-selling titles, namely Engineering Mechanics: Statics, 12th Edition in SI Units and Mechanics of Materials, 8th Edition in SI Units. The book is available in English. Its hallmark remains the same as that of the unabridged versions: a strong emphasis on drawing a free-body diagram, as well as selecting an appropriate coordinate system and an associated sign convention when the equations of mechanics are applied. Many realistic analysis and design applications are presented, which involve mechanical elements and structural members often encountered in engineering practice.
French lawyer, politician and gastronome Anthelme Brillat-Savarin once said: "Tell me what you eat, and I will tell you what you are." Across cultures and countries, food is at the centre of existence, and food wastage is seen as being irreverent towards the millions devoted to producing it and the billions who don't have regular and easy access to it. While India has made tremendous strides in food security, much more needs to be done to curtail food wastage. According to the United Nations Development Programme, 40 per cent of the food produced in India goes to waste. Wasted food also represents a waste of resources, such as land, water, energy, and other inputs used in its production, even as it adds to greenhouse gas emissions. Food wastage occurs at every stage: production, transportation and supply chains, and consumption. India is presently ranked 102nd among 117 economies in the Global Hunger Index (GHI) 2019. Hunger could be tackled by avoiding wastage and using technology and innovative business models, especially in supply chain management to reduce transit time. The Confederation of Indian Industry (CII) recommends increasing supply chain efficiencies that will help reduce wastage, especially of fruits and vegetables. The use of refrigerated containers, for example, has added a new dimension to the transportation of perishable goods. The integrated cold chain project in India is expected to bear rich dividends. The Indian government has already taken several policy initiatives, and the Pradhan Mantri Kisan SAMPADA Yojana encompasses the creation of – The Indian Food Sharing Alliance (IFSA), an initiative of the Food Safety and Standards Authority of India (FSSAI), is also geared to solve India's food waste and hunger crisis by integrating various partner organisations, food recovery agencies, and NGOs. The CII Jubilant Bhartia Food and Agriculture Centre of Excellence (CII-FACE) is also dedicatedly working towards the integrated development of India's food and agriculture sector through skill development and other ways of increasing the overall efficiency and income of the agriculture sector. CII has been recommending various measures for the food processing industry, such as creating seamless post-harvest infrastructure to address supply chain inefficiencies and framing model guidelines to make the statutory requirements for setting up cold chain infrastructure uniform across all States, instead of acquiring multiple licenses from multiple authorities. While these measures are expected to help, India will also need to take a cue from global practices that are both unorthodox and innovative to tackle the food wastage problem. Advancements in artificial intelligence, machine learning, nanotechnology, data monitoring, storage, and packaging solutions will aid in alleviating this issue and be a game-changer in the prevention of food wastage. Customers could also be incentivised to purchase perishable products approaching their expiry dates. Food wastage can also be controlled through social campaigns similar to 'Swachh Bharat Abhiyaan' to sensitise people about this avoidable occurrence. India has a successful Green Revolution and Operation Flood behind it and can strengthen its food processing sector to ensure that food wastage is curtailed so that no citizen goes hungry.
A projectile is fired from a cliff 100 m high with a velocity of 200 m/s. If the angle of projection is 20 degrees above the horizontal, find the time of flight correct to 2 decimal places and the range of the projectile from the base of the cliff to the nearest metre. (Assume g = 9.8 metres per second per second.) Can someone please give me the answers to this question? Explanation is not necessary; I just need to verify my answers. Thanks.
That way we minimise the possibility of giving the answer needed for a multiple-choice question where the poster is not interested in learning how to solve the problem. Also, if they are honest, they already know the solution method, so you are wasting effort typing the thing out. Also, check your calculations: you seem to have lost precision somewhere, so your answer is not correct to the nearest metre.
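For readers who want to check their own working against the numbers in the question, here is a short, independent kinematics sketch (it is not the forum's posted answer, and it simply uses the stated values of g = 9.8 m/s², a 100 m cliff, a 200 m/s launch speed and a 20 degree launch angle):

```python
import math

# Values stated in the question
h = 100.0                 # cliff height (m)
v = 200.0                 # launch speed (m/s)
theta = math.radians(20)  # launch angle above horizontal
g = 9.8                   # m/s^2

vx = v * math.cos(theta)  # horizontal velocity component
vy = v * math.sin(theta)  # vertical velocity component (upward)

# Height above the base: y(t) = h + vy*t - 0.5*g*t^2; take the positive root of y(t) = 0
t = (vy + math.sqrt(vy**2 + 2 * g * h)) / g
r = vx * t                # horizontal distance from the base of the cliff

print(f"time of flight ~ {t:.2f} s")  # roughly 15.29 s with these inputs
print(f"range ~ {r:.0f} m")           # roughly 2874 m with these inputs
```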
May 7, 2018 – Dementia should join the expanding list of possible complications following concussion, even if the patient did not lose consciousness, say researchers from the UCSF Weill Institute for Neurosciences and the San Francisco Veterans Affairs Health Care System. In their study, which tracked more than one-third of a million veterans, the likelihood of dementia was found to more than double following concussion, the researchers reported in JAMA Neurology, published May 7, 2018. After adjusting for age, sex, race, education and other health conditions, they found that concussion without loss of consciousness led to 2.36 times the risk for dementia. These risks were slightly elevated for those in the loss-of-consciousness bracket (2.51) and were nearly four times higher (3.77) for those with the more serious moderate-to-severe traumatic brain injury.
Concussions in General Population Also Risky for Dementia
Researchers identified participants from two databases: one listing all-era veterans whose traumatic brain injuries—which include concussion, or mild traumatic brain injury—could have occurred during civilian or military life; and the second from vets serving in Iraq and Afghanistan, for whom most of these injuries had occurred in combat zones, such as from shockwaves in blasts. “The findings in both groups were similar, indicating that concussions occurring in combat areas were as likely to be linked to dementia as those concussions affecting the general population,” said first author Deborah Barnes, PhD, MPH, professor in the UCSF departments of psychiatry, and epidemiology and biostatistics. In total, 357,558 participants, whose average age was 49, were tracked. Half had been diagnosed with traumatic brain injury, of which 54 percent had had concussion. The study followed participants for an average of 4.2 years; 91 percent were male and 72 percent were white. Among Iraq and Afghanistan vets, concussion was defined as mild traumatic brain injury resulting in alteration of consciousness and amnesia for one day or less, based on a comprehensive medical evaluation. In the other vets, concussion was defined using a wide list of diagnostic codes in the electronic health record.
Trauma May Hasten Neurodegenerative Disorders
“There are several mechanisms that may explain the association between traumatic brain injury and dementia,” said senior author and principal investigator Kristine Yaffe, MD, professor in the UCSF departments of neurology, psychiatry, and epidemiology and biostatistics. “There’s something about trauma that may hasten the development of neurodegenerative conditions. One theory is that brain injury induces or accelerates the accumulation of abnormal proteins that lead to neuronal death associated with conditions like Alzheimer’s disease. “It’s also possible that trauma leaves the brain more vulnerable to other injuries or aging processes,” said Yaffe, “but we need more work in this area.” The study’s results add to a volume of research that links concussion and other traumatic brain injuries to various psychiatric and neurodegenerative disorders. Last month, UCSF researchers reported a link between concussion and Parkinson’s disease. “Our results show that more needs to be done to reduce the likelihood of traumatic brain injuries,” said Barnes. “In older adults, exercise and multifactorial interventions may limit the risks of falls, which are a leading cause of head injury.
“For those who experience a concussion, get medical attention, allow time to heal and try to avoid repeat concussions. Although our study did not directly examine this issue, there is growing evidence that repeated concussions appear to have a cumulative effect.” The study is supported by funding from the U.S. Army Medical Research and Material Command and from the U.S. Department of Veteran Affairs (Chronic Effects of Neurotrauma Consortium).
From previous published reports, we knew that the retina (the layer at the back of the eye) looks different in people with Down's syndrome than in typical people. In Down's syndrome, there appear to be more blood vessels on the surface of the retina, and the optic disc (where the nerves leave the eye to travel to the brain) has been described as 'rosier'. Ping Ji decided to measure and catalogue these differences by taking fundus photographs in children with Down's syndrome and typical children, and analysing the size and shape of the optic disc and the numbers and arrangement of blood vessels. Because children with Down's syndrome have slightly poorer vision than typical children, we expected to find that the optic disc is smaller in Down's syndrome. Forty-five children with Down's syndrome and 44 typical children took part. The analysis isn't fully completed yet, but Ping has found, quite unexpectedly, that children with Down's syndrome have larger optic discs. Further, other dimensions of the retina appear larger, as if the retina is stretched. But the children's eyes are not stretched overall, because Ping has measured the length of the eyes and matched the children with Down's syndrome with typical children of exactly the same eye length. At the moment, Ping's findings represent a mystery.
Retinal blood vessels
Ping's photographs show that children with Down's syndrome do indeed have more blood vessels. This is because the same number of vessels enter the eye, but they then branch out more often over the retina than typical children's do. Ping also took measures of the dimensions of the cornea and front of the eye (anterior chamber) in the children. When we have completed this analysis, we will have a better idea of the size and shape of the eye in children with Down's syndrome, and whether the differences might explain some of the defects the children's eyes develop.
A spinner is made from the disc in the diagram, and the random variable X represents the number it lands on after being spun. Write down the distribution of X. How could I do this? Basically I'm confused about where to start and what to think. Could anyone explain please?
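Since the diagram is not reproduced here, the numbers below are purely illustrative, but the recipe is general: each outcome's probability is the angle of its sector divided by 360 degrees, and "writing down the distribution" just means tabulating those probabilities. A sketch with a hypothetical disc split into sectors of 180°, 90° and 90°, labelled 1, 2 and 3:

```python
from fractions import Fraction

# Hypothetical sector angles (degrees) for labels 1, 2, 3 -- replace these
# with the angles read off the actual diagram.
sectors = {1: 180, 2: 90, 3: 90}

total = sum(sectors.values())
distribution = {x: Fraction(angle, total) for x, angle in sectors.items()}
print(distribution)  # {1: Fraction(1, 2), 2: Fraction(1, 4), 3: Fraction(1, 4)}
```

For this made-up disc the distribution of X is the table P(X = 1) = 1/2, P(X = 2) = 1/4, P(X = 3) = 1/4; with the real diagram you would substitute the actual labels and angles.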
While the first European contact with Gippsland is thought to have occurred early in the 19th century, it wasn't until the 1830s that settlers steadily began to enter the region for a range of agricultural, timber harvesting and mining purposes, clearing large areas of forests and woodlands and draining swamps. This has led to changes in vegetation and in natural and indigenous fire regimes that have resulted in a potentially more fire-prone landscape. Tom Griffiths, an Emeritus Professor of history at ANU, explained that European newcomers did not realise that the 'open, carefully cultivated landscape' had been created by Aboriginal people over a very long time and required careful management to retain those qualities. Along with using fire for everyday activities like hunting and cooking, Aboriginal people's land management practices included cultural burning to help prevent fire risks and to maintain and protect native habitats. Fire regimes prior to European settlement are reflected in the vegetation that was present prior to indigenous dispossession, settlement and clearing. Modelling has recreated what the vegetation may have looked like in 1750.
Ecological Vegetation Class Groups in 1750 (map)
Ecological Vegetation Class Groups in 2005 (map)
Tolerable fire intervals
Detailed studies and modelling of vegetation in Victoria can explain how it has evolved with natural and indigenous fire. The dependence on fire for reproduction, or the requirement for no fire until plants and trees are old enough to reproduce, can be expressed as upper and lower limits. These can be seen in the following two maps. For example, at Yarram the expected fire interval would be between 15 and 90 years.
Minimum Tolerable Fire Interval (from the growth and reproduction habits of plant species present it is possible to say that fire frequency was generally no less than the figure shown) (map)
Maximum Tolerable Fire Interval (from the growth and reproduction habits of plant species present it is possible to say that fire frequency was generally no more than the figure shown) (map)
Plants and Animals
Species of Concern
Decurrent False Aster
- Common Name: Decurrent False Aster
- Scientific Name: Boltonia decurrens
- Distribution: St. Charles County
- Classification: State and federally endangered
- To learn more about endangered species: explore the links listed below.
The nation’s leading expert on decurrent false aster calls this critically imperiled plant a “floodplain fugitive.” Unable to compete with other plants for sun and space, it sprouts in areas where flooding or other disturbances create patches of bare soil along ditches or in other moist, sandy, low-lying areas. Development of most of its potential habitat has relegated this plant to wet edges of fields, borrow areas and lake shores. Plowing and planting in river bottoms increases soil erosion, which smothers decurrent false aster’s seeds and seedlings beneath a blanket of silt. At the same time, levees have greatly reduced the floods that create bare-soil areas where this species can thrive. Because of decurrent false aster’s gypsy lifestyle, its distribution changes from year to year. This complicates efforts to ensure its survival. The Conservation Department partners with other government agencies and private landowners to document and protect existing populations.
Dig this unique little amphibian! Often mistaken for a toad, the plains spadefoot (Spea bombifrons) has smoother skin and vertical eye pupils, like a cat’s. Toads’ pupils are horizontal. Its defining characteristic is a hard, wedge-shaped spade on each hind leg. This “spade” gives spadefoots their name and equips them well for life underground. Plains spadefoots live along the Missouri River, where they burrow in sandy soil. They are smallish frogs, measuring 1.5 to 2 inches. They eat earthworms and insects.
Plants Go Wild at State Fair
Demo garden showcases native plants’ versatility. Whether your yard is moist and loamy or dry and rocky, you can find ideas about landscaping with native plants at the 2009 Missouri State Fair Aug. 13–23. This year’s demonstration gardens at the Conservation Pavilion follow the fair’s “Rural Lifestyles” theme by inviting fair goers to “Discover Nature Near You.” Features include a rocky glade, a woodland setting, a water feature and sunny border areas. Indoors you will find a riverbank diorama with live native plants. The Conservation Pavilion is near the south end of the fairgrounds. While there, take time to cool off in front of several aquariums or in the air-conditioned discovery room.
Sizing Structures and Predicting Weight of a Spacecraft
Created: Thursday, 01 June 2006
EZDESIT is a computer program for choosing the sizes of structural components and predicting the weight of a spacecraft, aircraft, or other vehicle. In designing a vehicle, EZDESIT is used in conjunction with a finite-element structural-analysis program: each structural component is sized within EZDESIT to withstand the loads expected to be encountered during operation, then the weights of all the structural finite elements are added to obtain the structural weight of the vehicle. The sizing of the structural components also alters the stiffness properties of the finite-element model. The finite-element analysis and structural component sizing are iterated until the weight of the vehicle converges to within a prescribed iterative difference. The results of the sizing can be reviewed in two ways:
1. An interactive session of the EZDESIT program enables review of the results in a table that shows component types, component weights, and failure modes; and
2. The results are read into a finite-element preprocessing-and-postprocessing program and displayed on a graphical representation of the model.
This program was written by Jeffrey Cerro and C. P. Shore of Langley Research Center. For further information, access the Technical Support Package (TSP) free online at www.techbriefs.com/tsp under the Software category. LAR-16878-1
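EZDESIT itself is not listed here, so the following is only a schematic sketch of the iterate-until-converged sizing loop that the description implies; every helper name in it is a hypothetical placeholder rather than part of the real program.

```python
def size_vehicle(model, load_cases, tol=1e-3, max_iter=50):
    """Schematic sizing loop (hypothetical API): size each component for its
    loads, update the finite-element stiffness, and repeat until the total
    structural weight converges to within a prescribed relative difference."""
    previous_weight = float("inf")
    for _ in range(max_iter):
        element_loads = run_fe_analysis(model, load_cases)   # placeholder FE solver call
        for component in model.components:
            component.size_for(element_loads[component.id])  # resize to survive its loads
            model.update_stiffness(component)                # resizing changes stiffness
        weight = sum(c.weight() for c in model.components)   # structural weight estimate
        if abs(weight - previous_weight) <= tol * weight:    # convergence test
            return weight
        previous_weight = weight
    raise RuntimeError("sizing iteration did not converge")
```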
In a bid to enable a world of ultrafast photonics-based chips, scientists in the US have developed the smallest semiconductor laser to date. Photonic chips are considered one way to ensure computer chips become increasingly powerful in future, with light used to communicate information far faster than relatively sluggish electrons. Furthermore, light-based on-chip communications would not generate heat in the way that conventional electronic devices do, a substantial concern as chips shrink in line with Moore’s Law. Actually developing working systems and the relevant on-chip components is tricky, however, and is the focus of research in many labs. Now researchers at the University of Texas believe that they have made a big step towards enabling photonic systems with what they claim is the smallest laser in the world made from semiconductor materials. Developed alongside universities in Taiwan and China, the nano laser, invisible to the human eye, could help form the basis of future photonic chips by closing the size gap that currently exists between electronic and photonic components. This has been one of the main barriers to incorporating photonic devices onto chips in the past; the nanoscale device could change this. The research team says that it has crucially managed to create a laser which can function below what is known as the three-dimensional optical diffraction limit. By creating a device that can operate well below this limit, the researchers claim the research will have a “large impact” on nanoscale technologies such as future photonic chips as well as a new generation of sensors.
Use the accident prevention pyramid in your safety training or toolbox meetings with your employees to create awareness of the accident prevention pyramid and the steps which must be taken to address unsafe acts and conditions before they lead to incidents, severe injuries, and even fatalities. Most accidents in the workplace involve both unsafe conditions, such as inadequate ventilation or improper storage of hazardous materials, and unsafe actions, such as bypassing controls or failing to wear personal protective equipment (PPE). Unsafe acts and conditions lead to progressively more serious injuries and even fatalities, as you can see in the pyramid. Organizations must work to eliminate both unsafe conditions and unsafe actions in order to bring down these other numbers. Most employers only focus on incidents that fall in the top 3 sections of the pyramid: medical attention, lost time and fatalities. Why is this the case, you may ask? Because there is a cost or time element associated with these types of incidents. Ask yourself, “When is the last time I had a manager/supervisor or even an employee mention a near miss to the organization or safety committee?” If you start focusing on the near misses and unsafe actions, you will start changing the safety culture of your organization and the behaviors of your employees.
Addressing Unsafe Conditions:
- Unsafe conditions should be discovered by hazard assessments, including job hazard analyses (JHAs).
- Ideally, hazards should be completely eliminated or substituted with safer options. If this is not possible, hazards should be managed with engineering controls, administrative controls, and PPE (PPE should be considered a last resort because it requires employee compliance with wearing it).
- Conditions should be monitored with regular inspections, audits, and safety observations.
Addressing Unsafe Actions:
- Organizations must coach and train employees in safe behaviors.
- Organizations must also develop a good safety culture by getting all employees and all levels of management involved in the safety program.
- Hold your supervisors and managers accountable for accidents and incidents that occur in their department.
- The organization must be very clear about safety priorities. Management and supervisors must lead by example and be held accountable when not following your company's policies and procedures.
- Regular inspections, audits, and safety observations should also note employee behaviors and their understanding of safety procedures.
- Good safety behaviors should be rewarded and reinforced, and bad safety behaviors must be counseled and disciplined.
As Congress hunts for ways to push its stem-cell bill past an expected veto, states are charging ahead on their own. Last month, Gov. Eliot Spitzer kicked off plans to spend $1 billion on New York-based stem-cell research over the next decade. Spitzer is following the lead of California, whose massive $3 billion effort pioneered the state-level stem-cell surge two years ago. Similar, if smaller-scale, efforts are afoot in Connecticut, Florida, Illinois, Maryland, and New Jersey. In backing stem cells, state leaders are promising miracle cures for deadly diseases such as Alzheimer's disease, Parkinson's disease, and AIDS—and telling voters that those miracles can be had for free. Spitzer promised during his State of the State address in January that the stem-cell investment "will repay itself many times over in increased jobs, economic activity and improved health." This sort of claim appears to have originated with a study produced in the run-up to the 2004 vote on California's initiative. The authors, Stanford University health economist Laurence Baker and Bruce Deal of the Analysis Group, concluded that stem-cell research would generate state revenues and health-care savings of $6.4 to $12.6 billion over the 30 years it will take to pay off the state bonds used to fund it. California's $3 billion investment would not only pay for itself and another $2.4 billion in bond interest payments, it would also turn the state a profit of at least $1 billion. But the Baker-Deal numbers look hopelessly optimistic. To begin with, they assume that stem-cell treatments will work in the first place. Many of the most hyped biotechnology innovations of the last 25 years have yet to live up to their early promise. And when they do work, they often tend to improve medical care at the margins instead of revolutionizing it. If medical treatments can be derived from stem-cell research, they are at least a decade or two away, if history is any guide. Even then, new therapies envisioned by supporters, such as diabetes treatments that regenerate insulin-producing islet cells, might add to government health-care costs instead of curbing them. The Baker-Deal report figured that stem-cell therapies could save California at least $3.4 billion in health-care costs over the next three decades by assuming the therapies would reduce state spending on six major medical conditions by 1 percent to 2 percent. While the authors cast that as a "conservative" estimate, they don't even model the possibility that costs might rise instead. Recent medical advances haven't appreciably slowed growth in overall U.S. health-care spending, which continues to rise far faster than inflation. Ideally, of course, stem-cell therapies would start a trend in the opposite direction by reducing or eliminating the need for expensive and often lifelong medical care. For that to happen, though, the new treatments would need to largely replace existing ones at a reasonable price, and then doctors would have to use them sparingly—for instance, only on the patients most likely to benefit. None of these assumptions is a particularly good bet under the current U.S. health-care system, in which new treatments are often simply added to older ones, and where insurers so far have tended to pay top dollar for incremental medical advances. 
Baker and Deal also suggest that stem-cell support could yield California as much as $1.1 billion in royalty income, assuming that companies who license the rights to new discoveries pay 2 percent to 4 percent of treatment sales back to the state. But as Richard Gilbert, a University of California at Berkeley economist, pointed out in a recent critique of the study, most basic research doesn't yield commercial products, and the actual returns on commercialized research tend to be far lower than the level Baker and Deal assume. Gilbert estimates that California's total royalty income could be as low as $18 million in current dollars, or just 0.6 percent of its $3 billion investment. (Here's some of his reasoning.) As he dryly notes in his paper, "If income generation were the sole justification for stem cell research funding (which of course it is not), the State would be better off investing in its own municipal bonds." What about the potential of stem-cell research to spur economic development—can a state that sponsors stem-cell research hope to attract cool scientists who will then draw others, plus a coterie of entrepreneurs and venture capitalists? Biotech companies do tend to cluster in places like San Francisco and Boston, but their overall impact on regional economies tends to be limited. While they often pay high salaries, the vast majority of these companies are tiny, unprofitable startups with fewer than 100 employees. They frequently collapse well before they earn a dollar in sales. Even successful biotech ventures are often bought out by distant drug companies, which sometimes shut down the acquired company while transferring its research activities and any products elsewhere. On top of all that, big states like California and New York are going to wind up competing for some of the very same scientists, VCs, and entrepreneurs, further shrinking the rewards. Why did Baker and Deal see dollar signs? The $200,000 stem-cell supporters paid to Deal's firm, the Analysis Group, for campaign consulting might have something to do with it. In an interview, Baker said he didn't think of the report as advocacy but added that "we knew we were working for people who wanted to pass this thing." And while he still believes the economic benefits of stem-cell research could be "quite large," Baker also describes the report as merely "one possible version of how things might happen." None of this means that stem-cell research doesn't deserve government funding. Stem-cell science, after all, remains in its infancy. Nearly a decade after the discovery of embryonic stem cells in humans, scientists still don't know exactly how they work, how to assure their purity, or what unexpected side effects they might have when transplanted into the human body. At this stage of basic research, private funding is in short supply precisely because it's not clear where the payoff lies. This is where the federal government should come in. But a 2001 executive order from President Bush prevents federally funded scientists—that is, the bulk of academic biomedical researchers in the United States—from creating new embryonic stem-cell lines or even studying new lines developed elsewhere. So, the states are right to ante up where the federal government has failed to. They just shouldn't expect to do well while they're doing good. 
Even if stem cells do yield a large, near-term commercial payoff, there's another flaw in the Baker-Deal analysis: It doesn't account for the opportunity costs associated with waiting two decades for royalty payments. The basic issue is that a dollar today is worth more than a dollar a year from now. This notion—technically known as the "time value of money"—is the reason banks pay interest on savings accounts. At an interest rate of 5 percent, a dollar today is worth $1.05 a year from now. Flip that around, and the "present value" of a dollar expected one year from now is only about 95 cents. These are particularly important calculations if you're looking at investment opportunities, since even fairly large returns are worth considerably less in today's dollars if the payoff doesn't arrive for decades. You can probably see where this is going. Under the Baker-Deal assumptions, the bulk of royalty income flows in about 20 years from now, but the report doesn't discount those sums to their present value. When Berkeley economist Richard Gilbert applied a standard discount rate of 5 percent to the Baker-Deal figures—a conservative figure derived from the interest rate on 10-year Treasury bonds—he found the present value of expected royalty income is about two-thirds less than the report indicates. The Baker-Deal analysis has proven overly optimistic in other respects, too. Gilbert also modeled the California program's actual royalty policy, which of course didn't exist at the time Baker and Deal did their study. The economist found that payments to inventors and other limits in the policy reduced the state's expected royalty income even further—to just 0.6 percent of the overall research budget in the worst case. When I talked to him, Baker argued that the effect of discounting future royalties would be largely balanced out by other factors he and Deal also left out of the report, such as a potential increase in overall health-care usage. Of course, if people use medical services more frequently in the future, overall health-care costs are unlikely to go down, which might explain why the report didn't model that possibility in the first place. David P. Hamilton is a freelance writer in San Francisco.
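To make the discounting arithmetic concrete, here is a small sketch. The 5 percent rate and the roughly 20-year horizon come from the article; the $1.1 billion royalty figure is used purely as an illustrative input (it is the upper-end royalty estimate quoted earlier), not as a reproduction of Gilbert's actual model.

```python
def present_value(amount, rate, years):
    """Discount a future cash amount back to today's dollars."""
    return amount / (1.0 + rate) ** years

# A dollar promised one year out, discounted at 5 percent, is worth ~95 cents today.
print(round(present_value(1.0, 0.05, 1), 3))            # 0.952

# Illustration: royalties of $1.1 billion arriving about 20 years from now,
# discounted at 5 percent, are worth far less in today's dollars.
print(round(present_value(1.1e9, 0.05, 20) / 1e9, 2))   # about 0.41 (billion)
```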
If you are listening to this in audio, please do follow the notes as well, as the pictures are not included.
Logic Gates
- An electronic circuit that performs a Boolean function.
- There are 3 main logic gates: AND, OR and NOT (in the picture below the NOT gate is called the inverter). There are other gates that combine the function of a NOT gate with an AND or an OR gate; these are known as the NAND gate and the NOR gate.
- Logic gates are the basis of all logic circuits; the inputs and outputs of electronic logic gates are in the form of voltages.
Combination of AND, OR and NOT Gates
- AND, OR and NOT gates are connected together to perform Boolean functions.
- When we want to invert a Boolean equation we use a bar, which is a line drawn on top of the equation.
Exclusive OR Function
- The output is true if either input is true, but not if both inputs are true.
- Exclusive OR is also known as the EX-OR or XOR function.
- Look at the truth table to see the difference more clearly: OR gives an output of 1 if any of the inputs are true, whereas XOR gives a 1 only when exactly one input is true (see the short truth-table sketch after this list).
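As a quick check of the OR/XOR difference described in the notes above, this minimal snippet (not part of the original notes) prints both truth tables side by side:

```python
# Truth tables for OR and XOR over all four input combinations.
print("A B | OR XOR")
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} {b} |  {a | b}   {a ^ b}")
```

For inputs 1 and 1 the OR column shows 1 while the XOR column shows 0, which is exactly the difference the notes describe.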
Bipolar disorder is a brain disorder characterized by extreme or unusual shifts in a person's mood, energy, and ability to function. Bipolar disorder affects approximately 1% of the population, and more than 2 million American adults have bipolar disorder. Research has proven that bipolar disorder has a clear and recognized genetic component. However, the mode of inheritance of bipolar disorder is complex and likely to involve multiple genes. Investigators at TGen are using genome-wide association studies to identify the most common mutations (genetic variations from person to person) that lead to sporadic bipolar disorder. By sifting through hundreds of thousands of unique genetic "markers" or signatures across the genome, researchers can identify genes containing common predisposing mutations. Once these genes have been identified, it is possible to translate these discoveries into new therapeutics and treatment options.
Neurobehavioral Research Unit at TGen
Your donation in support of bipolar disorder research at TGen is important to furthering the institute's mission to develop earlier diagnoses and smarter treatments for patients. Volunteer! Become a TGen Ambassador and help raise funds in your community for bipolar disorder research. To discuss ways to become involved and support research at TGen, contact the TGen Foundation at Foundation@tgen.org, or call 602.343.8411.
Waist size can predict your risk of certain cancers, according to a World Health Organisation report published in the British Journal of Cancer. Researchers compared BMI, waist measurement, and waist to hip ratio to find out which was the most reliable predictor. They found that waist size was at least as useful as body mass index (BMI), the ratio of weight to height. Waist measurement can be a more useful indicator of cancer risk than BMI because it is closely correlated with levels of ‘visceral fat’, or abdominal fat, which is known to increase cancer risk. The study shows that an 11cm (4.3 inch) increase in waist size increases the average risk of 13 obesity-related cancers (including kidney, breast and bowel cancer) by 13 per cent. Just 8cm extra — that is, about three inches — increases the risk of bowel cancer by 15 per cent. The researchers, from the WHO’s International Agency for Research on Cancer, looked at data gathered from 43,000 study participants followed for an average of 12 years. In that time over 1,600 of them were diagnosed with an obesity-related cancer. Dr Heinz Freisling, the study’s lead author, said: ‘You only need to put a tape measure around your belly button. This is easy to do and can give a person an indication of whether their risk for specific cancers is increased or not – for instance, [for] pancreas or liver cancer, which are known to be related to increased body fatness or obesity. ‘Our findings show that both BMI and where body fat is carried on the body can be good indicators of obesity-related cancer risk. Specifically, fat carried around the waist may be important for certain cancers, but requires further investigation. ‘To better reflect the underlying biology at play, we think it’s important to study more than just BMI when looking at cancer risk. And our research adds further understanding to how people’s body shape could increase their risk.’ Most people are well aware that being overweight causes significant long-term health risks. Indeed it is the single biggest preventable cause of cancer after smoking. Conventional advice is often to measure the body mass index (BMI) but this study suggests that measuring waist size may be a better marker for potential cancers. It found that an increased risk of certain cancers and type-2 diabetes developed at a waist measurement of 40 inches (102cm) for men and 35 inches (88cm) for women. It showed that adding 11cm to the waistline increased the risk of obesity-linked cancers by 13 per cent and, for bowel cancer in particular, adding around 8cm to the hips was linked to an increased risk of 15 per cent.
Science – Anticipating the future under the influence of climate change is one of the most important challenges of our time, and the topic of the special section in this issue of Science (see p. 472). The natural systems that provide oxygen, clean water, food, storm and erosion protection, natural products, and the potential for future resources, such as new genetic stocks for cultivation, must be protected, not just because it is part of good stewardship but also so that they can take care of us. But even the first step of modeling the effects of greenhouse gas sources and sinks on future temperatures requires input from atmospheric scientists, oceanographers, ecologists, economists, policy analysts, and others. The problem is even more difficult because the very factors that influence temperature changes, such as ocean circulation and terrestrial ecosystem responses, will themselves be altered as the climate changes. With so many potential climate-sensitive factors to consider, scientists need ways to narrow down the range of possible environmental outcomes so that they know what specific problems to tackle. Researchers have turned to the geologic record to obtain ground truth about patterns of change for use in climate models. Information from prior epochs reveals evidence for conditions on Earth that might be analogs to a future world with more CO2. Projections based on such previous evidence are still uncertain, because there is no perfect analog to current events in previous geologic epochs; however, even the most optimistic predictions are dire. For example, environmental changes brought on by climate changes will be too rapid for many species to adapt to, leading to widespread extinctions. Even species that might tolerate the new environment could nevertheless decline as the ecosystems on which they depend collapse. The oceans will become more stratified and less productive. If such ecosystem problems come to pass, the changes will affect humans in profound ways. The loss in ocean productivity will be detrimental for the 20% of the population that depends on the seas for nutrition. Crops will fail more regularly, especially on land at lower latitudes where food is in shortest supply. This unfavorable environmental state could last for many thousands of years as geologic processes slowly respond to the imbalances created by the release of the fossil carbon reservoir. The time scale for biodiversity to be restored, with all the benefits that it brings, will be even longer. Unfortunately, I view these predicted outcomes as overly optimistic. We are not just experiencing increases in greenhouse gas emissions but also eutrophication, pollution of the air and water, massive land conversion, and many other insults, all of which will have interacting and accumulating effects. The real problem we need to solve in order to truly understand how Earth’s environment may change is that of cumulative impacts. Although the Paleocene-Eocene Thermal Maximum (about 55 million years ago) is the time period considered to be a reasonable analog to a higher-CO2 future, the planet was not experiencing these other stressors and climate change simultaneously. So terrestrial species that survive a climate impact alone may face extinction if reduced to a fraction of their natural range through deforestation and habitat fragmentation. Marine species that are mildly susceptible to ocean acidification may not be able to tolerate this condition plus low oxygen levels. 
Sometimes the science of cumulative impacts is straightforward—for example, connecting habitats to provide migration corridors in response to sea-level rise brought on by climate change. But even “clear-cut” cases require extra work, more partnerships, and more time to address. Tackling problems of cumulative dimensions is a priority if we are to find viable solutions to the real environmental crises of the coming decades. There is a need for all scientists to rise to this challenge.
Source and Photo: Science 4th, 2013
Key Takeaway: The Think! Game created by Reynato Sian is more challenging and better than chess with its more complex yet realistic rules not based on war strategies but on real-life principles that develop both intelligence and character. Its supporters call for it to be integrated into national education to improve individual, organizational, and analytical skills more effectively than chess: a socially innovative way to increase the impact of education with a concept celebrating local talent. First of all, I’d like to wish my beloved 二哥 a happy 39th birthday! May God continue to bless you and keep you. Last week, I featured a social enterprise that promoted, among others, Pinoy pride and talent. That day also happened to be my 哥哥’s 42nd birthday according to the lunisolar calendar and Chinese reckoning. Being the man who taught me how to play chess, unlocking a quiet but continuous passion for the game that persists until now, I dedicate this post to him. Chess is a ubiquitous game that exemplifies the concept of “easy to learn, but difficult to master”. Played by people over the world it may be, it takes a dedicated mind to master the game. But it pales in comparison to one humble but mind-boggling game, a game whose roots can be found in Western Visayas. It is harder than chess due to its more complex rules and concept – but its creator believes this is good, as it will be more effective in unlocking the limitless potentials of the human mind that the current educational system is not as good in doing. In fact, efforts are being made to integrate it into the educational system, as they believe it will be better than chess in developing children’s logical, analytical, and critical skills. Say hello to the Think! Game. Featured on Rappler, the Think! Game is promised to be a wonder in training the human mind and developing the youth of today. Created in the 1970s by one Reynaldo Sian, a Negrense from Bacolod, the game was conceived to be a response to the perceived lack of the element of reality in chess. Sian believed that chess was not realistic enough, being based on war strategies, and not fully engaging the faculties of the human mind. Resembling a mix between chess and Chinese checkers, Think! is a two-player game with 19 pieces each, on a 61-space hexagonal board. The 19 pieces are named after man’s mental faculties – instinct, reason, judgment, logic, wisdom, and mind. The Mind is similar to the King of chess – the objective is to capture the opponent’s. To do so, a very deep strategy must be employed – one that really develops one’s mental skills. A commenter in the aforementioned Rappler article, Mr. Vic Duran, shared the following YouTube video of how it is played: Sian and the game’s supporters, who call themselves thinknologists, believe that it is the answer to a lackluster educational system in the country – effective but fun and interactive learning. They are calling for the game to be integrated into the educational system, primary to tertiary – just as how Russia, the US, Australia, and even the Philippines itself have inserted chess into their learning programs. For Sian and company, however, Think! will be better than chess in achieving these objectives – and it is a Filipino creation, thereby fostering an appreciation for the country and the genius of its people. Although the game grew quite popular and acclaimed among national and international chess masters, Sian eventually suspended the project because of the “disadvantageous nature” of business offers. 
Today, however, modern thinknologists are lobbying for it – by spreading awareness and soliciting production offers in going from barangay to barangay. I would personally – and I am saying this out loud now – be willing to go into business with thinknologists to produce and distribute the game, turning it into a social enterprise in the process. First, it promotes responsible learning with its character-building and mentally-enhancing gameplay. Second, it’s proudly Pinoy made. Third, I love board games. Thanks to my regular contributor, Kevin Christopher M. Tee, for sharing the Rappler link with me. If you are a, or know any, thinknologist, please let me know. I would love to meet them and partner up with them. Contact me by commenting below or by leaving an email at firstname.lastname@example.org.
Doors and windows are an indispensable part of homes. When it comes to doors and windows, many individuals pick aluminum alloy doors and windows. The decorative effect of aluminum alloy doors and windows is very good, with performance characteristics such as wind pressure resistance, water tightness, thermal insulation, etc. Thus, aluminum alloy door and window profiles have become a key element of home decoration. This article will introduce what aluminum alloy doors and windows are and how to choose them.
1. What is an aluminum alloy door and window?
Aluminum alloy: It is based on aluminum with some alloying elements added to enhance strength and hardness.
Ordinary aluminum alloy profile: The inside and outside are connected without an air layer, the inside and outside colors can only be the same, and the surface is sprayed with an anti-corrosion treatment. An ordinary aluminum profile is a conductor as a whole, so heat transfer and heat dissipation are relatively fast.
Broken bridge aluminum alloy profile: It is divided into two parts during processing, and the two parts are connected into a whole with PA66 nylon strips to form three air layers. The inside and outside do not conduct heat to each other, a temperature difference between inside and outside can be maintained, and the inside and outside colors can be selected independently.
About the wall thickness
The wall thickness of the main stress-bearing part of the window profile is not less than 1.4 mm; for high-rises of more than 20 floors, the thickness of the profile can be increased. The wall thickness of the main stress-bearing part of the door profile is not less than 2.0 mm, which is in line with the national standard for wind pressure resistance. Single doors and windows larger than 3-4 square meters can have their thickness increased, and columns can be added if they are very large.
About the thermal conductivity
The heat transfer coefficient describes how quickly heat inside the room is conducted to the outside over time; the lower the value, the better the insulation.
- The heat transfer coefficient of standard aluminum alloy doors and windows is between 3.5 and 5.0.
- The heat transfer coefficient of broken bridge aluminum alloy doors and windows is approximately 2.5-3.0.
- The heat transfer coefficient of system aluminum alloy doors and windows is between 2.0 and 2.5.
2. How to choose aluminum alloy doors and windows?
According to the thickness
Aluminum alloy doors and windows are divided into several series according to the thickness of the door and window frame. If the thickness of the door frame is 90 mm, it is called a 90 series aluminum alloy door or window. There are 70 and 90 series aluminum alloy sliding doors, and the 70 series can be used for aluminum alloy doors inside the house. There are 55, 60, 70 and 90 series aluminum alloy sliding windows. When you choose a series, you should generally decide according to the size of the window opening and the local wind pressure value. Aluminum alloy windows used to enclose balconies should not be smaller than the 70 series.
Note: The thickness of the aluminum alloy profile directly determines the quality of the aluminum alloy doors and windows.
When consumers choose aluminum alloy doors and windows, they should not go for the cheapest option and choose profiles that are too thin. In addition, the thickness of the aluminum alloy profile also determines the price of the product.
According to the strength
Both the tensile strength (157 Newtons per square millimeter) and the yield strength (108 Newtons per square millimeter) requirements should be met. When purchasing, you can bend the aluminum alloy profile moderately with your hands; it should return to its original shape after letting go.
According to the chromaticity
The same aluminum alloy profile should have a consistent hue. If the color difference is noticeable, it is not appropriate to purchase.
According to the flatness
Look at the aluminum alloy profile's surface; there shouldn't be any bulges or depressions.
According to the glossiness
Avoid purchasing aluminum alloy profiles with open air bubbles (white spots) or ash residue (black spots) on the surface, as well as obvious defects such as cracks, burrs, and peeling.
According to the oxidation degree
The oxide film thickness should reach 10 microns. When purchasing, you can lightly scratch the profile's surface to check whether the oxide film can be scraped off.
The above is a brief introduction to the purchase of aluminum doors and windows. If you have special needs, please contact us. As a professional supplier of aluminum products, CHAL supplies all kinds of finished products, from building profile systems to industrial aluminum products, mainly including aluminum windows and doors, aluminum curtain walls, aluminum sunrooms, aluminum garden gates, garden fences, aluminum handrails, aluminum stairs, all-aluminum furniture, aluminum goods shelves, etc. Just tell us your needs and we will be glad to serve you.
Skyscraper Wars: The Petty Battle For the World's Tallest Building By the late 1920s, rich people were getting really bored. Illegal booze, jazz parties, and generally Great Gatsbying it up wasn't doing it for them anymore, so they decided to start chasing world records. Specifically, in 1929, banker George L. Ohrstrom decided to build a 47-story skyscraper that would be called the Bank of Manhattan Trust Building, but then he heard about automotive mogul Walter Chrysler's plans to build the 808-foot Chrysler Building, which would be the tallest building in the world. Ohrstrom's architect, H. Craig Severance, drew up new plans for an 848-foot building, and the race was on. Both were determined to the point of mania, rushing their crews to finish in a frankly alarming period of time just to be the world's tallest building for even a minute. By April 1930, both buildings were completed, but then Severance started getting paranoid. The crew at the Chrysler Building were getting all these weird, unexplained deliveries, so even though the BMT was planned to be the taller of the two, he decided at the last minute that they needed some insurance in the form of a 35-foot lantern and 50-foot flagpole, topping the building out at 925 feet. He was right to be worried. Chrysler had an ace up their sleeves: a 175-foot spire that was built in secret inside the building and mounted in 90 minutes at the very end of construction, putting the Chrysler Building at over 1,000 feet. It might very well be the most expensive physical manifestation of "What now, bitch?" And it was all for nothing. Within a year, the Empire State Building thrust its 1,250-ft dong into the sky, having been similarly built at a rate of a floor a day, apparently just to put that Chrysler asshole in his place. By then, the Great Depression had kicked off in earnest, and half of its 102 floors just sat unused, earning it the nickname "the Empty State Building" and proving that rich people will literally build a useless skyscraper instead of going to therapy. Top image: Misterweiss/Wikimedia Commons
Chinese Kung Fu is a large system of theory and practice. It combines techniques of self-defense and health-keeping. Chinese Kung Fu is estimated to date back to primeval society, when people used cudgels to fight against wild beasts and gradually accumulated experience of self-defense. When the Shang Dynasty began, hunting was considered an important form of Kung Fu training.
Kung Fu in the Shang and Zhou Dynasties
During the Shang and Zhou Dynasties (17th century BC - 221 BC), martial arts evolved into a kind of dancing. Usually this martial-arts dancing was used to train soldiers and inspire their morale. During the Zhou Dynasty, martial-arts dancing was designated as a component of education. The application of wrestling techniques on the battlefield received much attention from the various states during the Spring and Autumn period. The emperor of the time held wrestling contests twice a year, in spring and autumn, so as to select excellent martial artists. At the same time, the skill and technology of sword forging, as well as sword ceremony, developed rapidly.
Kung Fu in the Qin and Han Dynasties
In the Qin (221 BC - 207 BC) and Han (202 BC - 220 AD) Dynasties, wrestling, swordplay, and martial-arts dancing were very popular. A well-known instance was Xiang Zhuang's sword dancing at the Hongmen Banquet in the same period. His performance was very close to today's martial arts. The application of spear play in the Han Dynasty reached its summit, along with the appearance of many other techniques of spear usage. The Five-Animal-Style exercise attributed to Hua Tuo was another innovation in the development of Chinese martial arts.
Kung Fu in the Tang Dynasty
Starting from the Tang Dynasty (618 - 907), Kung Fu examinations were proposed and implemented. Excellent candidates would receive titles and awards through the examination, greatly propelling the development of martial arts. By then martial arts had evolved into an artistic form and an independent genre, and were gradually introduced to many countries in Southeast Asia. Today Kung Fu is honored as the ancestor of kickboxing, karate, aikido, and judo.
Kung Fu in the Song and Yuan Dynasties
The Song (960 - 1279) and Yuan (1206 - 1368) Dynasties witnessed a climax in the development of Kung Fu. The practice of Kung Fu by civil organizations became more and more popular. Some organizations or clubs centered on the use of spear play and cudgel, and they were called Yinglue organizations, while others specialized in the practice of archery and were therefore called archery organizations. Besides, there appeared another group called the Luqi people, who made a living as performers of martial arts all over the country. Usually their performances were carried out by a single person or by two persons as a pair.
Kung Fu in the Ming and Qing Dynasties
Chinese Kung Fu developed further in the Ming (1368 - 1644) and Qing (1636 - 1911) Dynasties. In the Ming Dynasty, a lot of genres came into being and numerous books on martial arts were published. In the Qing Dynasty, the ruling empire banned the practice of martial arts, and people had to set up various clubs or societies to pass down their skills secretly. Thus tens of schools of martial arts came into being, such as taiji, xingyi shadowboxing, eight-diagram shadowboxing, etc. The Qing Dynasty was a time of integration among different martial arts genres. Wrestling techniques were introduced into martial arts, facilitating their improvement and maturity.
This period marked the watershed between genres intended for appreciation and those intended for actual combat. Kung Fu in Modern Times In 1927, the Central National Martial Arts Society was established. In August 1936, the Chinese martial arts team went to Berlin to take part in the Olympic Games. In 1956, the Chinese Martial Arts Association set up martial arts teams. In 1985, the International Martial Arts Invitational Tournament was held in Xi'an, and the International Martial Arts League was established. In 1987, the first Asian Martial Arts Tournament was held in Yokohama. In 1990, martial arts were for the first time listed as a competition event in the 11th Asian Games. In 1999, the International Martial Arts League was invited to become a member of the International Individual Events Federation recognized by the International Olympic Committee, a sign of Chinese martial arts going global. Experience Kung Fu in China Our tours are customizable, so you can discover more of what you're interested in. Enjoy a kung fu show in Beijing, practice tai chi with the locals on Shanghai's Bund, visit the Shaolin Temple and meet the monks who practice kung fu… If you want to learn more about kung fu, let us know and we will tailor-make your trip.
The Blind Juggler demonstrates that high-performance robotic juggling is possible without any sensors detecting the ball. The Swinging Blind Juggler takes this concept one step further: it is the Blind Juggler strapped to a pendulum. The ball is struck at the peak angles of the pendulum, resulting in a side-to-side juggling of the ball. The Swinging Blind Juggler achieves stable juggling without any sensors to detect the ball. The same paddle shape and motion that stabilize the ball on the Blind Juggler also keep the ball from falling off the Swinging Blind Juggler. While the ball motion is stable without sensing, we need feedback to control the pendulum motion. Blind juggling is only possible if the pendulum remains synchronized to the ball motion. We developed a feedback strategy that uses special paddle motions to control the pendulum motion. The physics that make this possible are similar to what children intuitively use to control their amplitude on a swing (a simplified sketch of this swing-pumping idea follows below). In the Swinging Blind Juggler section, you will: - Find out more about the four-bar linkage, which is the mechanism that ensures that the paddle is perpendicular to the ball velocity at impact. - Learn about how the Swinging Blind Juggler imitates a child on a swing to synchronize the pendulum to the ball motion in the section on feedback control.
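As a rough illustration of the swing-pumping idea mentioned above, here is a minimal Python sketch. It is not the controller used on the actual Swinging Blind Juggler; it simply reduces the pendulum to its peak swing angle, which decays a little every swing and receives a small, bounded corrective "kick" from the paddle at the peak, sized by a proportional feedback law. The damping factor, gain, limits, and target angle are all invented for the example.

```python
# Minimal sketch of swing-pumping amplitude control (illustrative only).
# Assumption: the pendulum is reduced to its peak angle, which keeps a fixed
# fraction of its amplitude each swing (damping) and can be nudged slightly
# by the paddle once per swing, at the peak.

DAMPING = 0.97        # fraction of amplitude kept after each swing (assumed)
GAIN = 0.4            # proportional feedback gain (assumed)
TARGET_DEG = 25.0     # desired peak angle of the pendulum, in degrees (assumed)
MAX_KICK_DEG = 2.0    # the paddle can only change the amplitude a little per swing

def pump(amplitude_deg: float, target_deg: float = TARGET_DEG) -> float:
    """One swing: natural decay plus a bounded feedback kick applied at the peak."""
    amplitude_deg *= DAMPING                                   # energy lost to friction and drag
    error = target_deg - amplitude_deg                         # distance from the reference amplitude
    kick = max(-MAX_KICK_DEG, min(MAX_KICK_DEG, GAIN * error)) # bounded proportional correction
    return amplitude_deg + kick

if __name__ == "__main__":
    amplitude = 5.0   # start with a small swing
    for swing in range(1, 21):
        amplitude = pump(amplitude)
        print(f"swing {swing:2d}: peak angle ~ {amplitude:5.2f} deg")
```

On the real machine the correction would come from slightly altering the paddle stroke at impact, but the feedback structure (measure the swing, compare it to a reference, apply a bounded adjustment once per swing) is the same intuition a child uses on a playground swing.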
Do minerals in drinking water promote health? Many advocates of very low-mineral drinking water claim: - Minerals in our tap water damage the body by being deposited as so-called slags in the connective tissue, - minerals are only found in water in inorganic form and can therefore not be absorbed by the body or only with great energy expenditure. In the long run, this leads to diseases such as dementia or arthrosis. On the surface, this statement is correct. However, water in its structured form, when energised, forms so-called hydration shells around the minerals so that we can absorb and utilise the minerals. In nature, water occurs in a structured way and we can replenish our mineral supplies. However, if the structure of the water is destroyed, the differently charged substances attract and form compounds with each other. For example, calcium and hydrogen carbonate are present in structured water, whereas in unstructured water they form a bond with each other and combine to form calcium bicarbonate (colloquially known as lime). Everyone knows the deposits that come out of this, in the kettle or the washing machine. Our blood vessels calcify in the same way. What applies to calcium naturally also applies to the other minerals in water. As soon as water loses its structure, the important elements form worthless or even harmful compounds with their antagonists. And so the vital drink becomes a health hazard. Another aspect takes us to the Blue Zones (regions of the world where the inhabitants live much longer on average with the best quality of life). For example, the inhabitants of Nicoya (Costa Rica) drink water that flows through limestone and is very rich in calcium and magnesium. Nicoya belongs to these Blue Zones. The same is true of the Blue Zone on the Japanese island of Okinawa - there, too, the inhabitants have only very mineral-rich drinking water at their disposal. Incidentally, the water loses its structure if, among other things, you add carbonic acid to it, transport it with pressure above 2.5 bar or treat it with ozone (ozone is the strongest oxidant used in drinking water practice and has proven itself in the treatment of drinking water). Structured and thus cell-available water is obtained with our YVE-BIO® water filter systems.
Top 10 Historic Brutal Executions Capital punishment has come a long way in terms of how executions are carried out. In most civilized countries, we have humane ways to put criminals to death. Even in third-world or semi-civilized countries and kingdoms, beheading is pretty much the worst of it. We also hear of torture and humiliation techniques that we believe to be inhumane and downright barbaric. Folks, what happens in today's world is child's play compared to how we used to carry out punishments just a few hundred years ago. In fact, humankind has become downright gentlemanly about capital punishment. Read on as you discover the horrible and painful ways in which brutal executions were carried out in our Top Ten List of Historic Brutal Executions. 10. Saint Andrew Bobola 1657 Saint Andrew was an advisor, preacher, and teacher of the Jesuits. He was well respected and loved, and he died at age 65 in the year 1657. This was not a great period in which to be a Christian, as Christians were tortured and martyred for their belief in Jesus and their refusal to deny Him. His death was neither quick nor peaceful. The terrible torture that Saint Andrew endured inspired many people with his faith in the Lord and his willingness to give his life to prove that faith. During his torture, he prayed continuously for his tormentors as they tore off his Holy Habit, tied him to a tree, and scourged him. To "scourge" is to whip with a lash or cat o' nine tails with pieces of bone or metal tied to the ends, allowing it to rip the flesh from the bone. They forced a crown of thorns on his head, ripped out one of his eyes, and burned his flesh repeatedly by jabbing him with torches. Terribly, they were just getting warmed up. One of his captors used his dagger to trace a chasuble, a type of vest worn by priests of the time, into his bare back and then peeled the skin away along the trace. They also tore the skin from his fingers where he had been given the unction, or blessing with drops of oil. Then they drove needles under his fingernails. Finally tiring of his constant prayers for them as they tortured him, they tore out his tongue and crushed his skull. The most amazing part of this is that two hundred years later, when the body was exhumed from the crypt for identification (part of the canonization process), it had not decayed at all. His corpse was in nearly perfect condition. 9. Isaiah 740 BC Many religious historians believe that Isaiah was executed by sawing, a type of capital punishment used in that period. It is one of the most brutal and painful ways in which a man can die. A favorite execution method of the Roman Emperor Caligula, it involved sawing a man in half, lengthwise. The person being executed is hung upside down with each leg tied to a pole so that the legs are spread. The executioner then saws him in half with a huge saw, starting at the groin. The person, screaming in agony, lives much longer than they would like to, as the brain is supplied with blood continuously via gravity and no major arteries are severed until the saw reaches the mid-abdomen. It will also give you a splitting headache. (Sorry, I just could not resist.) 8. Li Si 208 BC Leave it to the Chinese to invent an execution so hideous that it makes their "Water Torture" and "Death by a Thousand Cuts" seem like something we do at our children's birthday parties.
The Five Pains death also has an incredible irony attached to it, as the sick asshole who invented this insidious style of lethal punishment (Li Si, chief advisor to the emperor) was executed with it, giving him first-hand knowledge of his work. It begins with cutting off the condemned one's nose. Then they chop off a hand. After he has suffered a while with no nose and only one hand left to pick it with, if he still had one, they lop off a foot. If that does not make your sphincter pucker, men, grab your balls. The fourth pain is castration, followed by the fifth, which I can only hope they would carry out before the fourth should I ever do something to deserve this (which I will not): being sawed in half at the midsection. Talk about an emperor with little or no sense of humor and absolutely no mercy; this is one death that would be hard to swallow. 7. Mithridates 401 BC Scaphism is a form of extended or drawn-out death that is so insidious I am ashamed to be a part of a species that would invent it. Mithridates, pronounced Myth-ri-date-eez, was just a poor slob in the wrong place at the wrong time. During a battle, Mithridates stuck a dart in the temple of Cyrus the Younger, accidentally mistaking him for the enemy. His punishment was death by bug. Scaphism involves taking two small boats or hollowed-out trunks; the condemned lies down in one while the other is placed over him. The only things sticking out of this little prison are his head, hands, and feet. They force him to drink a huge, bloating amount of honey and milk, causing severe diarrhea. They rub even more honey all over the body of the poor, confused dead man walking (or in this case, dead man crapping). Once Mithridates was locked in, they set him in the water and anchored him so that his face was in constant sunshine, possibly on a scum-filled, stagnant pond. From this point forward, nature takes over. Covered in his own excrement, he is soon swarming with bugs. From the flies, maggots, and beetles that eat decaying flesh to the wasps and hornets attracted by the sweet nectar, his body is stung and eaten away. He was given no food or water, but the force-feeding most likely continued daily to ensure fresh feces for the hungry insects. He became a floating buffet. Hungry, dehydrated, and delirious, poor Mithridates lasted an astonishing 17 days, and he suffered miserably every minute of it. 6. Saint Catherine of Alexandria 307 Saint Catherine was a martyr and a noted scholar. She converted from paganism to Christianity in her teens and was a powerful speaker and converter of souls. She took it upon herself to visit one of the worst persecutors of Christians, the Roman Emperor Maximinus, to convince him to stop. While Maximinus resisted her, she did succeed in converting his wife and many of his advisors. The emperor sent several of his pagan philosophers to convert her, but they returned to Rome as Christians. He was compelled to imprison her, only to have all who visited her leave converted, so he sentenced her to death on the breaking wheel. The wheel, according to legend, broke at her slightest touch, so he had her beheaded. Legend has it that angels came and flew her body away.
Generally, primary sources are works that were generated at the time being studied or by the person being studied. Examples include (but aren't limited to): truth and reconciliation hearings; clubs or associations dedicated to remembering; public memorials to past events or people; historical autobiographies or memoirs; and scientific studies of historical trauma or memory. Secondary sources are generally produced by other scholars who have studied the event or person you are interested in. Generally these include books and articles. The papers that you write for this class will be secondary sources that will help future scholars interested in the same topic you write about.
When thinking about classroom management, class meetings could be the answer to most of your problems. Just think about it. One of the biggest causes of classroom management issues is that students come in with things on their minds. What are class meetings? Read on to learn how to conduct class meetings and get some ideas on how to run them. These ideas come from a book called Morning Meetings for Special Education by Felicia Durden, Ed.D. However, morning meetings don't just have to be for special education. What are class meetings? Class meetings are whatever you make them, but are meant to be a time set aside for checking in as a class, reinforcing key educational concepts, and developing social skills. As stated by Durden, the classroom environment is "pivotal in student achievement." Conducting classroom meetings helps make for a safe, inclusive, orderly, attractive, and comfortable classroom environment. How do you run a class meeting effectively? It's important that you have a format for your class meetings. Creating an orderly routine that is predictable for students is a key component of a class meeting. Make sure that you start, run, and end the class meetings the same way each time. You might begin by having the students each talk about how their mornings went, go into an activity, and then wrap up by addressing any classroom concerns. Having a clipboard where students can register their concerns in order might help ensure this process goes smoothly, so you address each concern in the order in which it is received. This way, it is fairer to the students and you can keep a running record of concerns addressed during the meetings. Your first morning meeting might address the rules and expectations in the classroom, and Durden suggests the following procedures (all you need is chart paper and a marker): - Start by asking the kids why rules are important - Write each kid's ideas on the chart paper - Explain that you'll be working on the rules and expectations for morning meeting, being clear that following rules is important for helping the meetings run more smoothly. - Have a T-chart ready with a picture of an eye and an ear so students can talk about what desired behavior looks and sounds like, but have predetermined rules in mind. - Talk to the kids about what each rule looks and sounds like and record responses on the T-chart. - Model the rules. For instance, "pay attention" looks like sitting up and looking at the speaker and sounds like silence. Then have students model behavioral expectations for each rule. - Reinforce rules daily until students fully understand the expectations. - Post the rules in a prominent location so they can be referred to.
Durden states that starting off this way will help morning meetings go off without a hitch. Some things you can do during class meetings Class meetings don't have to be boring. There are several things you can do during the meetings to make them fun and interesting. Class meetings can be used as a time for community building. In the early days, this might look like establishing classroom jobs, setting up a picture schedule for visual learners and kids with autism, celebrating differences, and several other community building activities. One suggested community activity is building teams, for which you'll need, again, just chart paper and a marker. This activity helps students feel like they are part of the group. Here is the suggested procedure from Durden for this activity: - Begin by telling students that they are part of the classroom community and team. - Explain how teams work together to get things done. - Ask students to share any teams they've been a part of. Write any and all examples on the chart. - Ask students how they think team players act. Write their answers on the paper, ensuring things like working together and supporting others are included. - Tell students they'll be coming up with a classroom team name and a class cheer today. - Have students brainstorm team names and write them on the board. Once a few names are chosen, vote as a class on the team name. - Next, get students to come up with ideas for an appropriate cheer to represent the class. Have them share their team cheers and vote on the one the class will use. Model appropriate voice tone for the team cheer to prevent it from getting out of hand. - Each day, use the team cheer to bring the class together. You might also have games and contests, treasure hunts, skits and drama, and brain breaks as a part of community building. Social Development Activities Class meetings can also be used for building social development in students. Helping students understand the importance of listening, asking and answering questions, and eye contact may be a part of your morning meeting. Body language, respect, and character building are also important social skills to review during morning meetings. One activity that I liked from Durden's book was about complimenting others. This involves creating a bulletin board that will be used to post compliments, giving each student a card, and asking them to write a compliment for a classmate on it. Make sure to add your own compliments as well, and have a time of sharing compliments with the class. Academic Skills Development The Morning Meetings book separates strategies for English Language Arts and Social Studies from Science and Math, but there are a plethora of activities for each subject area. For example, you might use the time for a read-aloud of a story or work on writing letters. You might work on a word wall together or go over "Who, What, When, Why, and Where" with the students. You could work on a "What's the Weather" activity, sorting activities with shapes, mental math, money activities, and science journals during class meeting time. Several other activities are suggested in Durden's book, but you get the idea. You can use morning meeting time to go over brief academic mini-lessons that are engaging for the students. You may be skeptical about taking up class time for a meeting with the students, but the fact is that these meetings can prevent many problems in the classroom because you address them before class begins.
They can run anywhere from just 15 minutes to 30 minutes. Just set a timer to keep to your schedule, and move anything you don't get to onto the next class meeting's agenda. You'll find that not only is classroom management easier, but the students in your class will become closer to one another. With these benefits in mind, it's worth the effort to try class meetings out.
The principle "Ethically Aligned Design (v2): General Principles" has mentioned the topic "security" in the following places: 1. Principle 1 — Human Rights: To best honor human rights, society must assure the safety and security of A/IS so that they are designed and operated in a way that benefits humans. 4. Principle 4 — Transparency: (The mechanisms by which transparency is provided will vary significantly, for instance 1) for users of care or domestic robots, a "why did you do that" button which, when pressed, causes the robot to explain the action it just took, 2) for validation or certification agencies, the algorithms underlying the A/IS and how they have been verified, and 3) for accident investigators, secure storage of sensor and internal state data, comparable to a flight data recorder or black box.) 5. Principle 5 — A/IS Technology Misuse and Awareness of It: Providing ethics education and security awareness that sensitizes society to the potential risks of misuse of A/IS (e.g., by providing "data privacy" warnings that some smart devices will collect their user's personal data).
I noticed this morning that on this date in 1934, there had been one of the huge dust storms that characterized the Dust Bowl. While reading that article, I got distracted by a link to the Llano Estacado region of West Texas, an area of wall-to-wall f^%k-all. Except for Palo Duro Canyon, from which Col. Ranald Mackenzie evicted the Kiowa and Comanche in an eponymous battle in the 1870s. Col. Mackenzie had been a general during the War of Northern Aggression, including being wounded at the Battle of Cedar Creek, where he had commanded a unit under General Phil Sheridan. Sheridan, of course, had a distinguished career, and the air-droppable M551 tank was named after him. The tank was deployed for both Operation Just Cause and Operation Desert Storm, but saw the most service in Vietnam, where it was used by the 11th ACR, the Black Horse, during the invasion of Cambodia. The 11th is now best known as the OPFOR at the National Training Center at Fort Irwin, near Barstow in California. Barstow probably wouldn't be a blip on anyone's consciousness if it weren't for the fact that Route 66 passes through it. Route 66 was part of the national highway system, and in the 1930s it was crowded with westbound Okies fleeing the Dust Bowl.
Parent-Led Intervention May Reduce Autism Severity For the first time, researchers say they have evidence that parent-led intervention for young kids with autism continues to yield gains several years later. Children who participated in an intervention between the ages of 2 and 4 displayed less severe symptoms six years later, exhibiting fewer repetitive behaviors and better social communication, according to findings published this week in the journal The Lancet. "Our findings are encouraging, as they represent an improvement in the core symptoms of autism previously thought very resistant to change," said Jonathan Green of the University of Manchester and Royal Manchester Children's Hospital in England who led the study. The findings come from a follow-up to what's known as the Preschool Autism Communication Trial, or PACT, which was published in 2010. For the trial, children with autism were randomly assigned to either participate in a 12-month early intervention program or simply receive standard care. All of the kids initially displayed similar levels of autism severity. Parents of those in the intervention group participated in 18 sessions over the course of the year where they watched videos of themselves interacting with their child and got feedback from therapists so that they could understand how to better communicate. These moms and dads also agreed to spend 20 to 30 minutes each day engaged in planned communication and play with their child. Six years after completing the initial study, researchers conducted new assessments of 59 children who received the parent-focused intervention and 62 kids who did not. They found that 17 percent fewer children in the intervention group had severe symptoms as compared to kids whose parents did not get the specialized training. What's more, researchers found that children in the treatment group communicated better with their parents even though assessments showed no difference in language scores between them and the other kids studied. However, both groups of youngsters continued to display comparable levels of anxiety, challenging behaviors and depression. "This is not a 'cure,' in the sense that the children who demonstrated improvements will still show remaining symptoms to a variable extent, but it does suggest that working with parents to interact with their children in this way can lead to improvements in symptoms over the long-term," Green said.
Symptoms of MCI are often frustrating, causing some people to withdraw from social activities. To help cope with the changes caused by MCI, finding support is very important. - Stay active: keep up with your interests. Make modifications when possible to accommodate short-term memory or other changes. Speak to the Care Pathway Coordinator about other memory loss strategies for coping and maintaining brain health. - Lead a healthy lifestyle: - Eat right: your brain needs proper food and fluids to function normally - Exercise: there is a lot of evidence that aerobic exercise slows cognitive decline - Maintain sleep hygiene and address problems if they exist. - Talk with others about your concerns: let close friends and family know what you are going through so that you may gain their support. Connect to one of CNADC's support groups for patients living with memory loss here. - Clinical Social Workers are available to meet with patients of the Neurobehavior and Memory Clinic. Learn more about Clinical Social Workers and the role they can play in managing MCI. To Family and Friends Consider how the changes caused by MCI are affecting the person's life. Support this person to stay active and find strategies for coping. Keep in mind that the memory loss and other changes are caused by the MCI and cannot be controlled by the person.
If you're a business owner, it's likely that your company has been collecting data for years. And in the world of big data, it's becoming more and more common to analyze historical data in order to find patterns and trends. But what if you want to analyze this information as it happens? This is where real-time data processing comes into play. Real-time data processing refers to the ability of an application or database to accept streaming information in near-real time while being able to quickly process, store and analyze that data. It can be used across many industries, including healthcare, finance and retail, just to name a few! What Is Real-Time Data Processing? Data processing is the act of taking raw data and converting it into something meaningful. At its most basic level, this can mean simply storing your data in a format that you can access later. However, with more advanced tools and techniques available today, and especially as we continue to move towards real-time analytics, it's important to understand what exactly real-time data processing means and how it differs from traditional methods of analyzing information. In short: real-time data processing refers to analyzing information as soon as it comes in, rather than waiting until after an initial batch has been completed (which could take hours). For example, say you have sensors set up around your house that constantly monitor temperature throughout the day. With a traditional batch approach, all of the readings would be collected and analyzed together at midnight or some other predetermined time. With real-time processing, each new reading triggers its own analysis the moment it arrives, with no rigid schedule and no human operator needed to kick the job off (a short sketch of this pattern appears below). Why Should I Care About Real-Time Data Processing? Real-time data processing is used to keep your business running smoothly. It can be used to track and analyze the performance of your business, optimize your operations and improve customer service. Let's take a look at some examples of how real-time data processing can be used in production environments: - You're a retailer with thousands of products on sale at any given time; you need an automated way of monitoring inventory levels so that when one item runs out (or becomes unavailable), another can be automatically dispatched from the supplier without any human intervention. This requires real-time monitoring and management systems that work around the clock without downtime or delays in response time, just like robots do! - Another example would be a bank that offers loans based on credit scores. If someone applies for a loan and is declined because their score isn't high enough, they might become frustrated enough not only to stop applying but also to stop using that bank's other services, which means less profit over time, solely because they couldn't get timely information about their application status. Where Do Relational Database Management Systems (RDBMS) Fall Short When It Comes To Real-Time Processing? Relational database management systems (RDBMS) are designed for storing and retrieving data. They don't provide tools for analyzing data in real time, nor do they have visualizations that can be used to gain insights from your data.
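To make the sensor example above concrete, here is a minimal Python sketch of the "process each reading as it arrives" pattern. Everything in it is illustrative: the temperature_readings() generator stands in for a real sensor feed or message queue, and the window size and alert threshold are made-up values, not settings from any particular product.

```python
import random
import time
from collections import deque

# Hypothetical sensor feed: in a real deployment this would be a message queue,
# an MQTT topic, or a sensor API rather than a random-number generator.
def temperature_readings():
    while True:
        yield round(random.uniform(18.0, 28.0), 2)

WINDOW_SIZE = 10      # number of recent readings to keep (made-up value)
ALERT_ABOVE = 26.0    # alert threshold in degrees Celsius (made-up value)

def process_stream(readings, max_readings=50):
    """Handle each reading the moment it arrives instead of waiting for a nightly batch."""
    window = deque(maxlen=WINDOW_SIZE)
    for i, reading in enumerate(readings, start=1):
        window.append(reading)
        rolling_avg = sum(window) / len(window)   # updated on every single reading
        if reading > ALERT_ABOVE:
            print(f"[alert] reading {i}: {reading} C exceeds {ALERT_ABOVE} C")
        print(f"reading {i}: {reading} C, rolling average {rolling_avg:.2f} C")
        if i >= max_readings:
            break
        time.sleep(0.1)  # simulate readings trickling in over time

if __name__ == "__main__":
    process_stream(temperature_readings())
```

The point of the sketch is the shape of the loop: each reading updates a rolling statistic and can raise an alert immediately, instead of waiting for an overnight batch job to run.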
For example, an RDBMS will let you query a table of sales records by customer ID or product code, but it won't easily show you how many customers made purchases on specific days or what their average purchase price was over time. You could use RDBMSes to build models that predict when customers are likely to buy again based on past behavior and other factors, but these models would only run once per day or week at best (and probably not even then!). What Are The Benefits Of Using A Time Series Database Instead Of An RDBMS? There are many benefits to using a time series database instead of an RDBMS. Time series databases are designed to handle large volumes of sequential, time-stamped data. They can also support real-time processing in production environments. What Types Of Analysis Can Be Done With Time Series Databases? Time series data is easy to query. Time series databases offer fast queries and high throughput for time-based data analysis, which makes it possible to use your data in real time. Time series databases can analyze across different dimensions. You can analyze your data by any dimension, for example by geographic region or customer segment, and then compare those results with other dimensions like product category or location type (online vs offline). This capability allows businesses to look at their performance across multiple segments in order to make better decisions about how they should allocate resources and improve processes where necessary. Time series databases are capable of analyzing over time periods as well as real-time processing of data streams from IoT sensors connected directly into the database itself, without requiring any ETL (extract, transform, load) processes beforehand! This means that you don't need separate systems for handling historical information versus current events happening right now, because everything happens within one single system! There are many benefits of using a time series database for real-time processing. Time series databases are designed for high performance and can handle large data sets. This means that your data will be stored in a way that allows it to be read quickly, which is important when you're processing information in real time or near-real time. Time series databases also provide high availability and scalability, so you can have confidence that your system will remain up even if there's an influx of traffic or activity on the server side of things (which may happen if you're collecting live data). Finally, because they're optimized specifically for storing time series data, and because this type of dataset often has specific requirements around consistency, durability and reliability, time series databases offer several advantages over general-purpose database systems when it comes to handling events occurring now rather than later. As you can see, time series databases have many advantages over relational databases when it comes to real-time data processing. They are able to handle much larger datasets than an RDBMS and also provide faster query speeds. This makes them ideal for applications such as finance, where high-volume transactions need fast responses from data analysis systems at all times.
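To give a flavour of the time-bucketed, multi-dimensional analysis described above, here is a small sketch that uses pandas purely as a stand-in for a time series store. The data, the "region" dimension, and the aggregation windows are all fabricated for illustration and do not come from any particular database product.

```python
import numpy as np
import pandas as pd

# Fabricated example data standing in for a week of hourly sensor readings.
rng = np.random.default_rng(0)
index = pd.date_range("2023-01-01", periods=24 * 7, freq="h")
df = pd.DataFrame({
    "timestamp": np.tile(index, 2),
    "region": ["north"] * len(index) + ["south"] * len(index),
    "temperature": rng.normal(21, 3, size=2 * len(index)),
})

# Time-bucketed aggregate: average temperature per region per day,
# the kind of query a time series store is optimised to answer quickly.
daily = (
    df.set_index("timestamp")
      .groupby("region")
      .resample("D")["temperature"]
      .mean()
      .round(2)
)
print(daily.head(10))

# Rolling (near-real-time) view: a 6-hour moving average for one region.
north = df[df["region"] == "north"].set_index("timestamp").sort_index()
print(north["temperature"].rolling("6h").mean().tail(5).round(2))
```

A dedicated time series database would run comparable aggregations server-side over far larger volumes, but the shape of the query is the same: group by a dimension, then bucket or roll the values over time.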
contribution to children's literature...svanslös (1939; Eng. trans., The Adventures of the Cat Who Had No Tail). The psychological realistic novel, delving deeply into the inner lives of children, has been developed by Maria Gripe, whose Hugo and Josephine trilogy may become classic; Gunnel Linde's Tacka vet jag Skorstensgränd (1959; Eng. trans., Chimney-Top Lane, 1965); and Anna Lisa...
Birds in the Ancient World by Jeremy Mynott Published by Oxford University Press – £16.99 (new to paperback) Jeremy Mynott’s fascinating book explores the many different roles birds played in ancient Greek and Roman civilisations and how they impressed upon the imagination to influence the literature and art of the time. Using quotations from the classical world, alongside nearly one hundred illustrations from ancient wall-paintings, pottery, and mosaics, Birds in the Ancient World also examines early scientific findings, as well as descriptions from works of history, geography, and travel. Informative and expertly narrated, Birds in the Ancient World is an enthralling look at the cultural history of birds and the huge influence they had on a bygone age. Bird Love: The Family Life of Birds by Wenfei Tong / Foreword by Dr Mike Webster Published by Ivy Press – £25 (hardback) Discover the amazing array of courtship techniques used by birds around the world in this beautiful hardback book featuring stunning colour photographs throughout. From gifts of food to aerial acrobatics and spectacular feather displays, Bird Love offers an essential insight into bird family life and is a fantastic guide to the habits of birds worldwide. “Across the 10,000-plus species of bird that live on this planet, there is an amazing diversity of behaviours aimed at one simple goal: reproduction. This book is a collection of some amazing examples of these, from elaborate courtship displays that give potential mates information about genetic quality, to intricate nests designed to protect the young from predators and parasites.” Dr Mike Webster.
Course overview: History A Level, City of Bristol College. A Level History covers depth and breadth studies of periods of time to develop your understanding of society and the world, as well as valuable skills for life including analysis, research, communication and problem-solving. This course is an option within our humanities and social science A Level pathways. - Humanities Pathway course code: ALHUM - Social Science Pathway course code: ALSOCIALSCI Who is this course for? This course is for students who are interested in History. What you'll learn Component 1 – Tsarist and Communist Russia 1855 – 1964 This is a breadth study which examines political, economic, social and cultural change in Russia over a long period of time. It takes us from the Russia of the Tsars to the dramatic revolutions of 1905 and 1917, which topple the autocracy amidst the destruction of WWI. We then look at the attempts to build a communist society under Lenin and the Bolsheviks and how these early utopian aspirations are crushed as Stalin consolidates his grip on power. Finally, we look at how Stalin was able to defeat the attack on Russia by Nazi Germany in WWII and then how Khrushchev sought to reform the Soviet system during the Cold War. This unit has a focus on historical interpretations – the differing judgements that historians draw about the past. For this unit, you will sit a two-and-a-half-hour examination, making up 40% of your overall A Level grade. Component 2 – The Making of Modern Britain 1951 – 2007 Component 2 explores the making of modern Britain, 1951 – 2007. This is an in-depth study that takes a detailed look at Britain's post-war history, focusing on political, economic, social and cultural events, in addition to British foreign policy. This unit focuses on how historians use contemporary sources to build their understanding of the past. Component 3 – Assessment and Essay Component 3 is a non-examined assessment: a 3500-word essay on a topic of your choosing. This requires you to independently research and then write an extended essay about an issue or development that takes place over roughly one hundred years. Course entry requirements A minimum of 5 GCSEs at Grade 4 or above, including both Maths and English. Grade 6 in History or English Language. You will provide a copy of your last school report and/or a reference from an employer or relevant professional, including attendance. This is to ensure that we can support you to be able to achieve well on the course. Applicants with attendance of 85% or below will be deemed not to have met this criterion. All applicants are given an opportunity to discuss any evidence-based mitigating circumstances that may have affected the reference/school report. How is the course delivered and assessed? This classroom-based course is assessed by two examinations and one essay. Future career and study opportunities Universities recognise A Level History as a good academic qualification. It provides valuable preparation for degrees in History, Law and many others. This course is free to anyone aged 16-18. There is no formal list of additional costs. However, if you are on a limited income, you may be eligible to receive help with these costs from our Learner Support Funding Bursary. We can also offer guidance on which career path to take, help you explore career options related to our courses, point you towards careers that are in demand, and help you get the training you need.
For to Us a Child Is Born: The Meaning of Isaiah 9:6 Isaiah 9:6 is a prophecy about a future child who would bear the government on his shoulders and be called by titles that could only rightfully be attributed to God: "For to us a child is born, to us a son is given, and the government will be on his shoulders. And he will be called Wonderful Counselor, Mighty God, Everlasting Father, Prince of Peace." This is one of the most well-known Old Testament prophecies about Jesus. But what does it mean? The historical context of Isaiah 9 Isaiah speaks to people living in three time periods: before the Babylonian exile, during the Babylonian exile, and after the Babylonian exile. In chapter 9, Isaiah is speaking to the southern kingdom of Israel (Judah) before the Babylonian exile. Israel and Syria are pressuring Judah to form a coalition against Assyria. Ahaz, the king of Judah, is afraid to go against Assyria, so he sends a king's ransom to Assyria asking for their help. Isaiah spoke into a situation where Judah felt powerless, and they were afraid of the rulers to their north. As their enemies only seemed to grow in strength and tighten their grasp, they didn't know if God was for them or against them or if he had simply abandoned them. And among Isaiah's prophecies about their future defeat, exile, and return, he included two prophetic visions of a child who would represent God's presence, embody his characteristics, and bear the responsibility of governing his people. Immanuel: God with us Two chapters before Isaiah says "For unto us a child is born," he prophesied the birth of a child whose name would signify the presence of God: "Therefore the Lord himself will give you a sign: The virgin will conceive and give birth to a son, and will call him Immanuel. He will be eating curds and honey when he knows enough to reject the wrong and choose the right, for before the boy knows enough to reject the wrong and choose the right, the land of the two kings you dread will be laid waste." —Isaiah 7:14–16 Immanuel means "God with us." Like Isaiah 9:6, this verse is believed to be a prophecy about Jesus. In fact, the Gospel of Matthew quotes this passage in 1:23 as it recounts the story of Jesus' birth. This prophecy is an encouragement that God is indeed on Judah's side, and an assurance that by the time this child is grown, Israel and Syria will be defeated. "For to us a child is born" Isaiah 9:6 speaks of a child, too. And while it's somewhat ambiguous whether or not this is the same child mentioned in Isaiah 7, both passages describe Jesus' birth and character. "The government will be on his shoulders" means he will bear the responsibility of governing the people. Verse 7 clarifies: he will do this forever. "Of the greatness of his government and peace there will be no end. He will reign on David's throne and over his kingdom, establishing and upholding it with justice and righteousness from that time on and forever. The zeal of the Lord Almighty will accomplish this." Two of the titles this child will bear—Wonderful Counselor and Prince of Peace—could apply to a mortal human. And in a time when Judah desperately needed wisdom and peace, these would have been traits they greatly desired in a leader. But the other two—Mighty God and Everlasting Father—are names that would seem to clearly apply to God. But the Israelites weren't expecting God to be born and live among them.
They had no concept of the incarnation, and names and titles always carried symbolic weight to remind the Israelites about who their God was. So they would have seen this prophecy differently. Was it a prophecy about Hezekiah? It's easy for modern Christians to read passages like Isaiah 7:14 and Isaiah 9:6 and to think: "A name like 'God with us' is clearly referring to the incarnation in Jesus Christ. And the empires Israel was afraid of were defeated before Jesus' time. And titles like 'Mighty God' and 'Everlasting Father' could only apply to a child who was also divine, like Jesus." But the Judeans believed they were in immediate need of a physical savior. The kings they were afraid of were knocking on their door. They probably thought this prophecy was about Ahaz's son—their future king—Hezekiah. But as we see later on in the book (chapters 38 and 39), Hezekiah died as a grown man, while the Israelites were still in captivity. And he could hardly be nicknamed "God with us" when he only turned to prayer on his deathbed. The Word became flesh In the Gospel of John, we read about the fulfillment of Isaiah's prophecy. The child who bore these qualities was born centuries later, not a single generation later. And he would not be merely human, but the incarnation of the living God. "The Word became flesh and made his dwelling among us. We have seen his glory, the glory of the one and only Son, who came from the Father, full of grace and truth." —John 1:14 In the Gospel of Luke, the angel Gabriel directly alludes to this famous prophecy when he tells Mary about Jesus: "He will be great and will be called the Son of the Most High. The Lord God will give him the throne of his father David, and he will reign over Jacob's descendants forever; his kingdom will never end." —Luke 1:32–33 The only king who could reign forever is one who would live forever. And the only one who could rightfully hold God's titles was God himself. Israel was looking for an immediate remedy to their physical and political problems. God's solution wouldn't come for centuries, but it would last forever. Jesus reigns to this day.
Constitution: We the people own it As we approach Constitution Week, Sept. 17-23, 2013, let's remember what has brought us, the state and the nation to where we are today. If at any time in history there is a need to remind ourselves what our forefathers sacrificed for all of us, it is now. In the past several years the Constitution has been challenged and disrespected by our news media, our state and federal governments, citizens and religious factions. Let's reflect on how our forefathers created such a document, based upon Christian beliefs, to preserve the country in their day and in the future in the midst of hostile forces, knowing that if they failed, all would lose their families, their fortunes and, most of all, their lives. Our forefathers (delegates from the states) assembled in 1787 at the Constitutional Convention and chose to replace, rather than merely revise, the Articles of Confederation. The delegates held their meetings in secret to work out representation, slavery, taxes and the procedure for electing a president. For four months, debate, argument and compromise dominated the meetings. Several state delegations created their own plans for how a new government should work. After months of weighing the delegates' opinions, suggestions, thoughts and insights, the first draft of the Constitution was accepted Aug. 6, 1787. With the first draft in place, delegates continued to discuss, debate, fine-tune the final draft and vote on the Constitution. After the final vote, the Constitution was sent to the states, where ratification by nine states was needed for it to take effect. The first state to ratify the Constitution was Delaware, and the ninth was New Hampshire, nine months after the process began. The U.S. Constitution was signed Sept. 17, 1787, laying out the structure of the American government we know today, and it is the oldest written national constitution still in force. The framers of the Constitution worried immensely about the dangers of concentrated power from their experiences dealing with Britain's king and parliament. When designing the Constitution, the framers were careful to create a system of government in which no individual or institution could obtain tyrannical power. They divided the United States government into three coequal branches - Legislative, Executive and Judicial - each with the power to check and balance the others. For 226 years this system of government has propelled our nation to heights no other civilization has achieved in the history of the world, and it brings to mind the quote, "If it's not broke, don't fix it." Never has this been more apparent than with the latest attempts by our federal and state governments, in the past year, to infringe on the people's rights, rights secured for us by men who were determined, against all odds, sacrificing life and limb and drawing on their deep faith in God, to give us the life they could only dream of. As we embark, using the Constitution as our guide like our ancestors before us, let us strive for excellence that propels patriotism, principles and prosperity, along with our faith in God. We, the People of the United States, take an oath and have the responsibility as citizens of this country to protect, defend and preserve the Constitution for our children's and grandchildren's future, because We the People own it.
Welcome to Hard Fork Basics, a collection of tips, tricks, guides, and advice to keep you up to date in the cryptocurrency and blockchain world. With every week that passes there is a new use for blockchain that promises us a better and brighter future. Blockchain is now being used for more than just cryptocurrency, and every iteration is different to the last. With the sheer diversity of envisioned use cases, two fundamentally different blockchain models have cropped up: permissioned and permissionless. In this edition of Hard Fork Basics, we're going to define what these two terms mean. A permissionless blockchain you say? Many consider the permissionless model to be closer to the original concept of blockchain, as outlined by Satoshi Nakamoto. A permissionless blockchain is quite simple: as its name suggests, no permission is required to become part of the blockchain network and contribute to its upkeep. In theory, anyone and anything can become part of a permissionless blockchain. Permissionless is, in many ways, just a fancy way of saying "public." As anyone can join a permissionless blockchain, they tend to be far more decentralized than a permissioned system. One trade-off is that permissionless blockchains are often slower than permissioned alternatives. Transaction information stored on permissionless blockchains is usually validated by the public. With no third parties to regulate what goes on, the system relies on this to reach a public consensus on which transactions are considered true. But what if you need more control and privacy? Enter, permissioned blockchains Permissioned blockchains flip the whole idea of a blockchain on its head. Blockchains were originally intended to be open, free, and public systems, but a permissioned blockchain is effectively the opposite. Permissioned blockchains can also be called private blockchains. Again, in principle it's quite simple; permissioned blockchains require permission to join. As a result, the owner of a permissioned blockchain has the ability to dictate who can and cannot become part of its network. This control also means the blockchain owner can dictate the network's structure, issue software updates, and generally control everything that takes place on their blockchain. Information on permissioned blockchains is validated only by approved members of that blockchain. The owner can also control who sees that information. In some cases the public will be able to view certain information stored on a private, permissioned blockchain. Take Walmart, for example; the American supermarket is planning to track vegetables on the blockchain to try to reduce instances of E. coli. The information is validated and approved by Walmart and its suppliers, but the public will be able to trace produce back to its origin. Due to the restrictive and smaller nature of permissioned blockchains, they tend to be more scalable and operate faster. They are often more centralized than permissionless blockchains. In a permissionless blockchain, the public validates transaction information. In permissioned systems, transaction information is validated by a select group approved by the blockchain's owner. Permissioned systems tend to be more scalable and faster, but are more centralized. Permissionless systems are open for all to join and, as a result, are usually more decentralized; the trade-off is speed and scalability. Published November 5, 2018 — 15:54 UTC
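As a toy illustration of the distinction (and emphatically not how any real blockchain is implemented), the Python sketch below shows the one check that separates the two models: both chains apply the same basic validity rule, but the permissioned chain additionally rejects blocks from proposers who are not on an owner-controlled allowlist. The "walmart" and "supplier_a" validator names are invented placeholders echoing the Walmart example above.

```python
from dataclasses import dataclass, field
from hashlib import sha256
from typing import List, Optional, Set

@dataclass
class Block:
    proposer: str
    data: str
    prev_hash: str

    def block_hash(self) -> str:
        return sha256(f"{self.proposer}|{self.data}|{self.prev_hash}".encode()).hexdigest()

@dataclass
class Chain:
    # allowed_validators=None  -> permissionless: anyone may propose a block.
    # allowed_validators={...} -> permissioned: only approved members may propose.
    allowed_validators: Optional[Set[str]] = None
    blocks: List[Block] = field(default_factory=list)

    def add_block(self, block: Block) -> bool:
        # Validity rule shared by both models: the block must extend the current tip.
        tip_hash = self.blocks[-1].block_hash() if self.blocks else "genesis"
        if block.prev_hash != tip_hash:
            return False
        # The only difference: a permissioned chain also checks the proposer's identity.
        if self.allowed_validators is not None and block.proposer not in self.allowed_validators:
            return False
        self.blocks.append(block)
        return True

public_chain = Chain()                                                # permissionless
private_chain = Chain(allowed_validators={"walmart", "supplier_a"})   # permissioned

block = Block(proposer="random_stranger", data="shipment logged", prev_hash="genesis")
print(public_chain.add_block(block))    # True: anyone can contribute to the public chain
print(private_chain.add_block(block))   # False: the proposer is not on the allowlist
```

In practice the difference also shows up in consensus, governance, and who can read the data, but the allowlist check captures the core idea: permissionless means anyone may participate, while permissioned means the owner decides who does.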
Through my research and learning about peaceful parenting I have learned that force, threats, and punishment (defined as intentional “suffering, pain, or loss that serves as retribution”) are not the most effective forms of discipline if our goal is to produce children who do the right thing because they want to do the right thing (rather than doing the right thing for fear of being punished). Punishment would include spanking, timeout/isolation, withdrawal of love or affection, removal of something desirable, demanding that the child does something undesirable that makes them feel shamed, etc. Punishment always makes children feel worse – and you can’t truly do or become better by feeling worse (see this post). Punishment is psychologically damaging. Punishment always pits us against our child and erodes at our relationship with them, harming the one thing that gives us real, positive influence with them – our loving connection. Punishment sends children into fight or flight, where it is impossible to reason and learn. See this article about what’s wrong with strict (authoritarian) parenting, and this one about why punishment doesn’t teach accountability. Natural consequences, on the other hand, can be excellent teachers. But when a lot of parents talk about “consequences” what they’re really referring to is punishment (e.g. “If I hear any more fighting, there will be consequences!”). A consequence is defined as “a result or effect of an action or condition.” It happens naturally, on its own. When we feel like we have to fabricate arbitrary consequences in order for our children to learn a lesson (even if they seem logical), those “consequences” are never as effective as natural consequences because, if we are the one causing the painful outcome, our children are more likely to view it the same way they view punishment, which sends them into fight or flight and causes them to view us as the enemy. When we allow natural consequences to happen, while offering empathy, our children have a greater chance of learning desirable lessons from them, while also building their trust and connection with us, which increases the likelihood that they’ll follow us in the future. For example, the consequence of my child messing around at bedtime instead of getting ready for bed is that we run out of time for bedtime stories. We could push bedtime back and still read stories, saving her from the consequence of her actions, but there is a valuable lesson to be learned in the natural consequence that follows when we don’t do what we need to do, when we need to do it. So instead we set firm limits with empathy (the empathy is important here!). On the other hand, we could treat this as a punishment by saying, “That’s it! No bedtime stories! That’s what you get for messing around instead of brushing your teeth!” But then our child is less likely to cooperate or to follow us in the future than she would be if we say (in a sincere tone), “Oh sweetie, I know how much you want to read bedtime stories. That’s your favorite part of the bedtime routine, huh? But sweetheart, we’re out of time. I’m sorry this is hard. Maybe tomorrow night if you hurry fast enough we might have time for an extra story!” Empathizing through the natural consequence (while staying firm) is more effective and more loving than punishment. But wait – God is the perfect parent, and He punishes His children when they’re wicked, right? 
Knowing what I know about discipline and feeling its truth so strongly, I was really confused by the fact that the scriptures talk over and over about God’s wrath and about Him punishing the wicked. I had a hard time reconciling that in my mind with the idea of a gentle, merciful, loving God – especially when I consider how the Savior handled situations with sinners (see below). Surely our Heavenly Father knows how His children learn best, and what will change their hearts (and thus, their behavior). So what was I missing? Why would He use punishment? Before I go on, let me be clear that God’s ways are higher than our ways, and that we more than likely will not understand all of His ways in this life. Whatever the Lord does, or whatever He requires, is right – even if we don’t understand His reasons – of that I have no doubt. If He chooses to use punishment, then that is right. But for the sake of understanding what His will is in my parenting, I have sought to understand this issue better. Is punishment (intentionally causing pain) His way? We know that “there is a law, irrevocably decreed in heaven before the foundations of this world, upon which all blessings are predicated—And when we obtain any blessing from God, it is by obedience to that law upon which it is predicated” (D&C 130:20-21). Likewise, if we break those laws then the blessings attached to them do not come to us. We know that the natural consequences of sin are always negative, and the Lord doesn’t intervene to protect us from the effects of our sins unless we fully and sincerely repent (see 2 Nephi 2:7 and D&C 19:16). These natural consequences are often very effective. But what about punishment? Does God, in His wrath, actually inflict punishment on the wicked? Or is their ‘punishment’ simply a natural result of breaking eternal laws? In the Book of Mormon there is a story about a Nephite army and a Lamanite army. The formerly-righteous Nephites had become hardened and vengeful and blood-thirsty and filled with a desire to destroy their enemies, the Lamanites. Mormon, the Nephites’ righteous commander, refused to continue leading them from that point forward because of their wickedness, but they went to battle anyway. Mormon 4:4-5 reads, “And it was because the armies of the Nephites went up unto the Lamanites that they began to be smitten; for were it not for that, the Lamanites could have had no power over them. But, behold, the judgments of God will overtake the wicked; and it is by the wicked that the wicked are punished; for it is the wicked that stir up the hearts of the children of men unto bloodshed” (emphasis added). God didn’t force the Lamanites to destroy the Nephites because of their wickedness. I don’t believe that force and destruction are part of His nature. But He did allow it to happen, because the Nephites refused to repent and thus were beyond the reach of His mercy (see Mosiah 2:38-39). I wonder if sometimes when men in the scriptures talk about punishment, they’re really referring to natural (negative) consequences of sin that God allows to happen because the sinners refuse to repent. 
Perhaps it’s an issue of semantics and defining punishment and rewards/blessings: Following eternal laws results in positive natural consequences called blessings (which are attached to the specific laws, and which the Lord delights in bestowing); and perhaps breaking eternal laws – sinning – without repenting results in negative natural consequences called punishments, which are attached to those crimes (see 2 Nephi 2:10). I don’t think God comes up with arbitrary consequences for our actions; rather, our consequences are already affixed. Living a life full of love and service naturally leads to positive relationships and connections with others, as well as the ability to be influenced by the Spirit. Living a life of murder and bloodshed naturally leads to enemies who seek to destroy you, as well as other negative consequences of sin. We might better understand this as the law of justice. LDS.org says, “In scriptural terms, justice is the unchanging law that brings consequences for actions. Because of the law of justice, we receive blessings when we obey God’s commandments. The law of justice also demands that a penalty be paid for every sin we commit. When the Savior carried out the Atonement, He took our sins upon Himself. He was able to “answer the ends of the law” (2 Nephi 2:7) because He subjected Himself to the penalty that the law required for our sins. In doing so, He “satisfied the demands of justice” and extended mercy to everyone who repents and follows Him (see Mosiah 15:9; Alma 34:14-16). Because He has paid the price for our sins, we will not have to suffer that punishment if we repent (see D&C 19:15-20).” So mercy does not negate the need for that penalty, or “punishment,” to be paid, but rather, it allows for Someone else to pay that price on our behalf if we will receive Him and repent. When we refuse to repent, that penalty must still be paid — just as “what goes up must come down,” all sin must be paid for. So that punishment when we refuse to repent is not our Heavenly Father’s way of getting back at us or trying to hurt us, it is simply the law of justice being upheld. The Lord’s definition of punishment does not appear to be the same as man’s. How about the definition of wrath? I really liked this perspective on the wrath of God: “The works of God are works of love and restoration. They always have been, and always will be. . . . Those who are opposed to God’s love and restoration in the world will experience an aspect of God’s love that feels like wrath, because the forces that oppose love will one day be either transformed or eliminated from creation. . . . God’s story . . . [is] a story of purging all that is not loving, until everything is restored and only love remains. . . . Love purges war, famine, disease, oppression, hatred, violence, and everything else that fights against love. It’s what love does. . . . Those who refuse to partner with love, and insist on continuing to fight in opposition to all that love does, will experience a side of love that does not feel like love. To them, it might even feel like wrath. Thus, when we affirm the “wrath of God” it’s not so much an affirmation of wrath at all—but an affirmation of love.” In other words, I believe that God’s wrath is not anger or hatred toward His children, but toward sin and evil, which He naturally purges because “God is love” (1 John 4:8). And those on the other side, who refuse to join with Him, will naturally be purged as well. 
From the New Testament student manual: "The 'wrath' of God is not hostility toward mankind; rather, it is rejection of sin." So perhaps "punishment" is a result of His wrath — toward sin and all who attach themselves to sin and refuse to let go. So then how does God discipline His children? (And remember that 'discipline' means 'to teach.') Check out part 2 where we'll look at God's character and the way He disciplines us.
In the late 1940s, a grocery chain from Philadelphia approached the Drexel Institute of Technology to develop custom automation that could store and read information about products during checkout. A teacher at the institute – Norman Joseph Woodland – took up the task, experimenting with various data collection methods. He eventually found one that worked: a technique that utilized Morse code to represent an assigned number for the product. Instead of the typical dots associated with Morse code, however, he extended these into lines to create a linear code.
The importance of custom machine vision as a component of modern industrial automation cannot be denied. When it comes to automation, engineering cameras, sensors and software into an automated factory essentially provides it with sight that in many ways surpasses human vision. By using machine vision techniques, automated production lines can accurately interpret images, allowing robots and other automated implements to participate in a wide array of tasks without human supervision.
Automation has transformed and continues to transform people's lives worldwide. From self-driving vehicles to the manufacturing automation of industrial robots to automation engineering in logistics, custom automation continues to change the world in which we live. In modern times, the term "automation" became more widely used from about 1947, when General Motors founded its automation department. This occurred during a period when many industries began to adopt feedback controllers, which had come into industrial use during the 1930s. Automation can utilize computerized, electronic, electrical, hydraulic, mechanical or pneumatic controls, or often combinations of any of these. Automation engineering is present in a number of complex systems, appearing in factories, airplanes, bullet trains and other areas of modern life. Today, custom automation even appears online in search engines, virtual assistants and other Internet-based systems.
Directive 93/43/EEC introduced the concept of good hygiene practice, in response to a pan‐European increase in the incidence of food poisoning, to foster a preventive approach to food safety. UK legislation reinforces the EU position that food businesses are responsible for the implementation of good hygiene practices. The response of the food industry has been to develop audited standards of hygiene, higher than explicit legal requirements. Small businesses have, however, been slow to adopt industry hygiene standards. A case study of small manufacturers of ready to eat meat products investigated the reasons for this. Businesses were first audited to the EFSIS standard, to compare current practice with recommended best practice. Second, technical managers or owner‐managers were interviewed, to gain an insight into their knowledge of industry standards in particular, and the process of hygiene management in general. The analysis found significant differences in the knowledge of technical managers and owner‐managers, with the latter often unaware of the existence of audited standards. It is argued, therefore, that, in order to increase the implementation of good hygiene practices, further programmes to inform small food businesses about industry standards are required.
Nutrition for Brain Tumour Patients
Malnutrition at the diagnosis of cancer is not an uncommon finding in the developing world. Nutrition is very important for children with cancer, because the presence of the tumour as well as the treatments that they undergo play havoc with their immune systems as well as various other systems in their little bodies.
Malnutrition is an unspecific term used to define an inadequate nutritional condition. It is characterized by either a deficiency or an excess of energy, with measurable adverse effects on clinical outcome. Malnutrition describes the consequences of insufficient protein-energy intake. An adequate protein-energy balance is a prerequisite for age-appropriate growth and maintenance.
The occurrence of malnutrition in children with cancer is multifactorial. While some children arrive at the hospital already malnourished due to personal home circumstances, between 40% and 80% of children will become malnourished during their treatment. The prevalence of malnutrition is also related to the type of tumour the child is diagnosed with as well as the extent of the disease. Malnutrition is more commonly seen in patients with advanced Neuroblastoma, Wilms Tumour, Ewing Sarcoma and advanced Lymphomas. Malnutrition is more severe with aggressive tumours in the later stages of malignancy; the more intensive the treatment regimen, the more chance there is of the child becoming malnourished.
A malignant tumour leads to changes in a child's metabolism; their system is unable to regulate the expenditure of energy according to the reduced energy intake, leading to an ineffective use of nutrients and contributing to the development of malnutrition. Children with a poor nutritional status have lower survival rates than those with a good nutritional status. One study conducted on 18 children with newly diagnosed stage IV Neuroblastoma found that malnourished paediatric cancer patients were more likely to relapse and/or die within 1 year of starting treatment than those who were well-nourished. The median survival for the malnourished group was 5 months versus 12 months for the well-nourished group.
Children with cancer, especially those with solid tumours, have reduced body protein stores due to whole-body protein breakdown. This may occur as a result of the cancer itself, the treatment they are undergoing for their tumour, or complications of the disease. The breakdown of lean body mass is a common effect of cancer, making the assessment of body composition a critical part of the nutritional assessment.
Tumour types associated with malnutrition for Paediatric Oncology Patients

| High risk for undernourishment | Moderate risk for undernourishment | High risk for overnourishment |
|---|---|---|
| Solid tumours with advanced stages | Nonmetastatic solid tumours | Acute Lymphoblastic Leukemia receiving cranial irradiation |
| Wilms tumours | Uncomplicated Acute Lymphoblastic Leukemia | Craniopharyngioma |
| Neuroblastoma stage III and IV; Rhabdomyosarcoma | Advanced diseases in remission during maintenance treatment | Malignancies with large and prolonged doses of cortisone therapy or other drugs increasing body fat stores |
| Ewing Sarcoma | | Total body or abdominal or cranial irradiation |

According to the American Academy of Paediatrics, children may be considered obese on the basis of a BMI standard deviation score (BMI-SDS) greater than the 95th percentile. Obesity at diagnosis has been associated with lower survival rates in children with cancer, especially those with ALL, AML, or brain tumours. Obesity at diagnosis of cancer is also associated with an increased risk of obesity at the end of treatment and in survivorship. Obesity in survivorship can lead to impaired glucose tolerance, diabetes mellitus, hypertension, cardiovascular disease, a higher risk of developing certain forms of cancer, and less of a chance of survival should they develop cancer again later in life. Obese children with cancer have significantly lower survival rates compared with other patients. Overweight children diagnosed with cancer after age 10 have a significantly lower mean 5-year event-free survival rate and a higher mean risk of relapse than normal-weight children. Mechanisms underlying the association between obesity and cancer are only starting to be understood. Most studies to date regarding obesity in children with cancer during and after treatment have concentrated on those with ALL or brain tumours due to the high risk of hypothalamic-pituitary axis damage caused by the treatment regimens or the location of the tumour. Several studies have shown that children who were obese at diagnosis became between 30% and 40% more overweight by the end of treatment. This excessive weight gain has been attributed to reduced physical activity, exposure to corticosteroids, growth hormone (GH) deficiency/hypothalamic-pituitary axis damage due to cranial Radiation Treatment (RT), and poor dietary habits.
How to Ensure Adequate Nutrition for your Child with Cancer
Nutritional intervention for children with cancer is challenging, especially for those who do not have the means to employ a qualified nutritionist, but there are many things that one can do at home to ensure that your child with cancer is getting adequate nutrition. Children with cancer need protein, carbohydrates, fat, water, vitamins, and minerals. Your child's cancer itself, as well as the treatments they undergo, will often cause changes in their eating habits or desire to eat. Not eating can lead to weight loss, and can cause weakness and fatigue. Helping your child eat as well as they can is an important part of helping them through their treatment and increasing their chances of survival.
Use Healthy Fats Including healthy dietary fats in your child’s diet during their cancer treatment is important in treating their brain cancer: - Avoid saturated fats and hydrogenated oils (Whole milk products, Butter and margarine, High fat red meat (except grass-fed), Processed meats (like bacon, hot dogs), French fries and other deep-fried foods, Partially hydrogenated oils in pastries, crackers, processed foods); - Use healthy oils like olive oil and fish oil, which can boost the immune system while reducing inflammation and swelling. Healthy fats include Fish (salmon, flounder, herring, sardines); Olive oil, canola oil, flax oil, and coconut oil; Nuts and natural nut butters; Ground flax-seed; Chia seeds; Wheat germ; Avocado and Olives; - Omega-3 fats, found in fish and flax-seed oil, are extremely advantageous in reducing tumour resistance to therapy; - Olive oil is an Omega-9 fat and a healthy source of dietary fat – use moderately in food preparation. - Sugar feeds cancerous cells and suppresses the immune system during cancer treatment; - Cancer cells can consume 10 to 15 times more sugar than normal cells do, which increases the chances of inflammation; - During brain cancer treatment, you should reduce your child’s intake of refined sugar and carbohydrates; they can be replaced with whole-grain products and naturally sweet vegetables like sweet potatoes. Increase Fibre Intake - A diet high in fibre can decrease your child’s chances of becoming constipated or getting diarrhoea, lower their cholesterol and triglyceride levels, and regulate their blood sugar level; - Your child may get a good amount of fibre from whole grain breads and cereals, but fresh fruits, vegetables, and legumes (such as peas, beans or lentils) provide a higher intake of fibre. - Your child should try to eat 4 to 5 servings of vegetables and 1 to 2 servings of fruit; - If your child is not getting enough fibre via their diet, mix one to two tablespoons of ground flaxseed into their yoghurt, porridge, salads, or smoothies. Fruits & Vegetables - The more fruits and vegetables your child eats the better for them. Select Vegetables and Fruits with Vivid Colours because they will appeal more to your child, but also because the more intense the colour, the higher the nutritional content. - Try the “3-colours-a-day” trick as an easy way of ensuring your child eats enough variety of fruits and vegetables. For example, blueberries (1) with breakfast, dark leafy lettuce (2) on the lunch sandwich, and red peppers (3) with chicken at dinner. - Don’t shy away from canned vegetables, especially if they make your life easier right now. Frozen fruits and vegetables are also a healthy alternative. “Phyto” means plant, and phytochemicals are nutrients derived from plants. Although phytochemicals have not yet been classified as nutrients (substances necessary for sustaining life), they are healthy buzzwords in both nutrition and cancer research. - Phytochemicals have been identified as containing properties for aiding in the prevention and/or treatment of at least four of the leading causes of death in Western countries – cancer, diabetes, cardiovascular disease, and hypertension. - Phytonutrients appear to stimulate the immune system, exhibit antibacterial and antiviral activity, decrease cholesterol levels, prevent cancer cell replication, and generally, help your body fight cancer. 
For more information on Phytochemicals, you can download America’s Phytonutrient Report in PDF form HERE Make sure that your child drinks sufficient fluids, as additional fluids are needed to replace fluid lost during chemotherapy and through treatment side effects. The human body needs water for the following essential functions: - Remove waste and toxins - Transport nutrients and oxygen - Control heart rate and blood pressure - Regulate body temperature - Lubricate joints - Protect organs and tissue, including the eyes, ears, and heart - Create saliva Children often lose a lot of water from vomiting, diarrhoea, or by just not drinking enough. This can lead to dehydration, but it can be handled by making sure your child gets plenty of fluids. Children get some water from foods, especially fruits and vegetables, but they need to drink liquids as well to be sure that all the body cells get the fluid they need. Tap, filtered, or bottled water is best, but your child can also get necessary fluids from other sources like sports drinks, juices (100% juice is best), and clear broths. The use of supplements is rather controversial, but according to Dr. Wallace, who holds a PhD in nutrition and is a nutrition consultant, a diet rich in natural supplementation will reduce side effects of treatments. Supplements that are currently recommended for brain cancer patients include: Vitamin D – Vitamin D deficiency that occurred before birth may have set the stage for brain tumour formation later in life. Vitamin D deficiency during gestation causes long-term effects on brain development (Levenson CW et al 2008). Vitamin D remains important after birth, as it activates chemical pathways, in particular the sphingomyelin pathway, which kills glioblastoma cells (Magrassi L et al 1998). - Melatonin – There is growing evidence suggesting melatonin may be useful in treating primary brain tumours. An in vitro experiment showed that melatonin, at physiologic concentrations, inhibits growth of neuroblastoma cells (Cos S et al 1996). A 2006 paper published in Cancer Research reported that melatonin stopped the growth of gliomas that had been implanted into rats (Martín V et al 2006). As a result, some researchers suggest melatonin might be useful in treating glioma (Wion D et al 2006). The strongest evidence for the use of melatonin in brain cancer is in treating pituitary tumours (Gao L 2001). and (Yang QH et al 2006). - Folic Acid and 5-MTHF – Natural folate from food and folic acid from supplements must be converted into the active form, 5-MTHF (5-methyltetrahydrofolate), by the enzyme 5,10-methylenetetrahydrofolate reductase (MTHFR). In certain people the gene that codes for this enzyme produces a less effective enzyme. In some studies, the risk for glioma in these people is increased by about 23% while meningioma risk is more than doubled (Sirachainan N et al 2008, Bethke L et al 2008, Kafadar AM et al 2006). - Selenium – antioxidant that patients with brain tumours should consider. Exposing brain cancer cells to selenium makes them more sensitive to, and more likely to die after, radiation therapy (Schueller P et al 2004). Selenium inhibits growth and invasion, and induces apoptosis in various types of brain tumour cells, including malignant cell lines (Sundaram N et al 2000, Rooprai HK et al 2007). 
- Vitamin E – an antioxidant of particular interest in connection with brain cancer; it enhances chemotherapy treatment of drug-resistant glioblastoma cells, increasing effectiveness (Kang YH et al 2005).
Consult your child's Oncology Team before giving him or her any supplements.
Managing Side Effects with Nutrition
It may be hard for your child to eat a balanced diet during their cancer treatment. Various side effects of their treatment such as nausea, vomiting, diarrhoea, constipation, fatigue, and mouth sores (mucositis) can make eating very difficult. The following tips may help:
- Nausea: Try a low-fat, bland diet of cold foods, products containing ginger, peppermint, or sea bands to combat nausea;
- Diarrhoea: Try a diet of bananas, white rice, applesauce and toast; it will help minimise irritation to the digestive tract. Water-soluble fibre supplements may help form a firmer stool;
- Constipation: Increase your child's fibre intake and keep them hydrated by making sure they drink sufficient liquids;
- Fatigue: Give your child small meals of protein-rich foods often; decrease their sugar intake to give them more energy. Certain iron and folic acid supplements may also help boost red blood cell count, but DO NOT give your child any vitamins or supplements without their oncologist's permission.
"Nutrition is an important part of the health of all children, but it is especially important for children getting cancer treatment. This guide can help you learn about your child's nutritional needs and how cancer and its treatment may affect them. We also offer suggestions and recipes to help you ensure your child is getting the nutrition he or she needs."
Source: Nutrition for Children with Cancer: American Cancer Society
This report offers a vast array of information regarding nutrition for the child with cancer, and even includes some great recipes. Download the Full PDF Report HERE
"Children more commonly present with protein energy malnutrition (PEM) at diagnosis of cancer in developing countries than in developed countries, depending on the type of cancer and extent of the disease. PEM at cancer diagnosis is associated with delays in treatment, increased infections and a negative outcome. There is still controversy regarding the ideal criteria to use to describe PEM, as there are many methods and cut-off points."
Source: Malnutrition in Paediatric Oncology Patients: South African Study. Published 1 August, 2010:
- JUDITH SCHOEMAN, BDietetics, BCompt, MSc Dietetics. Principal Dietitian: Oncology, Steve Biko Academic Hospital Oncology Complex.
- ANDRE DANNHAUSER, BSc Dietetics, MSc Dietetics, PhD. Professor and Head, Department of Nutrition & Dietetics, University of the Free State.
- MARIANA KRUCER, MB ChB, MMed Paed, MPhil (Applied Ethics), PhD. Professor and Executive Head, Department of Paediatrics and Child Health, Stellenbosch University and Tygerberg Hospital, W. Cape.
Download the Full PDF Report HERE
"Study results of children (8–15.8 years) with solid tumors showed that they had a higher basal metabolic rate (BMR) at the time of diagnosis, when compared to reference values. BMR is defined as the minimum amount of energy required to maintain all essential bodily functions. The increase in BMR indicates that the tumor is more than an inert mass requiring removal.
It consists of metabolically active tissue initially increasing basal energy requirements, which should be accounted for when determining requirements for nutritional support."
Source: Nutrition in the Paediatric Oncology Patient: South African Study. Published in April 2007; Reviewed in 2009: Red Cross Children's Hospital
Download the Full PDF Report HERE
Welcome to the February book review! This month we will be looking at, “Why We Get Fat and What To Do About It” by Gary Taubes. This book lives up to its title. At the end of the book, we should have an understanding of how we get fat, as well as the diet that is the most useful for losing weight. He sifts through centuries of research and data to only give us the most valuable nuggets of precious metal. My review will distill his thoughts further so you are left a philosopher’s stone of sorts. I have read this book a few times, and it is well worth your time and effort. I want to start with a quote from Taubes’ book, he was quoting a 1998 National Institutes of Health (NIH) report, “Obesity is a complex, multifactorial chronic disease that develops from an interaction of genotype and the environment,” they explained. “Our understanding of how and why obesity develops is incomplete, but involves the integration of social, behavioral, cultural, physiological, metabolic and genetic factors.” During the course of his book, he talks about these effects on obesity. The point he hammers home the hardest is that obesity is not a disease of overnutrition, it is the most prevalent form of malnutrition. Obesity is not a case of thermodynamics or calorie-in, calorie-out, however, the research seems to point to it being a hormonal imbalance, with a strong genetic component. If our parents are fat, we are more likely to be fat, or we may have a harder time maintaining a healthy body weight. This is a considerably quicker read than The Big Fat Surprise, however, do not judge this book by its size alone. It is jammed packed with nuggets of knowledge. These gold bombs come early and often. Taubes doesn’t even wait for the first chapter to start dropping them. In the introduction, he lays out the thesis for his book that he attempts to further prove throughout the book. One such quote is as follows: The science tells us that obesity is ultimately the result of a hormonal imbalance, not a caloric one— specifically, the stimulation of insulin secretion caused by eating easily digestible, carbohydrate-rich foods: refined carbohydrates, including flour and cereal grains, starchy vegetables such as potatoes, and sugars, like sucrose (table sugar) and high- fructose corn syrup. These carbohydrates literally make us fat, and by driving us to accumulate fat, they make us hungrier and they make us sedentary. During the first few pages, he also makes a connection to how the science of obesity research would have to evolve if it was subject to a trial in court. Most poignantly, even though the calories-in/calories-out and diet-heart hypotheses have both failed their supporters dug in their heels instead of ruling a guilty verdict. The quote from his book says it better than I could sum up: Imagine a murder trial in which one credible witness after another takes the stand and testifies that the suspect was elsewhere at the time of the killing and so had an airtight alibi, and yet the jurors keep insisting that the defendant is guilty, because that’s what they believed when the trial began. Consider the obesity epidemic. Here we are as a population getting fatter and fatter. Fifty years ago, one in every eight or nine Americans would have been officially considered obese, and today it’s one in every three. Two in three are now considered overweight, which means they’re carrying around more weight than the public-health authorities deem to be healthy. 
Children are fatter, adolescents are fatter, even newborn babies are emerging from the womb fatter. Throughout the decades of this obesity epidemic, the calories-in/calories-out, energy-balance notion has held sway, and so the health officials assume that either we’re not paying attention to what they’ve been telling us—eat less and exercise more—or we just can’t help ourselves. Much like Taubes, I have a hard time believing that the “common wisdom” to eat less and exercise more is true. I have seen friends and co-workers attempt to follow the US dietary guidelines, often times when they do they are left wrecked because of it. I have dealt with this myself, but instead of just calories, I also counted weight watchers points, then I started counting calories, points, and macros (plus sodium and fiber). I felt myself becoming addicted to the counting, but worst of all I felt like I was never full. At the time, I shrugged it off, since that’s what dieting is, right? No, it’s not just about the right magical amount of whatever number you are chasing then going for an extra run so you can get the points to cash them in for another Little Debbie’s snack cake. It is not about numbers, it is about eating the foods that will not cause the hormonal regulation that triggers the dysregulation; it is about eating foods that do not hijack your dopamine response. Because these foods seem to trigger addictive like tendencies making it that much harder to give up the sweet rewards. So that was a minor tirade, sorry. The last part was not directly from Taubes’ book, but it was some research I have been doing on the neuroregulation of appetite, and the signals your body sends when you get a huge dose of super sweet things. Okay, back to “Why We Get Fat.” This book is broken into two books, and I will be breaking this post into two sections as well. Book I is entitled, “Biology, Not Physics,” and the chapters I wanted to highlight are: - Why Were They Fat? - The Elusive Benefits of Exercise - Thermodynamics for Dummies, Part 1 - Thermodynamics for Dummies, Part 2 - Head Cases Book II is entitled “Adiposity 101,” and I will tell you the chapters I will be highlighting then. - Why Were They Fat? Now, let’s dive into chapter 1. Like in any good book, before telling us what to do, we must first understand why we need to make a change. Taubes approaches this by pointing out the flaws in nutritional science (much like The Big Fat Surprise does,) to illustrate that point he’s says, “[the] methods of science are supposed to guard against the adoption of false convictions, but these methods aren’t always followed, and even when they are, inferring the truth about nature and the universe is a difficult business.” In other words, scientists should be encouraged to follow the evidence no matter where it leads (even if it points a naughty finger at big business,) especially if it causes us to rethink our previously held beliefs. Science should be free from dogma and allow for open discourse. With all of this talk about what science should do, it begs the question, “Is science, more specifically nutritional science, like that today?” To answer that I want to preface with, I don’t want it to seem like I am coming down on all science because I am not. There is a lot of good science out there, but spoiler, there is also a lot of bad science that will confuse you. Taubes tackles this point head-on by saying: In most of science, skeptical appraisals of the evidence are considered a fundamental requirement to make progress. 
In nutrition and public health, however, they are seen by many as counterproductive, because they undermine efforts to promote behaviors that the authorities believe, rightly or wrongly, are good for us. But our health (and our weight) are at stake here, so let's take a look at this evidence and see where it leads us. To be fair, I can understand why: just think about how frustrated you would get if you read article after article, each one contradicting the other. A few of my friends joke about it, saying, "So today barefoot running is bad, yesterday it was good. What will it be tomorrow?" This is a very valid point; however, we must keep in mind that what works for me may not work for you. I can handle barefoot running since I have worked up to it, but if you do nothing but wear high heels and sit all day, I would not suggest going for a barefoot run or you will pull something. When you take up a significant change, to steal a phrase from Vinnie Tortorich, you need to ease into the change, then you need to go slower (or it was close to that). Why do we need to slow down? Because change is hard; if you want to make it stick you need to do it slow and steady. I say all that to remind you why studies are important, but they are not the end-all, be-all, because at best they are flawed by corporate backing, and at worst, they may not apply to you if they studied 80-year-old males when you are a 30-year-old female. This is why science should be skeptical: because we cannot know all of the variables. This is why science should be a slow, steady process that evolves as we learn more. If I hadn't changed when my diet didn't work, I would not be writing this. Did I have to change my paradigm a bunch of times? Yes. Does that make me a hypocrite? No, because much like scientists should do, I adapted my hypothesis to fit the trends I was seeing in the data I was given. Will my hypothesis continue to evolve as my understanding and knowledge do? I sure as heck hope so! With that said, why does all of this science talk matter? It matters because the current hypothesis about obesity is flawed. It is not about these magical things we didn't know about until relatively recently called calories; it is not about us having "too much money, too much food, too easily available, plus too many incentives to be sedentary—or too little need to be physically active—[to] have caused the obesity epidemic." It is not that at all. Because as Taubes reminds us, "One piece of evidence that needs to be considered in this context, however, is the well-documented fact that being fat is associated with poverty, not prosperity—certainly in women, and often in men." So if it is too much money, or food that is too available, how can it be connected to poverty? To be fair, being overweight does not mean you are poor. But again, in science you seek to disprove yourself and to find out why outliers are there, which begs the question: if the number of outliers becomes statistically significant enough to be included in the "normal" group, can they be outliers anymore? Or does your hypothesis need to change? To show how many of these outliers there have been, Taubes put together a quick write-up about each of these so-called outliers, who were, in some cases, dirt poor or struck by famine. Yes, like I mentioned above, these outliers would be obese in the face of years of famine, not getting any more or less food than the emaciated person next to them.
Here is the list of cases that documented obesity triggered by malnutrition:
- 1870: Pima
- 1951: Naples, Italy
- 1954: The Pima Again
- 1959: Charleston, South Carolina
- 1960: Durban, South Africa
- 1961: Nauru, the South Pacific
- 1961–63: Trinidad, West Indies
- 1963: Chile
- 1964–65: Johannesburg, South Africa
- 1965: North Carolina
- 1969: Ghana
- 1970: Lagos, Nigeria
- 1971: Rarotonga, the South Pacific
- 1974: Kingston, Jamaica
- 1974: Chile (again)
- 1978: Oklahoma
- 1981–83: Starr County, Texas
I do not want to belabor this point much more, so I will end the discussion on chapter 1 with this: obesity is about far more than just the amount of calories you get or how sedentary you are. It is what that food does in your body that matters. Getting exercise is important, but it alone will not make you the picture of health.
- The Elusive Benefits of Exercise
With that segue, let's dive into chapter 3, "The Elusive Benefits of Exercise." I love the story that starts off this chapter. Instead of a long quote, I will simply summarize it. You are invited to a dinner where you want to have an appetite. Like this is the last supper before you start a diet and you want to end up in a food coma. What do you do? Well, obviously you'd forgo breakfast and lunch – why fill up on subpar food? What else would you do? Maybe you would do a super long or intense workout, so you can work up an appetite… Does that sound familiar? It sounds oddly like the current dietary guidelines, which say we should eat less and move more. Is that paradoxical to anyone else? Maybe it's just me. Or not, especially since that is the same idea Taubes ended his anecdote with. Why are we consistently working up an appetite just to starve ourselves later? Much like the lack of definitive proof linking excess calories and obesity, the science does not add up. This does not mean that there aren't benefits to exercising, because there are. Those benefits, however, do not seem to include sustained weight loss. This flies in the face of the current public policy, which is to get "Thirty minutes of moderately vigorous physical activity… five days a week." The downside, however, is that the evidence is not as robust as one may think; in fact, these experts could only say: "It is reasonable to assume that persons with relatively high daily energy expenditures would be less likely to gain weight over time, compared with those who have low energy expenditures. So far, data to support this hypothesis are not particularly compelling." I want to restate that last sentence again: "So far, data to support this hypothesis are not particularly compelling." Said another way: these scientists were not willing to change their paradigm to fit what the science points to; instead, they dug in their heels and were not willing to compromise. Instead of me just telling you how biased these scientists are, let's see where the "eat less, do more" line of reasoning would bring us if we followed it to its logical conclusion. To illustrate that, we will dive into one of the cited studies, in which a group of researchers studied the effect of exercise on health. These scientists, Williams and Wood, were able to collect data on roughly 13,000 habitual runners; during the course of the study they compared the runners' weekly logged miles and their weight from year to year.
“Those who ran the most tended to weigh the least, but all these runners tended to get fatter with each passing year, even those who ran more than forty miles a week—eight miles a day, say, five days a week.” For the group conducting the study, this seemed to suggest that: …even the most dedicated runners had to increase their distance by a few miles a week, year after year—expend even more energy as they got older—if they wanted to remain lean. If men added two miles to their weekly distance every year, and women three, according to Williams and Wood, then they might manage to remain lean, because this might mean expending in running the calories that they seemed fated otherwise to accumulate as fat. Before I continue quoting Taubes, I want to ask one thing. If we are fated to accumulate extra fat year after year, where were all of the overweight people throughout history? Obesity was not the epidemic it has become 50 years ago, and before that, it was even more uncommon. Just ponder that: why did it explode when it did? What would happen if we changed the way we eat to look more like our forebears? Digression over; now let’s look back at the study and see how much more we would need to run if we wanted to keep off the pounds as we age; if we followed the calories-in/calories-out model: Imagine a man in his twenties who runs twenty miles a week—say, four miles a day, five days a week. According to Williams and Wood (and the logic and mathematics of calories-in/calories-out), he will have to double that in his thirties (eight miles a day, five days a week) and triple it in his forties (twelve miles a day, five days a week) to keep fat from accumulating. A woman in her twenties who runs three miles a day, five times a week—an impressive but not excessive amount—would have to up her daily distance to fifteen miles in her forties to retain her youthful figure. If she does eight-minute miles, a nice pace for such a distance, she’d better be prepared to spend two hours on each of her running days to keep her weight in check. If we believe in calories-in/calories-out, and that in turn leads us to conclude that we have to run half-marathons five days a week (in our forties, and more in our fifties, and more in our sixties …) to maintain our weight, it may, once again, be time to question our underlying beliefs. Maybe it’s something other than the calories we consume and expend that determines whether we get fat. If gaining weight is more than just calories-in/calories-out, then how did we come up with that hypothesis in the first place? It all comes back to the idea we talked about when discussing chapter one, the quote I used was, “In most of science, skeptical appraisals of the evidence are considered a fundamental requirement to make progress. In nutrition and public health, however, they are seen by many as counterproductive…” To illustrate that point lets see just how these scientists interpret the data to get the outcome they want: As for the researchers themselves, they invariably found a way to write their articles and reviews that allowed them to continue to promote exercise and physical activity, regardless of what the evidence actually showed. One common method was (and still is) to discuss only the results that seem to support the belief that physical activity and energy expenditure can determine how fat we are, while simply ignoring the evidence that refutes the notion, even if the latter is in much more plentiful supply. 
Two experts in the Handbook of Obesity, for instance, reported as a reason to exercise that the Danish attempt to turn sedentary subjects into marathon runners had resulted in a loss of five pounds of body fat in male subjects; they neglected to mention, however, that it had zero influence on the women in the trial, which could be taken as a strong incentive not to exercise. I keep highlighting the abuse of the scientific method by nutritional scientists because we trust them for our dietary guidelines. If they fudge the numbers to fit their beliefs, then can we trust the US guidelines that are influenced by these same people?
- Thermodynamics for Dummies, Part 1
Before we talk about our next chapter, let's learn what thermodynamics is. It is broken into 3 laws, but right now we only care about the first one. The first law is also known as "the law of energy conservation: all it says is that energy is neither created nor destroyed but can only change from one form to another. Blow up a stick of dynamite, for instance, and the potential energy contained in the chemical bonds of the nitroglycerin is transformed into heat…" It is all about chemical reactions; it is not about a dietary balance. To continue with what Taubes was saying, "All the first law says is that if something gets more or less massive, then more energy or less energy has to enter it than leave it. It says nothing about why this happens. It says nothing about cause and effect. It doesn't tell us why anything happens; it only tells us what has to happen if that thing does happen. A logician would say that it contains no causal information." Said another way, thermodynamics tells us that overeating causes us to become fat: if we get fatter, then more energy entered our body than we used up. But nothing in that explanation tells us why we get fat. The problem is not how we got fat, but how we get un-fat. Since the law of thermodynamics does not address that, maybe we should come up with a theory that does explain it. Spoiler: we will be addressing that in the next part of the book review, but that is next week. Now let's move on to the last chapter I will be talking about today.
- Thermodynamics for Dummies, Part 2
In the first part, we learned about thermodynamics; in this chapter we learn that "eat less, move more" is neither the problem nor the cure for obesity. This is because it is based on an incorrect assumption. In actuality, the energy we consume and the energy we expend have little influence on one another. For example, if we limit our intake of calories over a long period of time, our body will start to burn less to compensate for the calorie deficit. So it should be: if we eat less, we will burn less. To see this in action, let's look at a study Taubes quotes: "Terry Maratos-Flier published an article in Scientific American called 'What Fuels Fat.' In it, they described the intimate link between appetite and energy expenditure, making clear that they are not simply variables that an individual can consciously decide to change with the only effect being that his or her fat tissue will get smaller or larger to compensate." Again and again, thermodynamics does not apply to humans. The law of thermodynamics was found to be true in a closed system. It is in a closed system that energy is neither created nor destroyed; it either changes form or is transferred into another object. As this transfer happens, thermal energy is released.
This is why the calories-in/calories-out hypothesis does not work for humans: we are not a closed system, since we lose energy through heat and restroom breaks. That is why Gary Taubes and others like him have started to think less about calories and more about the hormonal effects of the foods we eat. In part two we will learn about the hormonal signaling that causes us to utilize our fat stores for energy instead of consuming more food. That means we are able to eat less and burn more, irrespective of whether we move more. That is how you lose weight: it isn't caloric, it's hormonal.
The Toddler Program at Montessori of Ladera Ranch is a specialized curriculum for children ages 24 to 36 months, serving as a preparatory program for transition into a Montessori preschool. Children are exposed to Practical Life activities and the beginnings of the Sensorial materials which are appropriate for the emergent two-year-old. Water work, puzzles, math activities, manipulatives, and language tasks round out the choices available during the daily work time. The concept of grace and courtesy is introduced and implemented during morning snack and lunchtime. Each month, the children explore a different Color of the Month, Book of the Month, and Song of the Month. The work activities, art projects, and circle time selections all reinforce the monthly curriculum. The ratio in the classroom is one-to-six; i.e., one teacher per six children, ensuring that children who have not yet had an academic experience will make a successful transition into a group care environment. Toilet training is also an important component of the Toddler Program and teachers work closely with parents to guarantee a painless progression to independent toileting.
A kilobyte (KB) is a unit of digital information storage. It is one of the standard units used to measure the amount of data. The term comes from the Greek "kilo", which means thousand, and "byte", a basic unit of digital information. However, in the world of digital storage, a kilobyte equals 1024 bytes, not 1000, due to the binary nature of most computer systems. Kilobytes are often used to indicate relatively small amounts of data. For example, a short Word document or a small image may contain several kilobytes of data.
A megabyte (MB) is a unit that represents 1024 kilobytes. The word "mega" means million, but in the context of digital information, a megabyte is defined as 1,048,576 bytes (or 1024^2 bytes) because of the binary nature of computers. Megabytes are used for a larger amount of data. For example, an average music track in MP3 format can contain about 5 megabytes of data. It's important to note that some areas, such as network and storage speeds, are often measured in megabits, not megabytes. There are 8 megabits (Mb) in a megabyte (MB).
Converting megabytes to kilobytes and vice versa is quite simple thanks to the standardized relationship between the two:
1 MB = 1024 KB
1 KB = 0.0009765625 MB
This relationship makes it easy to convert values between the two units, whether you're dealing with storage capacities, file sizes, or data transfer rates.
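For readers who want to script these conversions, here is a minimal Python sketch of the relationships above, using the binary convention described in the text (1 KB = 1024 bytes, 1 MB = 1024 KB, and 8 megabits per megabyte). The function names are illustrative, not from any particular library.

```python
# Conversions using the binary convention described above:
# 1 KB = 1024 bytes, 1 MB = 1024 KB = 1,048,576 bytes, 8 bits per byte.

KB_PER_MB = 1024
BITS_PER_BYTE = 8

def mb_to_kb(mb: float) -> float:
    """Convert megabytes to kilobytes (1 MB = 1024 KB)."""
    return mb * KB_PER_MB

def kb_to_mb(kb: float) -> float:
    """Convert kilobytes to megabytes (1 KB = 1/1024 MB = 0.0009765625 MB)."""
    return kb / KB_PER_MB

def mb_to_megabits(mb: float) -> float:
    """Convert megabytes to megabits (8 megabits per megabyte, as stated above)."""
    return mb * BITS_PER_BYTE

if __name__ == "__main__":
    print(mb_to_kb(5))        # a 5 MB MP3 track is about 5120 KB
    print(kb_to_mb(1))        # 0.0009765625 MB
    print(mb_to_megabits(1))  # 8 megabits
```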
Parenting a chronically-ill child, holding a family together, and taking care of oneself in the process is often an insurmountable task. As caregivers, you give and give, making life-changing decisions for your children. Often, it's only your will and determination that makes the endless problems survivable. But you need care too. When her daughter was born with two liver diseases, Shirley's life was upended into a whirlwind of doctors and hospitals. In this book, she shares how she learned and applied tools for emotional survival. It takes courage to find peace. Shirley gives hope and understanding for other parents and caregivers of special children.
Tools for Effective Therapy with Children and Families provides mental health professionals with step-by-step tools and strategies for effective therapeutic outcomes with children and their families. An integration of solution-focused brief therapy and play therapy, this groundbreaking book is uniquely suited to clinicians working with school-aged children and their parents. Tools for Effective Therapy with Children and Families uses clearly articulated and creative play activities to elicit conversations about solutions, successes, and collaborative goals with clients. Session transcripts and technique illustrations throughout the chapters allow clinicians to see the solution-focused approach in action.
Hypermedia technology needs a creative approach from the outset in the design of software to facilitate human thinking and learning. This book opens a discussion of the potential of hypermedia and related approaches to provide open exploratory learning environments. The papers in the book are based on contributions to a NATO Advanced Research Workshop held in July 1990 and are grouped into six sections:
- Semantic networking as cognitive tools,
- Expert systems as cognitive tools,
- Hypertext as cognitive tools,
- Collaborative communication tools,
- Microworlds: context-dependent cognitive tools,
- Implementing cognitive tools.
The book will be valuable for those who design, implement and evaluate learning programs and who seek to escape from rigid tactics like programmed instruction and behavioristic approaches. The book presents principles for exploratory systems that go beyond existing metaphors of instruction and provokes the reader to think in a new way about the cognitive level of human-computer interaction.
2.7 INTERFERENCES FROM BILIRUBIN, HEMOLYSIS, AND HIGH LIPID CONTENT
Hemoglobin is mainly released by hemolysis of red blood cells (RBC). Hemolysis can occur in vivo, during venipuncture and blood collection, or during sample processing. Hemoglobin interference depends on its concentration in the sample. Serum appears hemolyzed when the hemoglobin concentration exceeds 20 mg/dL. The absorbance maxima of the heme moiety in hemoglobin are at 540 to 580 nm wavelengths. However, hemoglobin begins to absorb around 340 nm and then absorbance increases at 400–430 nm as well. Interference of hemoglobin (if the specimen is grossly hemolyzed) is due to interference with the optical detection system of the assay.
All lipids in plasma exist complexed with proteins that are called lipoproteins, and particle size varies from 10 nm to 1000 nm (the higher the percentage of the lipid, the lower the density of the resulting lipoprotein and the larger the particle size). The lipoprotein particles with high lipid content are micellar and are the main source of assay interference. Unlike bilirubin and hemoglobin, lipids normally do not participate in chemical reactions and mostly cause interference in assays due to their turbidity and capability of scattering light, as in nephelometric assays.
2.8 INTERFERENCES FROM ENDOGENOUS AND EXOGENOUS COMPONENTS
Immunoassays are affected by a variety of endogenous and exogenous compounds, including heterophilic antibodies. The key points regarding immunoassay interferences include:
- Endogenous factors such as digoxin-like immunoreactive factors only affect digoxin immunoassays. Please see Chapter 15 for a more detailed discussion.
- Structurally similar molecules are capable of cross-reacting with the antibody to cause falsely elevated (positive interference) or falsely lowered results (negative interference). Negative interference occurs less frequently than positive interference, but may be clinically more dangerous. For example, if the result of a therapeutic drug is falsely elevated compared to the previous measurement, the clinician may question the result, but if the value is falsely lower, the clinician may simply increase the dose without realizing that the value was falsely lowered due to interference. That can cause drug toxicity in the patient.
- Interference from drug metabolites is the most common form of interference, although other structurally similar drugs may also be the cause of interference. See also Chapter 15.
2.9 INTERFERENCES OF HETEROPHILIC ANTIBODIES IN IMMUNOASSAYS
Heterophilic antibodies are human antibodies that interact with the assay antibodies and cause interference. Features of heterophilic antibody interference in immunoassays include the following:
- Heterophilic antibodies may arise in a patient in response to exposure to certain animals or animal products, due to infection by bacterial or viral agents, or non-specifically.
- Among heterophilic antibodies, the most common are human anti-mouse antibodies (HAMA), because of the wide use of murine monoclonal antibody products in therapy or imaging.
- However, other anti-animal antibodies in humans have also been described that can interfere with an immunoassay.
- If a patient is exposed to animals or animal products, or suffers from an autoimmune disease, the patient may have heterophilic antibodies in their circulation.
- Heterophilic antibodies interfere most commonly with sandwich assays that are used for measuring large molecules, but rarely interfere with competitive assays. The most common interferences of heterophilic antibodies are observed with the measurement of various tumor markers.
- In sandwich-type immunoassays, heterophilic antibodies can form the "sandwich complex" even in the absence of the target antigen; this generates mostly false positive results. False negative results due to the interference of heterophilic antibodies are rarely observed.
- Heterophilic antibodies are absent in urine. Therefore, if a serum specimen is positive for an analyte, for example human chorionic gonadotropin (hCG), but beta-hCG cannot be detected in the urine specimen, it indicates interference from heterophilic antibodies in the serum hCG measurement.
- Another way to investigate heterophilic antibody interference is serial dilution of a specimen. If serial dilution produces a non-linear result, it indicates interference in the assay.
- Interference from heterophilic antibodies may also be blocked by adding a commercially available heterophilic antibody blocking agent to the specimen prior to analysis.
- For analytes that are also present in the protein-free ultrafiltrate (relatively small molecules), analysis of the analyte in the protein-free ultrafiltrate can eliminate interference from heterophilic antibodies because, due to their large molecular weights, heterophilic antibodies are absent in protein-free ultrafiltrates.
- Heterophilic antibodies are more commonly found in sick and hospitalized patients, with reported prevalences of 0.2%–15%.
In addition, rheumatoid factors, which are IgM-type antibodies, may be present in the serum of patients suffering from rheumatoid arthritis and certain autoimmune diseases. Rheumatoid factors may interfere with sandwich assays, and the mechanism of interference is similar to the interference caused by heterophilic antibodies. A commercially available rheumatoid factor blocking agent may be used to eliminate such interferences.
A 58-year-old man without any familial risk for prostate cancer visited his primary care physician and his prostate-specific antigen (PSA) level was 83 ng/mL (0–4 ng/mL is normal). He was referred to a urologist and his digital rectal examination was normal. In addition, a prostate biopsy, abdominal tomodensitometry, whole body scan, and prostatic MRI were performed, but no significant abnormality was observed. However, due to his very high PSA level (indicative of advanced-stage prostate cancer) he was treated with androgen deprivation therapy with goserelin acetate and bicalutamide. After 3 months he still had no symptoms, his prostate was atrophic on digital rectal examination, and he had suppressed testosterone levels as expected. However, his PSA level was still highly elevated (122 ng/mL) despite no radiographic evidence of advanced cancer. At that point his serum PSA was analyzed by a different assay (Immulite PSA, Cirrus Diagnostics, Los Angeles) and the PSA level was below 0.3 ng/mL.
The treating physician therefore suspected a false positive PSA by the original Access Hybritech PSA assay (Hybritech, San Diego, CA), and interference of heterophilic antibodies was established by treating specimens with heterophilic antibody blocking agent. Re-analysis of the high PSA specimen showed a level below the detection limit. This patient received unnecessary therapy for his falsely elevated PSA level due to the interference of heterophilic antibody.
A 64-year-old male, during a routine visit to his physician, was diagnosed with hypothyroidism based on elevated TSH (thyroid stimulating hormone) levels, and his clinician initiated therapy with levothyroxine (250 micrograms per day). Despite therapy, there were still increased levels of TSH (33 mIU/L) and his FT4 level was also elevated. The endocrinologist at that point suspected that TSH levels measured by the Unicel Dxi analyzer (Beckman Coulter) were falsely elevated due to interference. Serial dilution of the specimen showed non-linearity, an indication of interference. When the specimen was analyzed using a different TSH assay (an immunoradiometric assay (IRMA), also available from Beckman Coulter), the TSH value was 1.22 mIU/L, further confirming the interference with the initial TSH measurement. The patient had a high concentration of rheumatoid factor (2700 U/mL), and the authors speculated that his falsely elevated TSH was due to interference from rheumatoid factors.
2.10 INTERFERENCES FROM AUTOANTIBODIES AND MACRO-ANALYTES
Autoantibodies (immunoglobulin molecules) are formed by the immune system of an individual capable of recognizing an antigen on that person's own tissues. Several mechanisms may trigger the production of autoantibodies; for example, an antigen formed during fetal development and then sequestered may be released as a result of infection, chemical exposure, or trauma, as occurs in autoimmune thyroiditis. The autoantibody may bind to the analyte-label conjugate in a competition-type immunoassay to produce a false positive or false negative result. Circulating cardiac troponin I autoantibodies may be present in patients suffering from acute myocardial infarction, where troponin I elevation is an indication of such an episode. Unfortunately, the presence of circulating cardiac troponin I autoantibodies may falsely lower the cardiac troponin I concentration (negative interference) measured using commercial immunoassays, thus complicating the diagnosis of acute myocardial infarction. However, falsely elevated results due to the presence of autoantibodies are more common than false negative results. Verhoye et al. found three patients with false positive thyrotropin results that were caused by interference from an autoantibody against thyrotropin. The interfering substance in the affected specimens was identified as an autoantibody by gel-filtration chromatography and polyethylene glycol precipitation.
Often the analyte can conjugate with immunoglobulin or other antibodies to generate macro-analytes, which can falsely elevate the true value of the analyte. For example, macroamylasemia and macro-prolactinemia can produce falsely elevated results in amylase and prolactin assays, respectively. In macro-prolactinemia, the hormone prolactin conjugates with itself and/or with its autoantibody to create macro-prolactin in the patient's circulation. The macro-analyte is physiologically inactive, but often interferes with many prolactin immunoassays to generate false positive prolactin results.
Such interference can be removed by polyethylene glycol precipitation. A 17-year-old girl was referred to a university hospital for a persistently elevated level of aspartate aminotransferase (AST). One year earlier, her AST level was 88 U/L, as detected during her annual school health check, but she had no medical complaints. She was not on any medication and had a regular menstrual cycle. Her physical examination at the university hospital was unremarkable. All laboratory test results were normal, but her AST level was further elevated to 152 U/L. All serological tests for hepatitis were negative. On further follow-up her AST level was found to have increased to 259 U/L. At that point it was speculated that her elevated AST was due to interference, and further study by gel-filtration showed a species with a molecular weight of 250 kilodaltons. This was further characterized by immunoelectrophoresis and immunoprecipitation as an immunoglobulin (IgG kappa-lambda globulin) complexed with AST, which was causing the elevated AST level in this girl. These complexes are benign.

2.11 PROZONE (OR "HOOK") EFFECT

The prozone or hook effect is observed when a very high amount of an analyte is present in the sample but the observed value is falsely lowered. This type of interference is observed more commonly in sandwich assays. The mechanism of this significant negative interference is that a high level of analyte (antigen) reduces the concentration of the "sandwich" (antibody 1:antigen:antibody 2) complexes that are responsible for generating the signal, by forming mostly single antibody:antigen complexes. The hook effect has been reported with assays of a variety of analytes, such as β-hCG, prolactin, calcitonin, aldosterone, and cancer markers (CA 125, PSA). The best way to eliminate the hook effect is serial dilution. For example, if the original value of an analyte (e.g. prolactin) was 120 ng/mL, a 1:1 dilution of the specimen should produce a value of 60 ng/mL; if the observed value is instead 90 ng/mL (significantly higher than expected), the hook effect should be suspected. In order to eliminate the hook effect, a 1:10, 1:100, or even a 1:1000 dilution may be necessary so that the true analyte concentration falls within the analytical measurement range (AMR) of the assay.

A 16-year-old girl presented to the emergency department with a 2-week history of nausea, vomiting, vaginal spotting, and lower leg edema. On physical examination, a palpable lower abdominal mass was found. The patient admitted sexual activity, but denied having any sexually transmitted disease. Molar pregnancy was suspected, and the quantitative serum β-subunit of human chorionic gonadotropin (β-hCG) concentration was 746.2 IU/L; however, the qualitative urine test was negative. Repeat of the urinalysis by a senior technologist also produced a negative result. At that point the authors suspected the hook effect, and dilution of the serum specimen (1:1) produced a non-linear value (455.2 IU/L), which further supported the hook effect. After a 1:10 dilution, the urine test for β-hCG became positive, and finally, by using a 1:10,000 dilution of the specimen, the original serum β-hCG concentration was determined to be 3,835,000 IU/L. Usually the hook effect is observed with a molar β-hCG level in serum because high amounts of β-hCG are produced by molar pregnancy.
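The dilution arithmetic used in these investigations can be summarised in a few lines of code. The following is an illustrative sketch only (the helper name is invented, and the reading at the 1:10,000 dilution is back-derived from the reported final concentration); when the back-calculated concentration rises as the specimen is diluted further, antigen excess should be suspected.

```python
# Minimal sketch (illustrative, not from the source): back-calculate a dilution
# series. Results that climb as the specimen is diluted further suggest antigen
# excess (the prozone/hook effect) rather than a linearly diluting specimen.

def back_calculate(series):
    """series: list of (dilution_factor, measured_value) pairs; factor 1 = neat specimen."""
    return [(factor, measured * factor) for factor, measured in series]

# Values adapted from the beta-hCG case above; the reading at the 1:10,000
# dilution is the value implied by the reported final result of 3,835,000 IU/L.
series = [(1, 746.2), (2, 455.2), (10_000, 383.5)]
for factor, concentration in back_calculate(series):
    print(f"dilution factor {factor:>6}: back-calculated {concentration:>12,.1f} IU/L")
```

Here the back-calculated values climb from roughly 746 to 910 to 3,835,000 IU/L as the dilution factor increases, the classic signature of antigen excess.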
Immunoassays can be competitive or immunometric (non-competitive, also known as sandwich). In competitive immunoassays only one antibody is used; this format is common for assays of small molecules such as therapeutic drugs or drugs of abuse. In the sandwich format two antibodies are used; this format is more commonly used for assays of relatively large molecules.

Homogeneous immunoassay format: After incubation, no separation between bound and free label is necessary. Heterogeneous immunoassay format: The bound label must be separated from the free label before measuring the signal. Commercially available immunoassays use various formats, including FPIA, EMIT, CEDIA, KIMS, and LOCI.

In the fluorescence polarization immunoassay (FPIA), the free label (a relatively small molecule) attached to the analyte (antigen) molecule has different Brownian motion than when the label is complexed to a large antibody (140,000 or more daltons). FPIA is a homogeneous competitive assay in which the fluorescence polarization signal is measured after incubation; this signal is only produced if the labeled antigen is bound to the antibody molecule. Therefore, the intensity of the signal is inversely proportional to the analyte concentration.

EMIT (enzyme multiplied immunoassay technique) is a homogeneous competitive immunoassay in which the antigen is labeled with glucose 6-phosphate dehydrogenase, an enzyme that reduces nicotinamide adenine dinucleotide (NAD, no signal at 340 nm) to NADH (absorbs at 340 nm), and the absorbance is monitored at 340 nm. When a labeled antigen binds to the antibody molecule, the enzyme label becomes inactive and no signal is generated. Therefore, signal intensity is proportional to analyte concentration.

The cloned enzyme donor immunoassay (CEDIA) method is based on recombinant DNA technology, in which the bacterial enzyme beta-galactosidase is genetically engineered into two inactive fragments. When the two fragments combine, active enzyme is formed and a signal is produced that is proportional to the analyte concentration.

Kinetic interaction of microparticles in solution (KIMS): In the absence of antigen molecules, free antibodies bind to drug-microparticle conjugates to form particle aggregates, resulting in an increase in absorption that is measured optically at visible wavelengths (500 to 650 nm).

Luminescent oxygen channeling immunoassay (LOCI): The immunoassay reaction is irradiated with light to generate singlet oxygen molecules in microbeads ("Sensibeads") coupled to the analyte. When the analyte is bound to the respective antibody molecule, which is coupled to another type of bead, that bead reacts with the singlet oxygen and chemiluminescence signals are generated that are proportional to the concentration of the analyte–antibody complex.

Usually a total bilirubin concentration below 20 mg/dL does not cause interference, but concentrations over 20 mg/dL may cause problems. The interference of bilirubin is mainly caused by its absorbance at 454 or 461 nm. Various structurally related drugs or drug metabolites can interfere with immunoassays.
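The signal-to-concentration relationships described above (inverse for competitive formats such as FPIA and EMIT, direct for sandwich formats) are commonly modelled with a four-parameter logistic (4PL) calibration curve. The sketch below is not taken from the text; it is a minimal illustration with arbitrarily chosen parameter values, showing how a concentration is read back from a measured signal.

```python
import math

# Minimal sketch (illustrative; parameter values are invented for the example).
# 4PL model: signal = d + (a - d) / (1 + (conc / c) ** b)
#   a = signal at zero concentration, d = signal at very high concentration,
#   c = inflection point (EC50), b = slope factor.
# For a competitive assay a > d (signal falls as concentration rises);
# for a sandwich assay a < d (signal rises with concentration).

def four_pl(conc, a, b, c, d):
    return d + (a - d) / (1 + (conc / c) ** b)

def inverse_four_pl(signal, a, b, c, d):
    """Back-calculate the concentration corresponding to a measured signal."""
    return c * ((a - d) / (signal - d) - 1) ** (1 / b)

# Hypothetical competitive-assay calibration: high signal at low concentration.
a, b, c, d = 100.0, 1.2, 50.0, 5.0
measured_signal = four_pl(20.0, a, b, c, d)                    # simulate a 20 ng/mL sample
print(round(inverse_four_pl(measured_signal, a, b, c, d), 2))  # ~20.0
```

In practice the four parameters are fitted from calibrator measurements; the sketch only shows the forward and inverse forms of the curve.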
Heterophilic antibodies may arise in a patient in response to exposure to certain animals or animal products, due to infection by bacterial or viral agents, or from the use of murine monoclonal antibody products in therapy or imaging. Heterophilic antibodies interfere most commonly with sandwich assays used for measuring large molecules, but rarely with competitive assays, causing mostly false positive results. Heterophilic antibodies are absent in urine. Therefore, if a serum specimen is positive for an analyte (e.g. human chorionic gonadotropin, hCG), but beta-hCG cannot be detected in the urine specimen, it indicates interference from a heterophilic antibody in the serum hCG measurement. Another way to investigate heterophilic antibody interference is serial dilution of a specimen: if serial dilution produces a non-linear result, it indicates interference in the assay. Interference from heterophilic antibodies can also be blocked by adding commercially available heterophilic antibody blocking agents to the specimen prior to analysis.

Autoantibodies are formed by the immune system of a person and recognize an antigen on that person's own tissues; they may interfere with an immunoassay to produce false positive results (and, less frequently, false negative results). Often the endogenous analyte of interest will conjugate with immunoglobulin or other antibodies to generate macro-analytes, which can falsely elevate a result. For example, macroamylasemia and macro-prolactinemia can produce falsely elevated results in amylase and prolactin assays, respectively. Such interference can be removed by polyethylene glycol precipitation.

Prozone ("hook") effect: Very high levels of antigen can reduce the concentration of "sandwich" (antibody 1:antigen:antibody 2) complexes responsible for generating the signal by forming mostly single antibody:antigen complexes. This effect, known as the prozone or hook effect (antigen excess), mostly causes negative interference (falsely lower results). The best way to eliminate the hook effect is serial dilution.

Jolley ME, Stroupe SD, Schwenzer KS, Wang CJ, et al. Fluorescence polarization immunoassay III. An automated system for therapeutic drug determination. Clin Chem 1981;27:1575–9.
Jeon SI, Yang X, Andrade JD. Modeling of homogeneous cloned enzyme donor immunoassay. Anal Biochem 2004;333:136–47.
Snyder JT, Benson CM, Briggs C, et al. Development of NT-proBNP, Troponin, TSH, and FT4 LOCI(R) assays on the new Dimension(R) EXL with LM clinical chemistry system. Clin Chem 2008;54:A92 [Abstract #B135].
Dai JL, Sokoll LJ, Chan DW. Automated chemiluminescent immunoassay analyzers. J Clin Ligand Assay 1998;21:377–85.
Forest J-C, Masse J, Lane A. Evaluation of the analytical performance of the Boehringer Mannheim Elecsys 2010 immunoanalyzer. Clin Biochem 1998;31:81–8.
Babson AL, Olsen DR, Palmieri T, Ross AF, et al. The IMMULITE assay tube: a new approach to heterogeneous ligand assay. Clin Chem 1991;37:1521–2.
Christenson RH, Apple FS, Morgan DL. Cardiac troponin I measurement with the ACCESS immunoassay system: analytical and clinical performance characteristics. Clin
Montagne P, Varcin P, Cuilliere ML, Duheille J. Microparticle-enhanced nephelometric immunoassay with microsphere-antigen conjugate. Bioconjugate Chem 1992;3:187–93.
Henry N, Sebe P, Cussenot O. Inappropriate treatment of prostate cancer caused by heterophilic antibody interference. Nat Clin Pract Urol 2009;6:164–7.
Georges A, Charrie A, Raynaud S, Lombard C, et al. Thyroxin overdose due to rheumatoid factor interferences in thyroid-stimulating hormone assays. Clin Chem Lab Med
Tang G, Wu Y, Zhao W, Shen Q. Multiple immunoassay systems are negatively interfered by circulating cardiac troponin I autoantibodies. Clin Exp Med 2012;12:47–53.
Verhoye E, Bruel A, Delanghe JR, Debruyne E, et al. Spuriously high thyrotropin values due to anti-thyrotropin antibody in adult patients. Clin Chem Lab Med 2009;47:604–6.
Kavanagh L, McKenna TJ, Fahie-Wilson MN, et al. Specificity and clinical utility of methods for determination of macro-prolactin. Clin Chem 2006;52:1366–72.
Matama S, Ito H, Tanabe S, Shibuya A, et al. Immunoglobulin complexed aspartate aminotransferase. Intern Med 1993;32:156–9.
Er TK, Jong YJ, Tsai EM, Huang CL, et al. False positive pregnancy in hydatidiform mole. Clin Chem 2006;52:1616–8.

3.1 LABORATORY ERRORS IN PRE-ANALYTICAL, ANALYTICAL, AND POST-ANALYTICAL STAGES

Accurate clinical laboratory test results are important for proper diagnosis and treatment of patients. Factors that are important for obtaining accurate laboratory test results include:
- Patient Identification: The right patient is identified prior to specimen collection by matching at least two criteria (a minimal two-identifier check is sketched at the end of this section).
- Collection Protocol: The correct technique and blood collection tube are used for sample collection to avoid tissue damage, prolonged venous stasis, or hemolysis.
- Labeling: After collection, the specimen is labeled properly with correct patient information; specimen misidentification is a major source of pre-analytical error.
- Specimen Handling: Proper centrifugation (in the case of serum or plasma specimen analysis) and proper transportation of specimens to the laboratory.
- Storage Protocol: Maintaining proper storage of specimens prior to analysis in order to avoid artifactual changes in the analyte; for example, storing blood gas specimens in ice if the analysis cannot be completed within 30 min of specimen collection.
- Interference Avoidance: Proper analytical steps to obtain the correct result and avoid interferences.
- LIS Reports: Correctly reporting the result to the laboratory information system (LIS) if the analyzer is not interfaced with the LIS.
- Clinician Reports: The report reaching the clinician must contain the right result, together with interpretative information, such as a reference range and other comments that aid clinicians in the decision-making process.

Table 3.1 Common Laboratory Errors
Type of Error:
- Tube filling error
- Patient identification error
- Order not entered in laboratory information system
- Specimen collected wrongly from an infusion line
- Specimen stored improperly
- Contamination of culture tube
- Inaccurate result due to interference
- Random error caused by the instrument
- Result communication error
- Excessive turnaround time due to instrument downtime

Failure at any of these steps can result in an erroneous or misleading laboratory result, sometimes with adverse outcomes. The analytical part of the analysis involves measurement of the concentration of the analyte corresponding to its "true" level (as compared to a "gold standard" measurement) within a clinically acceptable margin of error (the total acceptable analytical error, TAAE). Errors can occur at any stage of analysis (pre-analytical, analytical, and post-analytical).
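As a concrete illustration of the two-identifier rule under Patient Identification above, the following sketch (not from the source; the record fields and threshold are assumptions for the example) refuses a collection when fewer than two identifiers on the requisition match the patient's wristband.

```python
# Minimal sketch (hypothetical record format, not from the source): accept a
# specimen collection only when at least two independent identifiers on the
# requisition match the patient's wristband, per the two-criteria rule above.

def identifiers_match(requisition: dict, wristband: dict, required: int = 2) -> bool:
    fields = ("full_name", "date_of_birth", "medical_record_number")
    matches = sum(
        1 for field in fields
        if requisition.get(field) and requisition.get(field) == wristband.get(field)
    )
    return matches >= required

requisition = {"full_name": "Jane Doe", "date_of_birth": "1980-03-14",
               "medical_record_number": "MRN-0001"}
wristband   = {"full_name": "Jane Doe", "date_of_birth": "1981-03-14",
               "medical_record_number": "MRN-0002"}

# Only the name matches here, so the collection should not proceed.
print(identifiers_match(requisition, wristband))  # False
```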
It has been estimated that pre-analytical errors account for more than two-thirds of all laboratory errors, while errors in the analytical and post-analytical phases account for only one-third. Carraro and Plebani reported that, among 51,746 clinical laboratory analyses performed over a three-month period in the authors' laboratory (7,615 laboratory orders, 17,514 blood collection tubes), clinicians contacted the laboratory regarding 393 questionable results, of which 160 were confirmed to be due to laboratory errors. Of the 160 confirmed laboratory errors, 61.9% were determined to be pre-analytical errors, 15% were analytical errors, and 23.1% were post-analytical errors. Types of laboratory errors (pre-analytical, analytical, and post-analytical) are summarized in Table 3.1. In order to avoid pre-analytical errors, several approaches can be taken, including the use of hand-held devices connected to the LIS that can objectively identify the patient by scanning a patient-attached barcode, typically a
This webinar originally occurred on February 8, 2023. Duration: 1.5 hours. Estimation of population affinity is a core component of the forensic anthropological biological profile. Most forensic anthropologists rely on craniometric1 and cranial macromorphoscopic2 methods in formulating this estimation. Dental morphology has traditionally been underutilized as a data source in forensic anthropology. There are few published studies employing dental morphological methods for population affinity,3,4 and more are in development. The methods currently published require that practitioners have significant experience scoring dental traits before the methods can be applied accurately. Most forensic anthropologists are familiar with shoveling of the central maxillary incisors and Carabelli's trait of the maxillary first molars for their utility in population affinity estimations,5 though the reliance on these traits may be based more in tradition than in actual utility.6 However, because dental anthropology is not always part of a forensic anthropological education, fewer anthropologists are familiar with, or comfortable using, less commonly known dental morphological traits, especially those of the molars. This presentation will focus on 11 dental morphological traits of the maxillary and mandibular molars that can be useful in estimating population affinity. These traits are the metacone, hypocone, metaconule, parastyle, protostylid, anterior fovea, deflecting wrinkle, enamel extensions, and mandibular cusps 5, 6, and 7. Easy-to-apply trait descriptions, examples, and any known global or regional population frequencies from the published literature will be presented so attendees may incorporate these into their casework. We will discuss applying currently available methods for estimating population affinity and talk about new methods to become available in the near future. After viewing the presentation, practitioners will understand core concepts related to the application of dental morphology in the forensic estimation of population affinity. Practitioners will also be familiar with 11 dental morphological characteristics that are useful in estimating population affinity but are not commonly taught in detail. Practitioners can then apply this knowledge in their casework using currently available methods.
- Jantz R.L., & Ousley S.D. 2005. Fordisc 3.1: Personal computer forensic discriminant functions. University of Tennessee.
- Hefner J.T. 2009. Cranial nonmetric variation and estimating ancestry. J Forensic Sci 54(5):985-995. https://doi.org/10.1111/j.1556-4029.2009.01118.x.
- Edgar H.J.H. 2013. Estimation of ancestry using dental morphological characteristics. J Forensic Sci 58(s1):s3-8. PMCID: PMC3548042.
- Scott G.R., et al. 2018. rASUDAS: A new web-based application for estimating ancestry from tooth morphology. Forensic Anthropology 1(1):18-31. https://doi.org/10.5744/fa.2018.0003.
- Birkby W.H., Fenton T.W., and Anderson B.E. 2008. Identifying Southwest Hispanics using nonmetric traits and the cultural profile. Journal of Forensic Sciences 53(1):29-33. https://doi.org/10.1111/j.1556-4029.2007.00611.x.
- Hawkey D.E., Turner C.G. II. 1998. Carabelli's trait and forensic anthropology: Whose teeth are these? In: Lukacs J.R., editor. Human Dental Development, Morphology, and Pathology: A Tribute to Albert A. Dahlberg. Eugene: Department of Anthropology, University of Oregon. p 41-50.
Detailed Learning Objectives - Attendees will understand the core concepts related to the collection of dental morphological data. - Attendees will become familiar with dental morphological characteristics used in the estimation of population affinity. - Attendees will be able to apply this knowledge in the estimation of the forensic anthropological biological profile. - Heather J.H. Edgar, Ph.D. | Professor, Forensic Anthropologist, University of New Mexico - Becca George, Ph.D. | Instructor of Anthropology/Forensic Anthropology Facilities Curator, Western Carolina University Funding for this Forensic Technology Center of Excellence webinar has been provided by the National Institute of Justice, Office of Justice Programs, U.S. Department of Justice. The opinions, findings, and conclusions or recommendations expressed in this webinar are those of the presenter(s) and do not necessarily reflect those of the U.S. Department of Justice.
Thalassemia, also known as 'Mediterranean anemia', is a blood disorder in which the level of hemoglobin in the body falls below normal. This happens because fewer hemoglobin chains are synthesized in the blood. Hemoglobin is the substance in red blood cells that is responsible for carrying oxygen around the body. A lower level of hemoglobin and a lack of red blood cells cause anemia, so thalassemia patients experience fatigue and lethargy. A person with thalassemia minor does not require therapy, but someone with thalassemia major needs regular blood transfusions. Patients must also maintain a healthy lifestyle and a balanced diet to keep up their energy.
Merle Haggard - Student Encyclopedia (Ages 11 and up): (born 1937). The American singer, songwriter, and guitarist Merle Haggard was one of the most popular country music performers of the late 20th century. His repertoire also included early jazz and contemporary tunes.
Pope St. Martin I. Martin I lay too sick to fight on a couch in front of the altar when the soldiers burst into the Lateran basilica. He had come to the church when he heard the soldiers had landed. But the thought of kidnapping a sick pope from the house of God didn't stop the soldiers from grabbing him and hustling him down to their ship. Elected pope in 649, Martin I had gotten into trouble for refusing to condone silence in the face of wrong. At that time there existed a popular heresy which held that Christ didn't have a human will, only a divine will. The emperor had issued an edict that didn't support Monothelism (as the heresy was known) directly, but simply commanded that no one could discuss Jesus' will at all. Monothelism was condemned at a council convened by Martin I. The council affirmed, once again, that since Jesus had two natures, human and divine, he had two wills, human and divine. The council then went further and condemned Constans' edict to avoid discussion, stating, "The Lord commanded us to shun evil and do good, but not to reject the good with the evil." In his anger at this slap in the face, the emperor sent his soldiers to Rome to bring the pope to him. When Martin I arrived in Constantinople after a long voyage, he was immediately put into prison. There he spent three months in a filthy, freezing cell while he suffered from dysentery. He was not allowed to wash and was given the most disgusting food. After he was condemned for treason without being allowed to speak in his defense, he was imprisoned for another three months. From there he was exiled to the Crimea, where he suffered from famine as well as the harshness of the land and its people. But hardest to take was the fact that the pope found himself friendless. His letters tell how his own church had deserted him and his friends had forgotten him. They wouldn't even send him oil or corn to live on. He died two years later in exile, in the year 656, a martyr who stood up for the right of the Church to establish doctrine even in the face of imperial power.
Cervical cancer is the fourth most common cancer in women worldwide. At present, women are asked to attend cervical screening (also known as a 'smear' or 'Pap test') to detect the presence of high-risk HPV and/or abnormal or pre-cancerous cells. The uptake of cervical screening is low globally. The UK's Cervical Screening Programme has shown that screening can reduce mortality through early detection and treatment of pre-cancerous changes before cancer develops. However, there is variation between and within countries in the availability and uptake of screening. There are also differences based on ethnic group, age, education and socioeconomic status, and this needs to be borne in mind when developing interventions to increase uptake.

The aim of the review
The aim of this review was to look at the methods used to encourage women to undergo cervical screening. These included invitations, reminders, education, message framing, counselling, risk factor assessment, procedures and economic interventions.

What are the main findings?
Seventy trials were included in this review, of which 69 trials (257,899 women) were entered into a meta-analysis. Invitations and, to a lesser extent, educational materials probably increase the uptake of cervical screening (moderate-certainty evidence). HPV self-testing, as an alternative to Pap smears, may also increase screening coverage; however, self-testing was not covered in this review and will be considered in a subsequent review. Lay health workers used to promote screening to ethnic minority groups may increase screening uptake (low-certainty evidence). It was difficult to deduce any meaningful conclusions for other, less widely reported interventions, such as counselling, risk factor assessment, access to a health promotion nurse, a photo comic book, intensive recruitment and message framing, due to sparse data and low-certainty evidence. However, having access to a health promotion nurse and attempts at intensive recruitment may increase uptake.

Certainty of the evidence
The majority of the evidence was of low to moderate certainty (quality) and further research may change these findings. For the majority of trials, the risk of bias was unclear, making it difficult to draw firm conclusions from their results.

What are the conclusions?
Invitation letters probably increase the uptake of cervical screening, and the use of lay health workers amongst ethnic minority populations may do so. Educational interventions may also increase screening; however, it is unclear what format is the most effective. These findings apply to developed countries and their relevance to low- and middle-income countries is unclear.

There is moderate-certainty evidence to support the use of invitation letters to increase the uptake of cervical screening. Low-certainty evidence showed that lay health worker involvement amongst ethnic minority populations may increase screening coverage, and there was also support for educational interventions, but it is unclear what format is most effective. The majority of the studies were from developed countries, so their relevance to low- and middle-income countries (LMICs) is unclear. Overall, the low-certainty evidence that was identified makes it difficult to infer which interventions were best, with the exception of invitational interventions, where there appeared to be more reliable evidence. This is an update of the Cochrane review published in Issue 5, 2011.

Worldwide, cervical cancer is the fourth commonest cancer affecting women.
High-risk human papillomavirus (HPV) infection is causative in 99.7% of cases. Other risk factors include smoking, multiple sexual partners, the presence of other sexually transmitted diseases and immunosuppression. Primary prevention strategies for cervical cancer focus on reducing HPV infection via vaccination, and data suggest that this has the potential to prevent nearly 90% of cases in those vaccinated prior to HPV exposure. However, not all countries can afford vaccination programmes and, worryingly, uptake in many countries has been extremely poor. Secondary prevention, through screening programmes, will remain critical to reducing cervical cancer, especially in unvaccinated women or those vaccinated later in adolescence. This includes screening for the detection of pre-cancerous cells, as well as high-risk HPV. In the UK, since the introduction of the Cervical Screening Programme in 1988, the associated mortality rate from cervical cancer has fallen. However, worldwide, there is great variation between countries in both coverage and uptake of screening. In some countries national screening programmes are available, whereas in others screening is provided on an opportunistic basis. Additionally, there are differences within countries in uptake dependent on ethnic origin, age, education and socioeconomic status. Thus, understanding and incorporating these factors in screening programmes can increase the uptake of screening. This, together with vaccination, can lead to cervical cancer becoming a rare disease.

To assess the effectiveness of interventions aimed at women to increase the uptake, including informed uptake, of cervical screening.

We searched the Cochrane Central Register of Controlled Trials (CENTRAL; Issue 6, 2020) and the MEDLINE, Embase and LILACS databases up to June 2020. We also searched registers of clinical trials, abstracts of scientific meetings and reference lists of included studies, and contacted experts in the field.

Randomised controlled trials (RCTs) of interventions to increase uptake/informed uptake of cervical screening. Two review authors independently extracted data and assessed risk of bias. Where possible, the data were synthesised in a meta-analysis using standard Cochrane methodology.

Comprehensive literature searches identified 2597 records; of these, 70 met our inclusion criteria, of which 69 trials (257,899 participants) were entered into a meta-analysis. The studies assessed the effectiveness of invitational and educational interventions, lay health worker involvement, counselling and risk factor assessment. Clinical and statistical heterogeneity between trials limited statistical pooling of data. Overall, there was moderate-certainty evidence to suggest that invitations appear to be an effective method of increasing uptake compared to control (risk ratio (RR) 1.71, 95% confidence interval (CI) 1.49 to 1.96; 141,391 participants; 24 studies). Additional analyses, ranging from low- to moderate-certainty evidence, suggested that invitations that were personalised, i.e. a personal invitation, a GP invitation letter or a letter with a fixed appointment, appeared to be more successful. More specifically, there was very low-certainty evidence to support the use of GP invitation letters as compared to invitation letters from other authority sources within two RCTs: one RCT assessing 86 participants (RR 1.69, 95% CI 0.75 to 3.82) and another, showing a modest benefit, that included over 4000 participants (RR 1.13, 95% CI 1.05 to 1.21).
Low-certainty evidence favoured personalised invitations (telephone call, face-to-face or targeted letters) as compared to standard invitation letters (RR 1.32, 95% CI 1.11 to 1.21; 27,663 participants; 5 studies). There was moderate-certainty evidence to support a letter with a fixed appointment to attend, as compared to a letter with an open invitation to make an appointment (RR 1.61, 95% CI 1.48 to 1.75; 5742 participants; 5 studies). Low-certainty evidence supported the use of educational materials (RR 1.35, 95% CI 1.18 to 1.54; 63,415 participants; 13 studies) and lay health worker involvement (RR 2.30, 95% CI 1.44 to 3.65; 4330 participants; 11 studies). Other less widely reported interventions included counselling, risk factor assessment, access to a health promotion nurse, a photo comic book, intensive recruitment and message framing. It was difficult to deduce any meaningful conclusions from these interventions due to sparse data and low-certainty evidence. However, having access to a health promotion nurse and attempts at intensive recruitment may have increased uptake. One trial reported an economic outcome and randomised 3124 participants within a national screening programme to receive either the standard screening invitation, which would incur a fee, or an invitation offering screening free of charge. No difference in uptake at 90 days was found (574/1562 intervention versus 612/1562 control; RR 0.94, 95% CI 0.86 to 1.03). The use of HPV self-testing as an alternative to conventional screening may also be effective at increasing uptake, and this will be covered in a subsequent review. Secondary outcomes, including cost data, were incompletely documented. The majority of cluster-RCTs did not account for clustering or adequately report the number of clusters in the trial in order to estimate the design effect, so we did not selectively adjust the trials. It is unlikely that reporting of these trials would impact the overall conclusions and robustness of the results. In the meta-analyses that could be performed there was considerable statistical heterogeneity, and this should be borne in mind when interpreting these findings. Given this and the low- to moderate-certainty evidence, further research may change these findings. The risk of bias in the majority of trials was unclear, and a number of trials suffered from methodological problems and inadequate reporting. We downgraded the certainty of evidence because of an unclear or high risk of bias with regards to allocation concealment, blinding, incomplete outcome data and other biases.
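The economic-outcome comparison above (574/1562 intervention versus 612/1562 control) can be reproduced from the raw counts using the usual log-scale normal approximation for a risk ratio. The sketch below is an illustration only, not code from the review.

```python
import math

# Minimal sketch: risk ratio and 95% CI from event counts in two groups,
# using the standard normal approximation on the log scale.
# Counts are taken from the trial quoted above (574/1562 vs 612/1562).

def risk_ratio(events_1, total_1, events_2, total_2):
    rr = (events_1 / total_1) / (events_2 / total_2)
    se_log_rr = math.sqrt(1 / events_1 - 1 / total_1 + 1 / events_2 - 1 / total_2)
    low = math.exp(math.log(rr) - 1.96 * se_log_rr)
    high = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, low, high

rr, low, high = risk_ratio(574, 1562, 612, 1562)
print(f"RR {rr:.2f} (95% CI {low:.2f} to {high:.2f})")  # RR 0.94 (95% CI 0.86 to 1.03)
```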
Peter Harper – Low Carbon Living

Peter Harper is the Director of Research and Innovation at the Center for Alternative Technology (UK). Peter originated the term "alternative technology" and has been a prominent theorist of the movement since the early '70s. He is also a biologist, horticulturalist and landscape designer. In this presentation he discusses the data and projections from his "LifeStyle Lab" project. He starts from the premise that we need a zero carbon society and considers possible solutions that might lead us there. He outlines the pros and cons of lifestyle-driven changes and technology-enabled reduction: do we have the time, as consumers, to choose which strategy we want to take? Peter elaborates on the challenges of creating a low carbon society and in particular discusses different potential lifestyles. He illustrates these using two fictitious families he names the "WOTs" (Well-Off Techie greenie household) and the "LILs" (Low-Income Lifestyle greenie household). Watch or listen to find out whether technology-driven lifestyles or do-it-yourself approaches deliver the more promising carbon gains.
Sunday, June 13, 2010

This is one of those arthropods that can be considered a misfit. They aren't insects and they aren't quite spiders either, although they are related to spiders. Daddy longlegs are one of those creepy crawlies that children identify with and love to hold. They are harmless, even though rumors persist that they are highly venomous and would be deadly if only they could bite us. This simply is not true. They do not possess venom; they feed instead on animal and plant matter.

Other creatures often called daddy-longlegs are actually spiders. These long-legged spiders are in the family Pholcidae. Previously the common name of this family was the cellar spiders, but arachnologists have also given them the moniker of "daddy-longlegs spiders" because of the confusion generated by the general public. Because these arachnids are spiders, they have 2 basic body parts (cephalothorax and abdomen), have 8 eyes most often clumped together at the front of the body, show no evidence of segmentation on the abdomen, have 8 legs all attached to the front-most body part (the cephalothorax), and make webs out of silk. This is most probably the animal to which people refer when they tell the tale, because these spiders are plentiful, especially in cellars (hence their common name), and are commonly seen by the general public. The most common pholcid spiders found in U.S. homes are both European immigrants. There is no proven documentation that even these spiders bite humans. So this myth is definitely NOT true for the daddy longlegs in the order Opiliones, and it is highly UNLIKELY for the pholcids as well.

Unlike spiders, which have two body parts (a cephalothorax and an abdomen), daddy longlegs have one compact body. These spider-like creatures belong to the family Phalangiidae within the order Opiliones, with up to 150 species north of Mexico in North America. They are powerfully difficult to ID to species. They are sometimes called harvestmen, harvest spiders, shepherd spiders, phalangids, and opilionids. Most spiders have 6 to 8 eyes; daddy longlegs have only 2. They do not build webs and will only be found in a web if they happen to fall into one, at which point they are sure to become dinner for a much more aggressive spider. These oddballs of the arthropod realm are fond of moist areas and will often be found around rotting logs, under rocks, or near damp basements and cellars.

Rest assured it is safe for your children and your grand-children to play with these little creatures (or in some cases not-so-little ones). They are no more harmful than lightning bugs or ladybugs.

Information derived from: http://www.backyardnature.net/longlegs.htm