Columns: score (float64, 4 to 5.34); text (string, lengths 256 to 572k); url (string, lengths 15 to 373)
4.09375
W and Z bosons W and Z bosons are a group of elementary particles. They are bosons, which means that they have integer spin; the W and Z each have a spin of 1. Both had been found in experiments by the year 1983. Together, they are responsible for a force known as the "weak force." The weak force is called weak because it is not as strong as other forces like electromagnetism. There are two W bosons with different charges: the W+ and its antiparticle, the W−. Z bosons are their own antiparticle. Naming: W bosons are named after the weak force that they are responsible for. The weak force is what physicists believe is responsible for the breaking down of some radioactive elements, in the form of Beta decay. By the late 1970s, scientists had managed to combine early descriptions of the weak force with electromagnetism, and called the combined theory the electroweak force. Creation of W and Z bosons: W and Z bosons are only known to be created during Beta decay, which is a form of radioactive decay. Beta decay: Beta decay occurs when there are a lot of neutrons in an atom. An easy way to think of a neutron is that it is made of one proton and one electron. When there are too many neutrons in one atomic nucleus, one neutron will split and form a proton and an electron. The proton will stay where it is, and the electron will be launched out of the atom at incredible speed. This is why Beta radiation is harmful to humans. The above model is not entirely accurate, as both protons and neutrons are each made of three quarks, which are elementary particles. A proton is made of two up quarks (+2/3 charge each) and one down quark (−1/3 charge). A neutron is made of one up quark and two down quarks. Because of this, the proton has +1 charge and the neutron 0 charge. The different types of quarks are known in the scientific world as flavours. The weak force is believed to be able to change the flavour of a quark. For example, when it changes a down quark in a neutron into an up quark, the charge of the neutron becomes +1, since it would then have the same arrangement of quarks as a proton. The three-quark particle with a charge of +1 is no longer a neutron after this, as it fulfills all of the requirements to be a proton. Therefore, Beta decay will cause a neutron to become a proton (along with some other end-products). W boson decay: When a quark changes flavour, as it does in Beta decay, it releases a W boson. W bosons only last for about 3×10⁻²⁵ seconds, which is why we had not discovered them until less than half a century ago. Surprisingly, W bosons have a mass of about 80 times that of a proton (one proton weighs about one atomic mass unit). Keep in mind that the neutron that it came from has almost the same mass as the proton. In the quantum world, it is not an extremely uncommon occurrence for a more massive particle to come briefly from a less massive one, because the W exists for so short a time that the energy-time uncertainty principle allows it: the product of the borrowed energy and the particle's lifetime is on the order of the reduced Planck constant, which sets the scale at which these quantum effects matter. After the 3×10⁻²⁵ seconds have passed, a W boson decays into one electron and one antineutrino. Since neutrinos rarely interact with matter, we can ignore them from now on. The electron is propelled out of the atom at high speed. The proton that was produced by the Beta decay stays in the atomic nucleus and raises the atomic number by one. Z boson decay: Z bosons are also predicted in the Standard Model of physics, which successfully predicted the existence of W bosons.
Z bosons decay into a fermion and its antiparticle. Fermions are particles, such as electrons and quarks, that have half-integer spin (spin in half-units of the reduced Planck constant).
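The charge bookkeeping and the lifetime argument above can be checked with a short calculation. The following Python sketch is illustrative only and is not part of the original article; the physical constants are standard values, and the uncertainty relation is used purely as an order-of-magnitude estimate.

```python
# Illustrative sketch, not part of the original article: quark-charge
# bookkeeping for beta decay, plus an order-of-magnitude uncertainty check.

HBAR_J_S = 1.0545718e-34   # reduced Planck constant, in J*s
EV_PER_J = 6.241509e18     # electron volts per joule

UP, DOWN = +2/3, -1/3      # quark charges, in units of the elementary charge e

proton  = [UP, UP, DOWN]   # two up quarks + one down quark
neutron = [UP, DOWN, DOWN] # one up quark + two down quarks

print("proton charge :", round(sum(proton), 3))   # -> 1.0
print("neutron charge:", round(sum(neutron), 3))  # -> 0.0

# Energy-time uncertainty: delta_E * delta_t ~ hbar.  Translating the quoted
# lifetime of ~3e-25 s into an energy uncertainty gives the W boson's
# intrinsic energy width.
lifetime_s = 3e-25
width_GeV = HBAR_J_S / lifetime_s * EV_PER_J / 1e9
print(f"energy width from the uncertainty relation: ~{width_GeV:.1f} GeV")
```

Running this gives charges of +1 and 0 for the proton and neutron and an energy width of roughly 2 GeV, which is in the right range for the W boson's measured decay width.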
https://simple.wikipedia.org/wiki/W_and_Z_bosons
4.03125
Section 1 - Introduction The cold war is the name given to a period of history between 1945 and 1989. During this time the USA and the USSR challenged each other. This was a time of extreme tension and threat of war. The USA and the USSR were known as superpowers, as they were far stronger and more powerful than any other countries. Although fear and threats of war were very real and there were times when war seemed imminent, the war never happened. Both sides knew the extent of any war would be great due to nuclear weapons. This meant conflict was always resolved using diplomatic means. The ideological rivalry between the superpowers is central to understanding the cold war. The ideological preference of the USA was a commitment to democracy and free enterprise leading to a capitalist society. This was characterised by: • free elections, with a choice of parties • democratic freedoms - freedom of speech, expression and assembly • free mass media - independent media sources • free enterprise - business, manufacturing, banking etc. • individual rights - the right to vote, to a fair trial etc. The Soviet Union The Russian communist ideology was based on Marxism/Leninism with a commitment to equality. This was exemplified by: • a one-party state, with only the communist party • a totalitarian system - all aspects of life influenced by communism • an emphasis on equality for all • strict control of the media - censorship • state control of the means of production • suppression of dissenting opinions and opposition - secret police How was the cold war ‘fought’ or conducted? The cold war was conducted in a number of ways, with direct conflict always stopping short of open war. The Arms Race The two sides engaged in an ongoing nuclear and conventional arms race from 1945 onwards. Each side tried to develop increasingly powerful and sophisticated weapons. Nuclear weapons were never meant to be used. Both sides had a network of spies and secret agents engaged in gathering intelligence and information. The USA made extensive use of the CIA and the USSR used the KGB. Alliances were key to the cold war. These included NATO for the USA and the west and the Warsaw Pact for the USSR. Both had well organised command networks and carried out regular drills and exercises. The space race was also shaped by the cold war. The USSR derived great prestige from the launch of the first space satellite, Sputnik 1, in 1957. The USA succeeded with the moon landings in 1969. The USA and the USSR also competed against each other in the Olympics. During the cold war both sides offered aid to countries in urgent need of help. This began with the American Marshall Plan in the late 1940s. The USSR began offering aid to independent African and Asian countries. Both sides were trying to influence the recipient. Crisis Management: Tests of leadership for the superpowers During the cold war serious episodes of tension and crisis developed. These increased the risk of armed confrontation and war. In some cases the crisis involved only one superpower, for example the USSR in Hungary in 1956 and Czechoslovakia in 1968. In these instances the USA responded only with diplomatic protest. This was tactical, as the USA accepted that the USSR was working within its own sphere of responsibility, so the events had little to do with the USA. The war in Vietnam was the boldest episode of the cold war. The USA intervened in Vietnam in strength and made a major commitment to stop advancing communism. The USSR limited its actions to supplying military resources to North Vietnam.
The Berlin Crisis of 1961 was potentially very serious, with direct confrontation between the USA and the USSR. The Cuban Missile Crisis of 1962 was the ultimate crisis. This was a direct confrontation between the USA and the USSR, with war...
http://www.studymode.com/essays/Cold-War-1397750.html
4
The United States Geological Survey (USGS) reports that ice shelves are retreating in the southern section of the Antarctic Peninsula due to climate change. The disappearing ice could lead to sea-level rise if warming continues, threatening coastal communities and low-lying islands worldwide. Every ice front in the southern part of the Antarctic Peninsula has been retreating overall from 1947 to 2009, according to the USGS, with the most dramatic changes occurring since 1990. Previously documented evidence indicates that the majority of ice fronts on the entire Peninsula have also retreated during the late 20th century and into the early 21st century. The ice shelves are attached to the continent and already floating, holding in place the Antarctic ice sheet that covers about 98 percent of the Antarctic continent. As the ice shelves break off, it is easier for outlet glaciers and ice streams from the ice sheet to flow into the sea. The transition of that ice from land to the ocean is what raises sea level. The Peninsula is one of Antarctica's most rapidly changing areas because it is farthest away from the South Pole, and its ice shelf loss may be a forecast of changes in other parts of Antarctica and the world if warming continues. Retreat along the southern part of the Peninsula is of particular interest because that area has the Peninsula's coolest temperatures, demonstrating that global warming is affecting the entire length of the Peninsula. The Antarctic Peninsula's southern section as described in this study contains five major ice shelves: Wilkins, George VI, Bach, Stange and the southern portion of Larsen Ice Shelf. The ice lost since 1998 from the Wilkins Ice Shelf alone totals more than 4,000 square kilometers, an area larger than the state of Rhode Island. "This research is part of a larger ongoing USGS project that is for the first time studying the entire Antarctic coastline in detail, and this is important because the Antarctic ice sheet contains 91 percent of Earth's glacier ice," said USGS scientist Jane Ferrigno. "The loss of ice shelves is evidence of the effects of global warming. We need to be alert and continually understand and observe how our climate system is changing." Citation: Ferrigno et al., 'Coastal-Change and Glaciological Map of the Palmer Land Area, Antarctica: 1947–2009', USGS, February 2010
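As a quick check on the Rhode Island comparison above, here is a small, illustrative Python snippet; it is not from the article, and the Rhode Island area used is an approximate, commonly quoted figure that should be treated as an assumption.

```python
# Illustrative unit-conversion check, not part of the original article.
SQ_MI_PER_SQ_KM = 0.386102               # square miles per square kilometre

wilkins_loss_sq_km = 4000                # Wilkins Ice Shelf loss since 1998 (figure from the article)
rhode_island_total_sq_km = 4001          # approximate total area of Rhode Island (assumed, ~1,545 sq mi)

print(f"Wilkins loss:  ~{wilkins_loss_sq_km * SQ_MI_PER_SQ_KM:,.0f} sq mi")
print(f"Rhode Island:  ~{rhode_island_total_sq_km * SQ_MI_PER_SQ_KM:,.0f} sq mi (total area, incl. water)")
```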
http://www.science20.com/news_articles/ice_shelves_retreating_antarctic_peninsula
4.03125
Primary succession is one of two types of biological and ecological succession of plant life, occurring in an environment in which new substrate, devoid of vegetation and other organisms and usually lacking soil, is deposited, such as a lava flow or an area left behind by a retreating glacier. In other words, it is the gradual growth of an ecosystem over a long period. In contrast, secondary succession occurs on substrate that previously supported vegetation before an ecological disturbance such as a flood, hurricane, tornado, or fire destroyed the plant life. In primary succession, pioneer species like lichens, algae and fungi, as well as abiotic factors like wind and water, start to "normalize" the habitat. Primary succession begins on rock formations, such as volcanoes or mountains, or in a place with no organisms or soil. This creates conditions nearer optimum for vascular plant growth; pedogenesis, or the formation of soil, is the most important process. These pioneer plants are then dominated and often replaced by plants better adapted to less harsh conditions; these plants include vascular plants like grasses and some shrubs that are able to live in thin soils that are often mineral based. For example, spores of lichens or fungi, being the pioneer species, are spread onto a land of rocks. Then, the rocks are broken down into smaller pieces and organic matter gradually accumulates, favouring the growth of larger plants like grasses, ferns and herbs. These plants further improve the habitat and help the adaptation of larger vascular plants like shrubs, or even medium- or large-sized trees. More animals are then attracted to the place and finally a climax community is reached. A good example of primary succession takes place after a volcano has erupted. The lava flows into the ocean and hardens into new land. The resulting barren land is first colonized by pioneer plants, which pave the way for later, less hardy plants, such as hardwood trees, by facilitating pedogenesis, especially through the biotic acceleration of weathering and the addition of organic debris to the surface regolith. An example of primary succession is the island of Surtsey, an island formed in 1963 by a volcanic eruption from beneath the sea. Surtsey is off the south coast of Iceland and is being monitored to observe primary succession in progress. About thirty species of plant had become established by 2008, and more species continue to arrive, at a typical rate of roughly 2–5 new species per year.
https://en.wikipedia.org/wiki/Primary_succession
4
NBC Learn, Windows to the Universe Note: you may need to scroll down the Changing Planet video page to get to this video. Video length: 6:21 min.Learn more about Teaching Climate Literacy and Energy Awareness» See how this Video supports the Next Generation Science Standards» Middle School: 6 Disciplinary Core Ideas High School: 3 Disciplinary Core Ideas About Teaching Climate Literacy Other materials addressing 5b Notes From Our Reviewers The CLEAN collection is hand-picked and rigorously reviewed for scientific accuracy and classroom effectiveness. Read what our review team had to say about this resource below or learn more about how CLEAN reviews teaching materials Teaching Tips | Science | Pedagogy | - Note: you may need to scroll down the Changing Planet video page to get to this video. - The video can be enlarged to eliminate visual impact of the text and images surrounding the video. About the Science - The video documents impacts of thermal expansion and melting of ice sheets and glaciers on coastal communities - The video also shows the ways that sea level rise is documented. These include the use of sediment core data and satellite data to document rate of sea level rise in the past. - The video also describes a laboratory model that examines how warming ocean currents reach Antarctica to contributing to the melting of ice sheets. - References to projected sea level rise in the coming decades is a highly dynamic field. Estimates change as research progresses and teachers need to be aware of this. - Comments from expert scientist: Includes interviews with two leading scientists in the field and provides visuals of the sources of the data (sediment cores, satellite measurements, physical models, computer models). About the Pedagogy - This video can be embedded in a lesson or activity that explores issues of ocean circulation, the melting of continental ice sheets, and how they impact sea level rise. Next Generation Science Standards See how this Video supports: Disciplinary Core Ideas: 6 MS-ESS2.C1:Water continually cycles among land, ocean, and atmosphere via transpiration, evaporation, condensation and crystallization, and precipitation, as well as downhill flows on land. MS-ESS2.C2:The complex patterns of the changes and the movement of water in the atmosphere, determined by winds, landforms, and ocean temperatures and currents, are major determinants of local weather patterns. MS-ESS2.C3:Global movements of water and its changes in form are propelled by sunlight and gravity. MS-ESS2.C4:Variations in density due to variations in temperature and salinity drive a global pattern of interconnected ocean currents. MS-ESS2.D1:Weather and climate are influenced by interactions involving sunlight, the ocean, the atmosphere, ice, landforms, and living things. These interactions vary with latitude, altitude, and local and regional geography, all of which can affect oceanic and atmospheric flow patterns. MS-ESS3.D1:Human activities, such as the release of greenhouse gases from burning fossil fuels, are major factors in the current rise in Earth’s mean surface temperature (global warming). Reducing the level of climate change and reducing human vulnerability to whatever climate changes do occur depend on the understanding of climate science, engineering capabilities, and other kinds of knowledge, such as understanding of human behavior and on applying that knowledge wisely in decisions and activities. 
Disciplinary Core Ideas: 3 HS-ESS2.C1:The abundance of liquid water on Earth’s surface and its unique combination of physical and chemical properties are central to the planet’s dynamics. These properties include water’s exceptional capacity to absorb, store, and release large amounts of energy, transmit sunlight, expand upon freezing, dissolve and transport materials, and lower the viscosities and melting points of rocks. HS-ESS2.D1:The foundation for Earth’s global climate systems is the electromagnetic radiation from the sun, as well as its reflection, absorption, storage, and redistribution among the atmosphere, ocean, and land systems, and this energy’s re-radiation into space. HS-ESS3.D1:Though the magnitudes of human impacts are greater than they have ever been, so too are human abilities to model, predict, and manage current and future impacts.
http://cleanet.org/resources/42953.html
4.125
May 21, 2008 Phoenix Mission Science & Technology Mars is a cold desert planet with no liquid water on its surface. But in the Martian arctic, water ice lurks just below ground level. Discoveries made by the Mars Odyssey Orbiter in 2002 show large amounts of subsurface water ice in the northern arctic plain. The Phoenix lander targets this circumpolar region using a robotic arm to dig through the protective top soil layer to the water ice below and ultimately, to bring both soil and water ice to the lander platform for sophisticated scientific analysis.The complement of the Phoenix spacecraft and its scientific instruments are ideally suited to uncover clues to the geologic history and biological potential of the Martian arctic. Phoenix will be the first mission to return data from either polar region providing an important contribution to the overall Mars science strategy "Follow the Water" and will be instrumental in achieving the four science goals of NASA's long-term Mars Exploration Program. Determine whether Life ever arose on Mars Characterize the Climate of Mars Characterize the Geology of Mars Prepare for Human Exploration The Phoenix Mission has two bold objectives to support these goals, which are to (1) study the history of water in the Martian arctic and (2) search for evidence of a habitable zone and assess the biological potential of the ice-soil boundary Objective 1: Study the History of Water in All its Phases Currently, water on Mars' surface and atmosphere exists in two states: gas and solid. At the poles, the interaction between the solid water ice at and just below the surface and the gaseous water vapor in the atmosphere is believed to be critical to the weather and climate of Mars. Phoenix will be the first mission to collect meteorological data in the Martian arctic needed by scientists to accurately model Mars' past climate and predict future weather processes. Liquid water does not currently exist on the surface of Mars, but evidence from Mars Global Surveyor, Odyssey and Exploration Rover missions suggest that water once flowed in canyons and persisted in shallow lakes billions of years ago. However, Phoenix will probe the history of liquid water that may have existed in the arctic as recently as 100,000 years ago. Scientists will better understand the history of the Martian arctic after analyzing the chemistry and mineralogy of the soil and ice using robust instruments. Objective 2: Search for Evidence of Habitable Zone and Assess the Biological Potential of the Ice-Soil Boundary Recent discoveries have shown that life can exist in the most extreme conditions. Indeed, it is possible that bacterial spores can lie dormant in bitterly cold, dry, and airless conditions for millions of years and become activated once conditions become favorable. Such dormant microbial colonies may exist in the Martian arctic, where due to the periodic wobbling of the planet, liquid water may exist for brief periods about every 100,000 years making the soil environment habitable. Phoenix will assess the habitability of the Martian northern environment by using sophisticated chemical experiments to assess the soil's composition of life giving elements such as carbon, nitrogen, phosphorus, and hydrogen. Identified by chemical analysis, Phoenix will also look at reduction-oxidation (redox) molecular pairs that may determine whether the potential chemical energy of the soil can sustain life, as well as other soil properties critical to determine habitability such as pH and saltiness. 
Despite having the proper ingredients to sustain life, the Martian soil may also contain hazards that prevent biological growth, such as powerful oxidants that break apart organic molecules. Powerful oxidants that can break apart organic molecules are expected in dry environments bathed in UV light, such as the surface of Mars. But a few inches below the surface, the soil could protect organisms from the harmful solar radiation. Phoenix will dig deep enough into the soil to analyze the soil environment potentially protected from UV looking for organic signatures and potential habitability. NASA Science Goals Phoenix seeks to verify the presence of the Martian Holy Grail: water and habitable conditions. In doing so, the mission strongly complements the four goals of NASA's Mars Exploration Program. Goal 1: Determine whether life ever arose on Mars Continuing the Viking missions' quest, but in an environment known to be water-rich, Phoenix searches for signatures of life at the soil-ice interface just below the Martian surface. Phoenix will land in the artic plains, where its robotic arm will dig through the dry soil to reach the ice layer, bring the soil and ice samples to the lander platform, and analyze these samples using advanced scientific instruments. These samples may hold the key to understanding whether the Martian arctic is a habitable zone where microbes could grow and reproduce during moist conditions. Goal 2: Characterize the climate of Mars Phoenix will land during the retreat of the Martian polar cap, when cold soil is first exposed to sunlight after a long winter. The interaction between the ground surface and the Martian atmosphere that occurs at this time is critical to understanding the present and past climate of Mars. To gather data about this interaction and other surface meteorological conditions, Phoenix will provide the first weather station in the Martian polar region, with no others currently planned. Data from this station will have a significant impact in improving global climate models of Mars. Goal 3: Characterize the geology of Mars As on Earth, the past history of water is written below the surface because liquid water changes the soil chemistry and mineralogy in definite ways. Phoenix will use a suite of chemistry experiments to thoroughly analyze the soil's chemistry and mineralogy. Some scientists speculate the landing site for Phoenix may have been a deep ocean in the planet's distant past leaving evidence of sedimentation. If fine sediments of mud and silt are found at the site, it may support the hypothesis of an ancient ocean. Alternatively, coarse sediments of sand might indicate past flowing water, especially if these grains are rounded and well sorted. Using the first true microscope on Mars, Phoenix will examine the structure of these grains to better answer these questions about water's influence on the geology of Mars. Goal 4: Prepare for human exploration The Phoenix Mission will provide evidence of water ice and assess the soil chemistry in Martian arctic. Water will be a critical resource to future human explorers and Phoenix may provide appreciable information on how water may be acquired on the planet. Understanding the soil chemistry will provide understanding of the potential resources available for human explorers to the northern plains. Phoenix's Robotic Arm (RA) is the single most crucial element to making scientific measurements. 
The robotic arm combines strength and finesse to dig trenches, scrape water ice, and precisely deliver samples to other instruments on the science deck. Also, the robotic arm carries a camera and thermal-electric probe to make measurements directly in the trench. The following table shows the relationships between Phoenix's science objectives, the scientific measurements to be made, and the instruments that will make these measurements. Instrument key:
- SSI = Surface Stereo Imager
- RAC = Robotic Arm Camera
- MARDI = Mars Descent Imager
- TEGA = Thermal and Evolved Gas Analyzer
- MECA = Microscopy, Electrochemistry, and Conductivity Analyzer
- WC = Wet Chemistry Experiment
- M = Microscopy, including the Optical Microscope and the Atomic Force Microscope
- TECP = Thermal and Electrical Conductivity Probe
- MET = Meteorological Station
http://www.redorbit.com/news/space/1395925/phoenix_mission_science__technology/
4.03125
From anyplace on Earth, the clearest thing in the night sky is usually the moon, Earth's only natural satellite and the nearest celestial object (240,250 miles or 384,400 km away). Ancient cultures revered the moon. It represented gods and goddesses in various mythologies -- the ancient Greeks called it "Artemis" and "Selene," while the Romans referred to it as "Luna." When early astronomers looked at the moon, they saw dark spots that they believed were seas (maria) and lighter regions that they believed was land (terrae). Aristotle's view, which was the accepted theory at the time, was that the moon was a perfect sphere and that the Earth was the center of the universe. When Galileo looked at the moon with a telescope, he saw a different image -- a rugged terrain of mountains and craters. He saw how its appearance changed during the month and how the mountains cast shadows that allowed him to calculate their height. Galileo concluded that the moon was much like Earth in that it had mountains, valleys and plains. His observations ultimately contributed to the rejection of Aristotle's ideas and the Earth-centered universe model. Because the moon is so close to the Earth relative to other celestial objects, it's the only one to which humans have traveled and set foot upon. In the 1960s, the United States and Russia were involved in a massive "space race" to land men on the moon. Both countries sent unmanned probes to orbit the moon, photograph it and land on the surface. In July 1969, American astronauts Neil Armstrong and Edwin "Buzz" Aldrin became the first humans to walk on the moon. During six lunar landing missions from 1969 to 1972, a total of 12 American astronauts explored the lunar surface. They made observations, took photographs, set up scientific instruments and brought back 842 pounds (382 kilograms) of moon rocks and dust samples. What did we learn about the moon from these historic journeys? Let's take a closer look at the moon. We'll examine its surface features and learn about its geology, internal structure, phases, formation and influence on the Earth. What's on the surface of the moon? As we mentioned, the first thing that you'll notice when you look at the moon's surface are the dark and light areas. The dark areas are called maria. There are several prominent maria. - Mare Tranquilitatis (Sea of Tranquility): where the first astronauts landed - Mare Imbrium (Sea of Showers): the largest mare (700 miles or 1100 kilometers in diameter) - Mare Serenitatis (Sea of Serenity) - Mare Nubium (Sea of Clouds) - Mare Nectaris (Sea of Nectar) - Oceanus Procellarum (Ocean of Storms) The maria cover only 15 percent of the lunar surface. The remainder of the lunar surface consists of the bright highlands, or terrae. Highlands are rough, mountainous, heavily cratered regions. The Apollo astronauts observed that the highlands are generally about 4 to 5 km (2.5 to 3 miles) above the average lunar surface elevation, while the maria are low-lying plains about 2 to 3 km (1.2 to 1.8 miles) below average elevation. These results were confirmed in the 1990s, when the orbiting Clementine probe extensively mapped the lunar surface. The moon is littered with craters, which are formed when meteors hit its surface. They may have central peaks and terraced walls, and material from the impact (ejecta) can be thrown from the crater, forming rays that emanate from it. Craters come in many sizes, and you'll see that the highlands are more densely cratered than the maria. 
Another type of impact structure is a multi-ringed basin. These structures were caused by huge impacts that sent shockwaves outward and pushed up mountain ranges. The Orientale Basin is an example of a multi-ringed basin. Besides craters, geologists have noticed cinder cone volcanoes, rilles (channel-like depressions, probably from lava), lava tubes and old lava flows, which indicate that the moon was volcanically active at some point. The moon has no true soil because it has no living matter in it. Instead, the "soil" is called regolith. Astronauts noted that the regolith was a fine powder of rock fragments and volcanic glass particles interspersed with larger rocks. Upon examining the rocks brought back from the lunar surface, geologists found the following characteristics: - The maria consisted primarily of basalt, an igneous rock derived from cooled lava. - The highland regions include mostly igneous rocks called anorthosite and breccia - If you compare the relative ages of the rocks, the highland areas are much older than the maria. (4 to 4.3 billion years old versus 3.1 to 3.8 billion years old). - The lunar rocks have very little water and volatile compounds in them (as if they've been baked) and resemble those found in the Earth's mantle. - The oxygen isotopes in moon rocks and the Earth are similar, which indicates that the moon and the Earth formed at about the same distance from the sun. - The density of the moon (3.3 g/cm3) is less than that of the Earth (5.5 g/cm3), which indicates that it doesn't have a substantial iron core. Astronauts placed other scientific packages on the moon to collect data: - Seismometers didn't detect any moonquakes or other indications of plate tectonic activity (movements in the moon's crust) - Magnetometers in orbiting spacecraft and probes didn't detect a significant magnetic field around the moon, which indicates that the moon doesn't have a substantial iron core or molten iron core like the Earth does. Let's look at what all of this information tells us about the formation of the moon. Giant Impactor Hypothesis At the time of Project Apollo in the 1960s, there were basically three hypotheses about how the moon formed. - Double planet (also called the condensation hypothesis): The moon and the Earth formed together at about the same time. - Capture: The Earth's gravity captured the fully formed moon as it wandered by. - Fission: The young Earth spun so rapidly on its axis that a blob of molten Earth spun off and formed the moon. But based on the findings of Apollo and some scientific reasoning, none of these hypotheses worked very well. - If the moon did form alongside the Earth, the composition of the two bodies should be about the same (they aren't). - The Earth's gravity isn't sufficient to capture something the size of the moon and keep it in orbit. - The Earth can't spin fast enough for a blob of material the size of the moon to just spin off. Because none of these hypotheses was satisfactory, scientists looked for another explanation. In the mid-1970s, scientists proposed a new idea called the Giant Impactor (or Ejected Ring) hypothesis. According to this hypothesis, about 4.45 billion years ago, while the Earth was still forming, a large object (about the size of Mars) hit the Earth at an angle. The impact threw debris into space from the Earth's mantle region and overlying crust. The impactor itself melted and merged with the Earth's interior, and the hot debris coalesced to form the moon. 
The Giant Impactor hypothesis explains why the moon rocks have a composition similar to the Earth's mantle, why the moon has no iron core (because the iron in the Earth's core and impactor's core remained on Earth), and why moon rocks seem to have been baked and have no volatile compounds. Computer simulations have shown that this hypothesis is feasible. Geologic History of the Moon Based on analyses of the rocks, crater densities and surface features, geologists came up with the following geologic history of the moon: - After the impact (about 4.45 billion years ago), the newly formed moon had a huge magma ocean over a solid interior. - As the magma cooled, iron and magnesium silicates crystallized and sank to the bottom. Plagioclase feldspar crystallized and floated up to form the anorthosite lunar crust. - Later (about 4 billion years ago), magma rose and infiltrated the lunar crust, where it reacted chemically to form the basalt. The magma ocean continued to cool, forming the lithosphere (which is like the material in the Earth's mantle). As the moon lost heat, the asthenosphere (the next layer in) shrank toward the core and the lithosphere became very large. These events led to a model of the moon's interior that is very different from that of the Earth. - From about 4.6 to 3.9 billion years ago, the moon was intensely bombarded by meteors and other large objects. These impacts modified the lunar crust and gave rise to the large, densely cratered surface in the lunar highlands. Some of these bombardments produced large, multi-ringed basins and mountains. - When the bombardment ceased, lava flowed from the inside of the moon through volcanoes and cracks in the crust. This lava filled the maria and cooled to become the mare basalts. This period of lunar volcanism lasted from about 3.7 billion years to 2.5 billion years ago. Much of the moon's heat was lost during this period. (Because the moon's crust is slightly thinner on the side that faces the Earth, lava could erupt more easily to fill the maria basins. This explains why there are more maria on the near side of the moon compared to the far side.) - Once the volcanic period ended, most of the moon's internal heat was gone, so there was no major geologic activity -- meteor impacts have been the only major geologic factor at work on the moon. These impacts have not been as intense as in earlier periods of the moon's history; bombardments have generally been declining throughout the solar system. However, the meteoric bombardment that continues today has produced some large craters on the maria (like Tycho and Copernicus) and the fine regolith (soil) that covers the lunar surface. Lunar Behavior The moon is thought to influence our daily life and moods, possibly even causing odd behavior. In fact, it's the inspiration for the word "lunatic." Werewolf aficionados, of course, know that a full moon triggers terrifying transformations. And hospital and emergency personnel tell of more crimes, accidents and births during a full moon -- but the evidence for this is mostly anecdotal rather than statistical. Let's look at some of the phenomena involving the moon's orbit. Every night, the moon shows a different face in the night sky. On some nights we can see its entire face, sometimes it's partial, and on others it isn't visible at all. These phases of the moon aren't random -- they change throughout the month in a regular and predictable way. As the moon travels in its 29-day orbit, its position changes daily.
Sometimes it's between the Earth and the sun and sometimes it's behind us. So a different section of the moon's face is lit up by the sun, causing it to show different phases. Over the billions of years of the moon's existence, it has moved farther away from the Earth, and its rate of rotation has also slowed. The moon is tidally locked with the Earth, which means that the Earth's gravity "drags" the moon to rotate on its axis. This is why the moon rotates only once per month and why the same side of the moon always faces the Earth. Every day, the Earth experiences tides, or changes in the level of its oceans. They're caused by the pull of the moon's gravity. There are two high tides and two low tides every day, each lasting about six hours. The moon's gravitational force pulls on water in the oceans and stretches the water out to form tidal bulges in the ocean on the sides of the planet that are in line with the moon. The moon pulls water on the side nearest it, which causes a bulge toward the moon. The moon pulls on the Earth slightly, which drags the Earth away from the water on the opposite side, making another tidal bulge there. So, the areas of the Earth under the bulge experience high tide, while the areas on the thin sides have low tide. As the Earth rotates underneath the elongated bulges, this creates high and low tides about 12 hours apart. The moon also stabilizes the Earth's rotation. As the Earth spins on its axis, it wobbles. The moon's gravitational effect limits the wobble to a small degree. If we had no moon, the Earth might move almost 90 degrees off its axis, with the same motion that a spinning top has as it slows down. Return to the Moon Since 1972, no one has set foot on the moon. However, there is a renewed effort for a lunar return. Why? In 1994, the orbiting Clementine probe detected radio reflections from shadowed craters at the moon's South Pole. The reflections were consistent with the presence of ice. Later, the orbiting Lunar Prospector probe detected hydrogen-rich signals from the same area, possibly hydrogen from ice. Where could water on the moon have come from? It was probably carried to the moon by the comets, asteroids and meteors that have impacted the moon over its long history. Water was never detected by the Apollo astronauts because they didn't explore that region of the moon. If there is indeed water on the moon, it could be used to support a lunar base. The water could be split by electrolysis into hydrogen and oxygen -- the oxygen could be used to support life and both gases could be used for rocket fuel. So, a lunar base could be a staging point for future exploration of the solar system (Mars and beyond). Plus, because of the moon's lower gravity, it is cheaper and easier to lift a rocket off of its surface than from Earth. It might be tricky to get back there though, at least for U.S. astronauts. In 2010, President Barack Obama decided to cancel the Constellation program, the intent of which was to get Americans back on the moon by 2020. That means U.S. astronauts may have to hitch a ride with private space companies, which will receive some funding from NASA. Other countries, including Japan and China, are planning to travel to the moon and researching how to build a lunar base using materials from the lunar surface. Various plans call for heading to the moon and establishing possible bases between 2015 and 2035. To learn more about the moon, take a look at the links on the next page.
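The phase behavior described above can be illustrated with a toy calculation. The Python sketch below is not from the article; it assumes the standard geometric approximation that the lit fraction of the lunar disk is (1 - cos θ)/2, where θ is the sun-moon angle seen from Earth, and a rounded 29.5-day synodic month.

```python
# Illustrative sketch, not from the article: a simple geometric model of
# lunar phases over one synodic month.
import math

SYNODIC_MONTH_DAYS = 29.5   # approximate time from one new moon to the next

for day in range(0, 30, 5):
    angle = 2 * math.pi * day / SYNODIC_MONTH_DAYS   # sun-moon angle seen from Earth
    lit_fraction = (1 - math.cos(angle)) / 2         # fraction of the disk that is lit
    print(f"day {day:2d}: about {lit_fraction:4.0%} of the disk is lit")
```

Day 0 comes out fully dark (new moon), around day 15 the disk is essentially fully lit (full moon), and the quarters fall near 50%, matching the regular, predictable cycle described in the text.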
Man has always been fascinated by the world beyond our own, making astronomy one of the oldest sciences.
http://science.howstuffworks.com/moon1.htm/printable
4.375
Learn to use the order of operations to evaluate numerical expressions ("Numerical Expression Evaluation with Basic Operations" Interactive). Learn new vocabulary words and help remember them by coming up with your own sentences with the new words using a Stop and Jot table. Develop understanding of concepts by studying them in a relational manner. Analyze and refine the concept by summarizing the main idea, creating visual aids, and generating questions and comments using a Four Square Concept Matrix. Discover how order of operations matters not only in math but also in everyday tasks such as laundry.
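The order of operations the lesson teaches can be checked in code. The short Python example below is illustrative and not part of the CK-12 material; Python applies the same precedence rules (parentheses, then exponents, then multiplication and division, then addition and subtraction).

```python
# Illustrative example, not part of the CK-12 lesson: evaluating an
# expression with the standard order of operations.
#
# By hand:
#   parentheses:       (6 / 3)      -> 2
#   exponent:          2 ** 2       -> 4
#   multiplication:    4 * 4        -> 16
#   add/subtract:      3 + 16 - 2   -> 17
result = 3 + 4 * 2 ** 2 - (6 / 3)
print(result)  # -> 17.0
```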
http://www.ck12.org/algebra/Numerical-Expression-Evaluation-with-Basic-Operations/
4.28125
- How to connect linear equations, tables of values, and their graphs.
- How to graph a line.
- How to write a function to describe a table of values.
- How to write an equation to describe a set of pictures.
- How to determine what form an equation is written in.
- Short, helpful video on ACT Geometry by top ACT prep instructor, Devorah. Videos are produced by leading online education provider, Brightstorm.
- How we identify the behavior of a polynomial graph near an x-intercept.
- How to label the roots of a quadratic polynomial, solutions to a quadratic equation, and x-intercepts or roots of a quadratic function.
- Equations and slopes of horizontal and vertical lines.
- How to determine the derivative of a linear function.
- How we identify the equation of a polynomial function when we are given the intercepts of its graph.
- How to graph a quadratic equation by hand.
- How to calculate and interpret the discriminant of a quadratic equation.
- How to graph the reciprocal of a linear function.
- How to find the angle of inclination of a line.
- How to tell if two variables vary directly.
- How to prove that an angle inscribed in a semicircle is a right angle; how to solve for arcs and angles formed by a chord drawn to a point of tangency.
- How to identify the graph of a stretched cosine curve.
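Two of the skills listed above, writing a line in slope-intercept form from two points and interpreting the discriminant of a quadratic, can be sketched in a few lines of Python. This example is illustrative and not taken from the Brightstorm videos.

```python
# Illustrative sketch, not from the Brightstorm videos.

def slope_intercept(p1, p2):
    """Return (m, b) for the line y = m*x + b through two points."""
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)   # undefined for a vertical line (x1 == x2)
    b = y1 - m * x1
    return m, b

def discriminant(a, b, c):
    """Discriminant of a*x**2 + b*x + c; its sign tells how many real roots exist."""
    return b * b - 4 * a * c

print(slope_intercept((1, 2), (3, 8)))   # -> (3.0, -1.0), i.e. y = 3x - 1

d = discriminant(1, -4, 4)               # x**2 - 4x + 4
print(d)                                 # -> 0, so one repeated real root
```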
https://www.brightstorm.com/tag/slope-intercept/page/2
4.0625
It was recently reported that scientists were able to overcome some of the problems with data degradation caused by computing in a quantum environment, and now Nature reports that physicists were able to build the first-ever working quantum network. The fibre optic network is still in its infancy, though, as researchers reported a mere 0.2 percent accuracy in data that had been transferred. Still, the experiment has proven that quantum networks are possible. A quantum computer makes direct use of quantum mechanical phenomena to perform operations on data, and could be able to solve specific problems much faster than any traditional, transistor-based computer. The problem with quantum computing has been errors in computation. A classical computer understands data as bits, which can have a value of either 1 or 0. Qubits, on the other hand, can have a value of 1, 0 or both simultaneously, which is known as superposition, and allows quantum computers to conduct millions of calculations at once. But there are errors, known as quantum decoherence, caused by things like heat, electromagnetic radiation and defective materials. German physicists from the Max Planck Institute of Quantum Optics built the network, in which a single rubidium atom emits a single data-carrying photon into an optical fiber. The photon in turn carries the polarization-encoded state of the rubidium atom, or it is supposed to, hence the resulting 0.2% data transmission accuracy; quantum networking relies on the precisely coordinated behavior of atoms and photons, and it has been difficult keeping them aligned in a single environment. Once researchers have this step sorted out, it is at least proven that a quantum network can exist.
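To make the idea of a qubit and superposition in the paragraph above concrete, here is a minimal, illustrative Python sketch; it models the measurement statistics of a single abstract qubit and is not a model of the Max Planck Institute experiment itself.

```python
# Illustrative sketch of a single qubit, not a model of the actual experiment.
import math

# A qubit state |psi> = a*|0> + b*|1> is a pair of complex amplitudes
# with |a|^2 + |b|^2 = 1.  An equal superposition has a = b = 1/sqrt(2).
a = 1 / math.sqrt(2)
b = 1 / math.sqrt(2)

p0 = abs(a) ** 2   # probability of measuring 0
p1 = abs(b) ** 2   # probability of measuring 1

print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}")  # -> 0.50 and 0.50: both outcomes
                                            #    are possible until measurement
```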
http://www.webpronews.com/german-physicists-build-first-quantum-network-2012-04/
4.1875
When The Earth Moved Nicholas Copernicus Changed The World The Father of Modern Astronomy February 2003 marked the 460th anniversary of the publication of De Revolutionibus Orbium Coelestium (On the Revolutions of the Heavenly Spheres), a manuscript that changed the world. Written by the Polish astronomer Nicholas Copernicus and printed in 1543, De Revolutionibus established, for the first time in history, the correct position of the sun among the planets. The book's findings not only formed the base for astronomers of the future but also inaugurated the great era of theoretical formulation. It is rightfully considered by some to have caused the greatest revolution in science and thought in the last two thousand years. Copernicus put an end to the belief that the earth was the center of the universe, and degraded the earth to a relatively unimportant tributary of the sun. The sun, said Copernicus, was the center of the planetary system, and instead of being stationary, the earth revolved around the sun in the course of a year while rotating once every twenty-four hours about its axis. The book, therefore, also challenged the long-standing belief that the earth was the heavenly center of the universe. The repercussions of this interpretation were magnificent. Copernicus himself originally gave credit to Aristarchus of Samos when he wrote, "Philolaus believed in the mobility of the earth, and some even say that Aristarchus of Samos was of that opinion." Interestingly, this passage was crossed out shortly before publication, maybe because Copernicus decided his treatise would stand on its own merit. 16th Century Renaissance Man The Renaissance resulted in great achievements in esthetic and literary interests, but advances in science moved slowly during the period. The era, however, opened a door for individuals, like Copernicus, to express beliefs they found contrary to what was accepted. Their views most often placed these great thinkers at odds with the Church. Fearful of being labeled heretics, many kept their ideas to themselves or within a close circle of friends. Embracing the views of alleged heretics also placed one out of favor with the church, making support for radical ideas hard to come by. Decades after Copernicus' death, the great astronomer Galileo was forced to disavow his Copernican beliefs to avoid excommunication. The philosopher Giordano Bruno, a Dominican friar greatly influenced by Copernicus, was hunted by the Inquisition and perished in Rome at the stake. He Moved the Earth Until Copernicus, the teachings of the Greek astronomer Ptolemy were considered the gospel truth. Ptolemy, who lived in Alexandria in the second century after Christ, taught that the earth was round and calculated its circumference at an astonishingly close approximation to the true figure. The Ptolemaic system, however, taught that the earth was the stationary center of the universe, and the sun, moon, planets and stars revolved around it. Because the Ptolemaic system enjoyed the endorsement not only of scholars but also of the church, Copernicus, in fear of trial for heresy, long hesitated to announce his heliocentric view. The fear instilled by the Church made it understandable why Copernicus' teachings were not greatly noticed at first, and filtered very slowly into the European consciousness. This chilly reception of the correction of an established erroneous theory proved that scientific investigation was a threat to authority.
It was hard to imagine a church canon as a threat to authority, especially since he was dead by the time his teachings served as inspiration to other astronomers and scientists. Initiated Great Thinking Because his presentation of De Revolutionibus lacked both observational data and mathematical underpinning (at that time, the systematic gathering of data was not part of the scientific method, nor was the practice of justifying laws with countless mathematical proofs), yet offered models and arguments so convincing, it spurred future astronomers and mathematicians to justify his findings, and in effect served as a catalyst for the great inventions and theories of centuries to come. In putting out his theory of the ordered movement of the planets around the sun, Copernicus stimulated investigation into the whole body of phenomena connected with matter in motion. These researches, conducted by many scholars, among them the most shining names of Kepler, Galileo, and Newton, culminated in the theory of gravitation and the recognition of an eternally established, majestic universe of law. Hand-in-hand with these brilliant physico-astronomical discoveries went the development of mathematics. Mathematics reached a new culmination in the seventeenth century with the invention of the calculus by Newton and Leibniz. It was calculus that made possible the complicated measurements demanded by the study of moving objects, and it was in mathematical terms that the laws of motion, not only of solid bodies but also of such physical phenomena as sound, heat, and light, were stated. Nicholas Copernicus (Mikolaj Kopernik) was born in Torun on February 19, 1473, to a well-to-do merchant family. He attended St. John's School in Torun. He studied canon law at the University of Krakow from 1491 to 1495, and from 1496 to 1503, he studied at the Universities of Bologna and Padua. At the University of Krakow, then famous for its mathematics and astronomy, he discovered several contradictions in the system then used for calculating the movements of celestial bodies. At the University of Bologna, he advanced his theory that the moon was a satellite of the earth. At Padua, he studied medicine. He became fascinated by celestial motion and observed these phenomena with his naked eye. He then began drawing the positions of the constellations and planets to support his theory. His uncle Lucas, the Bishop of Varmia, appointed Copernicus a canon of the Church, which provided Copernicus a stipend to study medicine and science. He held the position as a canon of the Chapter of Varmia in Frombork, a little town in the north of Poland, from 1510 until his death in 1543. There he led a busy administrative life which included the organization of armed resistance against provocations by the neighboring Teutonic Knights. His position allowed him to spend most of his time working out his theory. He made astronomical observations using very simple wooden instruments with no lenses (lenses were not invented until 100 years later). About 1515, he earnestly began to compile data, and he wrote a short report on his theory which he circulated among astronomers. The first words of the text supplied the title, Commentariolus (Commentary). It took him many years to give final form to his principal work on the detailed theory of motions in a heliocentric system. In 1543, De Revolutionibus was published. The book was dedicated to Pope Paul III. The published work reached him on his death bed, although some accounts say he never saw the printed copy.
It is believed he died several hours after seeing the printed copy. The citizens of Torun, his birthplace, erected a monument in front of the city hall with the following dedication: "Nicholas Copernicus, A Torunian Moved the Earth; Stopped the Sun." In 1945, the Nicholas Copernicus University was organized in Torun. In 1973, the 500th anniversary of his birth was aptly observed by all higher institutions of learning, astronomical observatories, historians, mathematicians, scientists, and biographers. Musical compositions were inspired by his life, and seventy nations throughout the world issued commemorative postage stamps honoring the Polish genius. Copernicus is buried in Frombork Castle. Forever in Stone Throughout the world, there are many monuments, observatories, and buildings named after the famous astronomer. Here are a few of the more notable ones: • In Torun, Poland, his birthplace, a university named in his honor was organized in 1945. A monument to Copernicus stands in front of the town hall. • A statue in front of the College of Physics in Planty, Poland shows Copernicus as a young student. • A statue of Copernicus by the Danish sculptor Bertel Thorvaldsen stands in downtown Warsaw. • The Copernicus Foundation of Chicago was established in 1971. In 1980, the foundation renovated an old theater (complete with a replica of the Royal Castle Clock Tower in Warsaw) and opened The Copernicus Cultural and Civic Center. • Also in Chicago is one of the most readily recognizable statues of Copernicus. It is located on Solidarity Drive. • An orbiting space laboratory named after Copernicus is on display at the Air and Space Museum at Independence and 6th St. in Washington, D.C. • At the main entrance to the Dag Hammarskjold Library at the United Nations Building is a large bronze head of Copernicus. It was sculpted by Alfons Karny, and was a gift to the U.N. from Poland in 1970. Karny is one of the world's greatest sculptors of the 20th century. The bust is on permanent display. • The Kopernik Polish Cultural Center, located in the Polish Community Center in Utica, N.Y., is operated by the Kopernik Polish Cultural Center Committee of the Kopernik Memorial Association of Central New York. It contains a fine permanent collection of Polish works of art, books, artifacts and videotapes. • In 1973, Central New York's Polonia formed the Kopernik Society to commemorate the 500th anniversary of the birth of Copernicus. The society raised money to build the Kopernik Observatory in Vestal, New York, the only one built in the 20th century without support from major donors and government funding. The Kopernik Observatory is next to the planetarium complex at the Roberson Center, making the site one of the best public astronomy facilities in the Northeast. Currently, the Kopernik Space Education Center project is underway to expand the original facility. • A memorial to Kopernik stands in Fairmount Park in Philadelphia. It was erected under the auspices of the Philadelphia Polish Heritage Society. • The Copernicus Society of America (CSA), established by Edward Piszek, has played a major role in the promotion of Polish heritage in the United States. In 1977, Piszek and the Copernicus Society were instrumental in enabling Fort Ticonderoga to purchase Mount Defiance, near the historic Revolutionary War fortress at which Tadeusz Kosciuszko played a critical role in 1777 to halt the British advance up Lake Champlain.
Fort Ticonderoga was the "location" for filming a PBS special on Kosciuszko, a bicentennial project sponsored by the Copernicus Society and the Reader's Digest Foundation. Most recently, the Copernicus Society was the motivating force behind W.S. Kuniczak's translation of the Henryk Sienkiewicz "Trilogy."
• A copy of the Warsaw, Poland statue of the Polish astronomer is in the square by the Dow Planetarium in Montreal, Canada.
Copernicus In Lore
Like many true heroes, Copernicus is the subject of many folk tales. One, recently retold in Polish Folk Legends by Florence Waszkelewicz-Clowes, tells of a meeting between Copernicus and the legendary magicians Dr. George Faust and Pan Twardowski. Whether or not these real men ever met in the context of the story is speculation, but the tale of their meeting at Pukier Tavern is a popular legend in Poland.
The first-edition copy of Copernicus' "De Revolutionibus Orbium Coelestium" is on permanent display on the main floor of the Library of Congress in Washington, D.C.
http://www.polamjournal.com/Library/Biographies/copernicus/copernicus.html
4.03125
The Fermi level is the total chemical potential for electrons (or electrochemical potential for electrons) and is usually denoted by µ or EF. The Fermi level of a body is a thermodynamic quantity, and its significance is the thermodynamic work required to add one electron to the body (not counting the work required to remove the electron from wherever it came from). A precise understanding of the Fermi level—how it relates to electronic band structure in determining electronic properties, how it relates to the voltage and flow of charge in an electronic circuit—is essential to an understanding of solid-state physics. In a band structure picture, the Fermi level can be considered to be a hypothetical energy level of an electron, such that at thermodynamic equilibrium this energy level would have a 50% probability of being occupied at any given time. The Fermi level does not necessarily correspond to an actual energy level (in an insulator the Fermi level lies in the band gap), nor does it require the existence of a band structure. Nonetheless, the Fermi level is a precisely defined thermodynamic quantity, and differences in Fermi level can be measured simply with a voltmeter.
The Fermi level and voltage
Sometimes it is said that electric currents are driven by differences in electrostatic potential (Galvani potential), but this is not exactly true. As a counterexample, multi-material devices such as p–n junctions contain internal electrostatic potential differences at equilibrium, yet without any accompanying net current; if a voltmeter is attached to the junction, one simply measures zero volts. Clearly, the electrostatic potential is not the only factor influencing the flow of charge in a material—Pauli repulsion, carrier concentration gradients, and thermal effects also play an important role.
In fact, the quantity called "voltage" as measured in an electronic circuit has a simple relationship to the chemical potential for electrons (Fermi level). When the leads of a voltmeter are attached to two points in a circuit, the displayed voltage is a measure of the total work that can be obtained, per unit charge, by allowing a tiny amount of charge to flow from one point to the other. If a simple wire is connected between two points of differing voltage (forming a short circuit), current will flow from positive to negative voltage, converting the available work into heat.
The Fermi level of a body expresses the work required to add an electron to it, or equally the work obtained by removing an electron. Therefore, the observed difference (VA − VB) in voltage between two points "A" and "B" in an electronic circuit is exactly related to the corresponding chemical potential difference (µA − µB) in Fermi level by the formula
VA − VB = −(µA − µB) / e
where −e is the electron charge.
From the above discussion it can be seen that electrons will move from a body of high µ (low voltage) to low µ (high voltage) if a simple path is provided. This flow of electrons will cause the lower µ to increase (due to charging or other repulsion effects) and likewise cause the higher µ to decrease. Eventually, µ will settle down to the same value in both bodies. This leads to an important fact regarding the equilibrium (off) state of an electronic circuit:
- An electronic circuit in thermodynamic equilibrium will have a constant Fermi level throughout its connected parts.
This also means that the voltage (measured with a voltmeter) between any two points will be zero, at equilibrium. Note that thermodynamic equilibrium here requires that the circuit be internally connected and not contain any batteries or other power sources, nor any variations in temperature.
The Fermi level and band structure
In the band theory of solids, electrons are considered to occupy a series of bands composed of single-particle energy eigenstates, each labelled by ϵ. Although this single particle picture is an approximation, it greatly simplifies the understanding of electronic behaviour and it generally provides correct results when applied correctly. The Fermi–Dirac distribution gives the probability that (at thermodynamic equilibrium) an electron will occupy a state having energy ϵ. Alternatively, it gives the average number of electrons that will occupy that state given the restriction imposed by the Pauli exclusion principle:
f(ϵ) = 1 / (exp[(ϵ − µ) / kBT] + 1)
where T is the absolute temperature and kB is Boltzmann's constant.
The location of µ within a material's band structure is important in determining the electrical behaviour of the material.
- In an insulator, µ lies within a large band gap, far away from any states that are able to carry current.
- In a metal, semimetal or degenerate semiconductor, µ lies within a delocalized band. A large number of states near µ are thermally active and readily carry current.
- In an intrinsic or lightly doped semiconductor, µ is close enough to a band edge that there are a dilute number of thermally excited carriers residing near that band edge.
In semiconductors and semimetals the position of µ relative to the band structure can usually be controlled to a significant degree by doping or gating. These controls do not change µ, which is fixed by the electrodes, but rather they cause the entire band structure to shift up and down (sometimes also changing the band structure's shape). For further information about the Fermi levels of semiconductors, see (for example) Sze.
Local conduction band referencing, internal chemical potential, and the parameter ζ
If the symbol ℰ is used to denote an electron energy level measured relative to the energy of the edge of its enclosing band, ϵC, then in general we have ℰ = ϵ − ϵC, and in particular we can define the parameter ζ by referencing the Fermi level to the band edge:
ζ = µ − ϵC
It follows that the Fermi–Dirac distribution function can also be written
f(ℰ) = 1 / (exp[(ℰ − ζ) / kBT] + 1)
The band theory of metals was initially developed by Sommerfeld, from 1927 onwards, who paid great attention to the underlying thermodynamics and statistical mechanics. Confusingly, in some contexts the band-referenced quantity ζ may be called the "Fermi level", "chemical potential" or "electrochemical potential", leading to ambiguity with the globally-referenced Fermi level. In this article the terms "conduction-band referenced Fermi level" or "internal chemical potential" are used to refer to ζ. ζ is directly related to the number of active charge carriers as well as their typical kinetic energy, and hence it is directly involved in determining the local properties of the material (such as electrical conductivity). For this reason it is common to focus on the value of ζ when concentrating on the properties of electrons in a single, homogeneous conductive material. By analogy to the energy states of a free electron, the ℰ of a state is the kinetic energy of that state and ϵC is its potential energy. With this in mind, the parameter ζ could also be labelled the "Fermi kinetic energy".
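Both forms of the occupation function above are easy to evaluate numerically. The short Python sketch below is not part of the original article; it simply assumes energies given in electron-volts and temperature in kelvin, and evaluates the globally-referenced form f(ϵ).

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K


def fermi_dirac(energy_eV, mu_eV, temperature_K):
    """Average occupation of a single-particle state with the given energy."""
    if temperature_K == 0:
        # At absolute zero every state below the Fermi level is filled.
        return 1.0 if energy_eV < mu_eV else 0.0
    return 1.0 / (math.exp((energy_eV - mu_eV) / (K_B * temperature_K)) + 1.0)


# A state exactly at the Fermi level is half-occupied:
print(fermi_dirac(0.0, 0.0, 300))   # 0.5
# A state 0.2 eV above the Fermi level at room temperature is rarely occupied:
print(fermi_dirac(0.2, 0.0, 300))   # roughly 4e-4
```

The same function works for the band-referenced form: pass ℰ in place of ϵ and ζ in place of µ, since only the difference between the two energies enters the exponential.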
Unlike µ, the parameter ζ is not a constant at equilibrium, but rather varies from location to location in a material due to variations in ϵC, which is determined by factors such as material quality and impurities/dopants. Near the surface of a semiconductor or semimetal, ζ can be strongly controlled by externally applied electric fields, as is done in a field effect transistor. In a multi-band material, ζ may even take on multiple values in a single location. For example, in a piece of aluminum metal there are two conduction bands crossing the Fermi level (even more bands in other materials); each band has a different edge energy ϵC and a different value of ζ.
The Fermi level and temperature out of equilibrium
The Fermi level μ and temperature T are well-defined constants for a solid-state device in a thermodynamic equilibrium situation, such as when it is sitting on the shelf doing nothing. When the device is brought out of equilibrium and put into use, then strictly speaking the Fermi level and temperature are no longer well defined. Fortunately, it is often possible to define a quasi-Fermi level and quasi-temperature for a given location that accurately describe the occupation of states in terms of a thermal distribution. The device is said to be in 'quasi-equilibrium' when and where such a description is possible. The quasi-equilibrium approach allows one to build a simple picture of some non-equilibrium effects, such as the electrical conductivity of a piece of metal (as resulting from a gradient in μ) or its thermal conductivity (as resulting from a gradient in T). The quasi-μ and quasi-T can vary (or not exist at all) in any non-equilibrium situation, such as:
- If the system contains a chemical imbalance (as in a battery).
- If the system is exposed to changing electromagnetic fields (as in capacitors, inductors, and transformers).
- Under illumination from a light source with a different temperature, such as the sun (as in solar cells).
- When the temperature is not constant within the device (as in thermocouples).
- When the device has been altered, but has not had enough time to re-equilibrate (as in piezoelectric or pyroelectric substances).
In some situations, such as immediately after a material experiences a high-energy laser pulse, the electron distribution cannot be described by any thermal distribution. One cannot define the quasi-Fermi level or quasi-temperature in this case; the electrons are simply said to be "non-thermalized". In less dramatic situations, such as in a solar cell under constant illumination, a quasi-equilibrium description may be possible, but it requires assigning distinct values of μ and T to different bands (conduction band vs. valence band). Even then, the values of μ and T may jump discontinuously across a material interface (e.g., p–n junction) when a current is being driven, and be ill-defined at the interface itself.
The term "Fermi level" is mainly used in discussing the solid state physics of electrons in semiconductors, and a precise usage of this term is necessary to describe band diagrams in devices comprising different materials with different levels of doping. In these contexts, however, one may also see Fermi level used imprecisely to refer to the band-referenced Fermi level µ − ϵC, called ζ above. It is common to see scientists and engineers refer to "controlling", "pinning", or "tuning" the Fermi level inside a conductor, when they are in fact describing changes in ϵC due to doping or the field effect.
In fact, thermodynamic equilibrium guarantees that the Fermi level in a conductor is always fixed to be exactly equal to the Fermi level of the electrodes; only the band structure (not the Fermi level) can be changed by doping or the field effect (see also band diagram). A similar ambiguity exists between the terms "chemical potential" and "electrochemical potential". It is also important to note that Fermi level is not necessarily the same thing as Fermi energy. In the wider context of quantum mechanics, the term Fermi energy usually refers to the maximum kinetic energy of a fermion in an idealized non-interacting, disorder free, zero temperature Fermi gas. This concept is very theoretical (there is no such thing as a non-interacting Fermi gas, and zero temperature is impossible to achieve). However, it finds some use in approximately describing white dwarfs, neutron stars, atomic nuclei, and electrons in a metal. On the other hand, in the fields of semiconductor physics and engineering, "Fermi energy" often is used to refer to the Fermi level described in this article. Fermi level referencing and the location of zero Fermi level Much like the choice of origin in a coordinate system, the zero point of energy can be defined arbitrarily. Observable phenomena only depend on energy differences. When comparing distinct bodies, however, it is important that they all be consistent in their choice of the location of zero energy, or else nonsensical results will be obtained. It can therefore be helpful to explicitly name a common point to ensure that different components are in agreement. On the other hand, if a reference point is inherently ambiguous (such as "the vacuum", see below) it will instead cause more problems. A practical and well-justified choice of common point is a bulky, physical conductor, such as the electrical ground or earth. Such a conductor can be considered to be in a good thermodynamic equilibrium and so its µ is well defined. It provides a reservoir of charge, so that large numbers of electrons may be added or removed without incurring charging effects. It also has the advantage of being accessible, so that the Fermi level of any other object can be measured simply with a voltmeter. Why it is not advisable to use "the energy in vacuum" as a reference zero In principle, one might consider using the state of a stationary electron in the vacuum as a reference point for energies. This approach is not advisable unless one is careful to define exactly where "the vacuum" is. The problem is that not all points in the vacuum are equivalent. At thermodynamic equilibrium, it is typical for electrical potential differences of order 1 V to exist in the vacuum (Volta potentials). The source of this vacuum potential variation is the variation in work function between the different conducting materials exposed to vacuum. Just outside a conductor, the electrostatic potential depends sensitively on the material, as well as which surface is selected (its crystal orientation, contamination, and other details). The parameter that gives the best approximation to universality is the Earth-referenced Fermi level suggested above. This also has the advantage that it can be measured with a voltmeter. Discrete charging effects in small systems In cases where the "charging effects" due to a single electron are non-negligible, the above definitions should be clarified. For example, consider a capacitor made of two identical parallel-plates. 
If the capacitor is uncharged, the Fermi level is the same on both sides, so one might think that it should take no energy to move an electron from one plate to the other. But when the electron has been moved, the capacitor has become (slightly) charged, so this does take a slight amount of energy. In a normal capacitor, this is negligible, but in a nano-scale capacitor it can be more important. In this case one must be precise about the thermodynamic definition of the chemical potential as well as the state of the device: is it electrically isolated, or is it connected to an electrode?
- When the body is able to exchange electrons and energy with an electrode (reservoir), it is described by the grand canonical ensemble. The value of the chemical potential µ can be said to be fixed by the electrode, and the number of electrons N on the body may fluctuate. In this case, the chemical potential of a body is the infinitesimal amount of work needed to increase the average number of electrons by an infinitesimal amount (even though the number of electrons at any time is an integer, the average number varies continuously):
µ = ∂F / ∂⟨N⟩
- If the number of electrons in the body is fixed (but the body is still thermally connected to a heat bath), then it is in the canonical ensemble. We can define a "chemical potential" in this case literally as the work required to add one electron to a body that already has exactly N electrons,
µ'(N, T) = F(N + 1, T) − F(N, T)
where F(N, T) is the free energy function of the canonical ensemble, or alternatively as the work obtained by removing an electron from that body,
µ''(N, T) = F(N, T) − F(N − 1, T)
These chemical potentials are not equivalent, µ ≠ µ' ≠ µ'', except in the thermodynamic limit. The distinction is important in small systems such as those showing Coulomb blockade. The parameter µ (i.e., in the case where the number of electrons is allowed to fluctuate) remains exactly related to the voltmeter voltage, even in small systems. To be precise, then, the Fermi level is defined not by a deterministic charging event by one electron charge, but rather a statistical charging event by an infinitesimal fraction of an electron.
Footnotes and references
- Kittel, Charles. Introduction to Solid State Physics, 7th Edition. Wiley.
- I. Riess, What does a voltmeter measure? Solid State Ionics 95, 327 (1997).
- Sah, Chih-Tang (1991). Fundamentals of Solid-State Electronics. World Scientific. p. 404. ISBN 9810206372.
- Datta, Supriyo (2005). Quantum Transport: Atom to Transistor. Cambridge University Press. p. 7. ISBN 9780521631457.
- Kittel, Charles; Herbert Kroemer (1980-01-15). Thermal Physics (2nd Edition). W. H. Freeman. p. 357. ISBN 978-0-7167-1088-2.
- Sze, S. M. (1964). Physics of Semiconductor Devices. Wiley. ISBN 0-471-05661-8.
- Sommerfeld, Arnold (1964). Thermodynamics and Statistical Mechanics. Academic Press.
- "3D Fermi Surface Site". Phys.ufl.edu. 1998-05-27. Retrieved 2013-04-22.
- For example: D. Chattopadhyay (2006). Electronics (Fundamentals and Applications). ISBN 978-81-224-1780-7. and Balkanski and Wallis (2000-09-01). Semiconductor Physics and Applications. ISBN 978-0-19-851740-5.
- Technically, it is possible to consider the vacuum to be an insulator and in fact its Fermi level is defined if its surroundings are in equilibrium. Typically however the Fermi level is two to five electron volts below the vacuum electrostatic potential energy, depending on the work function of the nearby vacuum wall material.
Only at high temperatures will the equilibrium vacuum be populated with a significant number of electrons (this is the basis of thermionic emission). - Shegelski, Mark R. A. (May 2004). "The chemical potential of an ideal intrinsic semiconductor". American Journal of Physics 72 (5): 676–678. Bibcode:2004AmJPh..72..676S. doi:10.1119/1.1629090. - Beenakker, C. W. J. (1991). "Theory of Coulomb-blockade oscillations in the conductance of a quantum dot". Physical Review B 44 (4): 1646. Bibcode:1991PhRvB..44.1646B. doi:10.1103/PhysRevB.44.1646.
https://en.wikipedia.org/wiki/Fermi_level
4
When two sets of data are strongly linked together, we say they have a High Correlation. The word Correlation is made of Co- (meaning "together") and Relation.
- Correlation is Positive when the values increase together, and
- Correlation is Negative when one value decreases as the other increases
Here we look at linear correlations (correlations that follow a line). Correlation can have a value:
- 1 is a perfect positive correlation
- 0 is no correlation (the values don't seem linked at all)
- -1 is a perfect negative correlation
The value shows how good the correlation is (not how steep the line is), and if it is positive or negative.
Example: Ice Cream Sales
The local ice cream shop keeps track of how much ice cream they sell versus the temperature on that day. Here are their figures for the last 12 days:
(Table: Ice Cream Sales vs Temperature, with columns Temperature °C and Ice Cream Sales.)
And here is the same data as a Scatter Plot:
We can easily see that warmer weather leads to more sales; the relationship is good but not perfect. In fact the correlation is 0.9575 ... see at the end how I calculated it.
Correlation Is Not Good at Curves
The correlation calculation only works well for relationships that follow a straight line. Our Ice Cream Example: there has been a heat wave! It gets so hot that people aren't going near the shop, and sales start dropping. Here is the latest graph: The correlation value is now 0: "No Correlation" ... ! The calculated correlation value is 0 (I worked it out), which means "no correlation". But we can see the data does have a correlation: it follows a nice curve that reaches a peak around 25° C. But the linear correlation calculation is not "smart" enough to see this. Moral of the story: make a Scatter Plot, and look at it! You may see a correlation that the calculation does not.
Correlation Is Not Causation
"Correlation Is Not Causation" ... which says that a correlation does not mean that one thing causes the other (there could be other reasons the data has a good correlation).
Example: Sunglasses vs Ice Cream
Our Ice Cream shop finds how many sunglasses were sold by a big store for each day and compares them to their ice cream sales: The correlation between Sunglasses and Ice Cream sales is high. Does this mean that sunglasses make people want ice cream?
Example: A Real Case!
A few years ago a survey of employees found a strong positive correlation between "Studying an external course" and Sick Days. Does this mean:
- Studying makes them sick?
- Sick people study a lot?
- Or did they lie about being sick to study more?
Without further research we can't be sure why.
How To Calculate
How did I calculate the value 0.9575 at the top? I used "Pearson's Correlation". There is software that can calculate it, such as the CORREL() function in Excel or LibreOffice Calc ...
but here is how to calculate it yourself: Let us call the two sets of data "x" and "y" (in our case Temperature is x and Ice Cream Sales is y):
- Step 1: Find the mean of x, and the mean of y
- Step 2: Subtract the mean of x from every x value (call them "a"), do the same for y (call them "b")
- Step 3: Calculate: a × b, a² and b² for every value
- Step 4: Sum up a × b, sum up a² and sum up b²
- Step 5: Divide the sum of a × b by the square root of [(sum of a²) × (sum of b²)]
Here is how I calculated the first Ice Cream example (values rounded to 1 or 0 decimal places):
As a formula it is:
r = Σ(x − x̄)(y − ȳ) / √[ Σ(x − x̄)² × Σ(y − ȳ)² ]
- Σ is Sigma, the symbol for "sum up"
- (x − x̄) is each x-value minus the mean of x (called "a" above)
- (y − ȳ) is each y-value minus the mean of y (called "b" above)
You probably won't have to calculate it like that, but at least you know it is not "magic", but simply a routine set of calculations.
Note for Programmers
You can calculate it in one pass through the data. Just sum up x, y, x², y² and xy (no need for a or b calculations above) then use the formula:
r = [ n Σxy − Σx Σy ] / √{ [n Σx² − (Σx)²] × [n Σy² − (Σy)²] }
There are other ways to calculate a correlation coefficient, such as "Spearman's rank correlation coefficient", but I prefer using a spreadsheet like above.
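Following the "Note for Programmers" above, here is a small Python sketch of the single-pass calculation. The temperature and sales figures below are made up for illustration only, since the article's own data table is not reproduced here.

```python
import math


def pearson_r(xs, ys):
    """Single-pass Pearson correlation: only sums of x, y, x^2, y^2 and xy are needed."""
    n = len(xs)
    sum_x = sum_y = sum_x2 = sum_y2 = sum_xy = 0.0
    for x, y in zip(xs, ys):
        sum_x += x
        sum_y += y
        sum_x2 += x * x
        sum_y2 += y * y
        sum_xy += x * y
    numerator = n * sum_xy - sum_x * sum_y
    denominator = math.sqrt((n * sum_x2 - sum_x ** 2) * (n * sum_y2 - sum_y ** 2))
    return numerator / denominator


# Made-up temperature (°C) and sales ($) pairs that rise together:
temps = [14, 16, 18, 20, 22, 25]
sales = [215, 325, 400, 410, 520, 610]
print(round(pearson_r(temps, sales), 4))  # about 0.986, a strong positive correlation
```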
http://www.mathsisfun.com/data/correlation.html
4.15625
Scientists can probe the composition of clouds and atmospheric pollution with lasers using a process known as light detection and ranging (LiDAR). By measuring the amount of light reflected back by the atmosphere, it is possible to calculate the concentrations of fine drops of noxious chemicals such as nitrous oxide, sulfur dioxide and ozone. More detailed information regarding the size of the liquid droplets could lead to a better understanding of the pollutants' movements, but such data is harder to come by. To that end, findings published in the July 15 issue of Physical Review Letters could prove helpful. Researchers report that extremely short laser pulses can generate an intense plasma within a minuscule water droplet, causing light to be reflected preferentially back toward the laser source. Liquid droplets of different sizes focus the laser pulse to varying degrees, producing distinctive wavelengths of emitted light that could provide important clues to aerosol size distribution. Jean-Pierre Wolf of the University of Lyon 1 in France and his colleagues flashed femtosecond laser pulses on individual water droplets that were less than 70 microns in diameter. The team found that the light reflected by a microscopic sphere back toward the laser source was 35 times more intense than the light sent in any other direction. The researchers posit that this phenomenon arises because of a nanosized plasma that forms within the water drop and is hot enough to emit in the visible spectrum. Though further tests are required to determine how the technique applies to situations that involve more than one liquid drop, the researchers suggest that ultrashort high-intensity laser pulses may enhance LiDAR signals. "The backward-enhanced plasma emission spectrum from water droplets or biological agents," they write, "could be attractive for remotely determining the composition of atmospheric aerosol."
http://www.scientificamerican.com/article/plasma-in-water-droplets/
4.15625
The word symbiosis literally means 'living together,' but when we use the word symbiosis in biology, what we're really talking about is a close, long-term interaction between two different species. There are many different types of symbiotic relationships that occur in nature. In many cases, both species benefit from the interaction. This type of symbiosis is called mutualism. An example of mutualism is the relationship between bullhorn acacia trees and certain species of ants. Each bullhorn acacia tree is home to a colony of stinging ants. True to its name, the tree has very large thorns that look like bull's horns. The ants hollow out the thorns and use them as shelter. In addition to providing shelter, the acacia tree also provides the ants with two food sources. One food source is a very sweet nectar that oozes from the tree at specialized structures called nectaries. The second food source is in the form of food nodules called Beltian bodies that grow on the tips of the leaves. Between the nectar and the Beltian bodies, the ants have all of the food they need. So, the ants get food and shelter, but what does the tree get? Quite a lot, actually. You see, the ants are very territorial and aggressive. They will attack anything and everything that touches the tree - from grasshoppers and caterpillars to deer and humans. They will even climb onto neighboring trees that touch their tree and kill the whole branch, and clear all vegetation in a perimeter around their tree's trunk, as well. The ants protect the tree from herbivores and remove competing vegetation, so the acacia gains a big advantage from the relationship. In this case, the acacia is considered a host because it is the larger organism in a symbiotic relationship upon or inside of which the smaller organism lives, and the ant is considered to be a symbiont, which is the term for the smaller organism in a symbiotic relationship that lives in or on the host. An astounding number of mutualistic relationships occur between multicellular organisms and microorganisms. Termites are only able to eat wood because they have mutualistic protozoans and bacteria in their gut that help them digest cellulose. Inside our own bodies, there are hundreds of different types of bacteria that live just in our large intestine. Most of these are uncharacterized, but we do know a lot about E. coli, which is one of the normal bacteria found in all human large intestines. Humans provide E. coli with food and a place to live. In return, the E. coli produce vitamin K and make it harder for pathogenic bacteria to establish themselves in our large intestine. Whether or not most of the other species of bacteria found in our digestive tract aid in digestion, absorption, or vitamin production isn't completely known, but they all make it harder for invasive pathogens to establish a foothold inside us and cause disease. Now, let's say, by some chance, a pathogenic bacterium does manage to establish itself in a person's large intestine. The host provides a habitat and food for the bacteria, but in return, the bacteria cause disease in the host. This is an example of parasitism, or an association between two different species where the symbiont benefits and the host is harmed. Not all parasites have to cause disease.
Lice, ticks, fleas, and leeches are all examples of parasites that don't usually cause disease directly, but they do suck blood from their host, and that causes some harm, not to mention discomfort, to their host. Parasites can also act as vectors, or organisms that transmit disease-causing pathogens to other species of animals. The bacteria that cause the bubonic plague are carried by rodents, such as rats. The plague bacteria then infect fleas that bite the rats. Infected fleas transmit the bacteria to other animals they bite, including humans. In this case, both the flea and the bacteria are parasites, and the flea is also a vector that transmits the disease-causing bacteria from the rat to the person. Commensalism is an association between two different species where one species enjoys a benefit, and the other is not significantly affected. Commensalism is sometimes hard to prove because in any symbiotic relationship, it is pretty unlikely that a very closely associated organism has no effect whatsoever on the other organism. But, there are a few examples where commensalism does appear to exist. For example, the cattle egret follows cattle, water buffalo, and other large herbivores as they graze. The herbivores flush insects from the vegetation as they move, and the egrets catch and eat the insects when they leave the safety of the vegetation. In this relationship the egret benefits greatly, but there is no apparent effect on the herbivore. Some biologists maintain that algae and barnacles growing on turtles and whales have a commensalistic relationship with their hosts. Others maintain that the presence of hitchhikers causes drag on the host as it moves through the water and therefore the host is being harmed, albeit slightly. In either case, it is unlikely that the fitness of the host is really affected by the hitchhikers, so commensalism is probably the best way to describe these relationships as well. Amensalism is an association between two organisms of different species where one species is inhibited or killed and the other is unaffected. Amensalism can occur in a couple of different ways. Most commonly, amensalism occurs through direct competition for resources. For example, if there is a small sapling that is trying to grow right next to a mature tree, the mature tree is likely to outcompete the sapling for resources. It will intercept most of the light, and its mature root system will do a much better job of absorbing water and nutrients - leaving the sapling in an environment without enough light, water, or nutrients and causing it harm. However, the large tree is relatively unaffected by the presence of the sapling because the sapling isn't blocking light to the taller tree, and the amount of water and nutrients it can absorb is so small that the mature tree will not notice the difference. Amensalism can also occur if one species uses a chemical to kill or inhibit the growth of another species. A very famous example of this type of amensalism led to the discovery of the antibiotic known as penicillin. Alexander Fleming observed amensalism occurring on a plate of Staphylococcus aureus surrounding a contaminating spot of Penicillium mold. He was the first to recognize that the mold was secreting a substance that was killing the bacteria surrounding it. Other scientists later developed methods for mass-producing the bacteria-killing chemical, which we now call penicillin. Let's review. In biology, symbiosis refers to a close, long-term interaction between two different species.
But, there are many different types of symbiotic relationships. Mutualism is a type of symbiosis where both species benefit from the interaction. An example of mutualism is the relationship between bullhorn acacia trees and certain species of ants. The acacia provides food and shelter for the ants, and the ants protect the tree. Parasitism is an association between two different species where the symbiont benefits and the host is harmed. Fleas, ticks, lice, leeches, and any bacteria or viruses that cause disease are considered to be parasitic. Commensalism is an association between two different species where one species enjoys a benefit and the other is not significantly affected. Probably the best example of commensalism is the relationship between cattle egrets and large herbivores. The cattle egret benefits when insects are flushed out of the vegetation while the herbivore is unaffected by the presence of the cattle egret. And finally, amensalism is an association between two organisms of different species where one species is inhibited or killed and the other is unaffected. This can occur either through direct competition for resources, or it can happen when one species uses a chemical to kill or inhibit the growth of other species around it. A classic example of amensalism is the ability of Penicillium mold to secrete penicillin, which kills certain types of bacteria.
http://study.com/academy/lesson/symbiotic-relationships-mutualism-commensalism-amensalism.html
4
The following basketball resource, designed for Middle School students (aged 11-14) in the USA, has been carefully aligned with NASPE National Standards, and addresses: moving efficiently in general space; throwing and catching; muscular strength and endurance; aerobic capacity; and cooperation, leadership, and accepting challenges. In 'This is How We Roll', the object is for the O(ffence) to score a basket off a pick and roll. Learning the pick and roll gives your team another way to take high-percentage shots. Please also see the accompanying Basketball Practice Plan. In this unit, pupils will focus on developing more advanced skills and applying them in game situations in order to outwit opponents. Pupils will prepare tournaments and compete in them. They will work in groups, taking on a range of roles and responsibilities to help each other prepare and improve as a team and to develop a deeper understanding of healthy lifestyles and fitness.
https://www.pescholar.com/pe/resource/activity/basketball/
4.21875
Explain how a lunar eclipse occurs. A lunar eclipse occurs when the Sun, Moon, and Earth align as the Moon moves into Earth's shadow.
Identify two properties of Earth that cause it to have changing seasons. Earth's tilted axis and Earth's revolution around the Sun are the properties that lead to Earth's changing seasons.
Explain the effect of each of the properties you named in the previous question. During winter in Texas, Earth's tilt causes the Northern Hemisphere to be pointed away from the Sun. This means the Sun's rays are spread out over a large area. During summer, the Northern Hemisphere is pointed toward the Sun. Sunlight is less spread out, so areas get more solar energy and heat up. As Earth revolves around the Sun, the tilt of its axis does not change. So, when Earth gets to the other side of the Sun, it is tilted so that the Northern Hemisphere is away from the Sun.
Describe how the length of the days in the Northern Hemisphere changes with the four seasons. In the Northern Hemisphere, the longest days are in the summer months. The day length decreases through the fall. In winter, days are shortest. Day length increases during the spring. Earth's revolution around the Sun is a major cause of the changing day length.
http://www.slideshare.net/DavidSP1996/spacecyclesthinking
4
The vaccine caused the mice to create antibodies against neuraminidase, a flu protein that lets newly born virus particles escape from infected cells. The researchers also found signs that some humans carry similar antibodies. "It's hard to prove but my gut feeling would be, if people had high enough levels of this antibody, there certainly would be a reduction in severity" from H5N1 infection, says virologist Richard Webby of St. Jude Children's Research Hospital in Memphis, Tenn., whose group performed the research. "But that's the million dollar question: How much of this antibody do you have to have?"
Researchers name flu viruses based on the type of hemagglutinin (HA) and neuraminidase (NA) proteins they contain; hence the numbers after "H" and "N" in H5N1. Flu vaccines are designed to prevent infection by eliciting antibodies against HA, which the virus uses to break into cells lining the airways. But experts have speculated that antibodies against one type of neuraminidase could provide protection against multiple flu viruses sharing the same NA type. Some studies suggest, for example, that the 1968 H3N2 flu pandemic, which killed 1 million people worldwide, was less severe than it might have been because of neuraminidase antibodies left over from the 1957 H2N2 pandemic, which killed twice as many people.
Neuraminidase antibodies would not prevent a person from getting sick with the flu, because they do not stop the virus from infecting cells. To see if they would suffice to make H5N1 infection less severe, Webby and his co-workers injected mice with DNA for the neuraminidase gene from human H1N1, one of three flu subtypes covered by this winter's flu shot. Next they infected the mice with avian H5N1. After two weeks, five out of 10 of these mice survived, but none of the control mice lived.
The researchers also looked at blood serum samples from human volunteers. Of 38 samples, 31 contained antibodies against H1N1 neuraminidase, presumably from past infections or vaccinations. In test tubes, seven of the serum samples inhibited the activity of neuraminidase from H5N1.
The results are "very intriguing" but "it is premature to conclude that immunity induced by the [H1N1] virus will provide significant protection from illness associated with avian influenza H5N1," caution Laura Gillim-Ross and Kanta Subbarao of the National Institute of Allergy and Infectious Diseases in an editorial accompanying the report, published online February 12 by PLoS Medicine. They note that with fewer than 300 confirmed human cases of H5N1 infection, researchers would be hard-pressed to determine the amount of antibodies needed to confer protection.
"There's no doubt we've got to focus on hemagglutinin" for developing pandemic flu vaccines, Webby says. The amount of neuraminidase in seasonal flu shots, he says, is unknown and likely varies from batch to batch.
http://www.scientificamerican.com/article/can-seasonal-flu-shots-help/
4.25
Scatter Plot Tool
Create a scatter plot using the form below. All you have to do is type your X and Y data. Optionally, you can add a title and names for the axes.
More about scatterplots: Scatterplots are bivariate graphical devices. The term "bivariate" means that it is constructed to analyze the type of association between two interval variables \(X\) and \(Y\). The data need to come in the form of ordered pairs \((X_i, Y_i)\), and those pairs are plotted on a set of Cartesian axes. Typically, a scatterplot is used to assess whether or not the variables \(X\) and \(Y\) have a linear association, but there could be other types of non-linear associations (quadratic, exponential, etc.). The existence of a linear association is assessed by establishing how tightly the data cluster around a straight line. Data pairs \((X_i, Y_i)\) that are loosely clustered around a straight line have a weak or non-existent linear association, whereas data pairs \((X_i, Y_i)\) that are tightly clustered around a straight line have a strong linear association. A numerical (quantitative) way of assessing the degree of linear association for a set of data pairs is by calculating the correlation coefficient.
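If you prefer scripting to the web form, a scatter plot can also be produced in a few lines of code. The sketch below uses Python with matplotlib, which is an assumption on my part (the tool on this page requires no code at all); the data and labels are placeholders to be replaced with your own.

```python
import matplotlib.pyplot as plt

# Placeholder ordered pairs (X_i, Y_i); replace with your own data.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.3]

plt.scatter(x, y)             # plot the pairs on Cartesian axes
plt.title("My scatter plot")  # optional title
plt.xlabel("X")               # optional axis names
plt.ylabel("Y")
plt.show()
```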
http://www.mathcracker.com/scatter_plot.php
4.1875
The Mach number is a ratio between the aircraft's speed (v) and the speed of sound (a). That is,
M = v/a
The Mach number is named for the Austrian physicist Ernst Mach (1838-1916). Technically, as you can see, the Mach number is not a speed but a speed ratio. However, it is used to indicate how fast one is going when compared to the speed of sound. Scientifically, the speed at which sound travels through a gas depends on 1) the ratio of the specific heat at constant pressure to the specific heat at constant volume, 2) the temperature of the gas, and 3) the gas constant (pressure/density X temperature). This is represented by the formula:
a = Square Root(g R T)
a = speed of sound
g = ratio of the specific heat at constant pressure to the specific heat at constant volume
R = universal gas constant
T = Temperature (Kelvin or Rankine)
Fortunately, in the earth's atmosphere (a gas) several of these variables are constant. In our atmosphere, g is a constant 1.4. R is a constant 1718 ft-lb/slug-degree Rankine (in the English system of units) or 287 N-m/kg-degree Kelvin (in SI units). With g and R as constant values, this results in the speed of sound depending solely on the square root of the temperature of the atmosphere. Since aircraft and engines are affected by atmospheric conditions and these conditions are rarely (if ever) the same, we use a "standard day atmosphere" to give a basis for determining aircraft performance characteristics. The temperature for this standard day is 59 degrees Fahrenheit (15 degrees Celsius) or 519 degrees Rankine (288 degrees Kelvin) at sea level. Thus, the speed of sound at sea level on a standard day is:
a = SQRT[ (1.4) X (1718) X (519) ] = 1116 feet/second
To convert this to miles per hour use the formula 1 foot/second = 0.682 miles per hour (statute miles). 1116 X 0.682 = 761 miles per hour.
Explanation: How can you confirm that 0.682 times the number of feet per second will provide the miles per hour equivalent? Let's do the math!
Convert 1 foot per second to feet per minute (1 ft/sec x 60 seconds/min) = 60 ft/min
Convert feet per minute to feet per hour (60 ft/min x 60 minutes/hr) = 3600 ft/hr
Thus, 1 foot/second = 3600 feet/hour. All that is required now is to convert feet into miles. One statute mile = 5280 feet. Thus we divide 3600 by 5280 and our answer is 0.682.
One method commonly used to prevent reinventing the wheel is to develop charts with ratios to mathematical equations. One chart available to F-15E aircrews is the "Standard Atmosphere Table." This table provides a "Speed of Sound ratio" column. This column provides the speed of sound (standard day data) ratio for any altitude based on the speed of sound at sea level of 761 MPH.
(Table: Altitude in feet vs. Speed of Sound ratio.)
Using the chart, on a standard day, the speed of sound at 10,000 feet is 761 x 0.9650 or 734 miles per hour. The ratio continues to get smaller until 37,000 feet, where it remains at 0.8671.
Any idea why? HINT: Remember what we stated earlier was the ONLY factor that affected the speed of sound in the earth's atmosphere? If you stated that the temperature of the atmosphere stopped decreasing, you're correct! At 37,000 feet, the temperature is a balmy -69.7 degrees Fahrenheit (or -56.5 degrees Celsius). While several supersonic aircraft like the F-15E are capable of flying faster than twice the speed of sound, they can only reach these speeds at very high altitudes where the air is thin and extremely cold. At sea level, supersonic aircraft are limited to speeds just above Mach 1 due to the atmosphere's temperature and density ("thicker" air that causes more drag on the aircraft).
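For readers who want to check the arithmetic above, here is a small Python sketch of the same formulas, using the constants quoted in the article (g = 1.4 and R = 1718 ft-lb/slug-degree Rankine). The printed values differ slightly from the article's rounded figures because no intermediate rounding is done.

```python
import math

GAMMA = 1.4          # ratio of specific heats for air
R_ENGLISH = 1718.0   # gas constant, ft-lb/(slug * degree Rankine)
FPS_TO_MPH = 0.682   # 1 ft/s = 3600/5280 mph


def speed_of_sound_fps(temperature_rankine):
    """a = sqrt(g * R * T), in feet per second."""
    return math.sqrt(GAMMA * R_ENGLISH * temperature_rankine)


def mach_number(speed_fps, temperature_rankine):
    """M = v / a."""
    return speed_fps / speed_of_sound_fps(temperature_rankine)


# Standard sea-level day: 59 degrees F = 519 degrees Rankine
a_sea_level = speed_of_sound_fps(519.0)
print(round(a_sea_level))                    # about 1117 ft/s (the article rounds to 1116)
print(round(a_sea_level * FPS_TO_MPH))       # about 762 mph (the article quotes 761)

# A hypothetical aircraft doing 1500 ft/s at sea level on a standard day:
print(round(mach_number(1500.0, 519.0), 2))  # about Mach 1.34
```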
http://www.allstar.fiu.edu/aero/mach.htm
4
Even baby sharks need a safe haven—and now scientists have found the oldest known nursery for the predatory fish. Several 230-million-year-old teeth and egg capsules uncovered at a fossil site in southwestern Kyrgyzstan suggest hundreds of young sharks once congregated in a shallow lake, a new study says. Called hybodontids, the animals were likely bottom feeders, like modern-day nurse sharks. Mothers would've attached their eggs to horsetails and other marshy plants along the lakeshore. Once born, the Triassic-era babies would've had their pick from a rich food supply of tiny invertebrates, while dense vegetation offered protection from predators. Yet there's no evidence of any fin that rocked the cradle, so to speak—the babies were likely on their own, said study leader Jan Fischer, a paleontologist at the Geologisches Institut at TU Bergakademie Freiberg in Germany. (See "Shark Nursery Yields Secrets of Breeding.") In general, "shark-nursery areas are very important, because they are essential habitats for sharks' survival," Catalina Pimiento, a biologist at the Florida Museum of Natural History, said in an email. "This study expands the time range in which sharks [are known to] have used nursery areas in order to protect their young," noted Pimiento, who wasn't involved in the study. "This expansion of the range of time reinforces the importance of such zones." Ancient Sharks Lived in Fresh Water? When Fischer and colleagues first found shark egg capsules at the field site, they knew teeth shed by the newborns must be also embedded in the earth. So the team collected several sediment samples, took them to Germany, and dissolved the sediments in the lab. The work yielded about 60 teeth, all of which belonged to babies, except for a single adult tooth. The high number of baby teeth plus freshwater chemical signatures in those teeth suggest the ancient sharks spawned in fresh water, far from the ocean, according to the study, published in the September issue of the Journal of Vertebrate Paleontology. Fischer also suspects that the ancient sharks spent their whole lives in lakes and rivers, in contrast with modern egg-laying sharks, whose life cycles are exclusively marine, he noted. It's possible that, like modern-day salmon, shark adults could have migrated hundreds of kilometers upstream between the ocean and the nursery to spawn. But Fischer finds this scenario "improbable," mainly because of the sheer distance that the fish would have to cover. (Related: "Sharks Travel 'Superhighways,' Visit 'Cafes.'") Shark Fossils a Rare Find Whatever the answer, learning more about ancient sharks via fossils is rare, the study authors noted. Sharks' cartilaginous skeletons decay quickly, leaving just tiny clues as to their lifestyles. (Also see "Oldest Shark Braincase Shakes Up Vertebrate Evolution.") The "fact they got these shark teeth fossils with the egg capsules is what makes it really neat," noted Andrew Heckert, a vertebrate paleontologist at Appalachian State University in North Carolina. "Usually you find either a trace fossil [such as a skin impression] or a body fossil, and you're always trying to make the argument that these represent one or the other" theory, said Heckert, who was not involved in the study. In other words, having only one type of fossil is often not enough to draw definitive conclusions about an ancient species' behavior. "Those egg capsules," he added, "are spectacular."
http://news.nationalgeographic.com/news/2011/09/110909-baby-sharks-teeth-nursery-lakes-animals-science/
4.03125
While everyone knows about the "five senses" – sight, hearing, smell, taste, touch – little attention is paid to another important sense, the sense of balance, unless problems develop. Many of the neuromuscular diseases affect balance. The sense of balance informs the brain about where one's body is in space, including what direction the body moves and points and whether the body remains still or moves. The sense of balance relies on sensory input from a number of systems. Disruption in any of the following systems can affect balance and equilibrium:
--Proprioception involves the sense of where one's body is in space. Sensory nerves in the neck, torso, feet and joints provide feedback to the brain that allows the brain to keep track of the position of the legs, arms, and torso. The body then can automatically make tiny changes in posture to help maintain balance.
--Sensors in the muscles and joints also provide information regarding which parts of the body are in motion and which are still.
--Visual information provides the brain with observations regarding the body's placement in space. In addition, the eyes observe the direction of motion.
--The inner ears (labyrinth and vestibulocochlear nerve) provide feedback regarding direction of movement, particularly of the head.
--Pressure receptors in the skin send information to the brain regarding where parts of the body are in space and which parts touch the ground (when standing), a chair (when sitting), or the bed (when reclining).
--The central nervous system (the brain and spinal cord) integrates and processes the information from each of these sources to provide one with a "sense of balance."
Problems with balance may occur in many individuals with neuromuscular disease. For example, problems with proprioception often occur in individuals with diseases such as Friedreich's ataxia, Charcot-Marie-Tooth disease, myopathy, and spinal muscular atrophy due to loss of sensation in the joints. Sensory loss at the skin may affect the pressure receptors of a person with Charcot-Marie-Tooth disease. Visual losses and severe muscle weakness can lead to balance difficulties for those with mitochondrial myopathy. Loss of muscle strength occurring in many of the neuromuscular diseases can also contribute to balance problems.
Problems with balance can also contribute to problems such as poor gait, clumsiness, and falling in children and adults. Falling can cause injury, including minor injuries such as cuts and bruises, as well as major injuries such as bone fractures and head injury. Poor balance can also lead to sensations such as disequilibrium, light-headedness, dizziness, and vertigo. Balance problems may also lead to social embarrassment.
Individuals with problems in one of the systems related to balance may rely more heavily on other systems for maintaining balance. For example, an individual with a deficit in proprioception may rely more heavily on visual input to maintain balance. Balance may then be more obviously impaired when input from that sense is not available, for example when walking in darkness.
Individuals with neuromuscular disease may benefit from consulting their physicians regarding methods for improving balance. Methods may include learning new movement habits, improving concentration and attention to movement, engaging in physical therapy and appropriate moderate exercise, making home modifications, and using assistive devices.
Even though the sense of balance has often been overlooked unless problems develop, it provides important sensory information that impacts quality of life. A better understanding of this important sense can help individuals to cope better with the challenges of living with neuromuscular disease.
http://www.bellaonline.com/ArticlesP/art173822.asp
4.28125
Average Rate of Change
A rate is a value that expresses how one quantity changes with respect to another quantity. For example, a rate in "miles per hour" expresses the increase in distance with respect to the number of hours we've been driving. If we drive at a constant rate, the distance we travel is equal to the rate at which we travel multiplied by time:
distance = rate × time
Dividing both sides by time, we have
rate = distance / time
If we drive at 50 mph for two hours, the distance we'll travel is 50 mph × 2 hrs = 100 miles. If we drive at a constant speed for 3 hours and travel 180 miles, we must have been driving
180 miles / 3 hours = 60 mph.
In real life, though, we don't drive at a constant rate. When we start our trip through Shmoopville, we first climb into the car, traveling at a whopping 0 miles per hour. We speed up gradually (hopefully), maybe need to slow down and speed up again for traffic lights, and finally slow down back to a speed of 0 when reaching our destination, The Candy Stand. We can still divide the distance we travel by the time it takes for the trip, but now we'll find our average rate:
average rate = (total distance traveled) / (total time for the trip)
To calculate the average rate of change of a dependent variable y with respect to the independent variable x on a particular interval, we need to know
- the size of the interval for the independent variable, and
- the change in the dependent variable from the beginning to the end of the interval.
Depending on the problem, we may also need to know
- the units of the independent and dependent variables.
Then we can find
average rate of change = (change in y) / (change in x)
The average rate of change of y with respect to x is the slope of the secant line between the starting and ending points of the interval.
Relating this to the more math-y approach, think of the dependent variable as a function f of the independent variable x. Let h be the size of the interval for x, and let a be one endpoint of the interval, so the endpoints are a and a + h, with corresponding y-values f(a) and f(a + h). Then the slope of the secant line is
slope = [f(a + h) − f(a)] / [(a + h) − a]
We also write this as
slope = [f(a + h) − f(a)] / h
This is the definition of the slope of the secant line from (a, f(a)) to (a + h, f(a + h)).
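As a quick illustration of the secant-slope formula, here is a short Python sketch. The distance function in it is invented for the example and is not from the original text.

```python
def average_rate_of_change(f, a, h):
    """Slope of the secant line from (a, f(a)) to (a + h, f(a + h))."""
    return (f(a + h) - f(a)) / h


# Invented example: distance driven (miles) as a function of time (hours).
def distance(t):
    return 60 * t + 5 * t ** 2   # speeding up gradually


# Average speed over the first 3 hours of the trip:
print(average_rate_of_change(distance, 0, 3))   # 75.0 miles per hour
```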
http://www.shmoop.com/derivatives/average-rate-change.html
4.03125
Some books and web-sites describe the tides as being a result of "centrifugal force" due to the Earth and Moon moving. But that's not actually correct, because the tides would exist even if the Earth and Moon weren't moving! "Velocities" are not really involved. So those books are actually wrong, even textbooks! If the Earth and Moon were somehow standing still, there would still be tidal bulges, at least for a while until they later crashed together due to gravity! Even the tide that exists on the backside of the Earth would exist! Newton's equation for Gravitation is all you need. It is in all books that describe gravitation. F = G * M1 * M2 / R^2. Gravitation therefore depends on the "mass" (most people incorrectly think of it as weight) of each of the objects involved. It is an "inverse square" relationship, where the attractive force depends on the distance between the two objects. There is also a constant 'G' that just makes the numbers come out right for the system of measurements (feet, meters, seconds, etc) that we use. Because the water on the side of the Earth facing the Moon is closer to the Moon than the Earth as a whole is, that water feels a slightly stronger attraction, a small "excess acceleration" directed toward the Moon. One way of looking at it is that the water from other Oceans would try to flow to that spot, because that "excess acceleration" is essentially trying to "pile up water" there. So, even if the Earth and Moon did not move, or rotate, it would form a "hill of water" or "tidal bulge" at that location, directly under the location of the Moon, due to the upward "excess acceleration" due to the Moon's gravitation and that lesser distance. It turns out that that "tidal bulge" would be a little over two feet high, not noticeable in the enormity of the Earth! But the Earth and Moon both move! Specifically, the Earth rotates once every day. This makes the Earth actually rotate UNDERNEATH that (unavoidable) tidal bulge that the Moon's gravitation constantly causes. It seems to us that there are tides that move across the oceans, but the actual tidal bulges do not really move very much, and it is actually that the Earth (and us) are rotating past that bulge that makes it seem to be moving to us. By the way, each time the Moon is overhead, YOU are also being attracted UPWARD by the Moon, as well as downward by the Earth. It actually changes your weight! But only by a really, really tiny amount! (around one part in nine million, or about one one-hundredth of a gram!) The Moon's pull on the water nearest it differs from its pull at the Earth's center by G * Mmoon * (1/(D − R)^2 − 1/D^2), where D is the distance to the Moon and R is the radius of the Earth. Since R is small compared to D (about 1/60 of it) this is nearly equal to G * Mmoon * (2 * D * R) / D^4, or (a constant) / D^3. Tidal acceleration is therefore approximately inversely proportional to the CUBE of the distance of the attracting body. If the Moon were only half as high above the Earth, the tidal accelerations would be EIGHT TIMES as great as they are now! As it happens, both the Moon and the Sun cause such tides in our oceans. The Moon is about 400 times closer than the Sun, so it causes a tidal acceleration equal to 400^3 or 64 million times that of an identical mass at the Sun's distance. But the Moon's mass is only about 1/27,000,000 that of the Sun. The result is that the Moon causes a tidal acceleration that is about 64,000,000/27,000,000 or 64/27, or a little more than double that of the Sun. That's why the tides due to the Moon are larger than those due to the Sun. At different times of a month, the Sun and Moon can cause tides that add to each other (called Spring Tides) (at a Full Moon or a New Moon, when their effects are lined up.) However, at First Quarter or Third Quarter Moon, the tides caused by the two sort of cancel each other out (with the Moon always winning!) and so there are lower tides then, called Neap Tides. 
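As a quick numerical check of the inverse-cube argument above, the following Python sketch (mine, not part of the original article) compares the Moon's and Sun's tidal accelerations using the round-number ratios quoted in the text.

# Tidal acceleration scales as M / d^3 (inverse cube of distance).
# Using the article's round numbers: the Sun is ~400x farther away and ~27,000,000x more massive.
distance_ratio = 400          # Sun distance / Moon distance (approximate)
mass_ratio = 27_000_000       # Sun mass / Moon mass (approximate)

moon_vs_sun = distance_ratio ** 3 / mass_ratio
print(moon_vs_sun)            # ~2.37, i.e. the Moon's tide is a bit more than double the Sun's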
It's possible to even estimate the size of the tides, using that same result above, G * Mmoon * (2 * D * R) / D^4, or G * Mmoon * (2 * R) / D^3. We know the values of all those quantities! In the metric system, G = 6.672 * 10^-11, Mmoon = 7.3483 * 10^22 kg, R = 6.37814 * 10^6 m, and D = 3.844 * 10^8 m. This gives a result for the relative acceleration as being 1.129 * 10^-6 m/s^2, about one nine-millionth of the acceleration of the Earth's gravity on us. In case you are curious, we know that the acceleration due to gravity of the Earth is around 9.8 m/s^2. So the actual downward acceleration on us and water is reduced by about 1/8,700,000, when the Moon is at its average distance, and the Moon is overhead. If you happen to weigh 200 lbs (90 kg), your scale would show you lighter by about 1/2500 ounce (or 0.01 gram), when the Moon was overhead as compared to six hours earlier or later. That is not enough that you could ever be aware of it or measure it. That 1.129 * 10^-6 m/s^2 is the distorting ACCELERATION, and that number is precise. Of course, it depends on the current distance of the Moon which is in an elliptic orbit, and also the celestial latitude of the Moon for the specific location. A truly precise value is difficult since it changes every minute! We can now compare two locations on Earth regarding this matter, one directly above the other. We know that we have F = G * M1 * M2 / R^2 for both locations. By eliminating the mass of our object, we have a = G * M1 / R^2. So we could make a proportion for the two locations: a_lower / a_higher = (G * Mearth / R_lower^2) / (G * Mearth / R_higher^2). The mass of the Earth and G cancel out and we have the proportion of the accelerations equal to R_higher^2 / R_lower^2: a_lower / a_higher = R_higher^2 / R_lower^2. In our case, we know the proportion of the accelerations, different by (1.129 * 10^-6 m/s^2) / (9.8 m/s^2), so the left fraction is 1.0000001152. The radius R therefore has to be in the proportion of 1.0000000576 so that its square would be the acceleration proportion. If we multiply this proportion (difference) by the radius of the Earth, 6.378 * 10^6 meters, we get 0.367 meter, which is the required difference in Earth radius to account for the difference in the acceleration. If the Moon were NOT there, the equilibrium height of the ocean would have been the radius of the Earth. We have just shown that due to the differential acceleration of the Moon on the water, we have a new equilibrium radius of the Earth's oceans there which is around a third of a meter greater. This suggests that the oceans' tidal bulge directly under where the Moon is should be around 0.367 meter or 14.5 inches high out in the middle of the ocean. The Sun's tidal bulge is similarly calculated to around 0.155 meter or 6.1 inches. When both tides match up, at Spring Tide at Full Moon or New Moon, that open ocean tide should be around 0.522 meter or 21 inches high. When they compete, at Neap Tide at First or Third Quarter Moon, they should be around 0.212 meter or 8.5 inches high in the open ocean. These numbers are difficult to measure experimentally, but all experiments have given results that are close to these theoretical values. However, the Earth itself also "bends" due to the tidal effects and so the Earth itself (the ocean bottom) has an "Earth tide" that occurs. It is not well known as to size, and it is certainly small, but estimates are that it is probably just a few inches in rise/fall. 
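The arithmetic in the last few paragraphs is easy to reproduce. Below is a short Python sketch of my own (not from the original page) that recomputes the differential acceleration and the implied equilibrium bulge height for the near side (distance D − R) and, anticipating the rear-side discussion that follows, for the far side (D + R); the constants are the ones quoted in the text.

G = 6.672e-11          # gravitational constant, m^3 kg^-1 s^-2
M_MOON = 7.3483e22     # mass of the Moon, kg
R_EARTH = 6.37814e6    # radius of the Earth, m
D = 3.844e8            # Earth-Moon distance, m
G_SURFACE = 9.8        # Earth's surface gravity, m/s^2

def tidal_numbers(distance_to_water):
    # Differential (tidal) acceleration: Moon's pull on the water minus its pull at the Earth's center.
    diff_accel = G * M_MOON * abs(1.0 / distance_to_water**2 - 1.0 / D**2)
    # Treat the ocean surface as an equipotential: the fractional change in g corresponds
    # to a fractional change in equilibrium radius of about half that amount.
    bulge = 0.5 * (diff_accel / G_SURFACE) * R_EARTH
    return diff_accel, bulge

near = tidal_numbers(D - R_EARTH)   # water directly under the Moon
far = tidal_numbers(D + R_EARTH)    # water on the opposite side of the Earth
print(near)   # ~1.13e-06 m/s^2, bulge ~0.37 m
print(far)    # ~1.07e-06 m/s^2, bulge ~0.35 m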
The actual distortion that occurs in the Earth and in the waters of the oceans is extremely complex, so we are just using an estimate here. (We are preparing another web-page that discusses the actual flow patterns of the tides in the oceans of the Earth.) In any case, this then results in open ocean tides as being around a foot or foot and a half high, which seems to agree with general data, such as at remote islands in the middle of oceans. For the water, we have been assuming that water can flow fast enough to balance out, therefore the level of water would be at an iso-gravitational constant value. This is not really the case, and water takes a while to build up that tidal bulge. If the Earth rotated REALLY slowly, it would be more true. But at the Equator, the rotation of the Earth is over 1000 miles per hour, and the depth of the oceans causes a limit on the speed of deep ocean waves to be around 750 mph. This results in the tidal bulge always being dragged or carried along by the rapidly rotating Earth, to a location which is actually not underneath where the Moon is! The tidal bulge is NEVER directly under the Moon as it is always shown in school books! Instead, the reality is that the tidal bulges are generally several thousand miles away from that location. We see reason to cut some slack in school books regarding this, as it would just add an additional complication into the basic concept of the tides. Notice that no centrifugal force was ever mentioned in this calculation, and it is a direct application of Newton's equation for gravitation. The books that, in attempting to describe why tides exist, DO choose to use arguments that involve centrifugal and centripetal forces, can therefore be rather misleading. There IS a way to do that and get an approximately correct result, but I think that it really overlooks the fact that it all is purely a simple gravitational result, and it gives the impression of tides only occurring due to a "whirling" effect of the Earth and Moon, which is NOT true. The only real value I see in those approaches is that the tide on the backside of the Earth might SEEM to be a little more logical. The mathematical description given above accurately describes that opposite tide, too, and even shows that it is always slightly less high, which might not seem as obvious when trying to use the "whirling" idea. The same is still true when considering the "solid earth" and a particle of water in an ocean opposite where the Moon happens to be. It might seem amazing, but the Moon actually gives the entire Earth a greater acceleration, because of being closer, than it does for that puny bit of water in that backside ocean! And so all the equations above can be used again, with the one difference that our new distance from the Moon to the water is D + R instead of D − R. When you plug in the numbers, you get a slightly smaller differential acceleration, 1.074 * 10^-6 m/s^2 (instead of the slightly larger 1.129 * 10^-6 m/s^2 that we got for the front side water.) Using the same calculation as above, we get a Moon-caused tidal change in radius of 0.349 meter (instead of 0.367 meter as before) or 13.8 inches. That is only 0.018 meter or around 3/4 of an inch difference in the heights of the front and back (Moon-caused) tides. The rear side tide IS therefore slightly smaller, but not by very much! The same reasoning and calculation applies to the Sun-caused tidal bulges, so the total Spring Tide difference is actually around one inch. 
(I have never seen any other presentation explain why the rear-side tide is slightly smaller than the Moon-side tide is, or the calculation of exactly how much that difference is!) Again, please notice that we have not only proven that the rear side tide exists but even calculated its size, without having to use any centrifugal force! It is frightening that even a lot of school textbooks present a REALLY wrong explanation! Well, water has a lot of friction, both with itself (viscosity) and with the seafloor (drag), so the tidal bulges travel (relatively) at slower speeds across the oceans! Since the Earth rotates so fast, this results in the tidal bulge(s) lagging behind the Earth's rotation and therefore being carried "forward" of the point directly beneath the Moon. Also, instead of the actual water shooting all over the oceans at extremely high speed, the actual water tends to move relatively little, and the wavefront of the tide can pass at such high speed (often around 700 mph in deep open ocean) without much noticeable effect of actual movement of water. (No sailor ever senses any water shooting past his ship at 700 mph! Instead, he basically senses nothing, as his ship is gradually raised up about a foot over a period of several hours.) The 700 mph speed cited here is dependent on the DEPTH of the water, by a well-known and rather simple formula giving the maximum speed of wave velocity due to the local water depth. For the common depths of the central oceans, that speed is around 700 mph. The fact that friction between the water and the seafloor causes the tidal bulges to be shifted away from the line between the Earth and Moon has some long-term effects. That friction is actually converting a tiny amount of the Earth's rotational energy into frictional heat energy, which is gradually slowing down the Earth's rotation! Our days are actually getting longer, just because of the Moon! But really slowly. A thousand years from now, the day will still be within one second of the length it is now. The long-term effects of this are discussed below. By the way, a "tiny amount" of the Earth's rotational energy is still quite impressive! The amount of frictional energy lost from the Earth's rotation is actually around 1.3 * 10^23 Joules/century (Handbook of Chemistry and Physics). That is 1.3 * 10^21 watt-seconds per year, or 3.6 * 10^14 kilowatt-hours each year. For comparison, figures provided in the World Almanac indicate that the entire electric consumption of all of the USA, including all residential, industrial, commercial, municipal and governmental usages, and waste, totaled around one one-hundredth of that amount (3.857 * 10^12 kWh) in 2003! (This fact, that natural ocean-seafloor frictional losses due to tidal motions are taking a hundred times the energy from the Earth's rotation as all the electricity we use, and still barely having any effect on the length of our day, inspired me to seriously research the concept of trying to capture some of the Earth's rotational energy to convert it into electricity. It would be an ideal source of electricity, with no global warming, no pollution, and no rapid consumption of coal, oil or nuclear just to produce electricity! A web-page on that concept is at Earth Spinning Energy - Perfect Energy Source) There are actually two situations regarding tides approaching land, which are closely related. Say that our one+ foot high tidal bulge of extra water is traveling at VERY high speed across the open ocean, and then it arrives at a Vee-shaped Bay, like the Bay of Fundy in Canada. 
Say that the widest part of such a bay is 100 miles wide, so we start out with a wave that is one foot high and 100 miles wide. Remember that water is incompressible! Now, follow this wave as it enters such a bay. As it gets around halfway up the bay, the bay is only half as wide, 50 miles. But there is still just as much water in the oncoming wave. In order for all that water to squeeze into a 50 mile width, do you see that it must now be TWO feet high? Continuing this logic, if the bay narrows to two miles wide at the end, all that water would have to still be there, and it would now have to be a fifty-foot-tall tide. There are additional compounding effects in that waves travel at speeds that depend on the depth of the water. As the waves are moving up such a bay, the water keeps getting shallower, and the wave velocity (called celerity) greatly slows. The very inner end of the Bay of Fundy actually has such tides! This "funneling effect" of the shape of that bay causes it. All during that process, though, there is a lot of friction, with the bottom of the bay, and among the waters and surf. These energy losses actually reduce the growth effect described above, and for bays that do not have such funnel shapes, tides in those bays are relatively moderate in size. But in those uniquely shaped bays, the tides are very impressive. At the very inner end, the tide comes in so quickly and so strongly that it even has a special name, a "tidal bore". A fairly large river at the end of the Bay of Fundy winds up flowing BACKWARDS briefly about twice each day due to the intensity of the tidal bore there! The other situation is when a tide from the open ocean runs into a continent straight on. Due to natural erosion and many other effects, most shorelines gradually slope outward, getting deeper and deeper as you go farther from the shore. When an approaching tidal bulge gets to this vertically tapering area, a situation a lot like the funnel-shaped-bay width effect occurs. The wave essentially gets lifted upward as it moves up the "ramp". The actual shape of the contours of the slopes near continents greatly affects this process. If the slope is too shallow or too steep, or if it has irregular slope, much more energy is lost in friction and minimal tides are seen at the shoreline. But for some locations, it explains why the measured tides are much higher than the open ocean tides are. Most of the East Coast of the US has tides that are several feet high. This discussion should help to explain the incredible complexity of the actual tides seen. Even worse, erosion and deposition are continuously modifying the contours of the ocean bottom, so these effects change over time. All these effects contribute to making the precise prediction of the size and arrival time of tides a very complex and imperfect science. There is a way to calculate that eventual length of day/month. It relies on the fact that angular momentum must remain constant in a system that has no external torques applied to it. That means we must first calculate the total angular momentum of the Earth-Moon system. There are four separate components of it, the rotation and the revolution of each of the Earth and Moon. Angular momentum is the product of the rotational inertia (I) and the angular velocity (ω). For two of these situations, the revolution components, the rotational inertia is I = M * r2. The r dimension is the distance between the barycenter of the system and the center of the individual body. 
(The Moon does NOT actually revolve around the Earth, but they BOTH revolve around each other, or actually a location that is called the barycenter of the system. In the case of the Earth-Moon system, the barycenter happens to always be inside the Earth, roughly 1/4 of the way down toward the Center of the Earth, on a line between the centers of the Earth and Moon.) For the Earth, this gives I = 2.435 * 10^38 kg-m^2. For the Moon, this gives I = 1.049 * 10^40 kg-m^2. For these two components, the angular velocity is one revolution per month, or 6.28 radians in 27.78 days, or 2.616 * 10^-6 radians/second. That makes these two (revolution) angular momentum components: Earth = 6.371 * 10^32 kg-m^2/sec Moon = 2.745 * 10^34 kg-m^2/sec The rotational inertia of the Earth and Moon are somewhat more complicated to calculate, primarily because the density gets greater toward the center of each. You can find the derivation in some advanced geophysics texts. For the Earth, the currently accepted value is: I = 8.07 * 10^37 kg-m^2 Since the Earth rotates once a day, the angular velocity of it is 7.27 * 10^-5 radians/second. That makes the Earth-rotational angular momentum: Earth = 5.861 * 10^33 kg-m^2/sec It turns out that the component due to the Moon's rotation is extremely small. The actual rotational inertia of the rotation of the Moon is actually not known, but it is around 6 * 10^34 kg-m^2. That makes the Moon-rotational angular momentum: Moon = 1.6 * 10^29 kg-m^2/sec Totaling up these four components, we have 3.391 * 10^34 kg-m^2/sec as the TOTAL angular momentum of the system. The laws of Physics say that this angular momentum must be conserved, must always exist with the same total. In the eventual "locked up" situation, where the day and the month have become the same length, the four terms are: Earthrev = m * r^2 * ω, or 6 * 10^24 kg * (D/60.37)^2 * 6.28 / (Day-length), or 1.03 * 10^22 * D^2 / (Day-length) Moonrev = m * r^2 * ω, or 7.34 * 10^22 kg * ((D * 59.37)/60.37)^2 * 6.28 / (Day-length), or 4.458 * 10^23 * D^2 / (Day-length) Earthrot = I * ω, or 8.07 * 10^37 * 6.28 / (Day-length), or 5.03 * 10^38 / (Day-length) Moonrot = I * ω, or 6 * 10^34 * 6.28 / (Day-length), or 3.77 * 10^35 / (Day-length) Totaling all four components, the two pairs can be combined: 4.561 * 10^23 * D^2 / (Day-length) + 5.03 * 10^38 / (Day-length). This total must equal the previously calculated total angular momentum of 3.391 * 10^34 kg-m^2/sec. It turns out that Kepler discovered a relationship between the time interval of revolution and the distance between them. The square of the time interval is proportional to the cube of the distance between them. In our problem, this results in the D^2 term being able to be replaced by (Day-length)^1.333 * 4.589 * 10^8. This results in our equation becoming: 2.093 * 10^32 * (Day-length)^0.333 = 3.391 * 10^34 which solves to a Day-length of 4.208 * 10^6 seconds, which is equal to about 48.7 of our current days. This arrangement would have a spacing of 5.58 * 10^8 meters or around 347,000 miles, as compared to the current 238,000 miles. This then indicates that the Moon will continue to very slowly move outward, apparently for hundreds of millions of years, until it eventually gets to that distance. At the same time, the length of our day will continue to very slowly get longer until it becomes about 49 times as long as now! All these results are directly calculated from the basic laws of Gravitation! I have seen previous estimates where the final distance would be about 400,000 miles and the period would be 55 days. 
Those numbers were apparently calculated over a hundred years ago, by the mathematician Sir George Darwin. It is not obvious that anyone else has done these calculations, a central reason for my composing this presentation. The possibility exists that the figure for the rotational inertia of the Earth was not known as well by Darwin as it is now. I believe that the above calculations reflect the current knowledge of the values. There is also a factor regarding the Earth losing kinetic energy of rotation due to frictional heating of the ocean tides against the ocean bottoms and the continents. This is considered to be about 1.3 * 10^21 Joules (watt-seconds) per year. (Handbook of Chemistry and Physics). (For comparison, this energy consumption is equal to approximately 100 times all the electricity used in the USA!) Researchers have rather accurately determined that each day is around 16 microseconds longer than it was the year before. Interestingly, the friction of the tides against the seafloor and the continents should cause a slowing of around 22 microseconds each year, but there are some effects that actually cause a secular increase in the rate of spin of the Earth (which we shall not discuss here!) We therefore have a day that (should) increase in length due to this effect by 22 microseconds each year. The amount of friction between the water and the moving Earth under and in front of it depends on the speed of that differential motion. It is a reasonable assumption that it depends directly (proportionally) on the speed of that motion, that is, the rotation rate of the Earth. This being the case, we can apply some simple Calculus to do a little differentiating and integrating to establish that there is a simple exponential relationship. Specifically, we find that ln(86400) - ln(86400 - 0.000022) = k * t, where t is the number of seconds in a year, that is 3.1557 * 10^7. This lets us calculate the value of k to be 8.0806 * 10^-18. This value uses base e, and we can convert to a formula which uses base 2 by simply dividing this value by the value of ln(2), which then gives 1.16578 * 10^-17. We then have that the ratio of the rate of the Earth's spin is equal to 2^(-t * k). To find how long it will be until the Earth would be rotating at half the current rate (twice as long a day) (due ONLY to the tidal friction effect!), just set this equal to 1/2. For this, clearly t must equal the inverse of k, so that we would have 2^-1 which is 1/2. Therefore, that would be 1 / (1.16578 * 10^-17) seconds from now, which is 8.59138 * 10^16 seconds or 2.722 billion years! Quite a while! If you're still around 2.7 billion years from now, each day figures to be twice as long as it is now! Going back the other way, 2.7 billion years ago, the Earth was spinning much faster. Again, if we consider just the effects of tidal friction, and if the oceans and continents were similar to as they are now, the Earth would have been spinning twice as fast, with 730 days each year! Probably even faster, because the Moon was then closer, and therefore the tidal effect would have been still greater. If the oceans formed around 4 billion years ago, then this equation gives a t * k product of around 1.5, and our spin ratio would have been 2.83, so the Earth must have then spun in around 8.5 of our modern hours, nearly three times as fast as now. 
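To make the two calculations above easier to follow, here is a small Python sketch of my own (not part of the original article) that re-solves the angular-momentum balance for the final locked day length and then reproduces the exponential spin-down timescale from the assumed 22-microseconds-per-year lengthening of the day; all the coefficients are the article's own values.

import math

# --- Final locked day/month length from conservation of angular momentum ---
# The article's balance: 2.093e32 * T^(1/3) + 5.03e38 / T = 3.391e34, with T in seconds.
L_TOTAL = 3.391e34      # total angular momentum of the Earth-Moon system, kg m^2 / s
COEFF = 2.093e32        # coefficient of the revolution terms after applying Kepler's third law
ROT_TERM = 5.03e38      # coefficient of the (small) rotation terms
T = 4.0e6               # initial guess for the locked day length, seconds
for _ in range(20):     # simple fixed-point iteration; converges quickly
    T = ((L_TOTAL - ROT_TERM / T) / COEFF) ** 3
print(T, T / 86400)     # ~4.21e6 s, about 48.7 present days

# --- Exponential slow-down of the Earth's spin from tidal friction ---
SECONDS_PER_YEAR = 3.1557e7
DAY = 86400.0
SLOWDOWN = 22e-6        # assumed lengthening of the day per year, in seconds
k = (math.log(DAY) - math.log(DAY - SLOWDOWN)) / SECONDS_PER_YEAR   # decay constant, base e
k2 = k / math.log(2)    # the same constant expressed for base-2 halving
t_half = 1.0 / k2       # time for the spin rate to halve
print(t_half / SECONDS_PER_YEAR / 1e9)   # ~2.7 (billion years)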
And probably even faster than that, because the Moon was certainly much closer then and also those giant tidal bulges must have had tremendous friction when running into a continent at 3,000 miles per hour! I betcha that beach erosion must have been something really impressive then! Every four hours, a giant tidal wave (ACTUALLY a tidal wave and not the mis-named tsunami!) would crash into each continent at the 3,000 mph rotation speed of the Earth. Wow! In any case, as a bonus, we have shown that the early Earth must have rotated probably more than three times as rapidly than today, at least until the oceans formed and the Moon started having the tidal effects that has slowed down our rotation to what it is today. And also that the slowing effect is pretty slow, and the exponential equation given above indicates that around 9 billion years from now, the Earth will have slowed to around 1/10 the current rate (36 REALLY long days every year!) and that would still be far from the eventual day length when the Earth and Moon will have finally gone into being locked-up facing each other forever. Since we believe the Sun will have used up its Hydrogen fuel far before that, it would occur in total darkness! If you are an inquisitive sort, you could use the angular momentum conservation analysis given way above to figure out just how close the Moon must have been at that time when the oceans were forming! I WILL give you a clue that the month was then around 5 modern earth days long, and the Moon was HUGE in the sky, ballpark around 1/3 as far away as now! Do it! Show me that you can! Keep in mind that the precise values were probably not actually these, as we have only considered the Moon's tidal effects and have ignored various other effects that affect the rotation rate of the Earth. In fact, we made a basic assumption regarding the effect BEING PROPORTIONAL TO THE RELATIVE SPEED. That is not necessarily true, and for example, it might instead be proportional to the SQUARE of the relative speed, which would materially change these calculations. The general time scales would still be extremely long though. With really accurate modern equipment, it has been found that even the seasonal difference of the weight of snow accumulating near the pole in winter has a measurable effect on changing our total rotational inertia (as water, much of that mass of water would have been nearer the equator, slightly changing I) so we speed up and slow down for a lot of such reasons, but all of those effects are really pretty small! In addition to all this, we know that continents wander around due to Plate Tectonics over these same long time scales. It seems clear that there must have been times when the arrangement of continents were better or worse than they are now regarding interfering with the tidal flows around the Earth, which is another factor which cannot be decently estimated. So numerical results such as these are necessarily very approximate, which is part of the reason we chose to use the simplest of all assumptions regarding the relative speed factor. The reasoning presented above, based on that assumption, would indicate that the length of the day on Earth would have been 16 (modern) hours around 1.6 billion years ago. I have recently been told that someone has recently apparently done a similar analysis where their results imply that the day was 16 hours long around 0.9 billion years ago. 
I am not familiar with the specific analysis used to get such a number, so it is not possible to comment on its likely accuracy. (The analysis given here implies that the day was around 19 [modern] hours long at that time.) I am not really concerned regarding the precise accuracy of either result, but rather have interest in the PROCESS of the analysis, so that any reader of this presentation should be capable of doing the complete analysis for him/herself. A link to a thorough mathematical discussion of assorted Moon issues is at Origin of the Moon - A New Theory. A link to a slightly unrelated subject, that of trying to capture some rotational energy of the Earth for generating electric power, is at Earth Spinning Energy - Perfect Energy Source. C Johnson, Theoretical Physicist, Physics Degree from Univ of Chicago
http://mb-soft.com/public/tides.html
4
July 12, 2014 How To Measure A Sun-Like Star’s Age John P. Millis, Ph.D. for redOrbit.com - Your Universe Online The holy grail of planetary astronomy is to find a solar system that mirrors our own. While a lot of effort has been placed on finding a planet with Earth-like properties – the right size, an atmosphere, the right temperature – of equal importance is the search for a Sun-like star. Such a glowing orb would need to have a similar mass, temperature, and spectral type. These parameters are somewhat easy to measure, but of greater difficulty is measuring stellar age. Over time stars change in brightness, and consequently their interaction with the planets that orbit around them evolves as well. A new technique, known as gyrochronology, has emerged that is helping researchers estimate the age of a star: astronomers measure the changing brightness of a star caused by dark spots – known as starspots – crossing the stellar surface. From this the rotation speed of the star can be determined. This is important because stars initially spin more rapidly, and then slow as they age. The challenge is that the variation in the stellar brightness is small, typically less than a few percent, but luckily NASA’s Kepler spacecraft has a sensitivity great enough to discern such minute changes. By measuring the spins of stars in a 1-billion-year-old star cluster known as NGC 6811, a previous study led by astronomer Soren Meiborn was able to create a calibration table correlating the spin rate with age for various star types. Prior to this new study, accepted for publication in The Astrophysical Journal Letters, researchers had only cataloged two Sun-like stars with measured spins and ages. But the forthcoming paper details 22 new objects meeting the criteria. "We have found stars with properties that are close enough to those of the Sun that we can call them 'solar twins,'" says lead author Jose Dias do Nascimento of the Harvard-Smithsonian Center for Astrophysics (CfA). "With solar twins we can study the past, present, and future of stars like our Sun. Consequently, we can predict how planetary systems like our solar system will be affected by the evolution of their central stars." Nascimento and his team also found that the Sun-like stars identified in their study had an average rotational period of about 21 days, similar to the 25-day rotation period of our Sun at its equator. (The Sun displays a differential rotation, meaning that it rotates faster at the equator than it does at its poles.) Unfortunately, none of the 22 stars in this study are known to have planets around them. But, as this work is expanded upon to include other stars, astronomers can begin understanding how the evolution of a star affects the evolution of the planets that orbit about it. Image 2 (below): Finder chart for one of the most Sun-like stars examined in this study. KIC 12157617 is located in the constellation Cygnus, about halfway between the bright stars Vega and Deneb (two members of the Summer Triangle). An 8-inch or larger telescope is advised for trying to spot this 12th-magnitude star. Credit: CfA, created using StarWalkHD (VT 7.0.3) and MAST and the Virtual Observatory (VO)
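As a toy illustration of the first step behind gyrochronology, recovering a rotation period from periodic brightness dips, here is a small Python/NumPy sketch of my own (it is not the method used in the study); it builds a fake starspot light curve with the 21-day average period quoted above and reads the period back off its autocorrelation.

import numpy as np

# Fake light curve: one observation every 0.02 day for 90 days, a starspot dimming
# that repeats every 21 days, plus a little noise.
dt = 0.02
t = np.arange(0, 90, dt)
true_period = 21.0
flux = 1.0 - 0.01 * np.cos(2 * np.pi * t / true_period) + np.random.normal(0, 0.002, t.size)

# Autocorrelation of the mean-subtracted flux; the first strong peak (ignoring very
# short lags) sits near the rotation period.
x = flux - flux.mean()
acf = np.correlate(x, x, mode="full")[x.size - 1:]
start = int(5 / dt)                      # skip lags shorter than 5 days
lag = np.argmax(acf[start:]) + start
print(lag * dt)                          # ~21 days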
http://www.redorbit.com/news/space/1113189918/astronomical-hunt-for-sun-like-stars-071214/
4.0625
New Carbon Nanomaterial A simple chemical trick changes graphene into a compound with different electronic properties. Graphene, a single layer of carbon atoms arranged in a honeycomb-like structure, has captured worldwide interest because of its attractive electronic properties. Now, by adding hydrogen to graphene, researchers at the University of Manchester, U.K., have made a new material that could prove useful for hydrogen storage and future carbon-based integrated circuits. While graphene is highly conductive, the new material, called graphane, is an insulator. The researchers can easily convert it back into conductive graphene by heating it to a high temperature. Andre Geim, who led the research and first discovered the nanomaterial in 2004 with Kostya Novoselov, says that the findings suggest that graphene could be used as a base for making entirely new compounds. The hydrogenated compound graphane had been theoretically predicted before, but no one had attempted to create it. “What’s important is that you can make another compound of [graphene] and can chemically tune its electronic properties to what you want so easily,” Geim says. Adding hydrogen to graphene is just one possibility. Using other chemicals could yield materials with even more appealing properties, such as a semiconductor. “Hydrogenation may not be the end of the exploration; it may be just the beginning,” says Yu-Ming Lin, a nanotechnology researcher at the IBM Thomas J. Watson Research Center, in Yorkstown Heights, NY. The latest findings are a step toward practical carbon-based integrated circuits, which could be used for low-power, ultrafast logic processors of the future. The findings also open up the possibility of using graphene for hydrogen storage in fuel cells. “Graphene is the ultimate surface because it doesn’t have any bulk–only two faces,” Geim says. This large surface area would make an excellent high-density storage material. As described in Science, the researchers make graphane by exposing graphene pieces to hydrogen plasma–a mixture of hydrogen ions and electrons. Hydrogen atoms attach to each carbon atom in graphene, creating the new compound. Heating the piece to 450 °C for 24 hours reverts it back to the original state. Geim says that the researchers did not expect to be able to make the new substance so easily. One of graphene’s promises for electronics is that it can transport electrons very quickly. Transistors made from graphene could run hundreds of times faster than today’s silicon transistors while consuming less power. Researchers are making progress toward such ultrahigh-radio-frequency transistors. But combining the transistors into circuits is a challenge because graphene is not an ideal semiconductor like silicon. Silicon transistors can be switched on and off between two different states of conductivity. Graphene, however, continues to conduct electrons in its off state. Circuits made from such transistors would be dysfunctional and waste a lot of energy. One way to improve the on-off ratio in graphene transistors and bring them on par with those made of silicon is to cut the carbon sheet into narrow ribbons less than 100 nanometers wide. But making consistently good-quality ribbons is difficult. Altering the material chemically may be an easier way to tailor its electronic properties and get the properties sought, Geim says. And that means that researchers could fabricate graphene circuits with nanoscale transistors that are smaller and faster than those made from silicon. 
“Imagine a wafer made entirely of graphene, which is highly conductive,” he says. “[You can] modify specific places on the wafer to make it semiconducting and make transistors at those places.” Areas between the transistors could be converted into insulating graphane, in order to isolate the transistors from each other. The new work is just a preliminary first step. The researchers still need to thoroughly test the electronic and mechanical properties of graphane. Converting the material into a decent semiconductor might take a lot more chemical tinkering. Besides, graphene researchers face one big challenge before they can do anything practical: coming up with an easy way to make large pieces of good-quality material in sufficient quantities. “For many applications, one needs a significant amount of material,” says Hannes Schniepp, who studies graphene at the College of William and Mary. “And that’s yet to be demonstrated for graphene or graphane.”
https://www.technologyreview.com/s/411829/new-carbon-nanomaterial/
4.21875
The vast majority of robots do have several qualities in common. First of all, almost all robots have a movable body. Some only have motorized wheels, and others have dozens of movable segments, typically made of metal or plastic. Like the bones in your body, the individual segments are connected together with joints. Robots spin wheels and pivot jointed segments with some sort of actuator. Some robots use electric motors and solenoids as actuators; some use a hydraulic system; and some use a pneumatic system (a system driven by compressed gases). Robots may use all these actuator types. A robot needs a power source to drive these actuators. Most robots either have a battery or they plug into the wall. Hydraulic robots also need a pump to pressurize the hydraulic fluid, and pneumatic robots need an air compressor or compressed air tanks. The actuators are all wired to an electrical circuit. The circuit powers electrical motors and solenoids directly, and it activates the hydraulic system by manipulating electrical valves. The valves determine the pressurized fluid's path through the machine. To move a hydraulic leg, for example, the robot's controller would open the valve leading from the fluid pump to a piston cylinder attached to that leg. The pressurized fluid would extend the piston, swiveling the leg forward. Typically, in order to move their segments in two directions, robots use pistons that can push both ways. The robot's computer controls everything attached to the circuit. To move the robot, the computer switches on all the necessary motors and valves. Most robots are reprogrammable -- to change the robot's behavior, you simply write a new program to its computer. Not all robots have sensory systems, and few have the ability to see, hear, smell or taste. The most common robotic sense is the sense of movement -- the robot's ability to monitor its own motion. A standard design uses slotted wheels attached to the robot's joints. An LED on one side of the wheel shines a beam of light through the slots to a light sensor on the other side of the wheel. When the robot moves a particular joint, the slotted wheel turns. The slots break the light beam as the wheel spins. The light sensor reads the pattern of the flashing light and transmits the data to the computer. The computer can tell exactly how far the joint has swiveled based on this pattern. This is the same basic system used in computer mice. These are the basic nuts and bolts of robotics. Roboticists can combine these elements in an infinite number of ways to create robots of unlimited complexity. In the next section, we'll look at one of the most popular designs, the robotic arm.
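To make the slotted-wheel sensing idea concrete, here is a tiny Python sketch of my own (not from the article): it converts a count of light-beam interruptions into a joint angle, assuming a hypothetical encoder wheel with a known number of slots.

def joint_angle_degrees(pulse_count, slots_per_revolution=64):
    """Angle turned by a joint, given how many slot pulses the light sensor has counted."""
    degrees_per_pulse = 360.0 / slots_per_revolution
    return pulse_count * degrees_per_pulse

# Example: 16 pulses on a 64-slot wheel means the joint has swiveled 90 degrees.
print(joint_angle_degrees(16))  # 90.0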
http://science.howstuffworks.com/robot1.htm
4.0625
An Oil Tanker Runs Aground Off the California Coast; Plan and Execute an Appropriate ... High School Physics, Mathematics, Oceanography. Grades 9-12. California .... single vector (Students do exactly this in Part 2 of this lesson). In order to ... This sample Mathematics lesson plan, captioned "CeNCOOS Classroom Series – Module 1", provides more information about math and related topics. To make sure that this material is what you need, before you download this Mathematics lesson plan, please review it first by clicking the following link. On the other hand, if you want to save this material directly onto your computer, you can download this PDF Mathematics lesson plan through the following download link. Learners from pre-kindergarten through 8th grade can be described as young learners. Teaching notes for young learners should be designed imaginatively, with teaching practices selected to suit their characteristics. It is better for a trainer to know the characteristics of young learners before designing a lesson plan for them. Wendy A. Scott and Lisbeth H. Ytreberg describe the characteristics of young learners. Here we will discuss them in relation to classroom inspiration. First, young learners like to co-operate and are physically active. That is why teachers have to design practices which involve the students as participants, for example designing work in groups or individually, like interesting quizzes which involve physical activity. Second, young learners are happy to play. They learn best when they are enjoying themselves. In relation to playing, we now have many improvements in Math teaching strategies for children through games or even full-colored and entertaining worksheet designs. It is easy to find fun Math worksheets among online resources. Third, young learners cannot concentrate for a long time. The teacher should have a good technique for dividing teaching time from the beginning to the end of the class. Young learners are happier with varied materials, and they cannot remember things for a long time if they are not repeated. So keep repeating the lesson in various fun ways. Students of five, seven or twelve years old will grow as thinkers who can be trustworthy and take responsibility for class practices and routines. They will also learn how to play and organize the best way to carry out an activity, work with others and learn from others. Moreover, young students still depend on the teacher. They should be guided and accompanied well. So keep guiding students in the best way. Teaching Math is fun, so let us make them happy to learn Math.
http://www.padjane.com/cencoos-classroom-series-module-1/
4.03125
In electronics, a linear regulator is a system used to maintain a steady voltage. The resistance of the regulator varies in accordance with the load, resulting in a constant output voltage. The regulating device is made to act like a variable resistor, continuously adjusting a voltage divider network to maintain a constant output voltage, and continually dissipating the difference between the input and regulated voltages as waste heat. By contrast, a switching regulator uses an active device that switches on and off to maintain an average value of output. Because the regulated voltage of a linear regulator must always be lower than the input voltage, efficiency is limited and the input voltage must be high enough to always allow the active device to drop some voltage. Linear regulators may place the regulating device in parallel with the load (shunt regulator) or may place the regulating device between the source and the regulated load (a series regulator). Simple linear regulators may only contain a Zener diode and a series resistor; more complicated regulators include separate stages of voltage reference, error amplifier and power pass element. Because a linear voltage regulator is a common element of many devices, integrated circuit regulators are very common. Linear regulators may also be made up of assemblies of discrete solid-state or vacuum tube components. The transistor (or other device) is used as one half of a potential divider to establish the regulated output voltage. The output voltage is compared to a reference voltage to produce a control signal to the transistor which will drive its gate or base. With negative feedback and good choice of compensation, the output voltage is kept reasonably constant. Linear regulators are often inefficient: since the transistor is acting like a resistor, it will waste electrical energy by converting it to heat. In fact, the power loss due to heating in the transistor is the current multiplied by the voltage difference between input and output voltage. The same function can often be performed much more efficiently by a switched-mode power supply, but a linear regulator may be preferred for light loads or where the desired output voltage approaches the source voltage. In these cases, the linear regulator may dissipate less power than a switcher. The linear regulator also has the advantage of not requiring magnetic devices (inductors or transformers) which can be relatively expensive or bulky, being often of simpler design, and being quieter. Some designs of linear regulators use only transistors, diodes and resistors, which are easier to fabricate into an integrated circuit, further reducing their weight, footprint on a PCB, and price. All linear regulators require an input voltage at least some minimum amount higher than the desired output voltage. That minimum amount is called the dropout voltage. For example, a common regulator such as the 7805 has an output voltage of 5V, but can only maintain this if the input voltage remains above about 7V, before the output voltage begins sagging below the rated output. Its dropout voltage is therefore 7V − 5V = 2V. When the supply voltage is less than about 2V above the desired output voltage, as is the case in low-voltage microprocessor power supplies, so-called low dropout regulators (LDOs) must be used. 
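As a rough illustration of the dissipation and dropout arithmetic described above, here is a small Python sketch of my own (not from the article); the 7805-style numbers are just example values, and quiescent (ground-pin) current is ignored.

def linear_regulator_budget(v_in, v_out, i_load):
    """Heat dissipated in the pass element and overall efficiency of an idealized linear regulator."""
    p_dissipated = (v_in - v_out) * i_load   # power burned off as heat in the regulator
    p_delivered = v_out * i_load             # power actually reaching the load
    efficiency = p_delivered / (p_delivered + p_dissipated)
    return p_dissipated, efficiency

# Example: a 7805-style 5 V regulator fed from 12 V, supplying 1 A.
print(linear_regulator_budget(12.0, 5.0, 1.0))   # (7.0 W of heat, ~42% efficiency)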
When the output regulated voltage must be higher than the available input voltage, no linear regulator will work (not even a low dropout regulator). In this situation, a switching regulator of the "boost" type must be used. Most linear regulators will continue to provide some output voltage, approximately the dropout voltage below the input voltage, for inputs below the nominal output voltage, until the input voltage drops significantly. Linear regulators exist in two basic forms: shunt regulators and series regulators. Most linear regulators have a maximum rated output current. This is generally limited by either power dissipation capability, or by the current carrying capability of the output transistor. The shunt regulator works by providing a path from the supply voltage to ground through a variable resistance (the main transistor is in the "bottom half" of the voltage divider). The current through the shunt regulator is diverted away from the load and flows uselessly to ground, making this form usually less efficient than the series regulator. It is, however, simpler, sometimes consisting of just a voltage-reference diode, and is used in very low-powered circuits where the wasted current is too small to be of concern. This form is very common for voltage reference circuits. A shunt regulator can usually only sink (absorb) current. Series regulators are the more common form. The series regulator works by providing a path from the supply voltage to the load through a variable resistance (the main transistor is in the "top half" of the voltage divider). The power dissipated by the regulating device is equal to the power supply output current times the voltage drop in the regulating device. A series regulator can usually only source (supply) current. Simple shunt regulator The image shows a simple shunt voltage regulator that operates by way of the Zener diode's action of maintaining a constant voltage across itself when the current through it is sufficient to take it into the Zener breakdown region. The resistor R1 supplies the Zener current as well as the load current IR2 (R2 is the load). R1 can be calculated as R1 = (VS − VZ) / (IZ + IR2), where VS is the supply voltage, VZ is the Zener voltage, IZ is the Zener current, and IR2 is the required load current. This regulator is used for very simple low-power applications where the currents involved are very small and the load is permanently connected across the Zener diode (such as voltage reference or voltage source circuits). Once R1 has been calculated, removing R2 will allow the full load current (plus the Zener current) through the diode and may exceed the diode's maximum current rating, thereby damaging it. The regulation of this circuit is also not very good because the Zener current (and hence the Zener voltage) will vary depending on the supply voltage and inversely depending on the load current. In some designs, the Zener diode may be replaced with another similarly functioning device, especially in an ultra-low-voltage scenario, like (under forward bias) several normal diodes or LEDs in series. Simple series regulator Adding an emitter follower stage to the simple shunt regulator forms a simple series voltage regulator and substantially improves the regulation of the circuit. Here, the load current IR2 is supplied by the transistor whose base is now connected to the Zener diode. Thus the transistor's base current (IB) forms the load current for the Zener diode and is much smaller than the current through R2. 
This regulator is classified as "series" because the regulating element, viz., the transistor, appears in series with the load. R1 sets the Zener current (IZ) and is determined as R1 = (VS − VZ) / (IZ + K * IB), where VZ is the Zener voltage, IB is the transistor's base current, K = 1.2 to 2 (to ensure that R1 is low enough for adequate IB), and IB = IR2 / hFE(min), where IR2 is the required load current and is also the transistor's emitter current (assumed to be equal to the collector current) and hFE(min) is the minimum acceptable DC current gain for the transistor. This circuit has much better regulation than the simple shunt regulator, since the base current of the transistor forms a very light load on the Zener, thereby minimising variation in Zener voltage due to variation in the load. Note that the output voltage will always be about 0.65V less than the Zener due to the transistor's VBE drop. Although this circuit has good regulation, it is still sensitive to the load and supply variation. This can be resolved by incorporating negative feedback circuitry into it. This regulator is often used as a "pre-regulator" in more advanced series voltage regulator circuits. The circuit is readily made adjustable by adding a potentiometer across the Zener, moving the transistor base connection from the top of the Zener to the pot wiper. It may be made step adjustable by switching in different Zeners. Finally it is occasionally made microadjustable by adding a low value pot in series with the Zener; this allows a little voltage adjustment, but degrades regulation (see also capacitance multiplier). "Fixed" three-terminal linear regulators are commonly available to generate fixed voltages of plus 3 V, and plus or minus 5 V, 6 V, 9 V, 12 V, or 15 V, when the load is less than 1.5 A. The "78xx" series (7805, 7812, etc.) regulate positive voltages while the "79xx" series (7905, 7912, etc.) regulate negative voltages. Often, the last two digits of the device number are the output voltage (e.g., a 7805 is a +5 V regulator, while a 7915 is a −15 V regulator). There are variants on the 78xx series ICs, such as 78L and 78S, some of which can supply up to 2 Amps. Adjusting fixed regulators By adding another circuit element to a fixed voltage IC regulator, it is possible to adjust the output voltage. Two example methods are: - A Zener diode or resistor may be added between the IC's ground terminal and ground. Resistors are acceptable where ground current is constant, but are ill-suited to regulators with varying ground current. By switching in different Zener diodes, diodes or resistors, the output voltage can be adjusted in a step-wise fashion. - A potentiometer can be placed in series with the ground terminal to increase the output voltage variably. However, this method degrades regulation, and is not suitable for regulators with varying ground current. An adjustable regulator generates a fixed low nominal voltage between its output and its adjust terminal (equivalent to the ground terminal in a fixed regulator). This family of devices includes low power devices like LM723 and medium power devices like LM317 and L200. Some of the variable regulators are available in packages with more than three pins, including dual in-line packages. They offer the capability to adjust the output voltage by using external resistors of specific values. For output voltages not provided by standard fixed regulators and load currents of less than 7 A, commonly available adjustable three-terminal linear regulators may be used. 
The LM317 series (+1.25V) regulates positive voltages while the LM337 series (−1.25V) regulates negative voltages. The adjustment is performed by constructing a potential divider with its ends between the regulator output and ground, and its centre-tap connected to the 'adjust' terminal of the regulator. The ratio of resistances determines the output voltage using the same feedback mechanisms described earlier. Single IC dual tracking adjustable regulators are available for applications such as op-amp circuits needing matched positive and negative DC supplies. Some have selectable current limiting as well. Some regulators require a minimum load. Linear IC voltage regulators may include a variety of protection methods: - Current limiting such as constant-current limiting or foldback - Thermal shutdown - Safe operating area protection Sometimes external protection is used, such as crowbar protection. Using a linear regulator Linear regulators can be constructed using discrete components but are usually encountered in integrated circuit forms. The most common linear regulators are three-terminal integrated circuits in the TO-220 package. Common solid-state series voltage regulators are the LM78xx (for positive voltages), LM79xx (for negative voltages), and the AMS1117 (low drop out, for lower positive voltages than LM78xx allows) series. Common fixed voltages are 1.8 V, 3.3 V (both for low-voltage CMOS logic circuits), 5 V (for transistor-transistor logic circuits) and 12 V (for communications circuits and peripheral devices such as disk drives). In fixed voltage regulators the reference pin is tied to ground, whereas in variable regulators the reference pin is connected to the centre point of a fixed or variable voltage divider fed by the regulator's output. A variable voltage divider such as a potentiometer allows the user to adjust the regulated voltage. - Voltage regulator - Bandgap voltage reference - List of LM-series integrated circuits - Brokaw bandgap reference - Switched-mode power supply - Low-dropout regulator - One practical example: in an AM pocket radio powered by a 3.7 V lithium-ion battery, the 1.5–1.8 V supply required by the TA7642 chip can be provided by a Zener-style regulator using a red LED (with a forward voltage of 1.7 V) in forward bias in place of the Zener diode; the LED can also double as the power indicator. - Datasheet of L78xx, showing a model that can output 2 A - "Zener regulator" at Hyperphysics - Linear voltage regulator tutorial video in HD. Includes practical examples. - ECE 327: LM317 Bandgap Voltage Reference Example — Brief explanation of the temperature-independent bandgap reference circuit within the LM317. - ECE 327: Procedures for Voltage Regulators Lab — Gives schematics, explanations, and analyses for Zener shunt regulator, series regulator, feedback series regulator, feedback series regulator with current limiting, and feedback series regulator with current foldback. Also discusses the proper use of the LM317 integrated circuit bandgap voltage reference and bypass capacitors. - ECE 327: Report Strategies for Voltage Regulators Lab — Gives more-detailed quantitative analysis of behavior of several shunt and series regulators in and out of normal operating ranges. - "7A SPX1580 regulator"
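Pulling together the component-sizing formulas discussed in the sections above, here is a short, self-contained Python sketch of my own. The LM317 constants (1.25 V reference, roughly 50 uA adjust-pin current) are typical datasheet figures quoted from memory rather than from this article, and the example component values are arbitrary.

def shunt_regulator_r1(v_supply, v_zener, i_zener, i_load):
    """Series resistor feeding a simple Zener shunt regulator: R1 = (Vs - Vz) / (Iz + Iload)."""
    return (v_supply - v_zener) / (i_zener + i_load)

def series_regulator_r1(v_supply, v_zener, i_zener, i_load, h_fe_min, k=1.5):
    """R1 for the simple series (emitter-follower) regulator, with margin factor k on the base current."""
    i_base = i_load / h_fe_min
    return (v_supply - v_zener) / (i_zener + k * i_base)

def lm317_output(r1, r2, v_ref=1.25, i_adj=50e-6):
    """Approximate LM317 output voltage for a divider of r1 (output to adjust) and r2 (adjust to ground)."""
    return v_ref * (1 + r2 / r1) + i_adj * r2

# Examples (arbitrary illustrative values):
print(shunt_regulator_r1(12.0, 5.1, 0.005, 0.010))     # ~460 ohms
print(series_regulator_r1(12.0, 5.6, 0.005, 0.5, 50))  # ~320 ohms
print(lm317_output(240.0, 1200.0))                     # ~7.56 V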
https://en.wikipedia.org/wiki/Linear_regulator
4.3125
Glaciers form when multiple snowfalls in mountainous or polar regions turn into ice that flows across the land, powered by gravity. They are present throughout the year. Glaciers are so powerful they can change the shape of mountain valleys. As a glacier flows down a valley it wears away the rock and changes it from a typical V-shape, created by river erosion, to a U-shape. This characteristic U-shape makes it easy to spot ancient glacial valleys. Scientists are particularly interested in merged large glaciers called ice caps or ice sheets in Greenland and Antarctica. They are investigating how manmade climate change is affecting these vast reservoirs of fresh water. Image: Wright Glacier in Alaska, extending from the Juneau Ice Field to a glacial lake (credit: Gregory G. Dimijian, M.D./SPL) See amazing footage of a glacier moving towards the sea. Frozen Planet programme maker Jeff Wilson describes a fast moving glacier moving towards the sea. Manmade tunnels lead to the Svartisen glacier's beautiful underside. Dr Iain Stewart follows manmade tunnels that take him deep below the Svartisen glacier in Norway to see why moving sheets of ice are so powerful. Sound recordings from deep within the ice reveal movement. Dr Iain Stewart introduces sound recordings made deep within glaciers. It is possible to hear the glaciers creak and groan as they move downhill. Ice that flows across the land shapes our planet's surface. Dr Iain Stewart explains how annual snowfall accumulations are gradually compacted to form layers of hard ice that in turn form glaciers. Many of the world's glaciers are retreating. Dr Iain Stewart describes the retreat of many of the Earth's glaciers and the break up of polar ice sheets. A glacier (US // or UK //) is a persistent body of dense ice that is constantly moving under its own weight; it forms where the accumulation of snow exceeds its ablation (melting and sublimation) over many years, often centuries. Glaciers slowly deform and flow due to stresses induced by their weight, creating crevasses, seracs, and other distinguishing features. They also abrade rock and debris from their substrate to create landforms such as cirques and moraines. Glaciers form only on land and are distinct from the much thinner sea ice and lake ice that form on the surface of bodies of water. On Earth, 99% of glacial ice is contained within vast ice sheets in the polar regions, but glaciers may be found in mountain ranges on every continent except Australia, and on a few high-latitude oceanic islands. Between 35°N and 35°S, glaciers occur only in the Himalayas, Andes, Rocky Mountains, a few high mountains in East Africa, Mexico, New Guinea and on Zard Kuh in Iran. Glaciers cover about 10 percent of Earth's land surface. Continental glaciers cover nearly 5 million square miles or about 98 percent of Antarctica's 5.1 million square miles, with an average thickness of 7,000 feet (2,100 m). Greenland and Patagonia also have huge expanses of continental glaciers. Glacial ice is the largest reservoir of freshwater on Earth. Many glaciers from temperate, alpine and seasonal polar climates store water as ice during the colder seasons and release it later in the form of meltwater as warmer summer temperatures cause the glacier to melt, creating a water source that is especially important for plants, animals and human uses when other sources may be scant. Within high altitude and Antarctic environments, the seasonal temperature difference is often not sufficient to release meltwater. 
Because glacial mass is affected by long-term climate changes (e.g., precipitation, mean temperature, and cloud cover), glacial mass changes are considered among the most sensitive indicators of climate change and are a major source of variations in sea level. A large piece of compressed ice, such as a glacier, appears blue for the same reason that large quantities of water appear blue: water molecules absorb other colors more efficiently than blue. The other reason for the blue color of glaciers is the lack of air bubbles. Air bubbles, which give ice a white color, are squeezed out by pressure, increasing the density of the ice.
http://www.bbc.co.uk/science/earth/water_and_ice/glacier
4.5
Like other conic sections, hyperbolas can be created by "slicing" a cone and looking at the cross-section. Unlike other conics, hyperbolas actually require 2 cones stacked on top of each other, point to point. The shape is the result of effectively creating a parabola out of both cones at the same time. So the question is, should hyperbolas really be considered a shape all their own? Or are they just two parabolas graphed at the same time? Could "different" shapes be made from any of the other conic sections if two cones were used at the same time? - Khan Academy: Conic Sections Hyperbolas 2 In addition to their focal property, hyperbolas also have another interesting geometric property. Unlike a parabola, a hyperbola becomes infinitesimally close to a certain line as the x- or y-coordinates approach infinity. What do we mean by "infinitesimally close"? Here we mean two things: 1) The further you go along the curve, the closer you get to the asymptote, and 2) If you name a distance, no matter how small, eventually the curve will be that close to the asymptote. Or, using the language of limits: as we go further from the vertex of the hyperbola, the limit of the distance between the hyperbola and the asymptote is 0. These lines are called asymptotes. There are two asymptotes, and they cross at the point at which the hyperbola is centered. For a hyperbola of the form x²/a² − y²/b² = 1, the asymptotes are the lines y = (b/a)x and y = −(b/a)x. For a hyperbola of the form y²/a² − x²/b² = 1, the asymptotes are the lines y = (a/b)x and y = −(a/b)x. (For a shifted hyperbola, the asymptotes shift accordingly.) Graph the following hyperbola, drawing its foci and asymptotes and using them to create a better drawing: 9x² − 36x − 4y² − 16y − 16 = 0. First, we put the hyperbola into the standard form: (x − 2)²/4 − (y + 2)²/9 = 1. So a = 2, b = 3 and c = √(4 + 9) = √13. The hyperbola is horizontally oriented, centered at the point (2, −2), with foci at (2 + √13, −2) and (2 − √13, −2). After taking the shift into consideration, the asymptotes are the lines y + 2 = (3/2)(x − 2) and y + 2 = −(3/2)(x − 2). So graphing the vertices and a few points on either side, we see the hyperbola looks something like this: Graph the following hyperbola, drawing its foci and asymptotes and using them to create a better drawing: 16x² − 96x − 9y² − 36y − 84 = 0. Graph the following hyperbola, drawing its foci and asymptotes and using them to create a better drawing: y² − 14y − 25x² − 200x − 376 = 0. Concept question wrap-up: Hyperbolas are considered different shapes, because there are specific behaviors that are unique to hyperbolas. Also, though hyperbolas are the result of dual parabolas, none of the other conics really create unique shapes with dual cones - just double figures - and in any case require multiple "slices". A hyperbola is a conic section where the cutting plane intersects both sides of the cone, resulting in two infinite "U"-shaped curves. An unbounded shape is so large that no circle, no matter how large, can enclose the shape. An asymptote is a line which a curve approaches as the curve and the line approach infinity, eventually becoming closer than any given positive number. A perpendicular hyperbola is a hyperbola whose asymptotes are perpendicular. 1) Find the equation for a hyperbola with asymptotes of slopes 5/12 and −5/12, and foci at points (2, 11) and (2, 1). 2) A hyperbola with perpendicular asymptotes is called perpendicular. What does the equation of a perpendicular hyperbola look like? 3) Find an equation of the hyperbola with x-intercepts at x = –7 and x = 5, and foci at (–6, 0) and (4, 0).
2) The slopes of perpendicular lines are negative reciprocals of each other. This means that b/a = a/b, which, for positive a and b, means a = b. 3) To find the equation: The foci have the same y-coordinates, so this is a left/right hyperbola with the center, foci, and vertices on a line paralleling the x-axis. Since it is a left/right hyperbola, the y part of the equation will be negative and the equation will lead with the x² term (since the leading term is positive by convention and the squared terms must have different signs if this is a hyperbola). The center is midway between the foci, so the center (h, k) = (−1, 0). The foci are 5 units to either side of the center, so c = 5 → c² = 25. The x-intercepts are 4 units to either side of the center, and the foci are on the x-axis, so the intercepts must be the vertices: a = 4 → a² = 16. Use the Pythagorean relation a² + b² = c² to get b² = 25 − 16 = 9. Substitute the calculated values into the standard form (x − h)²/a² − (y − k)²/b² = 1 to get the equation. Find the equations of the asymptotes. Graph the hyperbolas, give the equations of the asymptotes, and use the asymptotes to enhance the accuracy of your graph.
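To make the worked example above easy to check, here is a short Python sketch (an illustration of the completing-the-square steps, not part of the CK-12 lesson; the coefficient names are my own) that recovers the center, a, b, c, foci and asymptote slopes of a horizontally oriented hyperbola written as Ax² + Cx + By² + Dy + E = 0 with A > 0 > B.

import math

def hyperbola_features(A, C, B, D, E):
    # Complete the square for A*x^2 + C*x + B*y^2 + D*y + E = 0,
    # assuming A > 0 > B (a left/right opening hyperbola).
    h = -C / (2 * A)                 # x-coordinate of the center
    k = -D / (2 * B)                 # y-coordinate of the center
    rhs = A * h**2 + B * k**2 - E    # constant moved to the right-hand side
    a2 = rhs / A                     # a^2 under the (x - h)^2 term
    b2 = -rhs / B                    # b^2 under the (y - k)^2 term
    a, b = math.sqrt(a2), math.sqrt(b2)
    c = math.sqrt(a2 + b2)           # center-to-focus distance
    return {"center": (h, k), "a": a, "b": b, "c": c,
            "foci": ((h - c, k), (h + c, k)),
            "asymptote_slopes": (b / a, -b / a)}

# The worked example: 9x^2 - 36x - 4y^2 - 16y - 16 = 0
print(hyperbola_features(9, -36, -4, -16, -16))

Running it reproduces the values used above: center (2, −2), a = 2, b = 3, c = √13 ≈ 3.606, and asymptote slopes ±3/2.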
http://www.ck12.org/book/CK-12-Math-Analysis-Concepts/r4/section/6.7/
4.03125
committee, one or more persons appointed or elected to consider, report on, or take action on a particular matter. Because of the advantages of a division of labor, legislative committees of various kinds have assumed much of the work of legislatures in many nations. Standing committees are appointed in both houses of the U.S. Congress at the beginning of every session to deal with bills in the different specific classes. Important congressional committees include ways and means; appropriations; commerce; armed services; foreign relations; and judiciary. The number, but not the scope, of the committees was much reduced in 1946. Since then there has been a large increase in the number of subcommittees, which have become steadily more important. Members of committees are in effect elected by caucuses of the two major parties in Congress; the majority party is given the chairmanship and majority on each committee, and chairmanships, as well as membership on important committees, are influenced by seniority, but seniority is no longer the sole deciding factor and others may override it. The presiding officer of either house may appoint special committees, including those of investigation, which have the power to summon witnesses and compel the submission of evidence. The presiding officers also appoint committees of conference to obtain agreement between the two houses on the content of bills of the same general character. The U.S. legislative committee system conducts most congressional business through its powers of scrutiny and investigation of government departments. In France the constitution of the Fifth Republic permits each legislative chamber to have no more than six standing committees. Because these committees are large, unofficial committees have formed that do much of the real work of examining bills. As in the U.S. government, these committees are quite powerful because of their ability to delay legislation. In Great Britain devices such as committees of the whole are used in the consideration of money bills and there are large standing committees of the House of Commons, but committees have not been very important in the British legislature. Recently attempts have been made to form specialized committees. See L. A. Froman, The Congressional Process (1967); G. Goodwin, Jr., The Little Legislatures (1970); Congressional Quarterly, Guide to Congress (3d ed. 1982). The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
http://www.factmonster.com/encyclopedia/history/committee.html
4.21875
Gary Rockswold teaches algebra in context, answering the question, “Why am I learning this?” By experiencing math through applications, students see how it fits into their lives, and they become motivated to succeed. Rockswold’s focus on conceptual understanding helps students make connections between the concepts and as a result, students see the bigger picture of math and are prepared for future courses. This streamlined text covers linear, quadratic, nonlinear, exponential, and logarithmic functions and systems of equations and inequalities, which gets to the heart of what students need from this course. A more comprehensive college algebra text is also available. Table of Contents 1. Introduction to Functions and Graphs 1.1 Numbers, Data, and Problem Solving 1.2 Visualizing and Graphing Data 1.3 Functions and Their Representations 1.4 Types of Functions 1.5 Functions and Their Rates of Change 2. Linear Functions and Equations 2.1 Linear Functions and Models 2.2 Equations of Lines 2.3 Linear Equations 2.4 Linear Inequalities 2.5 Absolute Value Equations and Inequalities 3. Quadratic Functions and Equations 3.1 Quadratic Functions and Models 3.2 Quadratic Equations and Problem Solving 3.3 Complex Numbers 3.4 Quadratic Inequalities 3.5 Transformations of Graphs 4. More Nonlinear Functions and Equations 4.1 More Nonlinear Functions and Their Graphs 4.2 Polynomial Functions and Models 4.3 Division of Polynomials 4.4 Real Zeros of Polynomial Functions 4.5 The Fundamental Theorem of Algebra 4.6 Rational Functions and Models 4.7 More Equations and Inequalities 4.8 Radical Equations and Power Functions 5. Exponential and Logarithmic Functions 5.1 Combining Functions 5.2 Inverse Functions and Their Representations 5.3 Exponential Functions and Models 5.4 Logarithmic Functions and Models 5.5 Properties of Logarithms 5.6 Exponential and Logarithmic Equations 5.7 Constructing Nonlinear Models 6. Systems of Equations and Inequalities 6.1 Functions and Systems of Equations in Two Variables 6.2 Systems of Inequalities in Two Variables 6.3 Systems of Linear Equations in Three Variables 6.4 Solutions to Linear Systems Using Matrices 6.5 Properties and Applications of Matrices 6.6 Inverses of Matrices Reference: Basic Concepts from Algebra and Geometry R.1 Formulas from Geometry R.2 Integer Exponents R.3 Polynomial Expressions R.4 Factoring Polynomials R.5 Rational Expressions R.6 Radical Notation and Rational Exponents R.7 Radical Expressions Appendix A: Using the Graphing Calculator Appendix B: A Library of Functions Appendix C: Partial Fractions Digital Choices: MyLab & Mastering with Pearson eText is a complete digital substitute for a print value pack at a lower price. MyLab & Mastering: MyLab & Mastering products deliver customizable content and highly personalized study paths, responsive learning tools, and real-time evaluation and diagnostics. MyLab & Mastering products help move students toward the moment that matters most—the moment of true understanding and learning. $99.95 | ISBN-13: 978-0-321-72670-4 $60.50 | ISBN-13: 978-0-321-73052-7 With VitalSource eTextbooks, you save up to 60% off the price of new print textbooks, and can switch between studying online or offline to suit your needs. Access your course materials on iPad, Android and Kindle devices with VitalSource Bookshelf, the textbook e-reader that helps you read, study and learn brilliantly.
Features include: - See all of your eTextbooks at a glance and access them instantly anywhere, anytime from your Bookshelf - no backpack required. - Multiple ways to move between pages and sections including linked Table of Contents and Search make navigating eTextbooks a snap. - Highlight text with one click in your choice of colors. Add notes to highlighted passages. Even subscribe to your classmates' and instructors' highlights and notes to view in your book. - Scale images and text to any size with multi-level zoom without losing page clarity. Customize your page display and reading experience to create a personal learning experience that best suits you. - Print only the pages you need within limits set by publisher - Supports course materials that include rich media and interactivity like videos and quizzes - Easily copy/paste text passages for homework and papers - Supports assistive technologies for accessibility by vision and hearing impaired users $73.99 | ISBN-13: 978-0-321-83073-9 Loose Leaf Version: Books a la Carte are less-expensive, loose-leaf versions of the same textbook. $122.67 | ISBN-13: 978-0-321-72672-8
http://www.mypearsonstore.com/bookstore/essentials-of-college-algebra-with-modeling-and-visualization-0321715284
4.125
Snow and Ice Overview: A tenth of Earth’s land surface is permanently occupied by ice sheets or glaciers, but the domain of the cryosphere – that part of the world where snow and ice can form – extends three times further still. The cryosphere is an important regulator of global climate, its bright albedo reflecting sunlight back to space and its presence influencing regional weather and global ocean currents. Some 77% of the globe's freshwater is bound up within the ice, yet the cryosphere appears disproportionately sensitive to the effects of global warming. Imaging radar systems like those of ERS and Envisat pierce through clouds or seasonal darkness to chart ice extent, and are sensitive to different ice types – from kilometres-thick ice sheets to new-born floating ‘pancake’ ice – supplemented by optical observations. Radar altimeters gather data on changing ice height and mass: in 2010 ESA launched CryoSat-2 as the first altimetry mission specifically designed to accurately measure the thickness of sea ice and land ice margins. Snow and Ice News: 21 December 2015: How can access to Sentinel data increase Canada's ability to offer improved information on sea ice? 17 December 2015: Using data from ESA's CryoSat mission, scientists have produced the best maps yet of the changing height of Earth's biggest ice sheets. Specific Topics on Snow and Ice: The enormous permafrost areas of the world show seasonal change, which has an impact not only on vegetation and hydrological cycles, but also on the planning and safety of the huge gas and oil pipelines that traverse these areas. Sea ice is formed from ocean water that freezes, whether along coasts or to the sea floor (fast ice), floating on the surface (drift ice), or packed together (pack ice). The most important areas of pack ice are the polar ice packs. Because vast amounts of water are added to or removed from the oceans and atmosphere, the behavior of polar ice packs has a significant impact on global changes in climate.
https://earth.esa.int/web/guest/earth-topics/snow-and-ice
4.03125
MARK J. ROZELL The Constitutional Convention of 1787 formally created the American presidency. George Washington put the office into effect. Indeed, Washington was very cognizant of the fact that his actions as president would establish the office and have consequences for his successors. The first president’s own words evidenced how conscious he was of the crucial role he played in determining the makeup of the office of the presidency. He had written to James Madison, ‘‘As the first of everything, in our situation will serve to establish a precedent, it is devoutly wished on my part that these precedents be fixed on true principles.’’ 1 In May 1789 Washington wrote, ‘‘Many things which appear of little importance in themselves and at the beginning, may have great and durable consequences from their having been established at the commencement of a new general government.’’ 2 All presidents experience the burdens of the office. Washington’s burdens were unique in that only he had the responsibility to establish the office in practice. The costs of misjudgments to the future of the presidency were great. The parameters of the president’s powers remained vague when Washington took office. The executive article of the Constitution (Article II) lacked the specificity of the legislative article (Article I), leaving the first occupant of the presidency imperfect guidance on the scope and limits of his authority. Indeed, it may very well have been because Washington was the obvious choice of first occupant of the office that the Constitutional Convention officers left many of the powers of the presidency vague. Willard Sterne Randall writes, ‘‘No doubt no other president would have been trusted with such latitude.’’ 3 Acutely aware of his burdens, Washington set out to exercise his powers prudently yet firmly when necessary. Perhaps Washington’s greatest legacy to the presidency was his substantial
https://www.questia.com/read/101007847/george-washington-and-the-origins-of-the-american
4
Impressionism developed in France in the nineteenth century and is based on the practice of painting out of doors and spontaneously ‘on the spot’ rather than in a studio from sketches. Main impressionist subjects were landscapes and scenes of everyday life. Impressionism was developed by Claude Monet and other Paris-based artists from the early 1860s. (Though the process of painting on the spot can be said to have been pioneered in Britain by John Constable in around 1813–17 through his desire to paint nature in a realistic way). Instead of painting in a studio, the impressionists found that they could capture the momentary and transient effects of sunlight by working quickly, in front of their subjects, in the open air (en plein air) rather than in a studio. This resulted in a greater awareness of light and colour and the shifting pattern of the natural scene. Brushwork became rapid and broken into separate dabs in order to render the fleeting quality of light. The first group exhibition was in Paris in 1874 and included work by Monet, Auguste Renoir, Edgar Degas and Paul Cezanne. The work shown was greeted with derision, with Monet’s Impression, Sunrise particularly singled out for ridicule and giving its name (used by critics as an insult) to the movement. Seven further exhibitions were then held at intervals until 1886. River of dreams In this article, art historian John House and filmmaker Patrick Keiller talk about how London’s light has had an impact on the depiction of its river, referencing the work of Monet, Whistler and Turner. Impressionism at Tate - Take a look at impressionism in Tate’s collection - Or browse the selection of works in the slideshow below Many of the core impressionist artists have had exhibitions at Tate. These online exhibition guides provide an introduction to their work. - Paul Cézanne: an Exhibition of Watercolours (11 April 1946 – 12 May 1946) - Paintings by Cézanne (29 September 1954 – 27 October 1954) - Degas (20 September 1952 – 19 October 1952) - Degas, Sickert and Toulouse-Lautrec (5 October 2005 – 15 January 2006) - Claude Monet (26 September 1957 – 3 November 1957) - Oil Paintings by Camille Pissarro (27 June 1931 – 3 October 1931) - Renoir (25 September 1953 – 25 October 1953) Monet in focus When Monet’s paintings first appeared they must have looked absolutely astonishing…those lurid artificial colours must have seemed as though they had come from out of space or something Beauty, power and space Jeremy Lewison, the curator of Turner Monet Twombly, explores the influence of Turner on Monet’s work. Monet had admired Turner’s paintings in 1871 when he was in London with Camille Pissarro. Curator Jeremy Lewison explains why Monet created a lily pond in his back garden and the sadness behind Monet’s iconic impressionist pieces. Impressionism in context Impressionism reached prominence during the 1870s and 1880s. Watch curator Allison Smith discuss what else was happening at the time in the art world. Impressionism in Britain It’s the picture that made impressionism accepted in England. Caroline Corbeau-Parsons, Assistant Curator, British Art, 1850–1915 Impressionism for kids These blog posts, games and activities are a fun and simple way to introduce impressionism to kids, whether in the classroom or at home. What is impressionism? Who were the key impressionist artists and why was the weather important to them? This Tate Kids blog post answers these important questions. Who is Paul Gauguin? …and why did his travels influence his work?
This piece explains all. Who is Edgar Degas? Did you know Degas was supposed to be a lawyer? This blog post looks at who Degas was and at his famous artworks. Play and create: Kids will love bringing Degas’ Little Dancer to life with this extraordinary game, and can be inspired to use brushstrokes and markers like the impressionists with this airbrush game.
http://www.tate.org.uk/learn/online-resources/glossary/i/impressionism
4.03125
August 7, 2012 Carnegie Airborne Observatory Helps Manage Elephants April Flowers for redOrbit.com - Your Universe Online For years, scientists have debated how big a role elephants play in toppling trees in South African savannas. Now, using some very high tech airborne equipment, they finally have an answer. Tree loss is a natural process, but in some regions it is increasing beyond what could naturally be expected. This extreme tree loss has cascading effects on the habitats of many species. Studying savannas across Kruger National Park, Carnegie scientists have quantitatively determined tree losses for the first time. The team found that elephants, as previously thought, are the primary agents of tree loss. Their browsing habits knock trees over at a rate averaging 6 times higher than in areas that are inaccessible to elephants. The study, published in Ecology Letters, found that elephants prefer trees in the 16 to 30 foot height range, with annual losses of up to 20% at that size. The findings of this study will bolster our understanding of elephant and savanna conservation needs. "Previous field studies gave us important clues that elephants are a key driver of tree losses, but our airborne 3-D mapping approach was the only way to fully understand the impacts of elephants across a wide range of environmental conditions found in savannas," commented lead author Greg Asner of Carnegie's Department of Global Ecology. "Our maps show that elephants clearly toppled medium-sized trees, creating an 'elephant trap' for the vegetation. These elephant-driven tree losses have a ripple effect across the ecosystem, including how much carbon is sequestered from the atmosphere." Previously, researchers used aerial photography and field-based approaches to quantify tree loss and the impact of elephant browsing. This team used Light Detection and Ranging (LiDAR), mounted on the fixed-wing aircraft of the Carnegie Airborne Observatory (CAO). The LiDAR provides detailed 3-D images of the vegetation canopy at tree-level resolution using laser pulses that sweep across the African savanna. The CAO can detect even small changes in individual tree height, and its vast coverage is far superior to previous methods. Working in four study landscapes within Kruger National Park, including very large areas fenced off to prevent herbivore entry, the scientists considered an array of environmental variables. There are six such enclosures, four of which keep out all herbivores larger than a rabbit, and two which allow herbivores smaller than elephants. The team identified and monitored 58,000 individual trees from the air across this landscape in 2008 and again in 2010. They found that nearly 9% of the trees decreased in height in two years. They also mapped treefall changes and linked them to different climate and terrain conditions. Most of the tree loss occurred in lowland areas with higher moisture and on soils high in nutrients that harbor trees preferred by elephants for browsing. Comparison with the herbivore-free enclosures definitively identified elephants, as opposed to other herbivores or fire, as the major agent in tree losses over the two-year study period. "These spatially explicit patterns of treefall highlight the challenges faced by conservation area managers in Africa, who must know where and how their decisions impact ecosystem health and biodiversity.
They should rely on rigorous science to evaluate alternative scenarios and management options, and the CAO helps provide the necessary quantification," commented co-author Shaun Levick. Danie Pienaar, head of scientific services of the South African National Parks remarked, "This collaboration between external scientists and conservation managers has led to exciting and ground-breaking new insights to long-standing questions and challenges. Knowing where increasing elephant impacts occur in sensitive landscapes allows park managers to take appropriate and focused action. These questions have been difficult to assess with conventional ground-based field approaches over large scales such as those in Kruger National Park."
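As an illustration of the kind of analysis described above, the sketch below (my own simplified example, not the Carnegie team's code; the tree counts, height values and 0.5 m tolerance are assumptions) compares two canopy-height measurements per tree and reports the fraction of trees that lost height, split by whether each tree sits inside a herbivore exclosure.

import numpy as np

# Hypothetical per-tree canopy heights (metres) from two LiDAR campaigns,
# plus a flag marking trees inside fenced herbivore exclosures.
rng = np.random.default_rng(0)
n = 58_000
height_2008 = rng.uniform(2.0, 12.0, n)
height_2010 = height_2008 + rng.normal(0.1, 0.4, n)
in_exclosure = rng.random(n) < 0.1

# A tree counts as toppled or reduced if its height dropped by more
# than a small measurement tolerance between the two surveys.
tolerance = 0.5  # metres; an assumed value, not from the study
lost_height = (height_2008 - height_2010) > tolerance

for label, mask in [("open savanna", ~in_exclosure), ("exclosure", in_exclosure)]:
    frac = lost_height[mask].mean()
    print(f"{label}: {frac:.1%} of trees decreased in height")

Comparing the two printed fractions is the essence of the exclosure test: with real data, a markedly higher loss rate outside the fences points to elephants rather than fire or smaller herbivores.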
http://www.redorbit.com/news/science/1112670642/carnegie-airborne-observatory-elephants-080712/
4.1875
In the classical world, large-scale, freestanding statues were among the most highly valued and thoughtfully positioned works of art. Sculpted in the round, and commonly made of bronze or stone, statues embodied human, divine, and mythological beings, as well as animals. Our understanding of where and how they were displayed relies on references in ancient texts and inscriptions, and images on coins, reliefs, vases (08.258.25; 06.1021.230), and wall paintings (03.14.13), as well as on the archaeological remains of monuments and sites. Even in the most carefully excavated and well-preserved locations, bases usually survive without their corresponding statues; dispersed fragments of heads and bodies provide little indication of the visual spectacle of which they once formed a part. Numerous statues exhibited at the Metropolitan Museum, for example, have known histories of display in old European collections, but their ancient contexts can only be conjectured (03.12.13; 03.12.14). Among the earliest Greek statues were images of divinities housed in temples, settings well suited to communicate their religious potency. The Greeks situated these standing or seated figures, which often wore real clothing and held objects associated with their unique powers, on axis with temple entrances for maximum visual impact. By the mid-seventh century B.C., rigidly upright statues in stone, referred to as kouroi (youths) and korai (maidens), marked gravesites and were dedicated to the gods as votive offerings in sanctuaries (32.11.1). Greek sanctuaries were sacred, bounded areas, typically encompassing an altar and one or more temples. Evidence for a range of display contexts for statues is more extensive for the Classical period. Public spaces ornamented with statues included open places of assembly like the Athenian agora (06.311; 26.60.1), temples, altars, gateways, and cemeteries (44.11.2,3). Statues of athletic contest winners were often erected at large sanctuaries such as Olympia and Delphi, or sometimes in the victors’ hometowns (25.78.56). Throughout the Archaic and Classical periods, however, the focus of statue production and display remained the representation and veneration of the gods. From the sixth to fifth centuries B.C., hundreds of statues were erected in honor of Athena on the Athenian Akropolis. Whether in a shrine or temple, or in a public space less overtly religious, such as the Athenian agora, statues of deities were reminders of the influence and special protection of the gods, which permeated all aspects of Greek life. By the end of the fifth century B.C., a few wealthy patrons began to exhibit panel paintings, murals, wall hangings, and mosaics in their houses. Ancient authors are vocal in their condemnation of the private ownership and display of such art objects as inappropriate luxuries (for example, Plato’s Republic, 372 D–373 A), yet pass no judgment on the statuettes these patrons also exhibited at home. The difference in attitude suggests that even in private space, statues retained religious significance. The ideas and conventions governing the exhibition of statues were as reliant on political affairs as they were on religion. Thus the changes in state formation set in motion by Alexander the Great’s conquest of the eastern Mediterranean brought about new and important developments in statue display. 
During the Hellenistic period, portrait statuary provided a means of communicating across great distances both the concept of government by a single ruler and the particular identities of Hellenistic dynasts. These portraits, which blend together traditional, idealized features with particularized details that promote individual recognition, were a prominent feature of sanctuaries dedicated to ruler cults (2002.66). Elaborate victory monuments showcased statues of both triumphant (2003.407.7) and defeated warriors. Well-preserved examples of such monuments have been discovered at Pergamon, in the northwestern region of modern-day Turkey. For the first time, nonidealized human forms, including the elderly and infirm, became popular subjects for large-scale sculptures. Statues of this kind were offered as votives in temples and sanctuaries (09.39). The extensive collections housed within the palaces of Hellenistic dynasts became influential models for generals and politicians in far-away Rome, who coveted such displays of power, prestige, and cultural sophistication. During the early Roman Republic, the principal types of statue display were divinities enshrined in temples and other images of gods taken as spoils of war from the neighboring communities that Rome fought in battle. The latter were exhibited in public spaces alongside commemorative portraits. Roman portraiture yielded two major sculptural innovations: “verism” and the portrait bust. Both probably had their origins in the funerary practices of the Roman nobility, who displayed death masks of their ancestors at home in their atria and paraded them through the city on holidays. Initially, only elected officials and former elected officials were allowed the honor of having their portrait statues occupy public spaces. As is clear from many of the inscriptions accompanying portrait statues, which assert that they should be erected in prominent places, location was of crucial importance. Civic buildings such as council houses and public libraries were enviable locations for display. Statues of the most esteemed individuals were on view by the rostra or speaker’s platform in the Roman Forum. In addition to contemporary statesmen, the subjects of Roman portraiture also included great men of the past, philosophers and writers, and mythological figures associated with particular sites. Beginning in the third century B.C., victorious Roman generals during the conquest of Magna Graecia (present-day southern Italy and Sicily) and the Greek East brought back with them not just works of art, but also exposure to elaborate Hellenistic architectural environments, which they desired to emulate. If granted a triumph by the senate, generals constructed and consecrated public buildings to commemorate their conquests and house the spoils. By the late Republic, statues adorned basilicas, sanctuaries and shrines, temples, theaters, and baths. As individuals became increasingly enriched through the process of conquest and empire, statues also became an important means of conveying wealth and sophistication in the private sphere: sculptural displays filled the gardens and porticoes of urban houses and country villas (09.39; 1992.11.71). The fantastical vistas depicted in luxurious domestic wall paintings included images of statues as well (03.14.13). After the transition from republic to empire, the opportunity to undertake large-scale public building projects in the city of Rome was all but limited to members of the imperial family. 
Augustus, however, initiated a program of construction that created many more locations for the display of statues. In the Forum of Augustus, the historical heroes of the Republic appeared alongside representations of Augustus (07.286.115) and his human, legendary, and divine ancestors. Augustus publicized his own image to an extent previously unimaginable, through official portrait statues and busts, as well as images on coins. Statues of Augustus and the subsequent emperors were copied and exhibited throughout the empire. Wealthy citizens incorporated features of imperial portraiture into statues of themselves (14.130.1). Roman governors were honored by portrait statues in provincial cities and sanctuaries. The most numerous and finely crafted portraits that survive from the imperial period, however, portray the emperors and their families (26.229). Summoning up the image of a “forest” of statues or a second “population” within the city, the sheer number of statues on display in imperial Rome dwarfs anything seen before or since. Many of the types of statues used in Roman decoration are familiar from the Greek and Hellenistic past: these include portraits of Hellenistic kings and Greek intellectuals, as well as so-called ideal or idealizing figures that represent divinities, mythological figures, heroes, and athletes. The relationship of such statues to Greek models varies from work to work. A number of those displayed in prestigious locations in Rome were transplanted Greek masterpieces, such as the Venus sculpted by Praxiteles in the fourth century B.C. for the inhabitants of the Greek island of Cos, which was set up in Rome’s Temple of Peace, a museumlike structure set aside for the display of art. More often, however, the relationship to an original is one of either close copying or eclectic and inventive adaptation. Some of these copies and adaptations were genuine imports, but many others were made locally by foreign, mainly Greek, craftsmen. A means of display highly characteristic of the Roman empire was the arrangement of statues in tiers of niches adorning public buildings, including baths (03.12.13; 03.12.14), theaters, and amphitheaters. Several of the most impressive surviving statuary displays come from ornamental facades constructed in the Eastern provinces (Library of Celsus, Ephesus, Turkey). Bolstered by wealth drawn from around the Mediterranean, the imperial families established their own palace culture that was later emulated by kings and emperors throughout Europe. Exemplary of the lavish sculptural displays that decorated imperial residences is the statuary spectacle inside a cave employed as a summer dining room of the palace of Tiberius at Sperlonga, on the southern coast of Italy. Visitors to this cave were confronted with a panoramic view of groups of full-scale statues reenacting episodes from Odysseus’ legendary travels. Few statues from antiquity have survived both in situ and intact, but the evidence suggests an ever-changing and expanding range of contexts for their display. The exhibition of statues in the Greek and Roman Galleries of the Metropolitan Museum allows them to be seen in close proximity to one another and exploits the capacity of natural light to reveal varying aspects of their beauty over the course of each day. Nichols, Marden. “Contexts for the Display of Statues in Classical Antiquity.” In Heilbrunn Timeline of Art History. New York: The Metropolitan Museum of Art, 2000–. 
http://www.metmuseum.org/toah/hd/disp/hd_disp.htm (April 2010)
http://metmuseum.org/toah/hd/disp/hd_disp.htm
4
This web page contains three interactive tutorials for secondary learners on the common chemicals and molecular compounds found in everyday life. The first tutorial, House and Garden is appropriate for Grades 4-6. The second, Do You Know Your Molecules, is an interactive problem set for Grades 6-9. The last tutorial, Symmetry and Point Groups, is targeted to high school chemistry and preparatory chemistry courses. Reciprocal Net is a database of information about molecular structures. The project involves research scientists from a number of universities who collaborate to provide educators, students, and the general public with learning tools related to crystallography, chemistry, and biochemistry. biochemicals, chemical symmetry, chemistry tutorial, compounds, crystallography, molecular structure, molecular structure tutorial, molecule, point groups Metadata instance created August 19, 2011 by Caroline Hall Last Update when Cataloged: December 12, 2009 AAAS Benchmark Alignments (2008 Version) 4. The Physical Setting 4D. The Structure of Matter 3-5: 4D/E4a. When a new material is made by combining two or more materials, it has properties that are different from the original materials. 3-5: 4D/E6. All materials have certain physical properties, such as strength, hardness, flexibility, durability, resistance to water and fire, and ease of conducting heat. 6-8: 4D/M1a. All matter is made up of atoms, which are far too small to see directly through a microscope. 6-8: 4D/M1cd. Atoms may link together in well-defined molecules, or may be packed together in crystal patterns. Different arrangements of atoms into groups compose all substances and determine the characteristic properties of substances. 6-8: 4D/M5. Chemical elements are those substances that do not break down during normal laboratory reactions involving such treatments as heating, exposure to electric current, or reaction with acids. All substances from living and nonliving things can be broken down to a set of about 100 elements, but since most elements tend to combine with others, few elements are found in their pure form. 6-8: 4D/M6c. Carbon and hydrogen are common elements of living matter. 6-8: 4D/M10. A substance has characteristic properties such as density, a boiling point, and solubility, all of which are independent of the amount of the substance and can be used to identify it. 6-8: 4D/M11. Substances react chemically in characteristic ways with other substances to form new substances with different characteristic properties. 9-12: 4D/H7b. An enormous variety of biological, chemical, and physical phenomena can be explained by changes in the arrangement and motion of atoms and molecules. 9-12: 4D/H8. The configuration of atoms in a molecule determines the molecule's properties. Shapes are particularly important in how large molecules interact with others.
http://www.thephysicsfront.org/items/detail.cfm?ID=11407
4.25
Usually, when we talk about force, there is more than one force involved, and these forces are applied in different directions. Let's look at a diagram of a car. When the car is sitting still, gravity exerts a downward force on the car (this force acts everywhere on the car, but for simplicity, we can draw the force at the car's center of mass). But the ground exerts an equal and opposite upward force on the tires, so the car does not move. Figure 1. Animation of forces on a car. When the car begins to accelerate, some new forces come into play. The rear wheels exert a force against the ground in a horizontal direction; this makes the car start to accelerate. When the car is moving slowly, almost all of the force goes into accelerating the car. The car resists this acceleration with a force that is equal to its mass multiplied by its acceleration. You can see in Figure 1 how the force arrow starts out large because the car accelerates rapidly at first. As it starts to move, the air exerts a force against the car, which grows larger as the car gains speed. This aerodynamic drag force acts in the opposite direction of the force of the tires, which is propelling the car, so it subtracts from that force, leaving less force available for acceleration. Eventually, the car will reach its top speed, the point at which it cannot accelerate any more. At this point, the driving force is equal to the aerodynamic drag, and no force is left over to accelerate the car.
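To make the force balance concrete, here is a small Python sketch (my own illustration, not from the article; the mass, driving force and drag constant are made-up round numbers) that steps a car forward in time with a constant driving force and a drag force proportional to the square of speed, showing the speed levelling off where drag equals the driving force.

# Net force on the car: constant driving force minus aerodynamic drag (~ k * v^2).
# Acceleration follows Newton's second law, a = F_net / m.
mass = 1500.0          # kg (assumed)
drive_force = 2000.0   # N (assumed constant for simplicity)
drag_coeff = 0.8       # N per (m/s)^2 (assumed lumped drag constant)

v = 0.0
dt = 0.1
for step in range(3000):
    drag = drag_coeff * v**2
    a = (drive_force - drag) / mass
    v += a * dt

# At top speed, drive_force == drag_coeff * v^2, so v_top = sqrt(drive_force / drag_coeff).
print(f"simulated speed after {3000 * dt:.0f} s: {v:.1f} m/s")
print(f"predicted top speed: {(drive_force / drag_coeff) ** 0.5:.1f} m/s")

With these assumed numbers the simulated speed approaches the predicted top speed of 50 m/s, the point where no force is left over for acceleration.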
http://auto.howstuffworks.com/auto-parts/towing/towing-capacity/information/fpte3.htm
4.34375
A gravitational lens refers to a distribution of matter (such as a cluster of galaxies) between a distant source and an observer, that is capable of bending the light from the source, as it travels towards the observer. This effect is known as gravitational lensing and the amount of bending is one of the predictions of Albert Einstein's general theory of relativity. (Classical physics also predicts bending of light, but only half that of general relativity's.) Although Orest Khvolson (1924) or Frantisek Klin (1936) are sometimes credited as being the first ones to discuss the effect in print, the effect is more commonly associated with Einstein, who published a more famous article on the subject in 1936. Fritz Zwicky posited in 1937 that the effect could allow galaxy clusters to act as gravitational lenses. It was not until 1979 that this effect was confirmed by observation of the so-called "Twin QSO" SBS 0957+561. Unlike an optical lens, maximum 'bending' occurs closest to, and minimum 'bending' furthest from, the center of a gravitational lens. Consequently, a gravitational lens has no single focal point, but a focal line instead. If the (light) source, the massive lensing object, and the observer lie in a straight line, the original light source will appear as a ring around the massive lensing object. If there is any misalignment the observer will see an arc segment instead. This phenomenon was first mentioned in 1924 by the St. Petersburg physicist Orest Chwolson, and quantified by Albert Einstein in 1936. It is usually referred to in the literature as an Einstein ring, since Chwolson did not concern himself with the flux or radius of the ring image. More commonly, where the lensing mass is complex (such as a galaxy group or cluster) and does not cause a spherical distortion of space–time, the source will resemble partial arcs scattered around the lens. The observer may then see multiple distorted images of the same source; the number and shape of these depending upon the relative positions of the source, lens, and observer, and the shape of the gravitational well of the lensing object. There are three classes of gravitational lensing: 1. Strong lensing: where there are easily visible distortions such as the formation of Einstein rings, arcs, and multiple images. 2. Weak lensing: where the distortions of background sources are much smaller and can only be detected by analyzing large numbers of sources to find coherent distortions of only a few percent. The lensing shows up statistically as a preferred stretching of the background objects perpendicular to the direction to the center of the lens. By measuring the shapes and orientations of large numbers of distant galaxies, their orientations can be averaged to measure the shear of the lensing field in any region. This, in turn, can be used to reconstruct the mass distribution in the area: in particular, the background distribution of dark matter can be reconstructed. Since galaxies are intrinsically elliptical and the weak gravitational lensing signal is small, a very large number of galaxies must be used in these surveys. These weak lensing surveys must carefully avoid a number of important sources of systematic error: the intrinsic shape of galaxies, the tendency of a camera's point spread function to distort the shape of a galaxy and the tendency of atmospheric seeing to distort images must be understood and carefully accounted for. 
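The statistical idea behind weak lensing described above can be illustrated with a toy calculation. The Python sketch below (a simplified illustration, not a real lensing pipeline; it ignores the PSF and atmospheric effects just mentioned) draws many galaxies with random intrinsic ellipticities, applies a small constant shear, and recovers that shear by averaging, showing why very large numbers of galaxies are needed.

import numpy as np

rng = np.random.default_rng(1)
n_gal = 100_000
true_shear = 0.02  # assumed constant shear along one axis

# Intrinsic galaxy ellipticities: random orientations, typical magnitude ~0.2,
# expressed as the two ellipticity components (e1, e2).
mag = rng.rayleigh(0.2, n_gal).clip(max=0.9)
angle = rng.uniform(0, np.pi, n_gal)
e1 = mag * np.cos(2 * angle)
e2 = mag * np.sin(2 * angle)

# To first order, the observed ellipticity is the intrinsic ellipticity plus the shear.
e1_obs = e1 + true_shear

# Intrinsic orientations average out, so the mean observed e1 estimates the shear.
est = e1_obs.mean()
err = e1_obs.std() / np.sqrt(n_gal)
print(f"estimated shear: {est:.4f} +/- {err:.4f} (true value {true_shear})")

Because the intrinsic ellipticities are an order of magnitude larger than the shear, only the average over roughly a hundred thousand galaxies pins down the few-percent distortion.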
The results of these surveys are important for cosmological parameter estimation, to better understand and improve upon the Lambda-CDM model, and to provide a consistency check on other cosmological observations. They may also provide an important future constraint on dark energy. 3. Microlensing: where no distortion in shape can be seen but the amount of light received from a background object changes in time. The lensing object may be stars in the Milky Way in one typical case, with the background source being stars in a remote galaxy, or, in another case, an even more distant quasar. The effect is small, such that (in the case of strong lensing) even a galaxy with a mass more than 100 billion times that of the Sun will produce multiple images separated by only a few arcseconds. Galaxy clusters can produce separations of several arcminutes. In both cases the galaxies and sources are quite distant, many hundreds of megaparsecs away from our Galaxy. Gravitational lenses act equally on all kinds of electromagnetic radiation, not just visible light. Weak lensing effects are being studied for the cosmic microwave background as well as galaxy surveys. Strong lenses have been observed in radio and x-ray regimes as well. If a strong lens produces multiple images, there will be a relative time delay between two paths: that is, in one image the lensed object will be observed before the other image. Henry Cavendish in 1784 (in an unpublished manuscript) and Johann Georg von Soldner in 1801 (published in 1804) had pointed out that Newtonian gravity predicts that starlight will bend around a massive object as had already been supposed by Isaac Newton in 1704 in his famous Queries No.1 in his book Opticks. The same value as Soldner's was calculated by Einstein in 1911 based on the equivalence principle alone. However, Einstein noted in 1915 in the process of completing general relativity, that his (and thus Soldner's) 1911-result is only half of the correct value. Einstein became the first to calculate the correct value for light bending. The first observation of light deflection was performed by noting the change in position of stars as they passed near the Sun on the celestial sphere. The observations were performed in May 1919 by Arthur Eddington and his collaborators during a total solar eclipse, so that the stars near the Sun could be observed. Observations were made simultaneously in the cities of Sobral, Ceará, Brazil and in São Tomé and Príncipe on the west coast of Africa. The result was considered spectacular news and made the front page of most major newspapers. It made Einstein and his theory of general relativity world-famous. When asked by his assistant what his reaction would have been if general relativity had not been confirmed by Eddington and Dyson in 1919, Einstein famously made the quip: "Then I would feel sorry for the dear Lord. The theory is correct anyway." Spacetime around a massive object (such as a galaxy cluster or a black hole) is curved, and as a result light rays from a background source (such as a galaxy) propagating through spacetime are bent. The lensing effect can magnify and distort the image of the background source. According to general relativity, mass "warps" space–time to create gravitational fields and therefore bend light as a result. This theory was confirmed in 1919 during a solar eclipse, when Arthur Eddington and Frank Watson Dyson observed the light from stars passing close to the Sun was slightly bent, so that stars appeared slightly out of position. 
Einstein realized that it was also possible for astronomical objects to bend light, and that under the correct conditions, one would observe multiple images of a single source, called a gravitational lens or sometimes a gravitational mirage. However, as he only considered gravitational lensing by single stars, he concluded that the phenomenon would most likely remain unobserved for the foreseeable future. In 1937, Fritz Zwicky first considered the case where a galaxy (which he called a 'nebula' at that time) could act as a lens, something that according to his calculations should be well within the reach of observations. It was not until 1979 that the first gravitational lens would be discovered. It became known as the "Twin QSO" since it initially looked like two identical quasistellar objects; it is officially named SBS 0957+561. This gravitational lens was discovered by Dennis Walsh, Bob Carswell, and Ray Weymann using the Kitt Peak National Observatory 2.1 meter telescope. In the 1980s, astronomers realized that the combination of CCD imagers and computers would allow the brightness of millions of stars to be measured each night. In a dense field, such as the galactic center or the Magellanic clouds, many microlensing events per year could potentially be found. This led to efforts such as the Optical Gravitational Lensing Experiment, or OGLE, which has characterized hundreds of such events. Explanation in terms of space–time curvature In general relativity, light follows the curvature of spacetime, hence when light passes around a massive object, it is bent. This means that the light from an object on the other side will be bent towards an observer's eye, just like an ordinary lens. Since light always moves at a constant speed, lensing changes the direction of the velocity of the light, but not the magnitude. Light rays are the boundary between the future, the spacelike, and the past regions. The gravitational attraction can be viewed as the motion of undisturbed objects in a background curved geometry or alternatively as the response of objects to a force in a flat geometry. The angle of deflection is θ = 4GM / (rc²) toward the mass M, at a distance r of the affected radiation from the mass, where G is the universal constant of gravitation and c is the speed of light in a vacuum. Since the Schwarzschild radius is defined as r_s = 2GM / c², this can also be expressed in the simple form θ = 2r_s / r. Search for gravitational lenses Most of the gravitational lenses in the past have been discovered accidentally. A search for gravitational lenses in the northern hemisphere (Cosmic Lens All Sky Survey, CLASS), done in radio frequencies using the Very Large Array (VLA) in New Mexico, led to the discovery of 22 new lensing systems, a major milestone. This has opened a whole new avenue for research ranging from finding very distant objects to finding values for cosmological parameters so we can understand the universe better. A similar search in the southern hemisphere would be a very good step towards complementing the northern hemisphere search as well as obtaining other objectives for study. If such a search is done using a well-calibrated and well-parameterized instrument and data, a result similar to the northern survey can be expected. The Australia Telescope 20 GHz (AT20G) Survey data, collected using the Australia Telescope Compact Array (ATCA), stands to be such a collection of data. Because the data were collected using the same instrument, maintaining a very stringent data quality, good results can be expected from the search.
The AT20G survey is a blind survey at 20 GHz frequency in the radio domain of the electromagnetic spectrum. Due to the high frequency used, the chance of finding gravitational lenses increases, because the relative number of compact-core objects (e.g. quasars) is higher (Sadler et al. 2006). This is important, as lensing is easier to detect and identify in simple objects than in objects with complexity in them. This search involves the use of interferometric methods to identify candidates and follow them up at higher resolution to confirm them. Full details of the project are currently being prepared for publication. In a 2009 article on Science Daily, a team of scientists led by a cosmologist from the U.S. Department of Energy's Lawrence Berkeley National Laboratory reported major progress in extending the use of gravitational lensing to the study of much older and smaller structures than was previously possible, stating that weak gravitational lensing improves measurements of distant galaxies. Astronomers from the Max Planck Institute for Astronomy in Heidelberg, Germany, in results accepted for publication on Oct 21, 2013 in the Astrophysical Journal Letters (arXiv.org), discovered what at the time was the most distant gravitational lens galaxy, termed J1000+0221, using NASA’s Hubble Space Telescope. While it remains the most distant quad-image lensing galaxy known, an even more distant two-image lensing galaxy was subsequently discovered by an international team of astronomers using a combination of Hubble Space Telescope and Keck telescope imaging and spectroscopy. The discovery and analysis of the IRC 0218 lens was published in the Astrophysical Journal Letters on June 23, 2014. Research published Sep 30, 2013 in the online edition of Physical Review Letters, led by McGill University in Montreal, Québec, Canada, reported the detection of B-modes that are formed by the gravitational lensing effect, using the National Science Foundation's South Pole Telescope and with help from the Herschel space observatory. This discovery opens up the possibility of testing the theories of how our universe originated. Solar gravitational lens Albert Einstein predicted in 1936 that rays of light from the same direction that skirt the edges of the Sun would converge to a focal point approximately 542 AU from the Sun. Thus, the Sun could act as a gravitational lens for magnifying distant objects, in a way that provides some flexibility in aiming, unlike the chance alignments required to use more distant objects, such as intervening galaxies, as lenses. A probe's location could shift around as needed to select different targets relative to the Sun (acting as a lens). This distance is far beyond the progress and equipment capabilities of space probes such as Voyager 1, and beyond the known planets and dwarf planets, though over thousands of years 90377 Sedna will move further away on its highly elliptical orbit. The high gain for potentially detecting signals through this lens, such as microwaves at the 21-cm hydrogen line, led to the suggestion by Frank Drake in the early days of SETI that a probe could be sent to this distance. A multipurpose probe, SETISAIL and later FOCAL, was proposed to ESA in 1993, but the mission is expected to be a difficult task. If a probe does pass 542 AU, the gain and image-forming capabilities of the lens will continue to improve at further distances, as the rays that come to a focus at these distances pass further away from the distortions of the Sun's corona.
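As a quick numerical check of the deflection formula given earlier and of the 542 AU figure quoted above, the short Python sketch below (an illustration using rounded physical constants, not part of the source article) computes the deflection of a light ray grazing the solar limb and the corresponding minimum focal distance R/θ; it gives roughly 1.75 arcseconds and on the order of 550 AU, consistent with the numbers in the text.

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.957e8      # solar radius, m (closest approach for a grazing ray)
AU = 1.496e11        # astronomical unit, m

theta = 4 * G * M_sun / (R_sun * c**2)   # deflection angle in radians
focal = R_sun / theta                    # small-angle estimate of the focal distance

arcsec = math.degrees(theta) * 3600
print(f"deflection at the solar limb: {arcsec:.2f} arcsec")
print(f"minimum focal distance: {focal / AU:.0f} AU")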
Measuring weak lensing Kaiser et al. (1995), Luppino & Kaiser (1997) and Hoekstra et al. (1998) prescribed a method to invert the effects of the Point Spread Function (PSF) smearing and shearing, recovering a shear estimator uncontaminated by the systematic distortion of the PSF. This method (KSB+) is the most widely used method in current weak lensing shear measurements. Galaxies have random rotations and inclinations. As a result, the shear effects in weak lensing need to be determined by statistically preferred orientations. The primary source of error in lensing measurement is due to the convolution of the PSF with the lensed image. The KSB method measures the ellipticity of a galaxy image. The shear is proportional to the ellipticity. The objects in lensed images are parameterized according to their weighted quadrupole moments. For a perfect ellipse, the weighted quadrupole moments are related to the weighted ellipticity. KSB calculate how a weighted ellipticity measure is related to the shear and use the same formalism to remove the effects of the PSF. KSB’s primary advantages are its mathematical ease and relatively simple implementation. However, KSB is based on a key assumption that the PSF is circular with an anisotropic distortion. It’s fine for current cosmic shear surveys, but the next generation of surveys (e.g. LSST) may need much better accuracy than KSB can provide. Because during that time, the statistical errors from the data are negligible, the systematic errors will dominate. Historical papers and references - Chwolson, O (1924). "Über eine mögliche Form fiktiver Doppelsterne". Astronomische Nachrichten 221 (20): 329–330. Bibcode:1924AN....221..329C. doi:10.1002/asna.19242212003. - Einstein, Albert (1936). "Lens-like Action of a Star by the Deviation of Light in the Gravitational Field". Science 84 (2188): 506–7. Bibcode:1936Sci....84..506E. doi:10.1126/science.84.2188.506. JSTOR 1663250. PMID 17769014. - Renn, Jürgen; Tilman Sauer; John Stachel (1997). "The Origin of Gravitational Lensing: A Postscript to Einstein's 1936 Science paper". Science 275 (5297): 184–6. Bibcode:1997Sci...275..184R. doi:10.1126/science.275.5297.184. PMID 8985006. - Drakeford, Jason; Corum, Jonathan; Overbye, Dennis (March 5, 2015). "Einstein’s Telescope - video (02:32)". New York Times. Retrieved December 27, 2015. - Overbye, Dennis (March 5, 2015). "Astronomers Observe Supernova and Find They’re Watching Reruns". New York Times. Retrieved March 5, 2015. - Cf. Kennefick 2005 for the classic early measurements by the Eddington expeditions; for an overview of more recent measurements, see Ohanian & Ruffini 1994, ch. 4.3. For the most precise direct modern observations using quasars, cf. Shapiro et al. 2004 - Gravity Lens – Part 2 (Great Moments in Science, ABS Science) - Dieter Brill, "Black Hole Horizons and How They Begin", Astronomical Review (2012); Online Article, cited Sept.2012. - Melia, Fulvio (2007). The Galactic Supermassive Black Hole. Princeton University Press. pp. 255–256. ISBN 0-691-13129-5. - Soldner, J. G. V. (1804). "On the deflection of a light ray from its rectilinear motion, by the attraction of a celestial body at which it nearly passes by". Berliner Astronomisches Jahrbuch: 161–172. - Newton, Isaac (1998). Opticks: or, a treatise of the reflexions, refractions, inflexions and colours of light. Also two treatises of the species and magnitude of curvilinear figures. Commentary by Nicholas Humez (Octavo ed.). Palo Alto, Calif.: Octavo. ISBN 1-891788-04-3. 
(Opticks was originally published in 1704). - Will, C.M. (2006). "The Confrontation between General Relativity and Experiment". Living Rev. Relativity 9: 39. arXiv:gr-qc/0510072. Bibcode:2006LRR.....9....3W. doi:10.12942/lrr-2006-3. - Dyson, F. W.; Eddington, A. S.; Davidson C. (1920). "A determination of the deflection of light by the Sun's gravitational field, from observations made at the total eclipse of 29 May 1919". Philosophical Transactions of the Royal Society 220A: 291–333. Bibcode:1920RSPTA.220..291D. doi:10.1098/rsta.1920.0009. - Stanley, Matthew (2003). "'An Expedition to Heal the Wounds of War': The 1919 Eclipse and Eddington as Quaker Adventurer". Isis 94 (1): 57–89. doi:10.1086/376099. PMID 12725104. - Rosenthal-Schneider, Ilse: Reality and Scientific Truth. Detroit: Wayne State University Press, 1980. p 74. (See also Calaprice, Alice: The New Quotable Einstein. Princeton: Princeton University Press, 2005. p 227.) - Dyson, F. W.; Eddington, A. S.; Davidson, C. (1 January 1920). "A Determination of the Deflection of Light by the Sun's Gravitational Field, from Observations Made at the Total Eclipse of May 29, 1919". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 220 (571-581): 291–333. Bibcode:1920RSPTA.220..291D. doi:10.1098/rsta.1920.0009. - F. Zwicky (1937). "Nebulae as Gravitational lenses" (PDF). Physical Review 51 (4): 290. doi:10.1103/PhysRev.51.290. - Walsh, D.; Carswell, R. F.; Weymann, R. J. (31 May 1979). "0957 + 561 A, B: twin quasistellar objects or gravitational lens?". Nature 279 (5712): 381–384. Bibcode:1979Natur.279..381W. doi:10.1038/279381a0. PMID 16068158. - Cosmology: Weak gravitational lensing improves measurements of distant galaxies - Sci-News.com (21 Oct 2013). "Most Distant Gravitational Lens Discovered". Sci-News.com. Retrieved 22 October 2013. - van der Wel, A.; et al. (2013). "Discovery of a Quadruple Lens in CANDELS with a Record Lens Redshift". ApJ Letters 777: L17. arXiv:1309.2826. Bibcode:2013ApJ...777L..17V. doi:10.1088/2041-8205/777/1/L17. - Wong, K.; et al. (2014). "Discovery of a Strong Lensing Galaxy Embedded in a Cluster at z = 1.62". ApJ Letters 789: L31. arXiv:1405.3661. Bibcode:2014ApJ...789L..31W. doi:10.1088/2041-8205/789/2/L31. - NASA/Jet Propulsion Laboratory (October 22, 2013). "Long-sought pattern of ancient light detected". ScienceDaily. Retrieved October 23, 2013. - Hanson, D.; et al. (Sep 30, 2013). "Detection of B-Mode Polarization in the Cosmic Microwave Background with Data from the South Pole Telescope". Physical Review Letters. 14 111. arXiv:1307.5830. Bibcode:2013PhRvL.111n1301H. doi:10.1103/PhysRevLett.111.141301. - Clavin, Whitney; Jenkins, Ann; Villard, Ray (7 January 2014). "NASA's Hubble and Spitzer Team up to Probe Faraway Galaxies". NASA. Retrieved 8 January 2014. - Chou, Felecia; Weaver, Donna (16 October 2014). "RELEASE 14-283 - NASA’s Hubble Finds Extremely Distant Galaxy through Cosmic Magnifying Glass". NASA. Retrieved 17 October 2014. - "Lens-Like Action of a Star by the Deviation of Light in the Gravitational Field". Science 84 (2188): 506–507. 1936. Bibcode:1936Sci....84..506E. doi:10.1126/science.84.2188.506. PMID 17769014. - Claudio Maccone (2009). Deep Space Flight and Communications: Exploiting the Sun as a Gravitational Lens. Springer. - Kaiser, Nick; Squires, Gordon; Broadhurst, Tom (August 1995). "A Method for Weak Lensing Observations". The Astrophysical Journal 449: 460. arXiv:astro-ph/9411005. Bibcode:1995ApJ...449..460K. doi:10.1086/176071. 
- Luppino, G. A.; Kaiser, Nick (20 January 1997). "Detection of Weak Lensing by a Cluster of Galaxies at z = 0.83". The Astrophysical Journal 475 (1): 20–28. arXiv:astro-ph/9601194. Bibcode:1997ApJ...475...20L. doi:10.1086/303508. - Loff, Sarah; Dunbar, Brian (February 10, 2015). "Hubble Sees A Smiling Lens". NASA. Retrieved February 10, 2015. - "Most distant gravitational lens helps weigh galaxies". ESA/Hubble Press Release. Retrieved 18 October 2013. - "ALMA Rewrites History of Universe's Stellar Baby Boom". ESO. Retrieved 2 April 2013. - "Accidental Astrophysicists". Science News, June 13, 2008. - "XFGLenses". A Computer Program to visualize Gravitational Lenses, Francisco Frutos-Alfaro - "G-LenS". A Point Mass Gravitational Lens Simulation, Mark Boughen. - Newbury, Pete, "Gravitational Lensing". Institute of Applied Mathematics, The University of British Columbia. - Cohen, N., "Gravity's Lens: Views of the New Cosmology", Wiley and Sons, 1988. - "Q0957+561 Gravitational Lens". Harvard.edu. - "Gravitational lensing". Gsfc.nasa.gov. - Bridges, Andrew, "Most distant known object in universe discovered". Associated Press. February 15, 2004. (Farthest galaxy found by gravitational lensing, using Abell 2218 and Hubble Space Telescope.) - Analyzing Corporations ... and the Cosmos An unusual career path in gravitational lensing. - "HST images of strong gravitational lenses". Harvard-Smithsonian Center for Astrophysics. - "A planetary microlensing event" and "A Jovian-mass Planet in Microlensing Event OGLE-2005-BLG-071", the first extra-solar planet detections using microlensing. - Gravitational lensing on arxiv.org - NRAO CLASS home page - AT20G survey - A diffraction limit on the gravitational lens effect (Bontz, R. J. and Haugan, M. P. "Astrophysics and Space Science" vol. 78, no. 1, p. 199-210. August 1981)
Further reading
- Blandford & Narayan; Narayan, R (1992). "Cosmological applications of gravitational lensing". ARA&A 30 (1): 311–358. Bibcode:1992ARA&A..30..311B. doi:10.1146/annurev.aa.30.090192.001523. - Matthias Bartelmann and Peter Schneider (2000-08-17). "Weak Gravitational Lensing" (PDF). - Khavinson, Dmitry; Neumann, Genevra (June–July 2008). "From Fundamental Theorem of Algebra to Astrophysics: A "Harmonious" Path" (PDF). Notices (AMS) 55 (6): 666–675. - Petters, Arlie O.; Levine, Harold; Wambsganss, Joachim (2001). Singularity Theory and Gravitational Lensing. Progress in Mathematical Physics 21. Birkhäuser. - Tools for the evaluation of the possibilities of using parallax measurements of gravitationally lensed sources (Stein Vidar Hagfors Haugan. June 2008) - Video: Evalyn Gates – Einstein's Telescope: The Search for Dark Matter and Dark Energy in the Universe, presentation in Portland, Oregon, on April 19, 2009, from the author's recent book tour. - Audio: Fraser Cain and Dr. Pamela Gay – Astronomy Cast: Gravitational Lensing, May 2007
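Returning to the "Measuring weak lensing" section above, the quadrupole-moment step can be illustrated with a toy calculation. The Python sketch below is entirely illustrative and is not the KSB+ pipeline: the Gaussian weight, the synthetic "galaxy" image, and the normalisation are my own choices. It builds an elliptical Gaussian source, computes its weighted quadrupole moments, and converts them to the two ellipticity components from which shear estimators are built.

```python
import numpy as np

def weighted_ellipticity(image, sigma_w):
    """Weighted quadrupole moments -> ellipticity components (e1, e2)."""
    ny, nx = image.shape
    y, x = np.mgrid[0:ny, 0:nx]
    total = image.sum()
    xc, yc = (x * image).sum() / total, (y * image).sum() / total   # centroid
    dx, dy = x - xc, y - yc
    w = np.exp(-(dx**2 + dy**2) / (2 * sigma_w**2))   # circular Gaussian weight
    wi = w * image
    norm = wi.sum()
    Qxx = (wi * dx**2).sum() / norm
    Qyy = (wi * dy**2).sum() / norm
    Qxy = (wi * dx * dy).sum() / norm
    denom = Qxx + Qyy
    return (Qxx - Qyy) / denom, 2 * Qxy / denom

# Toy elliptical Gaussian "galaxy", elongated along the x-axis.
ny = nx = 64
y, x = np.mgrid[0:ny, 0:nx]
galaxy = np.exp(-(((x - 32) / 6.0)**2 + ((y - 32) / 4.0)**2) / 2)

e1, e2 = weighted_ellipticity(galaxy, sigma_w=8.0)
print(f"e1 = {e1:+.3f}, e2 = {e2:+.3f}")   # e1 > 0: elongation along the x-axis
```

Roughly speaking, a real KSB measurement applies the same moments to stars in order to characterise the PSF, then corrects the galaxy ellipticities with the smear and shear polarisability terms before averaging many galaxies into a shear estimate.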
https://en.wikipedia.org/wiki/Gravitational_lensing
4.03125
The Plateau area extended from above the Canadian border through the plateau and mountain area of the Rocky Mts. to the Southwest and included much of California. Typical tribes were the Spokan, the Paiute, the Nez Percé, and the Shoshone. This was an area of great linguistic diversity. Because of the inhospitable environment the cultural development was generally low. The Native Americans in the Central Valley of California and on the California coast, notably the Pomo, were sedentary peoples who gathered edible plants, roots, and fruit and also hunted small game. Their acorn bread, made by pounding acorns into meal and then leaching it with hot water, was distinctive, and they cooked in baskets filled with water and heated by hot stones. Living in brush shelters or more substantial lean-tos, they had partly buried earth lodges for ceremonies and ritual sweat baths. Basketry, coiled and twined, was highly developed. To the north, between the Cascade Range and the Rocky Mts., the social, political, and religious systems were simple, and art was nonexistent. The Native Americans there underwent (c.1730) a great cultural change when they obtained from the Plains Indians the horse, the tepee, a form of the sun dance, and deerskin clothes. They continued, however, to fish for salmon with nets and spears and to gather camas bulbs. They also gathered ants and other insects and hunted small game and, in later times, buffalo. Their permanent winter villages on waterways had semisubterranean lodges with conical roofs; a few Native Americans lived in bark-covered long houses. The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
http://www.factmonster.com/encyclopedia/society/natives-north-american-the-plateau-area.html
4.15625
Heliotropism, a form of tropism, is the diurnal motion or seasonal motion of plant parts (flowers or leaves) in response to the direction of the sun. The habit of some plants to move in the direction of the sun was already known by the Ancient Greeks. They named one of those plants after that property Heliotropium, meaning sun turn. The Greeks assumed it to be a passive effect, presumably the loss of fluid on the illuminated side, that did not need further study. Aristotle's logic that plants are passive and immobile organisms prevailed. In the 19th century, however, botanists discovered that growth processes in the plant were involved, and conducted increasingly ingenious experiments. A. P. de Candolle called this phenomenon in any plant heliotropism (1832). It was renamed phototropism in 1892, because it is a response to light rather than to the sun, and because the phototropism of algae in lab studies at that time strongly depended on the brightness (positive phototropic for weak light, and negative phototropic for bright light, like sunlight). A botanist studying this subject in the lab, at the cellular and subcellular level, or using artificial light, is more likely to employ the more abstract word phototropism. The French scientist Jean-Jacques d'Ortous de Mairan was one of the first to study heliotropism when he experimented with the Mimosa pudica plant. Heliotropic flowers track the sun's motion across the sky from east to west. During the night, the flowers may assume a random orientation, while at dawn they turn again toward the east where the sun rises. The motion is performed by motor cells in a flexible segment just below the flower, called a pulvinus. The motor cells are specialized in pumping potassium ions into nearby tissues, changing their turgor pressure. The segment flexes because the motor cells at the shadow side elongate due to a turgor rise. Heliotropism is a response to light from the sun. Several hypotheses have been proposed for the occurrence of heliotropism in flowers: - The pollinator attraction hypothesis holds that the warmth associated with full insolation of the flower is a direct reward for pollinators. - The growth promotion hypothesis assumes that effective absorption of solar energy and the consequent rise in temperature has a favourable effect on pollen germination, growth of the pollen tube and seed production. - The cooling hypothesis, appropriate to flowers in hot climates, assumes that the position of flowers is adjusted to avoid overheating. Some solar tracking plants are not purely heliotropic: in those plants the change of orientation is an innate circadian motion triggered by light, which continues for one or more periods if the light cycle is interrupted. Tropical convolvulaceous flowers show a preferred orientation, pointing in the general direction of the sun but not exactly tracking the sun. They demonstrated no diurnal heliotropism but strong seasonal heliotropism. If solar tracking is exact, the sun’s rays would always enter the corolla tube and warm the gynoecium, a process which could be dangerous in a tropical climate. However, by adopting a certain angle away from the solar angle, this is prevented. The trumpet shape of these flowers thus acts as a parasol shading the gynoecium at times of maximum solar radiation, and not allowing the rays to impinge on the gynoecium. In case of sunflower, a common misconception is that sunflower heads track the Sun across the sky. 
The uniform alignment of the flowers does result from heliotropism in an earlier development stage, the bud stage, before the appearance of flower heads. The buds are heliotropic until the end of the bud stage, and finally face east. The flower of the sunflower preserves the final orientation of the bud, thus keeping the mature flower facing east. Leaf heliotropism is the solar tracking behavior of plant leaves. Some plant species have leaves that orient themselves perpendicularly to the sun's rays in the morning (diaheliotropism), and others have those that orient themselves parallel to these rays at midday (paraheliotropism). Floral heliotropism is not necessarily exhibited by the same plants that exhibit leaf heliotropism. - Whippo, Craig W. (2006). "Phototropism: Bending towards Enlightenment". The Plant Cell 18 (5): 1110–1119. doi:10.1105/tpc.105.039669. PMC 1456868. PMID 16670442. Retrieved 2012-08-08. - Hart, J.W. (1990). Plant Tropisms: And other Growth Movements. Springer. p. 36. Retrieved 2012-08-08. - "Phototropism and photomorphogenesis of Vaucheria". - Donat-Peter Häder, Michael Lebert (2001). Photomovement. Elsevier. p. 676. Retrieved 2012-08-08. - Hocking B., Sharplin D. (1965). "Flower basking by arctic insects" (PDF). Nature 206 (4980): 206–215. doi:10.1038/206215b0. - Kevan, P.G. (1975). "Sun-tracking solar furnaces in high arctic flowers: significance for pollination and insects". Science 189 (4204): 723–726. doi:10.1126/science.189.4204.723. - Lang A.R.G., Begg J.E. (1979). "Movements of Helianthus annuus leaves and heads". J Appl Ecol 16: 299–305. doi:10.2307/2402749. - Kudo, G. (1995). "Ecological Significance of Flower Heliotropism in the Spring Ephemeral Adonis ramosa (Ranunculaceae)". Oikos 72 (1): 14–20. doi:10.2307/3546032. - Patiño, S.; Jeffree, C.; Grace, J. (2002). "The ecological role of orientation in tropical convolvulaceous flowers" (PDF). Oecologia 130: 373–379. doi:10.1007/s00442-001-0824-1. - officially replaced by diaphototropism and paraphototropism - Animation of Heliotropic Leaf Movements in Plants - 24-hour heliotropism of Arctic poppy exposed to midnight sun
https://en.wikipedia.org/wiki/Heliotropism
4.1875
The word levée (from French, noun use of infinitive lever, "rising", from Latin levāre, "to raise") originated in the Levée du Soleil (Rising of the Sun) of King Louis XIV (1643–1715). It was his custom to receive his male subjects in his bedchamber just after arising, a practice that subsequently spread throughout Europe. In the 18th century the levée in Great Britain and Ireland became a formal court reception given by the sovereign or his/her representative in the forenoon or early afternoon. In the New World colonies the levée was held by the governor acting on behalf of the monarch. Only men were received at these events. It was in Canada that the levée became associated with New Year's Day. The fur traders had the tradition of paying their respects to the master of the fort (their government representative) on New Year's Day. This custom was adopted by the governor general and lieutenant governors for their levées. The first recorded levée in Canada was held on January 1, 1646, in the Chateau St. Louis by Charles Huault de Montmagny, Governor of New France from 1636 to 1648. In addition to wishing a happy new year to the citizens the governor informed guests of significant events in France as well as the state of affairs within the colony. In turn, the settlers were expected to renew their pledges of allegiance to the Crown. The levée tradition was continued by British colonial governors in Canada and subsequently by both the governor general and lieutenant governors. It continues to the present day. Over the years the levée has become almost solely a Canadian observance. Today, levées are the receptions (usually, but not necessarily, on New Year's Day) held by the governor general, the lieutenant governors of the provinces, the military and others, to mark the start of another year and to provide an opportunity for the public to pay their respects. Today the levée has evolved from the earlier, more boisterous party into a more sedate and informal one. It is an occasion to call upon representatives of the monarch, military and municipal governments and to exchange New Year's greetings and best wishes for the new year, to renew old acquaintances and to meet new friends. It is also an opportunity to reflect upon the events of the past year and to welcome the opportunities of the New Year. The province of Prince Edward Island maintains a more historical approach to celebrating levée day. On New Year's Day, all Legions and bars are opened and offer moosemilk (egg nog and rum) from the early morning until the late night. Though there are still the formal receptions held at Government House and Province House, levée day is not only a formal event. It is something that attracts a large number of Islanders, which is quite unusual in comparison to the other provinces where it has gradually become more subdued. Prince Edward Island levées begin at 9 a.m. The historic town of Niagara-on-the-Lake (the first capital of Upper Canada) holds a levée complete with firing of a cannon at Navy Hall (a historic building close to Fort George) The levée is well attended by townspeople and visitors. Toasts are made to the Queen, "our beloved Canada", the Canadian Armed Forces, veterans, "our fallen comrades", as well as "our American friends and neighbours" (this final toast would not have been made two centuries ago, when the town was founded). Greetings are brought from all levels of government and it is a great community event. 
Some religious leaders, such as the Bishop of the Anglican Diocese of Ontario, hold a Levée on New Year's Day. As has the levée itself, refreshments served at levées have undergone changes (both in importance and variety) over the years. In colonial times, when the formalities of the levée had been completed, guests were treated to wine and cheeses from the homeland. Wines did not travel well during the long ocean voyage to Canada. To make the cloudy and somewhat sour wine more palatable it was heated with alcohol and spices. The concoction came to be known as le sang du caribou ("reindeer blood"). Under British colonial rule the wine in le sang du caribou was replaced with whisky (which travelled better). This was then mixed with goat's milk and flavoured with nutmeg and cinnamon to produce an Anglicized version called "moose milk". Today's versions of moose milk, in addition to whisky (or rum) and spices may use a combination of eggnog and ice cream, as well as other alcoholic supplements. The exact recipes used by specific groups may be jealously guarded secrets. q.v. External links. Refreshments were clearly an important element in the New Year's festivities. A report of the New Year's levée held in Brandon House in Manitoba in 1797 indicated that "... in the morning the Canadians (men of the North West Company) make the House and Yard ring with saluting (the firing of rifles). The House then filled with them when they all got a dram each." Simpson's Athabasca Journal reports that on January 1, 1821, "the Festivities of the New Year commenced at four o'clock this morning when the people honoured me with a salute of fire arms, and in half an hour afterwards the whole Inmates of our Garrison assembled in the hall dressed out in their best clothes, and were regaled in a suitable manner with a few flaggon's Rum and some Cakes. A full allowance of Buffaloe meat was served out to them and a pint of spirits for each man." When residents called upon the governor to pay their respects they expected a party. In 1856 on Vancouver Island, there was "an almighty row" when the colonial governor's levée was not to the attendees' liking. Municipalities with levées - Ajijic, Jalisco, Mexico - Almonte, Ontario - Bracebridge, Ontario - Brampton, Ontario - Brantford, Ontario - Brockville, Ontario - Cape Breton Regional Municipality, Nova Scotia - Cambridge, Ontario - Cobourg, Ontario - Charlottetown, Prince Edward Island - Grand Manan, New Brunswick - Edmonton, Alberta - Elliot Lake, Ontario - Esquimalt, British Columbia - Guelph, Ontario - Halifax, Nova Scotia - Hamilton, Ontario - Kingston, Ontario - Kitchener, Ontario - Langford, British Columbia - London, Ontario - Medicine Hat, Alberta - Milton, Ontario - Mississauga, Ontario - Moncton, New Brunswick - Niagara-on-the-Lake, Ontario - North Saanich, British Columbia - Oak Bay, British Columbia - Oakville, Ontario - Orangeville, Ontario - Oshawa, Ontario - Owen Sound, Ontario - Parrsboro, Nova Scotia - Pictou, Nova Scotia - Picton, Ontario - Redwater, Alberta - Rivers, Manitoba - Riverview, New Brunswick - Saanich, British Columbia - Shelburne, Nova Scotia - Sioux Lookout, Ontario - St. Catharines, Ontario - Stellarton, Nova Scotia - Summerside, Prince Edward Island - Toronto, Ontario - Victoria, British Columbia - Windsor, Ontario - Winnipeg, Manitoba - Woodstock, New Brunswick - Yarmouth, Nova Scotia The levée has a long tradition in the Canadian Forces as one of the activities associated with New Year's Day. 
Military commanders garrisoned throughout Canada held local levées since, as commissioned officers, they were expected to act on behalf of the Crown on such occasions. On Vancouver Island (the base for the Royal Navy's Pacific Fleet), levées began in the 1840s. Today, members of the various Canadian Forces units and headquarters across Canada receive and greet visiting military and civilian guests on the first day of the new year. In military messes, refreshments take a variety of forms: moose milk (with rum often substituted for whisky); the special flaming punch of the Royal Canadian Hussars of Montreal; the Atholl Brose of the Seaforth Highlanders of Vancouver; "Little Black Devils", (Dark Rum and Creme de menthe) of the Royal Winnipeg Rifles. Members of Le Régiment de Hull use sabres to uncork bottles of champagne. - "levee". Dictionary.com. Dictionary.com Unabridged. Random House, Inc. Accessed January 10, 2013. - "Le Lieutenant Gouverneur de la province de Québec recevra les messieurs qui desirement lui faire visite samedi le 1er janvier 1916, de midi a 1 h. p.m., dans la chambre du Conseil Legislatif Hôtel du gouvernment." [The Lieutenant Governor of the Province of Quebec will receive gentlemen who wish to pay him a visit Saturday, January 1, from noon until 1 p.m., in the Legislative Council chamber of the Parliament building.]. L'Action Catholique. December 31, 1915. Retrieved June 3, 2012. - Legislative Assembly of Alberta:New Year's Levée - Rukavina, Peter: I Went to the Levée, 2004 - Canada's Navy: MARPAC - Maritime Forces Pacific - Town of Parrsboro, Nova Scotia Canada - "County levees planned to ring in a new year". Pictou Advocate. December 30, 2015. Retrieved 31 December 2015. - Town of Yarmouth, Nova Scotia
https://en.wikipedia.org/wiki/Lev%C3%A9e_(event)
4.09375
Classical liberalism stressed not only human rationality but the importance of individual property rights, natural rights, the need for constitutional limitations on government, and, especially, freedom of the individual from any kind of external restraint. Classical liberalism drew upon the ideals of the Enlightenment and the doctrines of liberty supported in the American and French revolutions. The Enlightenment, also known as the Age of Reason, was characterized by a belief in the perfection of the natural order and a belief that natural laws should govern society. Logically, it was reasoned that if the natural order produces perfection, then society should operate freely without interference from government. The writings of such men as Adam Smith, David Ricardo, Jeremy Bentham, and John Stuart Mill mark the height of such thinking. In Great Britain and the United States the classic liberal program, including the principles of representative government, the protection of civil liberties, and laissez-faire economics, had been more or less effected by the mid-19th cent. The growth of industrial society, however, soon produced great inequalities in wealth and power, which led many persons, especially workers, to question the liberal creed. It was in reaction to the failure of liberalism to provide a good life for everyone that workers' movements and Marxism arose. Because liberalism is concerned with liberating the individual, however, its doctrines changed as historical realities changed. The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
http://www.infoplease.com/encyclopedia/history/liberalism-classical-liberalism.html
4.09375
All matter can exhibit wave-like behaviour. For example, a beam of electrons can be diffracted just like a beam of light or a water wave. Matter waves are a central part of the theory of quantum mechanics, being an example of wave–particle duality. The concept that matter behaves like a wave is also referred to as the de Broglie hypothesis, since it was proposed by Louis de Broglie in 1924. Matter waves are often referred to as de Broglie waves. Wave-like behaviour of matter was first experimentally demonstrated in the Davisson–Germer experiment using electrons, and it has also been confirmed for other elementary particles, neutral atoms and even molecules. The wave-like behaviour of matter is crucial to the modern theory of atomic structure and particle physics. - 1 Historical context - 2 The de Broglie hypothesis - 3 Experimental confirmation - 4 de Broglie relations - 5 Interpretations - 6 De Broglie's phase wave and periodic phenomenon - 7 See also - 8 References - 9 Further reading - 10 External links At the end of the 19th century, light was thought to consist of waves of electromagnetic fields which propagated according to Maxwell's equations, while matter was thought to consist of localized particles (See history of wave and particle viewpoints). In 1900, this division was exposed to doubt, when, investigating the theory of black body thermal radiation, Max Planck proposed that light is emitted in discrete quanta of energy. It was thoroughly challenged in 1905. Extending Planck's investigation in several ways, including its connection with the photoelectric effect, Albert Einstein proposed that light is also propagated and absorbed in quanta. Light quanta are now called photons. These quanta would have an energy given by the Planck–Einstein relation E = hν and a momentum p = E/c = h/λ, where ν (lowercase Greek letter nu) and λ (lowercase Greek letter lambda) denote the frequency and wavelength of the light, c the speed of light, and h Planck's constant. In the modern convention, frequency is symbolized by f as is done in the rest of this article. Einstein's postulate was confirmed experimentally by Robert Millikan and Arthur Compton over the next two decades.
The de Broglie hypothesis
De Broglie, in his 1924 PhD thesis, proposed that just as light has both wave-like and particle-like properties, electrons also have wave-like properties. By rearranging the momentum equation stated in the above section, we find a relationship between the wavelength λ associated with an electron and its momentum p, through the Planck constant h: λ = h/p. The relationship is now known to hold for all types of matter: all matter exhibits properties of both particles and waves. "When I conceived the first basic ideas of wave mechanics in 1923–24, I was guided by the aim to perform a real physical synthesis, valid for all particles, of the coexistence of the wave and of the corpuscular aspects that Einstein had introduced for photons in his theory of light quanta in 1905." — De Broglie Matter waves were first experimentally confirmed to occur in the Davisson-Germer experiment for electrons, and the de Broglie hypothesis has been confirmed for other elementary particles. Furthermore, neutral atoms and even molecules have been shown to be wave-like. In 1927 at Bell Labs, Clinton Davisson and Lester Germer fired slow-moving electrons at a crystalline nickel target.
The angular dependence of the diffracted electron intensity was measured, and was determined to have the same diffraction pattern as those predicted by Bragg for x-rays. Before the acceptance of the de Broglie hypothesis, diffraction was a property that was thought to be only exhibited by waves. Therefore, the presence of any diffraction effects by matter demonstrated the wave-like nature of matter. When the de Broglie wavelength was inserted into the Bragg condition, the observed diffraction pattern was predicted, thereby experimentally confirming the de Broglie hypothesis for electrons. This was a pivotal result in the development of quantum mechanics. Just as the photoelectric effect demonstrated the particle nature of light, the Davisson–Germer experiment showed the wave-nature of matter, and completed the theory of wave-particle duality. For physicists this idea was important because it meant that not only could any particle exhibit wave characteristics, but that one could use wave equations to describe phenomena in matter if one used the de Broglie wavelength. Experiments with Fresnel diffraction and an atomic mirror for specular reflection of neutral atoms confirm the application of the de Broglie hypothesis to atoms, i.e. the existence of atomic waves which undergo diffraction, interference and allow quantum reflection by the tails of the attractive potential. Advances in laser cooling have allowed cooling of neutral atoms down to nanokelvin temperatures. At these temperatures, the thermal de Broglie wavelengths come into the micrometre range. Using Bragg diffraction of atoms and a Ramsey interferometry technique, the de Broglie wavelength of cold sodium atoms was explicitly measured and found to be consistent with the temperature measured by a different method. This effect has been used to demonstrate atomic holography, and it may allow the construction of an atom probe imaging system with nanometer resolution. The description of these phenomena is based on the wave properties of neutral atoms, confirming the de Broglie hypothesis. Recent experiments even confirm the relations for molecules and even macromolecules that otherwise might be supposed too large to undergo quantum mechanical effects. In 1999, a research team in Vienna demonstrated diffraction for molecules as large as fullerenes. The researchers calculated a De Broglie wavelength of the most probable C60 velocity as 2.5 pm. More recent experiments prove the quantum nature of molecules with a mass up to 6910 amu.
de Broglie relations
The de Broglie relations are λ = h/p and f = E/h, where h is Planck's constant. The equations can also be written as p = ħk and E = ħω, where ħ = h/2π is the reduced Planck constant, k the wave number 2π/λ, and ω the angular frequency 2πf. Using the relativistic expressions for energy and momentum allows the equations to be written as λ = h/(γm0v) and f = γm0c²/h, where m0 denotes the particle's rest mass, v its velocity, γ the Lorentz factor, and c the speed of light in a vacuum. See below for details of the derivation of the de Broglie relations. Group velocity (equal to the particle's speed) should not be confused with phase velocity (equal to the product of the particle's frequency and its wavelength). In the case of a non-dispersive medium, they happen to be equal, but otherwise they are not. Albert Einstein first explained the wave–particle duality of light in 1905. Louis de Broglie hypothesized that any particle should also exhibit such a duality. The velocity of a particle, he concluded, should always equal the group velocity of the corresponding wave. The magnitude of the group velocity is equal to the particle's speed.
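As a quick numerical illustration of the relation λ = h/p quoted above, the following Python sketch evaluates the wavelength for two cases mentioned in the text. The chosen speeds are my own illustrative assumptions: the electron speed corresponds roughly to acceleration through about 54 V, the regime of the Davisson–Germer experiment, and the C60 beam speed of about 220 m/s is simply the value that reproduces the 2.5 pm wavelength cited for the most probable C60 velocity.

```python
# Numerical check of the de Broglie relation lambda = h / (m * v) for two
# cases mentioned in the text: slow electrons and a C60 fullerene molecule.
h = 6.626e-34          # Planck constant, J s
amu = 1.6605e-27       # atomic mass unit, kg
m_e = 9.109e-31        # electron mass, kg
m_c60 = 720 * amu      # C60 fullerene (60 carbon atoms of mass 12 amu), kg

cases = [("electron at 4.4e6 m/s (~54 eV)", m_e, 4.4e6),
         ("C60 fullerene at 220 m/s", m_c60, 220.0)]

for label, m, v in cases:
    wavelength = h / (m * v)          # non-relativistic de Broglie wavelength
    print(f"{label}: lambda = {wavelength:.2e} m")
# electron: ~1.7e-10 m, comparable to the atomic spacing in a nickel crystal
# C60:      ~2.5e-12 m, matching the 2.5 pm value cited above
```

The electron value, a fraction of a nanometre, is comparable to the atomic spacing in a nickel crystal, which is why Davisson and Germer could observe Bragg-like diffraction, while the fullerene wavelength is far smaller than the molecule itself.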
Both in relativistic and non-relativistic quantum physics, we can identify the group velocity of a particle's wave function with the particle velocity. Quantum mechanics has very accurately demonstrated this hypothesis, and the relation has been shown explicitly for particles as large as molecules. De Broglie deduced that if the duality equations already known for light were the same for any particle, then his hypothesis would hold. This means that v_g = ∂ω/∂k = ∂E/∂p = ∂(p²/2m)/∂p = p/m = v in the non-relativistic case, where m is the mass of the particle and v its velocity. Also in special relativity we find that v_g = ∂E/∂p = pc²/E = v, where v is the velocity of the particle regardless of wave behavior. By the de Broglie hypothesis, we see that v_p = ω/k = E/p. Using relativistic relations for energy and momentum, we have v_p = E/p = γm0c²/(γm0v) = c²/v = c/β, where E is the total energy of the particle (i.e. rest energy plus kinetic energy in kinematic sense), p the momentum, γ the Lorentz factor, c the speed of light, and β the speed as a fraction of c. The variable v can either be taken to be the speed of the particle or the group velocity of the corresponding matter wave. Since the particle speed v < c for any particle that has mass (according to special relativity), the phase velocity of matter waves always exceeds c, i.e. v_p > c, and as we can see, it approaches c when the particle speed is in the relativistic range. The superluminal phase velocity does not violate special relativity, because phase propagation carries no energy. See the article on Dispersion (optics) for details. Using 4-Vectors, the De Broglie relations form a single equation, P = ħK (with P the four-momentum and K the four-wavevector), which is frame-independent. Likewise, the relation between group/particle velocity and phase velocity is given in frame-independent form by v_p v_g = c². The physical reality underlying de Broglie waves is a subject of ongoing debate. Some theories treat either the particle or the wave aspect as its fundamental nature, seeking to explain the other as an emergent property. Some, such as the hidden variable theory, treat the wave and the particle as distinct entities. Yet others propose some intermediate entity that is neither quite wave nor quite particle but only appears as such when we measure one or the other property. The Copenhagen interpretation states that the nature of the underlying reality is unknowable and beyond the bounds of scientific enquiry. Schrödinger's quantum mechanical waves are conceptually different from ordinary physical waves such as water or sound. Ordinary physical waves are characterized by undulating real-number 'displacements' of dimensioned physical variables at each point of ordinary physical space at each instant of time. Schrödinger's "waves" are characterized by the undulating value of a dimensionless complex number at each point of an abstract multi-dimensional space, for example of configuration space. "If one wishes to calculate the probabilities of excitation and ionization of atoms [M. Born, Zur Quantenmechanik der Stossvorgange, Z. f. Phys., 37 (1926), 863; [Quantenmechanik der Stossvorgange], ibid., 38 (1926), 803] then one must introduce the coordinates of the atomic electrons as variables on an equal footing with those of the colliding electron. The waves then propagate no longer in three-dimensional space but in multi-dimensional configuration space. From this one sees that the quantum mechanical waves are indeed something quite different from the light waves of the classical theory." At the same conference, Erwin Schrödinger reported likewise.
"Under [the name 'wave mechanics',] at present two theories are being carried on, which are indeed closely related but not identical. The first, which follows on directly from the famous doctoral thesis by L. de Broglie, concerns waves in three-dimensional space. Because of the strictly relativistic treatment that is adopted in this version from the outset, we shall refer to it as the four-dimensional wave mechanics. The other theory is more remote from Mr de Broglie's original ideas, insofar as it is based on a wave-like process in the space of position coordinates (q-space) of an arbitrary mechanical system.[Long footnote about manuscript not copied here.] We shall therefore call it the multi-dimensional wave mechanics. Of course this use of the q-space is to be seen only as a mathematical tool, as it is often applied also in the old mechanics; ultimately, in this version also, the process to be described is one in space and time. In truth, however, a complete unification of the two conceptions has not yet been achieved. Anything over and above the motion of a single electron could be treated so far only in the multi-dimensional version; also, this is the one that provides the mathematical solution to the problems posed by the Heisenberg-Born matrix mechanics." In 1955, Heisenberg reiterated this. "An important step forward was made by the work of Born [Z. Phys., 37: 863, 1926 and 38: 803, 1926] in the summer of 1926. In this work, the wave in configuration space was interpreted as a probability wave, in order to explain collision processes on Schrödinger's theory. This hypothesis contained two important new features in comparison with that of Bohr, Kramers and Slater. The first of these was the assertion that, in considering "probability waves", we are concerned with processes not in ordinary three-dimensional space, but in an abstract configuration space (a fact which is, unfortunately, sometimes overlooked even today); the second was the recognition that the probability wave is related to an individual process." It is mentioned above that the "displaced quantity" of the Schrödinger wave has values that are dimensionless complex numbers. One may ask what is the physical meaning of those numbers. According to Heisenberg, rather than being of some ordinary physical quantity such as for example Maxwell's electric field intensity, or for example mass density, the Schrödinger-wave packet's "displaced quantity" is probability amplitude. He wrote that instead of using the term 'wave packet', it is preferable to speak of a probability packet. The probability amplitude supports calculation of probability of location or momentum of discrete particles. Heisenberg recites Duane's account of particle diffraction by probabilistic quantal translation momentum transfer, which allows, for example in Young's two-slit experiment, each diffracted particle probabilistically to pass discretely through a particular slit. Thus one does not necessarily need to think of the matter wave, as it were, as 'composed of smeared matter'. These ideas may be expressed in ordinary language as follows. In the account of ordinary physical waves, a 'point' refers to a position in ordinary physical space at an instant of time, at which there is specified a 'displacement' of some physical quantity.
But in the account of quantum mechanics, a 'point' refers to a configuration of the system at an instant of time, every particle of the system being in a sense present in every 'point' of configuration space, each particle at such a 'point' being located possibly at a different position in ordinary physical space. There is no explicit definite indication that, at an instant, this particle is 'here' and that particle is 'there' in some separate 'location' in configuration space. This conceptual difference entails that, in contrast to de Broglie's pre-quantum mechanical wave description, the quantum mechanical probability packet description does not directly and explicitly express the Aristotelian idea, referred to by Newton, that causal efficacy propagates through ordinary space by contact, nor the Einsteinian idea that such propagation is no faster than light. In contrast, these ideas are so expressed in the classical wave account, through the Green's function, though it is inadequate for the observed quantal phenomena. The physical reasoning for this was first recognized by Einstein.
De Broglie's phase wave and periodic phenomenon
De Broglie's thesis started from the hypothesis, "that to each portion of energy with a proper mass m0 one may associate a periodic phenomenon of the frequency ν0, such that one finds: hν0 = m0c². The frequency ν0 is to be measured, of course, in the rest frame of the energy packet. This hypothesis is the basis of our theory." De Broglie followed his initial hypothesis of a periodic phenomenon, with frequency ν0, associated with the energy packet. He used the special theory of relativity to find, in the frame of the observer of the electron energy packet that is moving with velocity v, that its frequency was apparently reduced to ν1 = ν0√(1 − β²), using the same notation as above. The quantity c²/v is the velocity of what de Broglie called the "phase wave". Its wavelength is λ = h/(γm0v) and its frequency f = γm0c²/h. De Broglie reasoned that his hypothetical intrinsic particle periodic phenomenon is in phase with that phase wave. This was his basic matter wave conception. He noted, as above, that v_p = c²/v > c, and the phase wave does not transfer energy. While the concept of waves being associated with matter is correct, de Broglie did not leap directly to the final understanding of quantum mechanics with no missteps. There are conceptual problems with the approach that de Broglie took in his thesis that he was not able to resolve, despite trying a number of different fundamental hypotheses in different papers published while working on, and shortly after publishing, his thesis. These difficulties were resolved by Erwin Schrödinger, who developed the wave mechanics approach, starting from a somewhat different basic hypothesis. - Bohr model - Faraday wave - Kapitsa–Dirac effect - Matter wave clock - Schrödinger equation - Theoretical and experimental justification for the Schrödinger equation - Thermal de Broglie wavelength - De Broglie–Bohm theory - Feynman, R.; QED the Strange Theory of Light and matter, Penguin 1990 Edition, page 84. - Einstein, A. (1917). Zur Quantentheorie der Strahlung, Physikalische Zeitschrift 18: 121–128. Translated in ter Haar, D. (1967). The Old Quantum Theory. Pergamon Press. pp. 167–183. LCCN 66029628. - J. P. McEvoy & Oscar Zarate (2004). Introducing Quantum Theory. Totem Books. pp. 110–114. ISBN 1-84046-577-8. - Louis de Broglie "The Reinterpretation of Wave Mechanics" Foundations of Physics, Vol. 1 No.
1 (1970) - Mauro Dardo, Nobel Laureates and Twentieth-Century Physics, Cambridge University Press 2004, pp. 156–157 - R.B.Doak; R.E.Grisenti; S.Rehbein; G.Schmahl; J.P.Toennies; Ch. Wöll (1999). "Towards Realization of an Atomic de Broglie Microscope: Helium Atom Focusing Using Fresnel Zone Plates". Physical Review Letters 83 (21): 4229–4232. Bibcode:1999PhRvL..83.4229D. doi:10.1103/PhysRevLett.83.4229. - F. Shimizu (2000). "Specular Reflection of Very Slow Metastable Neon Atoms from a Solid Surface". Physical Review Letters 86 (6): 987–990. Bibcode:2001PhRvL..86..987S. doi:10.1103/PhysRevLett.86.987. PMID 11177991. - D. Kouznetsov; H. Oberst (2005). "Reflection of Waves from a Ridged Surface and the Zeno Effect". Optical Review 12 (5): 1605–1623. Bibcode:2005OptRv..12..363K. doi:10.1007/s10043-005-0363-9. - H.Friedrich; G.Jacoby; C.G.Meister (2002). "quantum reflection by Casimir–van der Waals potential tails". Physical Review A 65 (3): 032902. Bibcode:2002PhRvA..65c2902F. doi:10.1103/PhysRevA.65.032902. - Pierre Cladé; Changhyun Ryu; Anand Ramanathan; Kristian Helmerson; William D. Phillips (2008). "Observation of a 2D Bose Gas: From thermal to quasi-condensate to superfluid". arXiv:0805.3519. - Shimizu; J.Fujita (2002). "Reflection-Type Hologram for Atoms". Physical Review Letters 88 (12): 123201. Bibcode:2002PhRvL..88l3201S. doi:10.1103/PhysRevLett.88.123201. PMID 11909457. - D. Kouznetsov; H. Oberst; K. Shimizu; A. Neumann; Y. Kuznetsova; J.-F. Bisson; K. Ueda; S. R. J. Brueck (2006). "Ridged atomic mirrors and atomic nanoscope". Journal of Physics B 39 (7): 1605–1623. Bibcode:2006JPhB...39.1605K. doi:10.1088/0953-4075/39/7/005. - Arndt, M.; O. Nairz; J. Voss-Andreae; C. Keller; G. van der Zouw; A. Zeilinger (14 October 1999). "Wave-particle duality of C60". Nature 401 (6754): 680–682. Bibcode:1999Natur.401..680A. doi:10.1038/44348. PMID 18494170. - Gerlich, S.; S. Eibenberger; M. Tomandl; S. Nimmrichter; K. Hornberger; P. J. Fagan; J. Tüxen; M. Mayor & M. Arndt (5 April 2011). "Quantum interference of large organic molecules". Nature Communications 2 (263): 263–. Bibcode:2011NatCo...2E.263G. doi:10.1038/ncomms1263. PMC 3104521. PMID 21468015. - Resnick, R.; Eisberg, R. (1985). Quantum Physics of Atoms, Molecules, Solids, Nuclei and Particles (2nd ed.). New York: John Wiley & Sons. ISBN 0-471-87373-X. - Z.Y.Wang (2016). "Generalized momentum equation of quantum mechanics". Optical and Quantum Electronics 48 (2). doi:10.1007/s11082-015-0261-8. - Holden, Alan (1971). Stationary states. New York: Oxford University Press. ISBN 0-19-501497-9. - Williams, W.S.C. (2002). Introducing Special Relativity, Taylor & Francis, London, ISBN 0-415-27761-2, p. 192. - de Broglie, L. (1970). The reinterpretation of wave mechanics, Foundations of Physics 1(1): 5–15, p. 9. - Born, M., Heisenberg, W. (1928). Quantum mechanics, pp. 143–181 of Électrons et Photons: Rapports et Discussions du Cinquième Conseil de Physique, tenu à Bruxelles du 24 au 29 Octobre 1927, sous les Auspices de l'Institut International de Physique Solvay, Gauthier-Villars, Paris, p. 166; this translation at p. 425 of Bacciagaluppi, G., Valentini, A. (2009), Quantum Theory at the Crossroads: Reconsidering the 1927 Solvay Conference, Cambridge University Press, Cambridge UK, ISBN 978-0-521-81421-8. - Schrödinger, E. (1928). Wave mechanics, pp. 
185–206 of Électrons et Photons: Rapports et Discussions du Cinquième Conseil de Physique, tenu à Bruxelles du 24 au 29 Octobre 1927, sous les Auspices de l'Institut International de Physique Solvay, Gauthier-Villars, Paris, pp. 185–186; this translation at p. 447 of Bacciagaluppi, G., Valentini, A. (2009), Quantum Theory at the Crossroads: Reconsidering the 1927 Solvay Conference, Cambridge University Press, Cambridge UK, ISBN 978-0-521-81421-8. - Heisenberg, W. (1955). The development of the interpretation of the quantum theory, pp. 12–29, in Niels Bohr and the Development of Physics: Essays dedicated to Niels Bohr on the occasion of his seventieth birthday, edited by W. Pauli, with the assistance of L. Rosenfeld and V. Weisskopf, Pergamon Press, London, p. 13. - Heisenberg, W. (1927). Über den anschlaulichen Inhalt der quantentheoretischen Kinematik und Mechanik, Z. Phys. 43: 172–198, translated by eds. Wheeler, J.A., Zurek, W.H. (1983), at pp. 62–84 of Quantum Theory and Measurement, Princeton University Press, Princeton NJ, p. 73. Also translated as 'The actual content of quantum theoretical kinematics and mechanics' here - Heisenberg, W. (1930). The Physical Principles of the Quantum Theory, translated by C. Eckart, F. C. Hoyt, University of Chicago Press, Chicago IL, pp. 77–78. - Fine, A. (1986). The Shaky Game: Einstein Realism and the Quantum Theory, University of Chicago, Chicago, ISBN 0-226-24946-8 - Howard, D. (1990). "Nicht sein kann was nicht sein darf", or the prehistory of the EPR, 1909–1935; Einstein's early worries about the quantum mechanics of composite systems, pp. 61–112 in Sixty-two Years of Uncertainty: Historical Philosophical and Physical Inquiries into the Foundations of Quantum Mechanics, edited by A.I. Miller, Plenum Press, New York, ISBN 978-1-4684-8773-2. - de Broglie, L. (1923). Waves and quanta, Nature 112: 540. - de Broglie, L. (1924). Thesis, p. 8 of Kracklauer's translation. - Medicus, H.A. (1974). Fifty years of matter waves, Physics Today 27(2): 38–45. - MacKinnon, E. (1976). De Broglie's thesis: a critical retrospective, Am. J. Phys. 44: 1047–1055. - Espinosa, J.M. (1982). Physical properties of de Broglie's phase waves, Am. J. Phys. 50: 357–362. - Brown, H.R., Martins, R.deA. (1984). De Broglie's relativistic phase waves and wave groups, Am. J. Phys. 52: 1130–1140. - Bacciagaluppi, G., Valentini, A. (2009). Quantum Theory at the Crossroads: Reconsidering the 1927 Solvay Conference, Cambridge University Press, Cambridge UK, ISBN 978-0-521-81421-8, pp. 30–88. - Martins, Roberto de Andrade (2010). "Louis de Broglie's Struggle with the Wave-Particle Dualism, 1923-1925". Quantum History Project, Fritz Haber Institute of the Max Planck Society and the Max Planck Institute for the History of Science. Retrieved 2015-01-03. - L. de Broglie, Recherches sur la théorie des quanta (Researches on the quantum theory), Thesis (Paris), 1924; L. de Broglie, Ann. Phys. (Paris) 3, 22 (1925). English translation by A.F. Kracklauer. And here. - Broglie, Louis de, The wave nature of the electron Nobel Lecture, 12, 1929 - Tipler, Paul A. and Ralph A. Llewellyn (2003). Modern Physics. 4th ed. New York; W. H. Freeman and Co. ISBN 0-7167-4345-0. pp. 203–4, 222–3, 236. - Zumdahl, Steven S. (2005). Chemical Principles (5th ed.). Boston: Houghton Mifflin. ISBN 0-618-37206-7. - An extensive review article "Optics and interferometry with atoms and molecules" appeared in July 2009: http://www.atomwave.org/rmparticle/RMPLAO.pdf. 
- "Scientific Papers Presented to Max Born on his retirement from the Tait Chair of Natural Philosophy in the University of Edinburgh", 1953 (Oliver and Boyd)
https://en.wikipedia.org/wiki/De_Broglie_Wavelength
4.125
The Modern Language Association Style Manual is one of the most common style guides and often the first one used by students. Students writing research papers in the arts and humanities, as well as in middle and high school, are often required to use MLA style. MLA provides guidance for formatting papers and citing references, ensuring that your paper is consistent and easy to read. The first page of an MLA paper has a double-spaced header in the top left corner. Type your name on the first line, with your professor's name on the following line. The third line is for the class title and the fourth line should have the date formatted with the day first, such as 10 August 2013. Each page should have your last name followed by the page number in the upper right hand corner, and your paper needs a title centered on the first page immediately after the header. MLA papers use 1-inch margins all around and an easy-to-read font that is distinct from italics, such as Times New Roman. Your paper must be typed and printed on 8.5-by-11-inch paper. Use italics for titles of books, magazines and other references. MLA papers do not use footnotes. Instead, use endnotes, and put the endnotes page immediately before your Works Cited page. On your endnotes page, type and center the word Endnotes. MLA style does not mandate an endnotes page. Instead, endnotes are an option for providing further clarification and details. MLA papers require double-spacing. This makes it easier to edit them and ensures that your professor can add notes to your paper. Paragraphs should begin on a new line without any additional spacing, but must be indented one-half inch. On most computers, pressing the tab key will yield the proper indentation. If you insert a quotation that is more than four lines long, the quotation should be indented on both sides and preceded by a colon and one line of space. When you cite specific facts and figures in your paper or quote a source, use parenthetical citations. Put the author of the work first, followed by the page number. For example, if you're quoting page 9 of a book by John Smith, your citation would look like this: (Smith 9). List all of your sources in alphabetical order on a Works Cited page at the end of your paper, with Works Cited centered and typed at the top of the page. Cite references using a hanging indent, with all lines after the first line indented. The author's name, last name first, goes first. If you are citing a specific chapter or article title, put the title in quotation marks, followed by the italicized name of the source. For books, list the city of publication, publisher's name and year of publication. For articles, list the volume and issue number and date of publication. Conclude with the page numbers you used or, if citing an entire book, omit the page numbers. - Purdue Online Writing Lab: MLA Formatting and Style Guide - California State University at Los Angeles: MLA Format - The University of Wisconsin - Madison Writing Center: MLA Documentation Guide - MLA Handbook for Writers of Research Papers, 7th Edition; Modern Language Association
http://classroom.synonym.com/mla-guidelines-students-1345.html
4.40625
The Cockroach Life Cycle and Behavior As with many animals, cockroach reproduction relies on eggs from a female and sperm from a male. Usually, the female releases pheromones to attract a male, and in some species, males fight over available females. But exactly what happens after the male deposits his sperm into the female varies from species to species. Most roaches are oviparous -- their young grow in eggs outside of the mother's body. In these species, the mother roach carries her eggs around in a sac called an ootheca, which is attached to her abdomen. The number of eggs in each ootheca varies from species to species. Many female roaches drop or hide their ootheca shortly before the eggs are ready to hatch. Others continue to carry the hatching eggs and care for their young after they are born. But regardless of how long the mother and her eggs stay together, the ootheca has to stay moist in order for the eggs to develop. Other roaches are ovoviviparous. Rather than growing in an ootheca outside of the mother's body, the roaches grow in an ootheca inside the mother's body. In a few species, the eggs grow inside the mother's uterus without being surrounded by an ootheca. The developing roaches inside feed on the eggs' yolks, just as they would if the eggs were outside the body. One species is viviparous -- its young develop in fluid in the mother's uterus the way most mammals do. Ovoviviparous and viviparous species give birth to live young. Whether mother roaches care for their young also varies from one species to another. Some mothers hide or bury their ootheca and never see their offspring. Others care for their offspring after birth, and scientists believe that some offspring have the ability to recognize their mothers. The number of young that one roach can bear also varies considerably. A German cockroach and her young can produce 300,000 more roaches in one year. An American cockroach and her young can produce a comparatively small 800 new roaches per year. Newly hatched roaches, known as nymphs, are usually white. Shortly after birth, they turn brown, and their exoskeletons harden. They begin to resemble small, wingless adult roaches. Nymphs molt several times as they become adults. The period between each molt is known as an instar. Each instar is progressively more like an adult cockroach. In some species, this process takes only a few weeks. In others, like the oriental cockroach, it takes between one and two years. The overall life span of cockroaches differs as well -- some live only a few months while others live for more than two years. Cockroaches generally prefer warm, humid, dark areas. In the wild, they are most common in tropical parts of the world. They are omnivores, and many species will eat virtually anything, including paper, clothing and dead bugs. A few live exclusively on wood, much like termites do. Although cockroaches are closely related to termites, they are not as social as termites are. Termite colonies have an organized social structure in which different members have different roles. Cockroaches do not have these types of roles, but they do tend to prefer living in groups. A study at the Free University of Brussels in Belgium revealed that groups of cockroaches make collective decisions about where to live. When one space was large enough for all of the cockroaches in the study, the cockroaches all stayed there. 
But when the large space was not available, the roaches divided themselves into equal groups to fit in the smallest number of other enclosures. Another study suggests that cockroaches have a collective intelligence made up of the decisions of individual roaches. European scientists developed a robot called InsBot that was capable of mimicking cockroach behavior. The researchers applied cockroach pheromones to the robot so real roaches would accept it. By taking advantage of roaches' tendencies to follow each other, InsBot was able to influence the behavior of entire groups, including convincing roaches to leave the shade and move into lighted areas. Scientists theorize that similar robots could be used to herd animals or to control cockroach populations. In addition to robotic intervention, there are several steps that people can take to reduce or eliminate cockroach populations. We'll look at these next.
http://animals.howstuffworks.com/insects/cockroach2.htm
4.09375
The importance of the free body diagram in determining forces and stresses is as follows: • A free body diagram represents all the forces, in both magnitude and direction, acting on an object when it is isolated from the system. These forces include reaction forces, self-weight, tension in a string, etc. • Unknown forces and moments are easily found by applying the static equilibrium principles to a free body diagram. • Stresses can then be determined from the calculated forces using the stress-force relation. • Engineers can use the mathematical equations relating the loads, the nature of the loads, and the geometry involved to find the stress at various points easily with the help of free body diagrams. The following example gives a brief explanation of the engineering application of the free body diagram.
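As a concrete illustration of the equilibrium step described above, the short Python sketch below works through a hypothetical example of my own (the span, load, position, and rod diameter are invented for illustration and are not taken from the textbook problem): the support reactions of a simply supported beam are found from the force and moment balances, and a simple stress = force / area check is then made on a hanger rod carrying the load.

```python
import math

# Hypothetical example (all numbers invented for illustration): a simply
# supported beam of span L carries a point load P at distance a from the
# left support. Static equilibrium (sum of forces = 0, sum of moments = 0)
# gives the two support reactions; stress = force / area gives the average
# normal stress in the rod that hangs the load from the beam.
L = 4.0          # beam span, m
a = 1.5          # position of the point load from the left support, m
P = 10_000.0     # applied load, N

# Sum of moments about the left support: R_right * L - P * a = 0
R_right = P * a / L
# Sum of vertical forces: R_left + R_right - P = 0
R_left = P - R_right

# Average normal stress in a hanger rod carrying the full load P
d = 0.02                        # rod diameter, m
area = math.pi * d**2 / 4       # cross-sectional area, m^2
sigma = P / area                # stress = force / area, Pa

print(f"R_left = {R_left:.0f} N, R_right = {R_right:.0f} N")
print(f"hanger rod stress = {sigma / 1e6:.1f} MPa")
```

With these assumed numbers the reactions come out to 6250 N and 3750 N, and the rod carries an average stress of roughly 32 MPa, which is the kind of result an engineer would then compare against the material's allowable stress.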
http://www.chegg.com/homework-help/fundamentals-of-machine-component-design-5th-edition-chapter-4-problem-76p-solution-9781118012895
4.03125
This interactive activity from NOVA challenges students' knowledge of igloo construction. The quiz format includes questions concerning where igloos were traditionally built, the best type of snow for building, and the shape on which these traditional Canadian Inuit structures were modeled. Detailed explanations provide further insight into how these ingenious snow shelters enabled entire families to survive the brutal Arctic winters. This interactive activity requires Adobe Flash Player.
http://knpb.pbslearningmedia.org/resource/ipy07.sci.engin.design.igloo101/igloo-101/
4.4375
A bell-shaped graph, or bell curve, displays the distribution of variability for a given data set. The best-known example, the IQ graph, shows that the average intelligence of humans falls around a mean score of 100 and trails off in both directions around that center score. You can generate your own bell curve graphs by calculating a standard deviation and mean for any collected set of data. Gather your data of interest. For example, if you study economics, you may wish to collect the average annual income of citizens of a given state. To ensure your graph looks more bell-shaped, aim for a large sample, such as forty or more individuals. Calculate your sample mean. The mean is an average of all of your samples. Therefore, add up your total data set and divide by the sample size, n. Compute your standard deviation. To do this, subtract your mean from each individual data point, then square the result. Add up all of these squared results and divide that sum by n - 1, which is your sample size minus one. Lastly, take the square root of this result. The standard deviation formula reads as follows: s = sqrt[ sum( (data - mean)^2 ) / (n - 1) ]. Plot your mean along the x-axis. Make increments from your mean spaced by a distance of one, two and three times your standard deviation. For example, if your mean is 100 and your standard deviation is 15, then you would have a marking for your mean at x = 100, further markings around x = 115 and x = 85 (100 ± 15), another pair around x = 130 and x = 70 (100 ± 2 × 15), and a final pair around x = 145 and x = 55 (100 ± 3 × 15). Sketch the bell curve. The highest point will be at your mean. The y-value of your mean does not precisely matter, but as you smoothly descend left and right to your next incremental marking, you should reduce the height by about one-third. Once you pass your third standard deviation left and right of your mean, the graph should have a height of almost zero, tracing just above the x-axis as it continues in its respective direction. A graphing calculator or spreadsheet can help produce means and standard deviations faster than doing all the calculations by hand.
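A minimal sketch of these steps in Python, using an illustrative data set that is not from the article (any list of numbers would do):

```python
import math

def sample_mean(data):
    return sum(data) / len(data)

def sample_std(data):
    m = sample_mean(data)
    # Divide by n - 1, as described in the steps above (sample standard deviation).
    return math.sqrt(sum((x - m) ** 2 for x in data) / (len(data) - 1))

def bell_height(x, mean, std):
    """Height of the normal (bell) curve at x for the given mean and std deviation."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

if __name__ == "__main__":
    # Illustrative data only, e.g. annual incomes in thousands of dollars.
    data = [42, 55, 48, 61, 39, 52, 58, 45, 50, 47]
    m, s = sample_mean(data), sample_std(data)
    print(f"mean = {m:.1f}, standard deviation = {s:.1f}")
    # Heights at the mean and at one, two, and three standard deviations either side.
    for k in range(-3, 4):
        x = m + k * s
        print(f"x = {x:6.1f}   curve height = {bell_height(x, m, s):.4f}")
```

The printed heights at plus or minus one, two, and three standard deviations are the points you would mark before sketching the smooth curve.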
http://classroom.synonym.com/create-bell-curve-graph-2797.html
4.3125
Though it is often viewed both as the archetypal Anglo-Saxon literary work and as a cornerstone of modern literature, Beowulf has a peculiar history that complicates both its historical and its canonical position in English literature. By the time the story of Beowulf was composed by an unknown Anglo-Saxon poet around 700 a.d., much of its material had been in circulation in oral narrative for many years. The Anglo-Saxon and Scandinavian peoples had invaded the island of Britain and settled there several hundred years earlier, bringing with them several closely related Germanic languages that would evolve into Old English. Elements of the Beowulf story—including its setting and characters—date back to the period before the migration. The action of the poem takes place around 500 a.d. Many of the characters in the poem—the Swedish and Danish royal family members, for example—correspond to actual historical figures. Originally pagan warriors, the Anglo-Saxon and Scandinavian invaders experienced a large-scale conversion to Christianity at the end of the sixth century. Though still an old pagan story, Beowulf thus came to be told by a Christian poet. The Beowulf poet is often at pains to attribute Christian thoughts and motives to his characters, who frequently behave in distinctly un-Christian ways. The Beowulf that we read today is therefore probably quite unlike the Beowulf with which the first Anglo-Saxon audiences were familiar. The element of religious tension is quite common in Christian Anglo-Saxon writings (The Dream of the Rood, for example), but the combination of a pagan story with a Christian narrator is fairly unusual. The plot of the poem concerns Scandinavian culture, but much of the poem’s narrative intervention reveals that the poet’s culture was somewhat different from that of his ancestors, and that of his characters as well. The world that Beowulf depicts and the heroic code of honor that defines much of the story is a relic of pre–Anglo-Saxon culture. The story is set in Scandinavia, before the migration. Though it is a traditional story—part of a Germanic oral tradition—the poem as we have it is thought to be the work of a single poet. It was composed in England (not in Scandinavia) and is historical in its perspective, recording the values and culture of a bygone era. Many of those values, including the heroic code, were still operative to some degree in when the poem was written. These values had evolved to some extent in the intervening centuries and were continuing to change. In the Scandinavian world of the story, tiny tribes of people rally around strong kings, who protect their people from danger—especially from confrontations with other tribes. The warrior culture that results from this early feudal arrangement is extremely important, both to the story and to our understanding of Saxon civilization. Strong kings demand bravery and loyalty from their warriors, whom they repay with treasures won in war. Mead-halls such as Heorot in Beowulf were places where warriors would gather in the presence of their lord to drink, boast, tell stories, and receive gifts. Although these mead-halls offered sanctuary, the early Middle Ages were a dangerous time, and the paranoid sense of foreboding and doom that runs throughout Beowulf evidences the constant fear of invasion that plagued Scandinavian society. Only a single manuscript of Beowulf survived the Anglo-Saxon era. 
For many centuries, the manuscript was all but forgotten, and, in the 1700s, it was nearly destroyed in a fire. It was not until the nineteenth century that widespread interest in the document emerged among scholars and translators of Old English. For the first hundred years of Beowulf’s prominence, interest in the poem was primarily historical—the text was viewed as a source of information about the Anglo-Saxon era. It was not until 1936, when the Oxford scholar J. R. R. Tolkien (who later wrote The Hobbit and The Lord of the Rings, works heavily influenced by Beowulf) published a groundbreaking paper entitled “Beowulf: The Monsters and the Critics” that the manuscript gained recognition as a serious work of art. Beowulf is now widely taught and is often presented as the first important work of English literature, creating the impression that Beowulf is in some way the source of the English canon. But because it was not widely read until the 1800s and not widely regarded as an important artwork until the 1900s, Beowulf has had little direct impact on the development of English poetry. In fact, Chaucer, Shakespeare, Marlowe, Pope, Shelley, Keats, and most other important English writers before the 1930s had little or no knowledge of the epic. It was not until the mid-to-late twentieth century that Beowulf began to influence writers, and, since then, it has had a marked impact on the work of many important novelists and poets, including W. H. Auden, Geoffrey Hill, Ted Hughes, and Seamus Heaney, the 1995 recipient of the Nobel Prize in Literature, whose recent translation of the epic is the edition used for this SparkNote. Beowulf is often referred to as the first important work of literature in English, even though it was written in Old English, an ancient form of the language that slowly evolved into the English now spoken. Compared to modern English, Old English is heavily Germanic, with little influence from Latin or French. As English history developed, after the French Normans conquered the Anglo-Saxons in 1066, Old English was gradually broadened by offerings from those languages. Thus modern English is derived from a number of sources. As a result, its vocabulary is rich with synonyms. The word kingly, for instance, descends from the Anglo-Saxon word cyning, meaning “king,” while the synonym royal comes from a French word and the synonymregal from a Latin word. Fortunately, most students encountering Beowulf read it in a form translated into modern English. Still, a familiarity with the rudiments of Anglo-Saxon poetry enables a deeper understanding of the Beowulf text. Old English poetry is highly formal, but its form is quite unlike anything in modern English. Each line of Old English poetry is divided into two halves, separated by a caesura, or pause, and is often represented by a gap on the page, as the following example demonstrates: Setton him to heafdon hilde-randas. . . . Because Anglo-Saxon poetry existed in oral tradition long before it was written down, the verse form contains complicated rules for alliteration designed to help scops, or poets, remember the many thousands of lines they were required to know by heart. Each of the two halves of an Anglo-Saxon line contains two stressed syllables, and an alliterative pattern must be carried over across the caesura. 
Any of the stressed syllables may alliterate except the last syllable; so the first and second syllables may alliterate with the third together, or the first and third may alliterate alone, or the second and third may alliterate alone. For instance: Lade ne letton. Leoht eastan com. Lade, letton, leoht, and eastan are the four stressed words. In addition to these rules, Old English poetry often features a distinctive set of rhetorical devices. The most common of these is the kenning, used throughout Beowulf. A kenning is a short metaphorical description of a thing used in place of the thing's name; thus a ship might be called a "sea-rider," or a king a "ring-giver." Some translations employ kennings almost as frequently as they appear in the original. Others moderate the use of kennings in deference to a modern sensibility. But the Old English version of the epic is full of them, and they are perhaps the most important rhetorical device present in Old English poetry.
http://www.sparknotes.com/lit/beowulf/context.html
4
The discovery of a planet outside our solar system used to be so important that a big announcement from NASA or other professional planet-finders would usually bring news of a single planet, or perhaps a few. Not so anymore. The more we look, the more we find. Exoplanet discoveries are so plentiful these days that leading groups have started unveiling them by the dozen. That is just what scientists from NASA's Kepler mission did January 26, when the team announced the discovery of 26 newfound planets orbiting distant stars. Astronomers have now identified more than 700 exoplanets, all of them in the past two decades or so. Kepler illustrates how new technologies have improved our ability to discover faraway worlds. It is a space-based observatory that tracks the brightness of more than 150,000 stars near the constellation Cygnus. For those stars hosting planetary systems, and for those planetary systems whose orbital plane is aligned with Kepler's line of sight, the spacecraft registers a periodic dip in starlight when an orbiting planet passes across the star's face. Using this method, mission scientists have identified more than 2,300 planetary candidates awaiting follow-up observation and confirmation. (Some astrophysical phenomena, such as a pair of eclipsing binary stars in the background, can mimic a planetlike dimming of starlight.) The Kepler team confirmed most of the latest batch of planets—11 planetary systems containing up to five worlds apiece—by measuring transit timing variations, or orbital disturbances caused by the gravitational pull of planetary neighbors. The newfound Kepler worlds are depicted as green orbs in the graphic above, with the planets of the solar system in blue for comparison. Purple dots are possible additional planets that have not yet been validated. The orbital spacing of the planets is not to scale; all 26 of the Kepler planets orbit closer to their host stars than Venus, the second-innermost planet in the solar system, does to the sun. The exoplanets range from roughly 1.5 times the diameter of Earth (marked as "Sol d" here, as it would be under exoplanetary nomenclature) to approximately 1.3 times the diameter of Jupiter (Sol f).
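As a rough, hedged illustration of why such dips are hard to see (the area-ratio approximation and the solar and planetary radii below are standard reference values, not numbers given in the article), the fractional drop in starlight during a transit scales with the square of the planet-to-star radius ratio:

```python
# Rough sketch (not from the article): during a transit, the fractional dip in
# starlight is approximately the ratio of the planet's disk area to the star's.

R_SUN = 6.957e8       # metres
R_EARTH = 6.371e6     # metres
R_JUPITER = 6.9911e7  # metres

def transit_depth(planet_radius, star_radius=R_SUN):
    """Approximate fractional drop in brightness: (R_planet / R_star) ** 2."""
    return (planet_radius / star_radius) ** 2

if __name__ == "__main__":
    # The article's quoted size range: ~1.5 Earth diameters up to ~1.3 Jupiter diameters.
    for label, radius in [("1.5 x Earth", 1.5 * R_EARTH),
                          ("1.3 x Jupiter", 1.3 * R_JUPITER)]:
        print(f"{label:>13}: dip = {transit_depth(radius) * 100:.3f}% of the starlight")
```

For the size range quoted above, the dips come out to roughly 0.02 to 1.7 percent of the star's light around a Sun-like star, which is why a space-based photometer such as Kepler is needed to detect them.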
http://www.scientificamerican.com/gallery/dozens-and-dozens-nasas-kepler-spies-packs-of-new-exoplanets/?shunter=1455118371677
4.0625
These lessons, by one of our most consistent FaithWriters' Challenge Champions, should not be missed. So we're making a permanent home for them here. A dialect is a pattern of speech that is found in a particular region. It is not a separate language, but it may differ from the standard language of the country in its vocabulary, its pronunciation, and its sentence structure. Some examples of dialect might be the “Jersey Shore” speech heard in the television show of that same name, a Cockney accent from London’s east end, or the “Yooper” dialect from my own home state of Michigan. If I make the definition a bit broader and include accents, the list of possibilities is virtually endless. Writers frequently have one or more of their characters speak in a dialect. This can often be a good thing, but there are also some pitfalls for writers to avoid. I’ll try to cover both of those in this lesson. When you write your character’s voice in a dialect, you are telling your reader several things about that character. Dialect can indicate a character’s position in life: her level of education, her age, her economic status, her geographical background—and if I add jargon (the vocabulary particular to a profession or some other distinct group), it might even indicate her occupation. So dialect can aid you in characterization—these are things that you do not have to tell the reader, saving you more words that you can then use to tell your story. Dialect is also a good tool for giving your characters unique voices. If you have two characters who can be similarly described—both, for example, are middle-aged men—then giving one of them a dialect or an accent will help your readers to keep track of who’s who. Additionally, well-written dialect can give your writing a unique rhythm, and it can be really fun to read. I recommend that you give writing in dialect a try if you’re looking for a way to stretch yourself or to make your piece stand out. However—there are a few warnings for those who write in a dialect. 1. Be sure that you get it right. If it’s not a dialect that is very familiar to you, spend some time listening to speakers of that dialect, and perhaps transcribing what you hear. If that’s not possible for reasons of time or geography, find something that’s written with that dialect and take note of how it is written. If you get it wrong, it will reflect on your writing, and someone who is more familiar with that dialect will call you on it. Although King James English isn’t exactly a dialect, I can use it to demonstrate this point. We’ve all heard people with only a nodding acquaintance with the language of that version of the Bible when they attempt what they think is “biblical” language. Thou is makething me laughest. Ye shouldeth not doeth that. It makes you cringe, doesn’t it? That’s the way a poorly-written or inauthentic dialect will sound to those who are familiar with its rhythms. 2. Be careful not to overdo the writing of non-standard phrases; your reader will get weary of the work they have to do to mentally translate the dialect. Take a look at this, written in a bad approximation of a southern dialect: Ah jes’ couldn’ belive mah eyes! Lawd, thet young’un were a sight, ‘n’ ah never knewed whut dun hit me. She wuz so purty it made me wanna slap mah muther, ‘n’ she wuz jist a-grinnin’ an’ a-laughin’ et me lak nobuddy’s bidness. That’s exhausting to read, isn’t it? 
I’d suggest that if you have a character who speaks in a dialect, you should pick a few words or linguistic quirks that are suggestive of that dialect--just enough to give your reader the idea of that character’s speech. 3. Finally, you should be very careful—very, very careful—that your rendering of dialect does not come across as a stereotype, exaggeration, or satire of any particular group’s speech patterns, and that nothing you write could be considered insulting to members of the group for whom that dialect is their native tongue. If you’re not sure, have someone from that group read it. They will tell you if it is accurate, and also if it is offensive. HOMEWORK: (Choose one or more of the following exercises) 1. Write a paragraph or two with some dialect or accent. 2. Link to a challenge entry you wrote with a dialect or accent, and tell whether you think you did it effectively. Also tell why you used the dialect—what did it bring to the story? 3. Tell about a book you’ve read that uses dialect effectively. 4. Ask a question or make a comment about the use of dialect. Finally, I'd like to encourage you to check out the Critique Circle. I know that Mike and Bea have elicited the help of several seasoned writers and editors to stop by there frequently and to critique new additions. There's a new category there for "Challenge Entries"--a great place to put that entry that you loved, but didn't score well for a more in-depth critique. Ha, I just discovered I can access FW on my e-reader! I have tried on my smartphone, and it shuts down my browser every time. I agree with all your points, Jan. I would add that if you incorporate terms from a particular region, along with the dialect, that it is very clear from the context what you are referring to. An example of that would be an eastern Canadian (specifically a Newfoundlander) saying something to the effect, "She has a tongue like a logan." Meaning she talks a lot, and often gossips. But the reference is to a boot called a logan, a tall, lace-up boot worn by a fisherman, with a very long tongue. A non-maritimer wouldn't know that -- I may be saying it incorrectly after these years. It's been a while since I heard Mrs. Meta -- so while it's colourful, there's no meaning if the reader is unacquainted with the dialect. But revelation must be subtle. There's nothing more annoying than a series of definitions injected into the text. Dialect must always flow naturally. Here's a link to a story I wrote with dialect. (I have forgotten how to link the title to the link.) http://www.faithwriters.com/wc-article- ... p?id=31783 I used dialect for two reasons. To show place and time. To give my characters life. Recently, I've read several books with heavy use of dialect. Cane River by Lalita Tademy. Wonderful flow which kept the reader 'in time' and empathetic to the characters. Another is The Birth House by Ami McKay. Takes place in maritime Canada just before WW1. I would consider each brilliant examples of bringing authenticity to the stories through the use of dialect and the vernacular without its being forced or awkward. "What remains of a story after it is finished? Another story..." Eli Wiesel Writing in dialect can be fun - but YES, it can definitely be overdone. If I'm spending too much time deciphering the words that I can't concentrate on the story, it's time to tone it down a bit. But, I'll admit, it's VERY hard to figure out where you are on that slipperly slope! Here's a challenge entry I wrote with dialect Ol' Hairy Ears 'n Me. 
I used the dialect for characterization - but also, if I recall, because there was a discussion here on the boards about dialect, and I went back through my challenge entries and realized, as of that time, I had never done it. Looking back on it now, I think I may have overdone it a bit - though I will say that I wrote this almost six years ago, and that's a good enough excuse for me! It definitely gave the piece a fun angle. Oh, I love regional dialects. Here is a link to "The Rev'rend Makes a Sick Call". It is a retelling of one of my favorite Bible stories which I always imagined in an Appalachian setting. As mentioned in some of the comments, I made a number of mistakes. Many FW members are more familiar with Appalachian dialect than I: This is a link to "The Cardinal Visits the Bishop", written in Scottish dialect, which lies somewhere between Scottish English and the Scots language. In this story, an Italian cardinal arrives in Scotland to find that his perfect Oxford English is not considered "proper English" by the Scottish bishop. I lived in Scotland for two years, and so have more of an ear for this dialect: I, like many Americans, love to hear Scottish people talk---even when we can't understand a thing they are saying. Ann, as always, your story was lovely, and it was no effort at all to read your dialect. In fact, even though it's not one that I've ever heard, I felt as if I could hear it as I was reading it, and that's exactly the desired effect. Thanks for the link! Jo, I don't think you overdid it at all. A great story that I actually remember reading first time around. Thanks for sharing it again! I wrote one just to play with the idea of dialect. The POV was voiced in a back hills, uneducated American dialect and he is telling about trying to communicate with a refined, educated relative in England. My challenge in this one was in writing the Englishman's dialect as repeated by the American (with much lost in the effort). I hope the "proper" English feel came through even though it was intentionally mutilated. The title is "The Problem With Englishmen" and the link is http://www.faithwriters.com/wc-article- ... p?id=27512. I figure there must have been some dialect overkill because it didn't place well at all. The thought of writing dialects freaks me out, but I sure enjoy reading stories by good writers who do it well. I'm reading The Adventures of Huckleberry Finn out loud to my kids right now, and I love reading Huck's voice, but Jim's is crazy hard to read out loud, and half of the time I don't know what he's saying. I don't really think that this counts as dialect, but I tried to write a story in the conversational tone of a precocious 11 year-old. My FaithWriters profile: RachelM FW member profile Rachel, you're right that this isn't really a dialect--but still, it makes an important point. However a writer imagines their characters, they need to be sure that their characters speak authentically. I've read lots of entries in which children speak far too wisely for their years (less often, far too young for their years). And I've read entries that featured teens, or doctors, or teachers, or truckers, or any number of other identifiers, in which their speech did not ring true. That's not the case with this story--you got it right! Here's an excerpt from one of my favorite classics, The Grapes of Wrath: Joad looked at (the cat), and his face was puzzled. "I know what's the matter," he cried. "That cat jus' made me figger what's wrong." 
Seems to me there's lots wrong," said Casy. "No, it's more'n jus' this place. Whyn't that cat jus' move in with some neighbors--with Rances. How come nobody ripped some lumber off this house? Ain't been nobody here for three-four months, an' nobody's stole no lumber. Nice planks on the barn shed, plenty good planks on the house, winda frames--an' nobody's took 'em. That ain't right. That's what was botherin' me, an' I couldn't catch hold of (the cat). "I don' know. Seems like maybe there ain't any neighbors..." I love this book. The story, the characters, and even the strong dialect depicting the poor lower class in all their humanity, as the author rides the line so dangerously close to being offensive. (Or maybe Steinbeck stepped over the line once or twice). Thanks for the lesson, Jan. Dialect is something I struggle with in my own writing and I have a question. Where can a writer find a really great resource for studying a Texas drawl? Theresa, I wish I knew the answer to that, but I don't. I think maybe you should take a nice vacation to Texas! My favorite example of powerful dialect writing is in "To Kill a Mockingbird," especially during the courtroom scenes when Mayella Ewing is speaking. When I was teaching, I team-taught for a few years with a male English teacher, and when we got to the courtroom chapters, he and I read those scenes to the class as if we were Mayella and Atticus. Could be my favorite classroom memory. Dialect is very hard! I don’t think I’ve ever written in it, but here are a few thoughts triggered by the lesson and others’ comments. 1. I have written in “King James English” once (in this piece) and had an interesting experience. I made sure all the pronouns and verb endings were correct. It’s really not that hard—all you have to understand is 1st, 2nd, 3rd person; singular and plural pronouns; and nominative, objective, and possessive case. But I over-thought things and decided that some readers would think some of the correct usages were incorrect, so I deliberately changed some correct ones to incorrect ones. As you can see from the comments, someone—I don’t remember who (cough cough Jan cough cough)—busted me. 2. As folks may remember, Twain wrote this at the beginning of Adventures of Huckleberry Finn: 3. Dialects can change quickly, often within 20 or 30 miles. When I was a forester in North and South Carolina, I loved to hear the differences over the areas I worked. Typically, I would be assigned to an 8 - 10 county area and the dialects would vary greatly. One town, spelled “Whiteville,” was pronounced “Whahdvul” by the locals. In that same area, “whatever” meant “what,” and if you really wanted to convey “whatever,” you had to say “whatever what.” Same with “when,” “whenever,” and “whenever when”; and other “-ever” word groups. In and around Charleston, some people say “case quarter” for “quarter” (the coin). I could go on and on, as could we all; but my point is how SMALL an area a dialect can be accurate for. "When the Round Table is broken every man must follow Galahad or Mordred; middle things are gone." C.S. Lewis “The chief purpose of life … is to increase according to our capacity our knowledge of God by all the means we have, and to be moved by it to praise and thanks. To do as we say in the Gloria in Excelsis ... We praise you, we call you holy, we worship you, we proclaim your glory, we thank you for the greatness of your splendor.” J.R.R. Tolkien The Adventures of Huckleberry Finn was another one that I loved reading aloud to my students. 
So much fun! (Sorry for dinging you on the King James English.)
http://www.faithwriters.com/Boards/phpBB2/viewtopic.php?f=67&t=38089
4.15625
A helix (pl: helices), from the Greek word έλικας/έλιξ, is a three-dimensional, twisted shape. Common objects formed like a helix are a spring, a screw, and a spiral staircase (though the last would be more correctly called helical). Helices are important in biology, as the DNA molecule is formed as two intertwined helices, and many proteins have helical substructures, known as alpha helices. Helices can be either right-handed or left-handed. With the line of sight being the helical axis, if clockwise movement of the helix corresponds to axial movement away from the observer, then it is a right-handed helix. If counter-clockwise movement corresponds to axial movement away from the observer, it is a left-handed helix. Handedness (or chirality) is a property of the helix, not of the perspective: a right-handed helix cannot be turned or flipped to look like a left-handed one unless it is viewed through a mirror, and vice versa. A double helix typically consists geometrically of two congruent helices with the same axis, differing by a translation along the axis, which may or may not be half-way. A conic helix may be defined as a spiral on a conic surface, with the distance to the apex an exponential function of the angle indicating direction from the axis. An example of a helix would be the Corkscrew roller coaster at Cedar Point amusement park. A simple right-handed helix can be described in rectangular coordinates by the parametric equations x(t) = cos(t), y(t) = sin(t), z(t) = t. In cylindrical coordinates (r, θ, h), the same helix is described by r(t) = 1, θ(t) = t, h(t) = t. Another way of mathematically constructing a helix is to plot the complex-valued exponential function e^(it) for real t, i.e. with imaginary arguments (see Euler's formula). Except for rotations, translations, and changes of scale, all right-handed helices are equivalent to the helix defined above. The equivalent left-handed helix can be constructed in a number of ways, the simplest being to negate either the x, y or z component. The length of a general helix, expressed in rectangular coordinates as x(t) = a·cos(t), y(t) = a·sin(t), z(t) = b·t with t running from 0 to T, equals T·sqrt(a^2 + b^2), and its curvature is |a| / (a^2 + b^2).
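A minimal numerical sketch of these formulas (the values of a, b, and the parameter range are arbitrary choices for illustration):

```python
import math

# Sketch: sample the helix x = a*cos(t), y = a*sin(t), z = b*t and compare a
# numerical arc length with the closed-form T * sqrt(a^2 + b^2). The values of
# a, b and T are illustrative.

def helix_point(t, a, b):
    return (a * math.cos(t), a * math.sin(t), b * t)

def numerical_length(a, b, T, steps=10_000):
    total = 0.0
    prev = helix_point(0.0, a, b)
    for i in range(1, steps + 1):
        cur = helix_point(T * i / steps, a, b)
        total += math.dist(prev, cur)  # straight-line distance between samples
        prev = cur
    return total

if __name__ == "__main__":
    a, b, T = 2.0, 0.5, 4 * math.pi   # two full turns
    print(f"numerical length: {numerical_length(a, b, T):.4f}")
    print(f"analytic length:  {T * math.sqrt(a**2 + b**2):.4f}")
    print(f"curvature:        {abs(a) / (a**2 + b**2):.4f}")
```

The numerical and analytic lengths agree to within the discretization error, which is a quick sanity check on the closed-form expression above.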
http://www.wikidoc.org/index.php/Helix
4.5
To graph linear inequalities, start by drawing the line in the same fashion as you would with a linear equation. A linear inequality has many solutions that can lie above or below ... How to Graph Linear Inequalities: A linear equation is an equation that makes a line when graphed. A linear inequality is the same type of expression with an inequality sign rather than an equals sign. For example, the general formula for a linear equation is y = mx + b, where m is the... Demonstrates, step-by-step and with illustrations, how to graph linear (two-variable) inequalities such as 'y < 3x + 2'. Learn how to graph two-variable linear inequalities. Sal graphs the inequality y < 3x + 5. ... Solving and graphing linear inequalities in two variables ... Constraint solution sets of two-variable linear inequalities. This is a graph of a linear inequality: the inequality y ≤ x + 2. You can see the y = x + 2 line, and the shaded area is where y is less than or equal to x + 2 ... Improve your skills with free problems in 'Graph a linear inequality in the coordinate plane' and thousands of other practice lessons. To understand how to graph linear inequalities such as y ≥ x + 1, make sure that you have a good understanding of how to graph the equation of a line. www.ask.com/youtube?q=Graphing Linear Inequalities&v=5h6YzRRxzO4 (Jan 9, 2011): a video lesson on graphing linear inequalities. For questions 1-4, you will need paper and pencil to draw your graphs. If you have graph paper, please use it. You should be able to graph ...
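Since the snippets above all describe the same procedure (draw the boundary line, then shade one side), here is a minimal Python/matplotlib sketch using the inequality y ≤ x + 2 mentioned in the text; the axis limits are arbitrary choices:

```python
# Sketch: graph the linear inequality y <= x + 2 by drawing the boundary line
# y = x + 2 and shading the region below it. Requires numpy and matplotlib.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-5, 5, 200)
boundary = x + 2                                   # the line y = x + 2

fig, ax = plt.subplots()
ax.plot(x, boundary, label="y = x + 2")            # solid line: boundary included
ax.fill_between(x, -10, boundary, alpha=0.3,       # shade where y <= x + 2
                label="y <= x + 2")
ax.set_xlim(-5, 5)
ax.set_ylim(-10, 10)
ax.legend()
plt.show()
```

A solid boundary line is used because the inequality includes equality; for a strict inequality such as y < 3x + 2, the usual convention is a dashed boundary (for example, linestyle='--' in matplotlib).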
http://www.ask.com/web?qsrc=6&o=102341&oo=102341&l=dir&gc=1&q=Graphing+Linear+Inequalities
4
Attempts to create controlled nuclear fusion - the process that powers stars - have been a source of continuing controversy. Scientists have struggled for decades to effectively harness nuclear fusion in hot plasma for energy generation - potentially a cleaner alternative to the current nuclear-fission reactors - but have so far been unsuccessful at turning this into an economically viable process. Meanwhile, claims of cheap "bench-top" fusion by electrolysis of heavy water ("cold fusion") and by sonic bubble-formation in water (sonoluminescence) have been greeted with skepticism, and have not been successfully reproduced. In this week's Nature, Brian Naranjo and colleagues report a new kind of "bench-top" nuclear fusion, based on measurements that seem considerably more convincing than these previous claims. The publication was written by a UCLA team that includes Brian Naranjo, a graduate student in physics; James Gimzewski, professor of chemistry; and Seth Putterman, professor of physics. Gimzewski and Putterman are members of the California NanoSystems Institute at UCLA. The team initiates fusion of deuterium — heavy hydrogen, the fuel used in conventional plasma fusion research — using the strong electric field generated in a pyroelectric crystal. Such materials produce electric fields when heated, and the researchers concentrated this field at the tip of a tungsten needle connected to the crystal. In an atmosphere of deuterium gas, this generates positively charged deuteron ions and accelerates them to high energy in a beam. When this beam strikes a target of erbium deuteride, Naranjo and colleagues detect neutrons coming from the target with precisely the energy expected if they were generated by the nuclear fusion of two deuterium nuclei. The neutron emission is 400 times stronger than the usual background level. The researchers say that this method of producing nuclear fusion won't be useful for normal power generation, but it might find applications in the generation of neutron beams for research purposes, and perhaps as a propulsion mechanism for miniature spacecraft. Publication: The Journal Nature, April 28, 2005 "Observation of Nuclear Fusion Driven by a Pyroelectric Crystal" For more information about the project, visit rodan.physics.ucla.edu/pyrofusion
http://phys.org/news/2005-05-ucla-nuclear-fusion-lab.html
4.09375
Arctic sea ice ecology and history The Arctic sea ice covers less area in the summer than in the winter. The multi-year (i.e. perennial) sea ice covers nearly all of the central deep basins. The Arctic sea ice and its related biota are unique, and the year-round persistence of the ice has allowed the development of ice endemic species, meaning species not found anywhere else. There are differing scientific opinions about how long perennial sea ice has existed in the Arctic. Estimates range from 700,000 to 4 million years. The specialized, sympagic (i.e. ice-associated) community within the sea ice is found in the tiny (mostly <1mm diameter) liquid filled network of pores and brine channels or at the ice-water interface. The organisms living within the sea ice are consequently small (<1mm), and dominated by bacteria, and unicellular plants and animals. Diatoms, a certain type of algae, are considered the most important primary producers inside the ice with more than 200 species occurring in Arctic sea ice. In addition, flagellates contribute substantially to biodiversity, but their species number is unknown. Protozoan and metazoan ice meiofauna, in particular turbellarians, nematodes, crustaceans and rotifers, can be abundant in all ice types year-round. In spring, larvae and juveniles of benthic animals (e.g. polychaetes and molluscs) migrate into coastal fast ice to feed on the ice algae for a few weeks. A partially endemic fauna, comprising mainly gammaridean amphipods, thrive at the underside of ice floes. Locally and seasonally occurring at several 100 individuals m-2, they are important mediators for particulate organic matter from the sea ice to the water column. Ice-associated and pelagic crustaceans are the major food sources for polar cod (Boreogadus saida) that occurs in close association with sea ice and acts as the major link from the ice-related food web to seals and whales. While previous studies of coastal and offshore sea ice provided a glimpse of the seasonal and regional abundances and the diversity of the ice-associated biota, biodiversity in these communities is virtually unknown for all groups, from bacteria to metazoans. Many taxa are likely still undiscovered due to the methodological problems in analyzing ice samples. The study of diversity of ice related environments is urgently required before they ultimately change with altering ice regimes and the likely loss of the multi-year ice cover. Dating Arctic ice Estimates of how long the Arctic Ocean has had perennial ice cover vary. Those estimates range from 700,000 years in the opinion of Worsley and Herman, to 4 million years in the opinion of Clark. Here is how Clark refuted the theory of Worsley and Herman: Recently, a few coccoliths have been reported from late Pliocene and Pleistocene central Arctic sediment (Worsley and Herman, 1980). Although this is interpreted to indicate episodic ice-free conditions for the central Arctic, the occurrence of ice-rafted debris with the sparse coccoliths is more easily interpreted to represent transportation of coccoliths from ice-free continental seas marginal to the central Arctic. The sediment record as well as theoretical considerations make strong argument against alternating ice-covered and ice-free....The probable Middle Cenozoic development of an ice cover, accompanied by Antarctic ice development and a late shift of the Gulf Stream to its present position, were important events that led to the development of modern climates. 
The record suggests that altering the present ice cover would have profound effects on future climates. More recently, Melnikov has noted that, "There is no common opinion on the age of the Arctic sea ice cover." Experts apparently agree that the age of the perennial ice cover exceeds 700,000 years but disagree about how much older it is. However, some research indicates that a sea area north of Greenland may have been open during the Eemian interglacial 120,000 years ago. Evidence of subpolar foraminifers (Turborotalita quinqueloba) indicate open water conditions in that area. This is in contrast to Holocene sediments that only show polar species. - Arctic amplification - Arctic Climate Impact Assessment - Arctic ecology - Arctic Ocean - Arctic sea ice decline - Arctic shrinkage - Climate of the Arctic - Bluhm, B., Gradinger R. (2008) "Regional Variability In Food Availability For Arctic Marine Mammals." Ecological Applications 18: S77–96 (link to free PDF) - Gradinger, R.R., K. Meiners, G.Plumley, Q. Zhang,and B.A. Bluhm (2005) "Abundance and composition of the sea-ice meiofauna in off-shore pack ice of the Beaufort Gyre in summer 2002 and 2003." Polar Biology 28: 171 – 181 - Melnikov I.A.; Kolosova E.G.; Welch H.E.; Zhitina L.S. (2002) "Sea ice biological communities and nutrient dynamics in the Canada Basin of the Arctic Ocean." Deep Sea Res 49: 1623–1649. - Christian Nozais, Michel Gosselin, Christine Michel, Guglielmo Tita (2001) "Abundance, biomass, composition and grazing impact of the sea-ice meiofauna in the North Water, northern Baffin Bay." Mar Ecol Progr Ser 217: 235–250 - Bluhm BA, Gradinger R, Piraino S. 2007. "First record of sympagic hydroids (Hydrozoa, Cnidaria) in Arctic coastal fast ice." Polar Biology 30: 1557–1563. - Horner, R. (1985) Sea Ice Biota. CRC Press. - Melnikov, I. (1997) The Arctic Sea Ice Ecosystem. Gordon and Breach Science Publishers. - Thomas, D., Dieckmann, G. (2003) Sea Ice. An Introduction to its Physics, Chemistry, Biology and Geology. Blackwell. - Butt, F. A.; H. Drange; A. Elverhoi; O. H. Ottera; A. Solheim (2002). "The Sensitivity of the North Atlantic Arctic Climate System to Isostatic Elevation Changes, Freshwater and Solar Forcings" (PDF) 21 (14-15). Quaternary Science Reviews: 1643–1660. OCLC 108566094. - Worsley, Thomas R.; Yvonne Herman (1980-10-17). "Episodic Ice-Free Arctic Ocean in Pliocene and Pleistocene Time: Calcareous Nannofossil Evidence". Science 210 (4467): 323–325. doi:10.1126/science.210.4467.323. PMID 17796050. - Clark, David L. (1982). "The Arctic Ocean and Post-Jurassic Paleoclimatology". Climate in Earth History: Studies in Geophysics. Washington D.C.: The National Academies Press. p. 133. ISBN 0-309-03329-2. - Melnokov, I. A. (1997). The Arctic Sea Ice Ecosystem (pdf). Google Book Search (CRC Press). p. 172. ISBN 2-919875-04-3. - Mikkelsen, Naja et al. "Radical past climatic changes in the Arctic Ocean and a geophysical signature of the Lomonosov Ridge north of Greenland" (2004).
https://en.wikipedia.org/wiki/Arctic_sea_ice_ecology_and_history
4.1875
Introduction
The element uranium is the heaviest atom found in nature, and is the only element with all its natural isotopes radioactive. Since an isotope differs from other isotopes of the same element in having a different relative atomic mass (r.a.m.), or atomic weight, it may be defined by the name of the element to which it belongs and the r.a.m. of the isotope. Thus uranium-238, which is the most common uranium isotope, has 92 protons (the atomic number of uranium is 92) and 146 neutrons in each atomic nucleus and therefore has a r.a.m. of 238. Uranium-235, the next most common uranium isotope, has three fewer neutrons in each nucleus. These are the two most common uranium isotopes. They are both unstable and therefore decay radioactively into other elements by emitting charged particles from the nuclei of their atoms. Instead of decaying into stable atoms, they decay into atoms which are themselves radioactive. These then decay into a different atom and the process is repeated until a stable atom is reached. A system where atoms decay through a series of elements in this way is called a radioactive series.
The natural radioactive series
There are three entirely separate radioactive series found in nature, the longest of which begins with the decay of uranium-238 and is known as the naturally radioactive uranium-238 series. The second naturally occurring radioactive series originates with the second most commonly occurring isotope of uranium, uranium-235, and this is called the uranium-235 series. The third series is the naturally radioactive thorium-232 series. These series arise because of the loss of either a beta particle or an alpha particle from an atom, a process which changes the charge on the nucleus of the decaying atom. When a beta particle is lost, it means that the charge on the nucleus (and therefore the atomic number of the atom) is increased by one. When an alpha particle is emitted the atomic number of the atom is reduced by two and its atomic weight by four. The newly-created atom then emits a particle to become an atom of a different element which then itself decays into yet another different atom. Most often, all atoms of a particular radioactive isotope of an element emit either all alpha particles and no beta particles, or all beta particles and no alpha particles. Some members of the radioactive series have, however, some atoms which emit alpha particles and some which emit beta particles. (No single atom emits both alpha and beta particles.) In these cases, a branch occurs in the radioactive series, with some of the decaying atoms converted into another element. For example, in the uranium-235 series, actinium-227 decays either into francium-223 by loss of an alpha particle, or into thorium-227 by loss of a beta particle. Both of these then decay, the francium by loss of a beta particle, and the thorium by loss of an alpha particle, into radium-223. The radium-223 produced by either route is identical, and the branch in the series is thus closed. The radium then decays to produce the next member in the series, radon-219. The complete details of the three naturally occurring series are as in the diagrams, with the branchings which occur in the series shown. It will be noticed that all of the natural series terminate with isotopes of lead, i.e., lead-206, lead-207, and lead-208. It happens that all of these are stable isotopes of the element and no further radioactive emission therefore takes place.
The times taken (as measured by the half-life) for the various radionuclides in the radioactive series to decay differ widely. For example, the time taken for half an amount of uranium-238 to decay to thorium-234, which is the first step in the uranium-238 series, is 4.5 × 10^9 (4,500 million) years. By contrast, the half-life of thorium-234 is only 24.5 days, and the half-life of the next member in the series, protactinium-234, a mere 1.14 minutes.
An artificial series
After the discovery of the three radioactive series of naturally found isotopes, many physicists searched in the hope of finding more series. In 1940 new elements were artificially made which had atomic numbers greater than 92, the atomic number of uranium. These were called transuranic elements. The transuranic elements neptunium (atomic number 93) and plutonium (atomic number 94) were the first to be produced and isolated. In 1945 others were discovered, including americium (atomic number 95). These three elements are radioactive in any of their isotopic forms, and since they are produced artificially, they are said to be artificially radioactive. Later it was realized that a fourth radioactive series does exist – the neptunium radioactive series – but it cannot be called a naturally radioactive series since it contains some transuranic elements. The neptunium radioactive series (so-called because neptunium-237 is the most stable radioactive isotope in the series) is shown in the bottom diagram. As with the naturally radioactive series, alpha particle and beta particle emission causes the decay of the radioactive isotopes. Branching again occurs with a bismuth isotope, in this case where some atoms of bismuth-213 decay by alpha particle emission to thallium-209 atoms, while others decay by beta particle emission to atoms of polonium-213. Thallium-209, by beta particle emission, and polonium-213 by alpha particle emission, both decay to identical atoms of lead-209, which decays to bismuth-209, a stable isotope, which forms the end of the neptunium radioactive series. As with the naturally radioactive series, a quantity of any isotope in the series will eventually decay to become a similar quantity of the stable isotope – in this case bismuth-209. Most of the natural radioactive isotopes take their places in a radioactive series. Only seven of the naturally found radioactive isotopes do not appear in one of the three naturally radioactive series. Forty-six isotopes do appear in these three series – all of which are isotopes of the elements with atomic numbers between 81 and 92. In contrast, most of the artificial radioactive isotopes do not belong to a radioactive series.
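As a minimal sketch of the bookkeeping described above (the function names are my own; the chain fragment is the published start of the uranium-238 series, and the half-life figure is the 4.5 × 10^9 years quoted in the text):

```python
# Sketch of radioactive-series bookkeeping: how alpha and beta emission change
# (atomic number Z, mass number A), plus the half-life rule N/N0 = 0.5**(t/T).

def decay(Z, A, particle):
    """Apply one decay step: 'alpha' -> (Z - 2, A - 4), 'beta' -> (Z + 1, A)."""
    if particle == "alpha":
        return Z - 2, A - 4
    if particle == "beta":
        return Z + 1, A
    raise ValueError("particle must be 'alpha' or 'beta'")

def fraction_remaining(t, half_life):
    """Fraction of the original atoms left after time t (same units as half_life)."""
    return 0.5 ** (t / half_life)

if __name__ == "__main__":
    # First steps of the uranium-238 series: U-238 -> Th-234 -> Pa-234 -> U-234.
    nuclide = (92, 238)
    for particle in ("alpha", "beta", "beta"):
        nuclide = decay(*nuclide, particle)
        print(f"after {particle} emission: Z = {nuclide[0]}, A = {nuclide[1]}")
    # After 9 billion years (two half-lives of U-238), a quarter of it remains.
    print(f"U-238 left after 9.0e9 years: {fraction_remaining(9.0e9, 4.5e9):.2f}")
```

Running it walks uranium-238 through thorium-234 and protactinium-234 to uranium-234, and shows that a quarter of the original uranium-238 survives after two half-lives.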
http://www.daviddarling.info/encyclopedia/R/radioactive_series.html
4.28125
Explore! There are resources out there that can inform you as you work with all learners in your classroom. As you find resources that you think should be added to this list, please send them to Linda Gamble at email@example.com and we will post them on this resource page. Image courtesy of [duron123] / FreeDigitalPhotos.net Consider the response of the children http://www.dailydot.com/lol/kids-react-cheerios-commercial-race/ Did you know that there was a controversy over a Cheerios ad? Watch this video and consider your own response. Why do you think it was controversial? What does this controversy say about children? How could you use this in your classroom? Choosing to Participate Poster Exhibit http://www.tolerance.org/choosing-to-participate A Smithsonian Institution Traveling Exhibit and online resource for teachers and administrators. This page includes resources and downloadable posters and other educator materials. FREE and useful to encourage dialogue. From their website: “As our world grows increasingly interconnected, it’s more important than ever to inspire people of all ages to create positive social change. When we stop to consider the consequences of our everyday choices—to discover how “little things are big”—we can make a huge difference.” Books for Classroom Use: Skipping Stones A listing of great books to use with children and youth about topics related to diversity.You may also find interesting information at Skipping Stones: An international multicultural magazine.http://www.skippingstones.org/ Film Short: “Immersion” http://learning.snagfilms.com/film/immersion This 12-minute film is thought-provoking and an important resource that can be used to help learners and to think about immersion strategies and the experience of ELLs. YouTube: Strategies for teaching ESL students in your classroom. This is a great introduction and is only 7 minutes long. http://www.youtube.com/watch?v=_Ub0NJ6UClI&feature=related YouTube: ESL Struggles and Strategies. Another good look at what it is like to be an English Language Learner. Just under 7 minutes long and begins by asking if English should be our “official” language. http://www.youtube.com/watch?v=-bWU238PymM&NR=1 YouTube: Scaffolding Language Skills. A short introduction to strategies you may already know but haven’t thought about in terms of working with English Language Learners. Just under 2 minutes long. http://www.youtube.com/watch?v=lmJoOjLQM3U&feature=related Families of the World: a website that has clips from a series of PBS documentaries. The documentaries provide insight into the world of families and children who live in different cultures. The clips are interesting to watch and a teacher might use this as a resource if a family has ties to a different part of the world or if a class is studying a specific country or region. http://www.familiesoftheworld.com/ August to June: the story of a different approach to schools and learning. A clip from a film that demonstrates there are different ways to educate. http://www.augusttojune.com/ A Refugee Camp: Doctors without Borders site which illustrates the life of a refugee leaving home through pictures and words. http://www.doctorswithoutborders.org/events/refugeecamp/guide/index.cfm Walking the “Only” Road: Psychological Tight Spaces by Dr. Shawn Arango Ricks, past-president of the Southern Organization for Human Services (SOHS). This is a very thought provoking piece about concepts such as what does it mean to live in a “post-racial” or “color-blind” America. 
Image courtesy of [suphakit73] / FreeDigitalPhotos.net Teaching Tolerance – A new text selection tool from Teaching Tolerance. According to their information it can help you select culturally responsive texts and meet the requirements of the Common Core. TED-Ed —A guide to creating “FLIPPED” lessons and online resources. http://ed.ted.com/ Five Things High School Students Should Know About Race Understanding Language –– This article provides a solid, thoughtful overview of the knowledge that students should learn about race. Extremely well written article from the Harvard Education Letter. http://www.hepg.org/hel/article/553#home Understanding Language: Language, Literacy and Learning in the Content Area – This website has resources to help teachers prepare to support ELL students meeting the Common Core State Standards. http://ell.stanford.edu/ GLSEN – Here you’ll find lesson plans, curricular tools, information on teacher training programs and more. http://glsen.org/educate/resources Teaching in Racially Diverse Classrooms – a tip-sheet from the Derek Bok Center for Teaching and Learning at Harvard University. Thought provoking! http://isites.harvard.edu/fs/html/icb.topic58474/TFTrace.html Multicultural Education Pavilion – many resources to use in teaching, learning, and professional development. http://www.edchange.org/multicultural/index.html Teaching Tolerance – Southern Poverty Law Center – tools to help teachers guide students as they learn to live in their diverse world. An outstanding resource! The Teaching Diverse Students Initiative – Southern Poverty Law Center – tools, case studies and learning resources to explore and consider diversity in the classroom. Project Implicit – An opportunity to sample your conscious and unconscious bias and learn more about yourself. everythingESL.net – ESL lesson plan samples, teaching tips, resources, ESL national news. http://www.everythingesl.net Beatrice Moore Luchin 2011 Maine workshops “How Do I Know What they Know and Understand? Instant Math Assessment Techniques” – April 29, 2010 “How Do I Know What They Know? Building Your Toolkit of Assessment Strategies” PowerPoint in pdf | Handouts from the workshop Colorin Colorado –a tremendous resource for working with Spanish speaking families but also useful for other ELL families http://www.colorincolorado.org/ ePals Global Community – A k-12 Social Learning Network. Lots of great opportunities to expand your students’ horizons. http://www.epals.com/ Discover Education – many free lesson plans including some specifically targeting diversity issues. http://www.discoveryeducation.com/teachers/ Understanding Language – Stanford University website about working with English Language Learners that includes professional papers and teaching resources that will be updated frequently. http://ell.stanford.edu/ Image courtesy of [stockimages] / FreeDigitalPhotos.net Southern Poverty Law Commonly Held Beliefs Survey http://www.tolerance.org/supplement/test-yourself-hidden-bias This is an important first step! Take this quiz and check in with yourself about your beliefs. Mass Customized Learning: One teacher’s vision to help you understand MCL. Imagine Learning – Archived Webinars about education with an emphasis on working with English Language Learners. High quality professional development at your desk! http://www.imaginelearning.com/webinars/#AcademicSuccessForELs Preparing All Teachers to Meet the Needs of English Language Learners – A comprehensive report from the Center for American Progress. 
http://www.americanprogress.org/issues/education/report/2012/04/30/11372/preparing-all-teachers-to-meet-the-needs-of-english-language-learners/ Choosing to Participate – Educator resources, including a self-paced educator workshop about the importance of choosing to be part of positive social change. This site also includes materials that would be useful in the classroom. http://www.choosingtoparticipate.org/ IRIS Resources – Resources, including case studies, that guide learning about inclusive environments http://iris.peabody.vanderbilt.edu/resources.html Educating English Language Learners: Building Teacher Capacity – A roundtable report that explores what teachers need to know to work with English Language Learners NCELA – The National Clearing House for English Language Acquisition with links to professional development materials http://www.ncela.us/professional-development Multicultural Resources – Maine DHHS Office of Multicultural Affairs Bridging Refugee Youth and Children’s Services (BRYCS) – Information about working with refugee families and youth. http://www.brycs.org/ Image courtesy of [twobee] / FreeDigitalPhotos.net 10 Ways Well-Meaning White Teachers Bring Racism into our Schools: An interesting and thought provoking article. http://everydayfeminism.com/2015/08/10-ways-well-meaning-white-teachers-bring-racism-into-our-schools/ KAHNAcademy: A resource for supplemental learning opportunities. Check this out and think about how it might support and encourage student learning. http://www.khanacademy.org/ Tell Me More: This NPR radio program focuses on issues of interest to all from very diverse perspectives. Of particular interest might be the twitter conference on Education discussed on the October 11th, 2012 program. http://www.npr.org/programs/tell-me-more/ Mom (and Dad’s) View of ADHD: a web page and blog about life with ADHD. http://adhdmomma.com/ Down’s Syndrome Blogs: parents blogging about down’s syndrome: “For parents of children with Down’s Syndrome (also known as Downs Syndrome, Down Syndrome, Trisomy 21, DS, and T21), the world can sometimes seem like a hostile and unsympathetic place. We are cast alternately as demons, for allowing perceived genetically abnormal people to survive, or as saints doing charitable work. We are neither. We are parents. The Color of Life: parenting advice for transracial adoptive families. This site underscores some of the issues and challenges that children and families confront and also includes resources that could be useful in the classroom. https://www.adoptivefamilies.com/category/transracial-adoption/ MamiVerse: a website of parenting advice for Latino families. This site includes good resources for teachers as well as parents. For example, there is an article about great Latino Children’s Books is posted on this website. http://mamiverse.com/ Mahogany Momma’s Black Parenting Blog: a website of parenting advice for African American families. This site includes good resources for teachers as well as parents. For example, there is an article about positive images for black girls posted on this website. http://www.blackparentingblog.com A Parent’s Guide to Raising Multiracial Children: an excerpt from the book Does Anybody Else Look Like Me: A Parent’s Guide to Raising Multiracial Children by Donna Jackson Nakazawa. http://donnajacksonnakazawa.com/does-anybody-else/ See Baby Discriminate: an article that reports the results of a study about young children’s perception of race. 
Fresh Air Fund – Work with children from New York City who are experiencing life outside the city. Literacy Volunteers – Volunteer as a tutor! Training is provided and you can make the difference in a student’s life. Opportunities in Franklin County: http://lvfranklin-somerset.maineadulted.org/, the greater Augusta area: http://lva-augusta.org/ or the Waterville area: http://www.lvwaterville.com/
http://www2.umf.maine.edu/teachereducation/resources-for-pre-service-and-in-service-teachers/diversity-resources/
4
Changes in air temperature, not precipitation, drove the expansion and contraction of glaciers in Africa's Rwenzori Mountains at the height of the last ice age, according to a Dartmouth-led study. The results -- along with a recent Dartmouth-led study that found air temperature also likely influenced the fluctuating size of South America's Quelccaya Ice Cap over the past millennium -- support many scientists' suspicions that today's tropical glaciers are rapidly shrinking primarily because of a warming climate rather than declining snowfall or other factors. The two studies will help scientists to understand the natural variability of past climate and to predict tropical glaciers' response to future global warming. The most recent study, which marks the first time that scientists have used the beryllium-10 surface exposure dating method to chronicle the advance and retreat of Africa's glaciers, appears in the journal Geology. Africa's glaciers, which occur atop the world's highest tropical mountains, are among the most sensitive components of the world's frozen regions, but the climatic controls that influence their fluctuations are not fully understood. Dartmouth glacial geomorphologist Meredith Kelly and her team used the beryllium-10 method to determine the ages of quartz-rich boulders atop moraines in the Rwenzori Mountains on the border of Uganda and the Democratic Republic of Congo. These mountains have the most extensive glacial and moraine systems in Africa. Moraines are ridges of sediments that mark the past positions of glaciers. The results indicate that glaciers in equatorial East Africa advanced between 24,000 and 20,000 years ago at the coldest time of the world's last ice age. A comparison of the moraine ages with nearby climate records indicates that Rwenzori glaciers expanded contemporaneously with regionally dry, cold conditions and retreated when air temperature increased. The results suggest that, on millennial time scales, past fluctuations of Rwenzori glaciers were strongly influenced by air temperature. The study was funded by the National Geographic Society and the National Science Foundation.
https://www.sciencedaily.com/releases/2014/04/140416143309.htm
4.09375
Significance: Confederate currency—produced by the Confederate government and by individual states in the Confederacy—was critical to the South during the U.S. Civil War in its attempts to establish its own union. This currency was to be credited after the Confederacy’s victory but became worthless after its defeat. It later became a collector’s item, fetching prices from a few dollars to tens of thousands of dollars for the rarest denominations. The Confederate government began to issue currency in April of 1861, the month the Civil War began. The main printing press for central government- issued currency was in Richmond, Virginia, but currency was also printed by states, local municipalities, and merchants. Paper money was printed as well as coins, and both included symbolic representations of the Old South, including images of historical figures, military technology, and slavery. Because it was philosophically opposed to federalism, the Confederate government was not able to tax its citizens sufficiently to prepare for the war effort. In addition, European markets were gaining access to alternative sources of cotton, such as India and Egypt. As a result, American cotton was selling for lower prices overseas, exacerbating the South’s financial problems. Thus, Confederate currency was sure to experience high inflation should the South struggle in the war. Counterfeiting of Confederate currency was common. Since Confederate currency was printed at a number of different venues and by different levels of government, Northern counterfeiters were easily able to buy Southern goods with replica money. The resulting increase in the amount of currency in circulation contributed to the high inflation that began to mount as the tide of the war turned in the North’s favor. Confederate money was relatively valuable when the Civil War began. The gold dollar was the standard of value at the time, and a Confederate dollar was worth as much as 95 cents against the gold dollar. Shortly after the Battle of Gettysburg (1863), as the likelihood of a Southern victory decreased, the value of a Confederate dollar dropped to roughly 33 cents against the gold dollar. Investors shied away from trading for currency that could become worthless if the South lost the war. Instead, they began to accumulate goods and services that would be redeemable regardless of the war’s outcome. At the end of the war, the value of a Confederate dollar was about one penny against the gold dollar, and the currency ceased to be traded soon thereafter. Shull, Hugh. Guide Book of Southern States Currency. Florence, Ala.: Whitman, 2006. Slabaugh, Arlie. Confederate States Paper Money. Lola, Wis.: Krause, 1998. Tremmel, George. Confederate Currency of the Confederate States of America. Jefferson, N.C.: McFarland, 2003. See also: U.S. Mint
http://ebusinessinusa.com/2400-confederate-currency.html
4.03125
- Our Services - Events and Training November 15, 2010 In traditional mass walls, e.g. a wall of solid masonry or earth, the resistance to rain penetration was only one aspect of enclosure performance (Photograph 1). Heat flow was also controlled by the thermal storage capacity of the massive walls, not just by virtue of the materials' thermal conductivity like the specialized insulation layers commonly used in modern building assemblies. The sun's heat was absorbed, stored, and slowly released to the interior and exterior, effectively damping typical daily fluctuations and thus increasing comfort. Vapor and airflow were also controlled by the mass of the wall. It is little wonder that such walls were used for thousands of years. Built of only brick and mortar, the wall carried all structural loads as well as performing as an acceptable enclosure. The small unit size of the brick allowed for planning flexibility so that such walls could be used for most purposes. Because mass brick walls allow a considerable amount of heat to pass through, the exterior surface temperature remained elevated throughout the winter and thus freeze-thaw durability and interstitial condensation problems were avoided. Compared to the poor control of airflow through windows and doors, the walls seemed airtight to the occupants. If the wall was sheltered by topography, other buildings, and roof overhangs, the amount of rainwater reaching the surface was so little that the wall could control this water before it reached the inner surface and caused damage. The biggest drawback to such wall systems was the large amounts of material and labor needed to construct them and the poor thermal control. With the change from low-rise buildings with solid load-bearing walls to taller framed buildings, the dead weight and cost of traditional mass wall systems became prohibitive. Chicago's 16-storey Monadnock Building, constructed with 6-foot thick base walls between 1889 and 1891, pushed to the limit the load-bearing mass masonry wall (Photograph 2). Taller buildings with mass walls were practically impossible with the combination of high dead weight and low compressive strength. A large percentage of valuable ground floor area was lost to load-bearing walls and the resistance to seismic loads was poor. Today, poor control of rain penetration, heat, air, and vapor flow can be added to the list of drawbacks. Photograph 2: Monadnock Building (www.monadnockbuilding.com) The industrial revolution and the scientific knowledge and technical confidence it provided resulted in attempts to produce perfect barrier wall systems. These systems very often fail to be perfect barriers because of defects in design, construction, or materials although they may still perform as required. While a unit of sealed glazing will not fail to resist rain (unless the glass cracks) the joint between the glazing and the window frame may. Similarly, metal panel systems developed in the post-war period rarely failed, but the joints and interfaces did. These examples reinforce the importance of considering the wall as a three-dimensional assemblage including joints. In many manufactured curtain walls, a small amount of rain penetration will cause no harm and either goes unnoticed or a drainage system is incorporated to deal with these small failures. Corbusier is largely credited with popularizing the idea of separating the primary structural system from the enclosure system. 
Although the concept itself was well-developed by his time, the Domino house project made this approach desirable (Photograph 3). However, it is only in recent decades that the separation of the enclosure into layers and sub-systems for specific functions (support and control) has become more widely accepted and actually applied to building enclosures. Photograph 3: Le Corbusier's Domino House (www.usc.edu) The current best practice in building enclosure design emphasizes the use of drainage as a rain control strategy, and demands a well-defined rain control layer, air control layer, and unbroken thermal control layer. Building science research and field experience over the last two decades have demonstrated how powerful the drained approach to rain control can be. However, other changes have also occurred over this time, specifically the use of air barriers, and steadily increasing insulation requirements. The increase in airtightness and thermal control (insulation, white roofs, radiant barriers) reduced the energy flow across the enclosure available to dry this remaining moisture. Hence, the potential duration of wetting for materials in high-performance enclosures is increasing, and this can cause durability problems. Drainage does not remove all water that penetrates the cladding, as any rainwater absorbed by materials or clinging to surfaces can only be removed by evaporation. Similarly, air leakage condensation, which is now more likely in frequency, and severe in intensity because of higher levels of thermal insulation and higher cold-weather interior humidity levels (themselves the result of increased airtightness), cannot be dried as quickly as in the past. This lack of drying capacity, when combined with changing materials (masonry to gypsum sheathing, brick veneers to metal panels) and the substitution of traditional materials with often less durable modern ones, has increased the probability of moisture-related enclosure failures. What is needed is a re-evaluation of how we assemble enclosures, and improvements in ensuring continuity of the control functions. As we change the insulation levels, airtightness, and materials, we need to consider changes in how materials are assembled in enclosures. The steel-stud framed walls of the 80’s and 90’s cannot continue to be built in the same manner in the 2010’s and 2020’s. It is now clear that such walls did not provide continuous thermal control. Better rain control is a critical part of the needed change, as are increases in tightness, better control of thermal bridging, and a protection of moisture-sensitive materials from extreme temperatures and prolonged wetting. The “perfect wall” approach (Figure 1) described above provides all of these improvements. Once thought of as an ideal enclosure assembly that would rarely be built, it is becoming the new standard for durable, energy-efficient, high-performance enclosures. Figure 1: The Perfect Wall — see also BSI-001: The Perfect Wall
http://buildingscience.com/documents/insights/bsi-042-historical-development-building-enclosure?topic=resources/freeze-thaw-damage
4.09375
Microeconomics: A Brief History by Marc Davis As early as the 18th century, economists were studying the decision-making processes of consumers, a principal concern of microeconomics. Swiss mathematician Nicholas Bernoulli (1695-1726) proposed an extensive theory of how consumers make their buying choices in what was perhaps the first written explanation of how this often mysterious and always complex process works. According to Bernoulli's theory, consumers make buying decisions based on the expected results of their purchases. Consumers are assumed to be rational thinkers who are able to forecast with reasonable accuracy the hopefully satisfactory consequences of what they buy. They select to purchase, among the choices available, the product or service they believe will provide maximum satisfaction or well-being. For some 200 years beginning in the mid-1700s, the dominant economic theory was Adam Smith's laissez-faire (French for "leave alone" or "let do") approach to the economy, which advocated a government hands-off policy regarding free markets and the machinery of capitalism. The laissez-faire theory argues that an economy functions best when the "invisible hand" of self-interest is allowed to operate freely, without government intervention. Smith and Marshall Scottish-born Smith (1723-1790) wrote in his book, "Wealth of Nations," that if the government does not tamper with the economy, a nation's resources will be most efficiently used, free-market problems will correct themselves and a country's welfare and best interests will be served. (For further reading on Adam Smith, see Adam Smith: the Father of Economics.) Smith's views on the economy prevailed through two centuries, but in the late 19th and early 20th century, the ideas of Alfred Marshall (1842-1924), a London-born economist, had a major impact on economic thought. In Marshall's book, "Principles of Economics, Vol. 1," published in 1890, he proposed, as Bernoulli had nearly two centuries earlier, the study of consumer decision making. Marshall proposed a new idea as well - the study of specific, individual markets and firms, as a means of understanding the dynamics of economics. Marshall also formulated the concepts of consumer utility, price elasticity of demand and the demand curve, all of which will be discussed in the following chapter. At the time of Marshall's death, John Maynard Keynes (1883-1946), who would become the most influential economist of the 20th century starting in the 1930s, was already at work on his revolutionary ideas about government management of the economy. Born in Cambridge, England, Keynes made contributions to economic theory that have guided the thinking and policy-making of central bankers and government economists for decades, both globally and in the U.S. (To learn more, see Can Keynesian Economics Reduce Boom-Bust Cycles?) So much of U.S. monetary policy, the setting of key interest rates, government spending to stimulate the economy, support of private enterprise through various measures, tax policy and government borrowing through the issuance of Treasury bonds, bills and notes, have been influenced by the revolutionary ideas of Keynes, which he introduced in his books and essays. What all these concepts had in common was their advocacy of government management of the economy. Keynes advocated government intervention into free markets and into the general economy when market crises warranted, an unprecedented idea when proposed during the Great Depression. 
(For more on this read, What Caused The Great Depression?) Government spending to stimulate an economy, a Keynesian idea, was used during the Depression to put unemployed people to work, thus providing cash to millions of consumers to buy the country's products and services. Most of Keynes' views were the exact opposites of Adam Smith's. An economy, for optimum functioning, must be managed by government, Keynes wrote. (For related reading, see The Federal Reserve.) Thus was born the modern science of macroeconomics – the big picture view of the economy – evolving in large part from what came to be called Keynesian economic theory. These are among the tools of microeconomics, and their principles, along with others, are still employed today by economists who specialize in this area. Keynes' policies, to varying degrees, have been, and continue to be, employed with generally successful results worldwide in almost all modern capitalist economies. If and when economic problems occur, many economists often attribute them to some misapplication or non-application of a Keynesian principle. While Keynesian economic theory was being applied in most of the world's major economies, the new concept of microeconomics, pioneered by Marshall, was also taking hold in economic circles. The study of smaller, more focused aspects of the economy, which previously were not given major importance, was fast becoming an integral part of the entire economic picture. (For further information on past economists, read How Influential Economists Changed Our History.) Microeconomics had practical appeal to economists because it sought to understand the most basic machinery of an economic system: consumer decision-making and spending patterns, and the decision-making processes of individual businesses. The study of consumer decision-making reveals how the price of products and services affects demand, how consumer satisfaction – although not precisely measurable – works in the decision-making process, and provides useful information to businesses selling products and services to these consumers. The decision-making processes of a business would include how much to make of a certain product and how to price these products to compete in the marketplace against other similar products. The same decision-making dynamic is true of any business that sells services rather than products. Although economics is a broad continuum of all the factors - both large and small - that make up an economy, microeconomics does not take into direct account what macroeconomics considers. Macroeconomics is concerned principally with government spending, personal income taxes, corporate taxes, capital gains taxes and other taxes; the key interest rates set by the Federal Reserve, the banking system and other economic factors such as consumer confidence, unemployment or gross national product, which may influence the entire economy. (For more on macroeconomics read, Macroeconomic Analysis.) Economics, like all sciences, is continually evolving, with new ideas being introduced regularly, and old ideas being refined, revised, and rethought. Some 200 years after Bernoulli's theory was first introduced, it was expanded upon by Hungarian John von Neumann (1903-1957), and Austrian Oskar Morgenstern (1920-1976). A more detailed and nuanced theory than Bernoulli's and Marshall's emerged from their collaboration, which they called utility theory. The theory was elaborated in their book, "Theory of Games and Economic Behavior," published in 1944. 
In the 1950s, Herbert A. Simon (1916-2001), a 1978 Nobel Memorial Prize-winner in economics, introduced a simpler theory of consumer behavior called "satisficing". The satisficing theory contends that when consumers find what they want, they then abandon the quest and decision-making processes, and buy the product or service which seems to them as "good enough." (For more on the Nobel Memorial Prize, read Nobel Winners Are Economic Prizes.) And so the history of microeconomics continues to unfold, awaiting perhaps another Bernoulli, Adam Smith, Alfred Marshall, or John Maynard Keynes, to provide it with some new, revolutionary ideas.
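The contrast between utility maximization and Simon's satisficing can be made concrete with a short sketch. This is an illustration only, not drawn from the article: the product names and utility scores below are invented, and the aspiration level of 7 is an arbitrary "good enough" threshold.

```python
# Sketch (illustrative only): maximizing versus Simon's "satisficing".
# A maximizer inspects every option and takes the best; a satisficer stops
# at the first option that is "good enough".
def maximize(options):
    """Return the option with the highest utility after checking them all."""
    return max(options, key=lambda o: o["utility"])

def satisfice(options, good_enough):
    """Return the first option whose utility meets the aspiration level."""
    for option in options:
        if option["utility"] >= good_enough:
            return option
    return None   # nothing met the threshold

# Hypothetical products with made-up utility scores.
products = [
    {"name": "brand A", "utility": 6},
    {"name": "brand B", "utility": 8},
    {"name": "brand C", "utility": 9},
]
print("maximizer buys:", maximize(products)["name"])        # brand C
print("satisficer buys:", satisfice(products, 7)["name"])   # brand B
```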
http://www.investopedia.com/university/microeconomics/microeconomics1.asp
4.25
The floor of a legislature or chamber is the place where members sit and make speeches. When a person is speaking there formally, they are said to have the floor. The House of Commons and the House of Lords In the United Kingdom, the U.S. House of Representatives and the U.S. Senate all have "floors" with established procedures and protocols. Activity on the floor of a council or legislature, such as debate, may be contrasted with meetings and discussion which takes place in committee, for which there are often separate committee rooms. Some actions, such as the overturning of an executive veto, may only be taken on the floor. In the United Kingdom's House of Commons a rectangular configuration is used with the government ministers and their party sitting on the right of the presiding Speaker and the opposing parties sitting on the benches opposite. Members are not permitted to speak between the red lines on the floor which mark the boundaries of each side. These are traditionally two sword lengths apart to mitigate the possibility of physical conflict. If a member changes allegiance between the two sides, they are said to cross the floor. Only members and the essential officers of the house such as the clerks are permitted upon the floor while parliament is in session. The two important debating floors of the U.S. Federal government are in the House of Representatives and the Senate. The rules of procedure of both floors have evolved to change the balance of power and decision making between the floors and the committees. Both floors were publicly televised by 1986. The procedures for passing legislation are quite varied with differing degrees of party, committee and conference involvement. In general, during the late 20th century, the power of the floors increased and the number of amendments made on the floor increased significantly. The procedures used upon legislative floors are based upon standard works which include - Erskine May: Parliamentary Practice, which was written for the UK House of Commons - Jefferson's Manual, which was written for the US Senate and was incorporated into the rules for the US House of Representatives. On the other hand, the following work was initially based on the procedures used upon legislative floors: - Robert's Rules of Order, which was based upon the rules of the US House of Representatives and is intended for use by ordinary bodies and societies such as church meetings. - Floor of the United States House of Representatives - Plenary session - Floor leader - Recognition (parliamentary procedure) - assignment of the floor - Robert J. McKeever, Brief Introduction to US Politics - David M. Olson, The legislative process: a comparative approach, p. 350 - William McKay, Charles W. Johnson, Parliament and Congress: Representation and Scrutiny in the Twenty-first Century - Steven S. Smith, Call to order: floor politics in the House and Senate
https://en.wikipedia.org/wiki/Floor_(legislative)
4.1875
Students encounter the concept of scarcity in their daily tasks but have little comprehension as to its meaning or how to deal with the concept of scarcity. Scarcity is really about knowing that often life is 'This OR That' not 'This AND That'. This lesson plan for students in grades K-2 and 3-5 introduces the concept of scarcity by illustrating how time is finite and how life involves a series of choices. Specifically, this lesson teaches students about scarcity and choice: Scarcity means we all have to make choices and all choices involve "costs." Not only do you have to make a choice every minute of the day because of scarcity, but, when making a choice, you have to give up something. This cost is called oppportunity cost. Opportunity cost is defined as the value of the next best thing you would have chosen. It is not the value of all things you could have chosen. Choice gives us 'benefits' and choice gives us 'costs'. Not only do you have to make a choice every minute of the day, because of scarcity, but also, when making a choice, you have to give up something of value (opportunity cost). To be asked to make a choice between 'this toy OR that toy' is difficult for students who want every toy. A goal in life for each of us is to look at our wants, determine our opportunities, and try and make the best choices by weighing the benefits and costs. The introduction to this lesson is a brief online story about a little girl’s visit to a pet store with her father. She considers several pets before choosing a “cute and cuddly” dog. Students are reminded that pet owners are responsible for keeping their pets safe, healthy and happy. A discussion of a pet owners desire to provide the best for their pets leads to an exploration of people’s wants. The activities that follow challenge students to explore the wants of a pet owner and their desire to provide the best for their pet fish, and then the wants of a person. The students learn that the ability to discover their wants will help them establish priorities when they are faced with scarcity. During the evaluation process, students identify some of their personal wants. As a class, they discuss why some choices are the same and others are different. They take the discussion a step further exploring how their wants compare with those of siblings and adults in their lives. They discover that age, lifestyle, likes (tastes and preferences) and what one views as important (values) help to explain the differences. When individuals produce goods or services, they normally trade (exchange) most of them to obtain other more desired goods or services. In doing so, individuals are immediately confronted with the problem of scarcity - as consumers they have many different goods or services to choose from, but limited income (from their own production) available to obtain the goods and services. Scarcity dictates that consumers must choose which goods and services they wish to purchase. When consumers purchase one good or service, they are giving up the chance to purchase another. The best single alternative not chosen is their opportunity cost. Since a consumer choice always involves alternatives, every consumer choice has an opportunity cost. The following lessons come from the Council for Economic Education's library of publications. Clicking the publication title or image will take you to the Council for Economic Education Store for more detailed information. 
Designed primarily for elementary and middle school students, each of the 15 lessons in this guide introduces an economics concept through activities with modeling clay. 17 out of 17 lessons from this publication relate to this EconEdLink lesson. This publication contains 16 stories that complement the K-2 Student Storybook. Specific to grades K-2 are a variety of activities, including making coins out of salt dough or cookie dough; a song that teaches students about opportunity cost and decisions; and a game in which students learn the importance of savings. 9 out of 18 lessons from this publication relate to this EconEdLink lesson. This interdisciplinary curriculum guide helps teachers introduce their students to economics using popular children's stories. 8 out of 29 lessons from this publication relate to this EconEdLink lesson.
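Returning to the opportunity-cost definition given earlier in this overview, a brief sketch may help make it concrete. It is not part of any Council for Economic Education material; the options and the 1-10 values assigned to them are invented for illustration.

```python
# Illustrative sketch: opportunity cost is the value of the single best
# alternative given up when a choice is made -- not the sum of all alternatives.
def best_choice_and_opportunity_cost(valuations):
    """valuations: dict mapping each option to the value the chooser places on it."""
    # Rank options from most valued to least valued.
    ranked = sorted(valuations.items(), key=lambda item: item[1], reverse=True)
    chosen, _ = ranked[0]              # the option actually picked
    next_best, next_value = ranked[1]  # the best alternative not chosen
    return chosen, next_best, next_value

# A child values three ways to spend one free hour (scale of 1-10).
options = {"play soccer": 9, "read a comic": 7, "watch TV": 5}
chosen, next_best, cost = best_choice_and_opportunity_cost(options)
print(f"Chosen: {chosen}; opportunity cost = '{next_best}' (value {cost})")
```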
http://www.econedlink.org/economic-standards/EconEdLink-related-publications.php?lid=738
4
Don't forget to look at the how to guide in the welcome lounge and help forum. Discussion in 'Early Years' started by tinyears, Jan 22, 2008. We talk about 2D shapes being flat (hands together) and 3D shapes being fat (hands wide apart). I introduced 3d shapes by "blowing up" 2d shapes in my magic bag. See the link below for an explanation: Of course, the problem with the plastic 2d shapes that we all use at school for sorting etc, is that they are really very thin 3d shapes. I'm not sure how you get around that problem. We look at 3D shapes and call them '3D shapes' and then ask what the difference is between 3D and Flat (as others have said). Then we look for all the 2D shapes in 3D shapes eg cylinder = 2 circles + rectangle .. they could be 'open' so no circles!! We do 'Transformation of Shape' activities ie "Change this rectangle piece of card into a 3D cylinder" etc via problem solving activities. Through transformation (hands-on) the kids seem to get the idea. The more you practice, the more they get it!! This is one of my 'bugbears'. I don't believe we should use 'flats' to teach 2D shape at all. Flats ARE 3D shapes and it is totally flawed to use them to teach 2D shape and the name of 2D shapes. Why don't we just stick to what is correct in the first place? You can easily use resources which have 'drawn' 2D shapes on. You can explain easily that we can hold 3D shapes - that they are solid. You can talk about 'faces' and 'edges' at an early stage and use terms like the cylinder has two 'cicular' faces and so on. You can show that you can hold the paper or card that the 2D shapes are drawn on but that you cannot hold the 2D shapes themselves. You can show that flats are not flat at all because if you make a pile of them, they have height. It is the easiest thing in the world to teach 2D and 3D shape when they are taught together so that you can point out the differences in meaning and definition. It is just a mess when we try to use flats and other 3D shapes to teach the names and understanding of 2D shapes. But - to try this idea out of 'correctness', you have to be prepared to disregard some of the resources and guidance that exists already. However, don't I always encourage teachers to be discerning, thinking, challenging and brave? Go for it. 'cicular' - try "circular"! This is one of my bugbears too. I was told that I was "getting philosophical" when I pointed out that the circle I had drawn on a sheet of paperwas 2D because it didn't exist on the other side. I think the teacher who said this browses this site. I'd like to reassure her that I have the greatest respect for her talent and subject knowledge... But I think I'm right on this. We can all do one thing right from the start by using the word face. Little ones accept that word with no problem, so we can use it fearlessly. I say 2D shapes are like the ones we draw but can't really hold in our hands and 3D are the solid ones we can hold. I wonder if you explore interactive white board resources for 2D and 3D shapes at secondary level you will find something that illustrates your points nicely? Sorry not to be more specific than this but I think there's probably something on mymaths etc that would do the job nicely. Also while you are at it, my other bugbear is talking about circles having one curved side. With an interactive white board resource you can show so nicely how if you draw a regular polygon and keep on increasing the number if sides you get closer and closer to a circle. 
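A quick aside on the last post's polygon-to-circle point: the claim can be checked numerically. The sketch below is an added illustration (not from the thread); it computes the perimeter of a regular polygon inscribed in a circle and shows it approaching the circle's circumference as the number of sides grows.

```python
# As a regular polygon gains sides, its perimeter approaches the circumference
# of the circle it is inscribed in.
import math

radius = 1.0
circumference = 2 * math.pi * radius

for n in (3, 4, 6, 12, 24, 96, 1000):
    # Perimeter of a regular n-gon inscribed in a circle of the given radius.
    perimeter = 2 * n * radius * math.sin(math.pi / n)
    print(f"{n:>5} sides: perimeter = {perimeter:.6f} "
          f"(circle = {circumference:.6f})")
```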
I like to use the wonderful bit in Joyce's Portrait of an Artist as a Young Man in which eternity is described by a priest in terms of a bird taking a grain of sand and building a mountain...and thebn... ...but with older children, of course.. Mystery10, I wonder why you're so worried when you're clearly doing everything yoiu can for your daughter. And I can't resist pointing out that you;ve used the word nicely three times in your post. Do you think we're all duffers? i'd love you all to read the book but know that time takes it toll. I was going to explain this but am currently so downhearted by the fact that the "superhighways" haven't increased general knowledge despite all the hype. never mind "what is a 2D shape or what is a 3D shape". What is a "shape"? Let's start at the beginning, before irreconcilable contradictions have been embedded. are we going with shape is the form of an object? I agree with debbie - this is a bugbear. But where do you stop?? So many concepts are (initially) incorrectly formed at this young age - some are refined and altered, others remain fixed until adulthood. Weight /mass? , most of my staff talk about the scales for weighing, but many of them are actually just pan-balances, a ruler is really a rule.... The list is endless!!!! Anyway - I tell my children that 3D shapes are solid and that means you can pick them up or wrap your hands round them. Some children sometimes point out that you can do this with flat shapes - if they are intelligent enough to realise that, then I go on to explain that they actually are 3D. As a previous poster implied- in EYFS it's just as important to be matching, comparing, pointing out shapes (without necessarily naming) Learning about corners, sides, faces, edges I'm in pre-school though... I'm interested in this one as a maths teacher / reception parent, occasionally seeing things that might explain later problems. I tend to agree about the plastic "2D" shapes - and interactive whiteboards give a good kinaesthetic way of exploring genuinely 2D shapes. My daughter started using a ruler, and explained to me that you start drawing the line from 1 on the ruler. Might be sensible advice when drawing with one of those rulers that starts directly at 0, but probably feeds into the problem of measuring from 1 instead of 0. The word rhombus got replaced in her vocabulary by diamond, whilst at pre-school. (I don't mind her knowing the word diamond, but it did seem to replace rhombus - I suspect somebody must have said "no, it's a diamond", as two words for the same thing isn't usually a problem for her.) Cariadlet and I were chatting about 2d and 3d shapes the other day. We eventually decided that even when you draw a shape it is really 3d, it's just that the thickness is so tiny you can't see it with the naked eye. But there must be some height to a drawn shape - otherwise pencils wouldn't get shorter and shorter the more you use them. That led us onto deciding that the only truly 2d shapes must be imaginary ones that you picture in your head (we hadn't thought about the IWB). Mind you Cariadlet is 8 - I don't think I'd have that conversation in my Reception classroom. You can get a set of 3D shapes that are hollow and have one face missing so that you can stuff a piece of material in there and pull it out like a magician. They are also very useful in the sand and water tray as the children can contrast the cube that fills up with water versus the square that you can hold in your hand but will not fill up no matter how much you pour. 
I tell my Y2s that 2D shapes have height and width but no depth but I'm not sure reception have the concept of height , width or depth to understand and really they just need lots of experience of handling 3D shapes and developing that understanding
https://community.tes.com/threads/how-do-you-explain-the-difference-between-3d-and-2d-shapes-to-children.151254/
4.4375
Trigonometry/For Enthusiasts/Trigonometry Done Rigorously< Trigonometry ||This page started life as an introduction to the most basic concepts of trigonometry, such as measuring an angle. Done properly this is an advanced topic. This page has now been moved/renamed into book 3 to take on that role, and needs considerable revision. Some content that is still here may need to be integrated back into book 1| - 1 Introduction to Angles - 2 Definition of an Angle - 3 Definition of a Triangle - 4 Introduction to Radian Measure Introduction to AnglesEdit An angle is formed when two lines intersect; the point of intersection is called the vertex. We can think of an angle as the wedge-shaped space between the lines where they meet. Note that if both lines are extended through the meeing point, there are in fact four angles. The size of the angle is the degree of rotation between the lines. The more we must rotate one line to meet the other, the larger the angle is. Suppose you wish to measure the angle between two lines exactly so that you can tell a remote friend about it: draw a circle with its center located at the meeting of the two lines, making sure that the circle is small enough to cross both lines, but large enough for you to measure the distance along the circle's edge, the circumference, between the two cross points. Obviously this distance depends on the size of the circle, but as long as you tell your friend both the radius of the circle used, and the length along the circumference, then your friend will be able to reconstruct the angle exactly. Definition of an AngleEdit An angle is determined by rotating a ray about its endpoint. The starting position of the ray is called the initial side of the angle. The ending position of the ray is called the terminal side. The endpoint of the ray is called its vertex. Positive angles are generated by counter-clockwise rotation. Negative angles are generated by clockwise rotation. Consequently an angle has four parts: its vertex, its initial side, its terminal side, and its rotation. An angle is said to be in standard position when it is drawn in a cartesian coordinate system in such a way that its vertex is at the origin and its initial side is the positive x-axis. Definition of a TriangleEdit A triangle is a planar (flat) shape with three straight sides. An angle is formed between each two sides of a triangle, and a triangle has three angles, hence the name tri-angle. So a triangle has three straight sides and three angles. If you give me three lengths, I can only make a triangle from them if the greatest length is less than the sum of the other two. Three lengths that do not make the sides of a triangle are your height, the height of the nearest tree, the distance from the top of the tree to the center of the sun. Angles are not affected by the length of lines: an angle is invariant under transformations of scale, that is: An angle of particular significance is the right angle: the angle at each corner of a square or a rectangle. A rectangle can always be divided into two triangles by drawing a line from one corner of the rectangle to the opposite corner. It is also true that every right-angled triangle is half a rectangle. A rectangle has four sides; they are generally of two different lengths: two long sides and two short sides. (A rectangle with all sides equal is a square.) When we split the rectangle into two right-angled triangles, each triangle has a long side and a short side from the rectangle as well as a copy of the split line. 
So the area of a right-angled triangle is half the area of the rectangle from which it was split. Looking at a right-angled triangle, we can tell what the long and short sides of that rectangle were; they are the sides, the lines, that meet at a right angle. The area of the complete rectangle is the long side times the short side. The area of a right-angled triangle is therefore half as much. Right Triangles and MeasurementEdit It is possible to bisect any angle using only circles (which can be drawn with a compass) and straight lines by the following procedure: - Call the vertex of the angle O. Draw a circle centered at O. - Mark where the circle intersects each ray. Call these points A and B. - Draw circles centered at A and B with equal radii, but make sure that these radii are large enough to make the circles intersect at two points. One sure way to do this is to draw line segment AB and make the radius of the circles equal to the length of that line segment. On the diagram, circles A and B are shown as near-half portions of a circle. - Mark where these circles intersect, and connect these two points with a line. This line bisects the original angle. A proof that the line bisects the angle is found in Proposition 9 of Book 1 of the Elements. Given a right angle, we can use this process to split that right angle indefinitely to form any binary fraction (i.e., , e.g. ) of it. Thus, we can measure any angle in terms of right angles. That is, a measurement system in which the size of the right angle is considered to be one. Introduction to Radian MeasureEdit Trigonometry is simplified if we choose the following strange angle as "one": Understanding the three sides of a RadianEdit To illustrate how the three sides of a radian relate to one another try the below thought experiment: - Assume you have a piece of string that is exactly the length of the radius of a circle. - Assume you have drawn a radian in the same circle. The radian has 3 points. One is the center of your circle and the other two are on the circumference of the circle where the sides of the radian intersect with the circle. - Attach one end of the string at one of the points where the radian intersects with the circumference of the circle. - Take the other end of the string and, starting at the point you chose in Step 2 above, trace the circumference of the circle towards the second point where the radian intersects with the circle until the string is pulled tight. - You will see that the end of the string travels past the second point. - This is because the string is now in a straight line. However, the radian has an arc for its third side, not a straight line. Even though a radian has three equal sides, the arc's curve causes the two points where the radian intersects with the circle to be closer together than they are to the third point of the triangle, which is at the center of our circle in this example. - Now, with the string still pulled tight, find the half way point of the string, then pull it onto the circumference of the circle while allowing the end of the string to move along the circle's circumference. The end of the string is now closer to the second point because the path of the string is closer to the path of the circle's circumference. - We can keep improving the fit of the string's path to the path of the circle's circumference by dividing each new section of the string in half and pulling it towards the circumference of the circle. 
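The string experiment can also be run as a calculation. The sketch below is an added illustration, not part of the Wikibook: it approximates the arc cut off by an angle of one radian with ever-shorter chords, and the total length settles on the radius, which is exactly what "one radian" means.

```python
# Approximate the arc subtended by an angle of one radian with short chords.
# As the chords are repeatedly halved (like pulling the string onto the circle),
# their total length approaches the radius -- the arc length for one radian.
import math

def arc_length_by_chords(angle, radius, pieces):
    """Sum the lengths of `pieces` equal chords spanning the given central angle."""
    step = angle / pieces
    total = 0.0
    for k in range(pieces):
        x0, y0 = radius * math.cos(k * step), radius * math.sin(k * step)
        x1, y1 = radius * math.cos((k + 1) * step), radius * math.sin((k + 1) * step)
        total += math.hypot(x1 - x0, y1 - y0)   # length of one small chord
    return total

radius = 5.0
for pieces in (1, 2, 4, 8, 64):
    approx = arc_length_by_chords(1.0, radius, pieces)
    print(f"{pieces:>2} chord(s): total length = {approx:.5f} (radius = {radius})")
```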
Is a radian affected by the size of its circle?Edit Does it matter what size circle is used to measure in radians? Perhaps using the radius of a large circle will produce a different angle than that produced by the radius of a small circle. The answer is no. Recall our radian and circle from the experiment in the subsection above. Draw another circle inside the first circle, with the same center, but with half the radius. You will see that you have created a new radian inside the smaller circle that shares the same angle as the radian in the larger circle. We know that the two sides of the radians emanating from the center of the circles are equal to the radius of their respective circle. We also know that the third side of the radian in the larger circle (the arc) is also equal to the larger circle's radius. But how do we know that the third side of the radian in the smaller circle (the arc that follows the circumference of the smaller circle) is equal to the radius of the smaller circle? To see why we do know that the third side of the smaller circle's radian is equal to its radius, we first connect the two points of each radian that intersect with the circle with each other. By doing so, you will have created two isosceles triangles (triangles with two equal sides and two equal angles). An isosceles triangle has two equal angles and two equal sides. If you know one angle of any isosceles triangle and the length of two sides that make up that angle, then you can easily deduce the remaining characteristics of the isosceles triangle. For instance, if the two equidistant sides of an isosceles triangle intersect to form an angle that is equal to 40o, then you know the remaining angles must both equal 70º. Since we know that the equidistant sides of our two isosceles triangles make up our known angle, then we can deduce that both of our radians (when converted to isosceles triangles with straight lines) have identical second and third angles. We also know that triangles with identical angles, regardless of their size, will have the length of their sides in a constant ratio to each other. For instance, we can deduce that an isosceles triangle will have sides that measure 2 meters by 4 meters by 4 meters if we know that an isoscleses triangle with identical angles measures 1 meter by 2 meters by 2 meters. Therefore, in our example, our isosceles triangle formed by the second smaller circle will have a third side exaclty equal to half of the third side of the isosceles triangle created by the larger circle. The relationship between the size of the sides of two triangles that share identical angles is also found in the relationship between radius and circumference of two circles that share the same center point - they will share the exact same ratio. In our example then, since the radius of our second smaller circle is exactly half of the larger circle, the circumference between the two points where the radian of the smaller circle intersect (which we have shown is one half of the distance between the two similar points on the larger circle) will share the same exact ratio. And there you have it - the size of the circle does not matter. Using Radians to Measure AnglesEdit Once we have an angle of one radian, we can chop it up into binary fractions as we did with the right angle to get a vast range of known angles with which to measure unknown angles. A protractor is a device which uses this technique to measure angles approximately. 
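The "chop it up into binary fractions" idea can be sketched in code before moving on to the protractor itself. The following is an added illustration, not from the Wikibook: it measures an unknown angle by repeatedly halving a known unit angle (here one radian), recording one binary digit per halving.

```python
# Measuring an unknown angle by repeated halving of a known unit angle.
# Each halving decides one binary digit, so the unknown angle is pinned down
# ever more closely as a binary fraction of the unit.
import math

def measure_in_binary_fractions(angle, unit, bits=12):
    """Approximate `angle` as a sum of binary fractions of `unit`.
    Works for angles smaller than the unit; both are in radians."""
    digits = []
    remainder = angle
    piece = unit
    approx = 0.0
    for _ in range(bits):
        piece /= 2                    # halve the reference angle again
        if remainder >= piece:
            digits.append(1)
            remainder -= piece
            approx += piece
        else:
            digits.append(0)
    return digits, approx

unknown = math.pi / 5                 # an 'unknown' angle of 36 degrees
digits, approx = measure_in_binary_fractions(unknown, unit=1.0)
print("binary digits:", "".join(map(str, digits)))
print(f"approximation: {approx:.6f} rad, exact: {unknown:.6f} rad")
```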
To measure an angle with a protractor: place the marked center of the protractor on the corner of the angle to be measured, align the right hand zero radian line with one line of the angle, and read off where the other line of the angle crosses the edge of the protractor. A protractor is often transparent with angle lines drawn on it to help you measure angles made with short lines: this is allowed because angles do not depend on the length of the lines from which they are made. If we agree to measure angles in radians, it would be useful to know the size of some easily defined angles. We could of course simply draw the angles and then measure them very accurately, though still approximately, with a protractor: however, we would then be doing physics, not mathematics. The ratio of the length of the circumference of a circle to its radius is defined as 2π, where π is an invariant independent of the size of the circle by the argument above. Hence if we were to move 2π radii around the circumference of a circle from a given point on the circumference of that circle, we would arrive back at the starting point. We have to conclude that the size of the angle made by one circuit around the circumference of a circle is 2π radians. Likewise a half circuit around a circle would be π radians. Imagine folding a circle in half along an axis of symmetry: the resulting crease will be a diameter, a straight line through the center of the circle. Hence a straight line has an angle of size π radians. Folding a half circle in half again produces a quarter circle which must therefore have an angle of size π/2 radians. Is a quarter circle a right angle? To see that it is: draw a square whose corner points lie on the circumference of a circle. Draw the diagonal lines that connect opposing corners of the square, by symmetry they will pass through the center of the circle, to produce 4 similar triangles. Each such triangle is isoceles, and has an angle of size 2π/4 = π/2 radians where the two equal length sides meet at the center of the circle. Thus the other two angles of the triangles must be equal and sum to π/2 radians, that is each angle must be of size π/4 radians. We know that such a triangle is right angled, we must conclude that an angle of size π/2 radians is indeed a right angle. Summary and Extra NotesEdit In summary: it is possible to make deductions about the sizes of angles in certain special conditions using geometrical arguments. However, in general, geometry alone is not powerful enough to determine the size of unknown angles for any arbitrary triangle. To solve such problems we will need the help of trigonometric functions. In principle, all angles and trigonometric functions are defined on the unit circle. The term unit in mathematics applies to a single measure of any length. We will later apply the principles gleaned from unit measures to larger (or smaller) scaled problems. All the functions we need can be derived from a triangle inscribed in the unit circle: it happens to be a right-angled triangle. The center point of the unit circle will be set on a Cartesian plane, with the circle's centre at the origin of the plane — the point (0,0). Thus our circle will be divided into four sections, or quadrants. Quadrants are always counted counter-clockwise, as is the default rotation of angular velocity (omega). 
Now we inscribe a triangle in the first quadrant (that is, where the x- and y-axes are assigned positive values) and let one leg of the angle be on the x-axis and the other be parallel to the y-axis. (Just look at the illustration for clarification). Now we let the hypotenuse (which is always 1, the radius of our unit circle) rotate counter-clockwise. You will notice that a new triangle is formed as we move into a new quadrant, not only because the sum of a triangle's angles cannot be beyond 180°, but also because there is no way on a two-dimensional plane to imagine otherwise.
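To tie the unit-circle picture together, here is a small added sketch (not part of the Wikibook): for an angle measured counter-clockwise from the positive x-axis, it computes the point where the hypotenuse of the inscribed triangle meets the unit circle and reports which quadrant that point falls in.

```python
# On the unit circle, the horizontal leg of the inscribed right triangle is
# cos(t), the vertical leg is sin(t), and the hypotenuse is the radius, 1.
import math

def unit_circle_point(t):
    """Return the (x, y) point on the unit circle for angle t in radians."""
    return math.cos(t), math.sin(t)

def quadrant(t):
    """Quadrants are counted counter-clockwise, starting from the first."""
    x, y = unit_circle_point(t)
    if x > 0 and y > 0:
        return 1
    if x < 0 and y > 0:
        return 2
    if x < 0 and y < 0:
        return 3
    if x > 0 and y < 0:
        return 4
    return None   # the angle lies on an axis

for t in (math.pi / 6, 2 * math.pi / 3, 5 * math.pi / 4, 7 * math.pi / 4):
    x, y = unit_circle_point(t)
    print(f"t = {t:.4f} rad -> point ({x:+.3f}, {y:+.3f}), quadrant {quadrant(t)}")
```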
https://en.m.wikibooks.org/wiki/Trigonometry/For_Enthusiasts/Trigonometry_Done_Rigorously
4.03125
Found only in the southern part of Madagascar in the dry forest and bush, the ring-tailed lemur is a large, vocal primate with brownish-gray fur and a distinctive tail with alternating black and white rings. Male and female ring-tailed lemurs are similar physically. They are roughly the same size, measuring about 42.5 cm (1.4 ft.) from head to rump and weighing roughly 2.25 kg (5 lb.). Highly social creatures, ring-tailed lemurs live in groups averaging 17 members. Their society is female-dominant, and a group will often contain multiple breeding females. Females reproduce starting at 3 years of age, generally giving birth to one baby a year. When born, a ring-tailed lemur baby weighs less than 100 g (3 oz.). The newborn is carried on its mother’s chest for 1-2 weeks and then is carried on her back. At 2 weeks, the baby starts eating solid food and begins venturing out on its own. But the juvenile is not fully weaned until 5 months of age. Although they are capable climbers, ring-tailed lemurs spend a third of their time on the ground foraging for food. They range far to find leaves, flowers, bark, sap, and small invertebrates to eat. When the lemurs travel over ground, they keep their tails in the air to ensure everyone in the group is in sight and stays together. Aside from using visual cues, ring-tailed lemurs also communicate via scent and vocalizations. They mark their territory by scent. A male lemur will also engage in stink fights during mating seasons, wiping his tail with the scent glands on his wrists and waving it at another male while staring menacingly. Eventually one male will back down and run away. Vocally, ring-tailed lemurs have several different alarms calls that alert members to danger. They have several predators, including fossas (mammals related to the mongoose), Madagascar Harrier-hawks, Madagascar buzzards, Madagascar ground boas, civets, and domestic cats and dogs. Ring-tailed lemurs are considered endangered by the IUCN Red List. The main threat to their population is habitat destruction. Much of their habitat is being converted to farmland or burned for the production of charcoal. However, the ring-tailed lemur is popular in zoos, and they do comparatively well in captivity and reproduce regularly. In captivity, ring-tailed lemurs can live for nearly 30 years, compared to up to 20 in the wild. What You Can Do to Help You can help ring-tailed lemurs by contributing to the Lemur Conservation Foundation through volunteer work or donations. The WWF also provides the opportunity to adopt a lemur. The money donated goes to help establish and manage parks and protected areas in Madagascar. Ring-tailed Lemur Distribution Ring-tailed Lemur Resources - Duke University Lemur Center – Ring-tailed Lemur - National Geographic – Ring-Tailed Lemur - IUCN Redlist – Lemur catta You Might Also Like Blog Posts about the Ring-tailed Lemur - Ring-tailed Lemur Baby at Taronga Western Plains Zoo - October 20, 2015 - Baby Ring-Tailed Lemurs at Busch Gardens - May 21, 2015 - Nevada Zoo Welcomes Ring-tailed Lemur Baby - April 22, 2010 Last updated on August 24, 2014.
http://www.animalfactguide.com/animal-facts/ring-tailed-lemur/
4.1875
Kindergarten students do not have textbooks. They learn through units that teachers use to deliver instruction based on kindergarten learning objectives focused around a specific theme. These units form the year's curriculum. The time spent on each unit ranges from one week to a month. Some kindergarten themes are teacher-created; others are produced by publishers of kindergarten reading programs. Many teachers use a transportation theme because it allows children to learn about the many ways people travel. Kindergarten students do not learn social studies as a separate subject. Instead, it is embedded in the themes they study all year. In the transportation unit, students learn to classify different modes of transportation according to where they are used -- sky, sea or land. Teachers also emphasize the importance of vehicles in people's daily lives. Teachers focus on reading skills during the majority of the school day. Thematic units give them many opportunities to incorporate reading objectives, such as blending vowel-consonant sounds orally to make words. In a transportation unit, teachers help students sound out words such as "jet", "truck" and "bike." Teachers read stories and poems that use these words so children can hear them being used in the correct context. Kindergarten teachers help students build vocabulary and language skills through a transportation unit. During whole- and small-group discussions, students learn the names of various kinds of transportation and how to use these words in complete sentences. This type of activity improves their ability to communicate, too. Kindergarten students begin writing letters at the start of the year and then progress to words and sentences. In the transportation unit, the teacher may have students draw or color a picture of their favorite kind of transportation and write a sentence about it. This reinforces knowledge that language is written and read from left to right, while giving students the opportunity to practice correct letter formation. Kindergarten students take math as a separate subject, but thematic units incorporate math skills for reinforcement. Students can make trains from colorful, interlocking plastic cubes, and kindergarten teachers can use this lesson to reinforce mathematical concepts. Teachers can have students make their train with different colors to teach patterns, or they can let students have train races. For a race, two students take turns rolling a die. Each student moves his train according to the number he has thrown on the die. The student whose train reaches the finish line first wins.
http://classroom.synonym.com/objectives-transportation-units-kindergarten-3581.html
4.21875
Dylan came storming in the door after a busy day at school. He slammed his books down on the kitchen table. “What is the matter?” his Mom asked, sitting down at the table. “Well, I made this great geodesic dome. It is finished and doing great, but Mrs. Patterson wants me to investigate other shapes that you could use to make a dome. I don’t want to do it. I feel like my project is finished,” Dylan explained. “Maybe Mrs. Patterson just wanted to give you an added challenge.” “Maybe, but what other shapes can be used to form a dome? The triangle makes the most sense,” Dylan said. “Yes, but to figure this out, you need to know what other shapes tessellate,” Mom explained. “What does it mean to tessellate? And how can I figure that out?” Pay attention to this Concept and you will know how to answer these questions by the end of it.
We can use translations and reflections to make patterns with geometric figures called tessellations. A tessellation is a pattern in which geometric figures repeat without any gaps between them. In other words, the repeated figures fit perfectly together. They form a pattern that can stretch in every direction on the coordinate plane. Take a look at the tessellations below. This tessellation could go on and on. We can create tessellations by moving a single geometric figure. We can perform transformations such as translations and rotations to move the figure so that the original and the new figure fit together.
How do we know that a figure will tessellate? If the figure is the same on all sides, it will fit together when it is repeated. Figures that tessellate tend to be regular polygons. Regular polygons have straight sides that are all congruent. When we rotate or slide a regular polygon, the side of the original figure and the side of its translation will match. Not all geometric figures can tessellate, however. When we translate or rotate them, their sides do not fit together. Remember this rule and you will know whether a figure will tessellate or not! Think about whether or not there will be gaps in the pattern as you move a figure.
Sure. To make a tessellation, as we have said, we can translate some figures and rotate others. Take a look at this situation. Create a tessellation by repeating the following figure. First, trace the figure on a piece of stiff paper and then cut it out. This will let you perform translations easily so you can see how best to repeat the figure to make a tessellation. This figure is exactly the same on all sides, so we do not need to rotate it to make the pieces fit together. Instead, let’s try translating it. Trace the figure. Then slide the cutout so that one edge of it lines up perfectly with one edge of the figure you drew. Trace the cutout again. Now line the cutout up with another side of the original figure and trace it. As you add figures to the pattern, the hexagons will start making themselves! Check to make sure that there are no gaps in your pattern. All of the edges should fit perfectly together. You should be able to go on sliding and tracing the hexagon forever in all directions. You have made a tessellation!
Do the following figures tessellate? Why or why not? Solution: Yes, because it is a regular polygon with sides all the same length. Solution: No, because it is a circle and the sides are not line segments. Solution: Yes, because it is made up of two figures that tessellate. Now let's go back to the dilemma at the beginning of the Concept. First, let’s answer the question about tessellations.
What does it mean to tessellate? To tessellate means that congruent figures are put together to create a pattern where there aren’t any gaps or spaces in the pattern. Figures can be put side by side and/or upside down to create the pattern. The pattern is called a tessellation.
How do you figure out which figures will tessellate and which ones won’t? Figures that will tessellate are congruent figures. They have to be exactly the same length on all sides. They also have to be able to fit together. A circle will not tessellate because there aren’t sides to fit together. A hexagon, on the other hand, will tessellate as long as the same hexagon is used to create the pattern.
Tessellation: a pattern made by using different transformations of geometric figures. A figure will tessellate if it is a regular geometric figure and if the sides all fit together perfectly with no gaps. More precisely, a regular polygon tessellates by itself only when its interior angle divides 360° evenly, so that copies of the figure fit around each point with no gaps.
Here is one for you to try on your own. Draw a tessellation of equilateral triangles. In an equilateral triangle each angle is 60°. Therefore, six triangles will perfectly fit around each point.
Directions: Will the following figures tessellate?
- A regular pentagon
- A regular octagon
- A square
- A rectangle
- An equilateral triangle
- A parallelogram
- A circle
- A cylinder
- A cube
- A cone
- A sphere
- A rectangular prism
- A right triangle
- A regular heptagon
- A regular decagon
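The equilateral-triangle example above (six 60° angles around a point) generalizes to any regular polygon: check whether the interior angle divides 360° evenly. The short Python sketch below is our own illustration of that test, not part of the original lesson; it covers only the regular polygons in the Directions list, since the non-regular shapes and the solid figures have to be reasoned about separately.

```python
# Which regular polygons tile the plane by themselves?
# A regular n-gon has interior angle (n - 2) * 180 / n degrees; it tessellates
# on its own only when copies of that angle fit around a point exactly,
# i.e. when the interior angle divides 360 evenly.

def interior_angle(n):
    """Interior angle of a regular n-gon, in degrees."""
    return (n - 2) * 180 / n

def tessellates(n):
    """True if a regular n-gon tiles the plane by itself."""
    return 360 % interior_angle(n) == 0

for name, n in [("equilateral triangle", 3), ("square", 4),
                ("regular pentagon", 5), ("regular hexagon", 6),
                ("regular octagon", 8)]:
    angle = interior_angle(n)
    print(f"{name}: interior angle {angle:.1f} degrees, tessellates: {tessellates(n)}")
```

Only the triangle, the square, and the hexagon pass: their interior angles of 60°, 90°, and 120° divide 360° evenly, while the pentagon's 108° and the octagon's 135° do not.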
http://www.ck12.org/book/CK-12-Middle-School-Math-Concepts-Grade-8/r9/section/6.16/
4.1875
Japanese American Internment was the relocation and internment by the United States government in 1942 of about 110,000 Japanese Americans and Japanese who lived along the Pacific coast of the United States to camps called “War Relocation Camps,” in the wake of Imperial Japan‘s attack on Pearl Harbor. The internment of Japanese Americans was applied unequally throughout the United States. All who lived on the West Coast of the United States were interned, while in Hawaii, where the 150,000-plus Japanese Americans composed over one-third of the population, an estimated 1,200 to 1,800 were interned. Of those interned, 62% were American citizens.
Executive Order 9066, issued February 19, 1942, allowed local military commanders to designate “military areas” as “exclusion zones,” from which “any or all persons may be excluded.” This power was used to declare that all people of Japanese ancestry were excluded from the entire Pacific coast, including all of California and much of Oregon, Washington and Arizona, except for those in internment camps. Many internees lost irreplaceable personal property due to the restrictions on what could be taken into the camps. Some Japanese-American farmers were able to find families willing to tend their farms for the duration of their internment. In other cases Japanese-American farmers had to sell their property in a matter of days, usually at great financial loss. These losses were compounded by theft and destruction of items placed in governmental storage. A number of persons died or suffered for lack of medical care, and several were killed by sentries.
Loyalty questions and segregation
Some Japanese Americans did question the American government, after finding themselves in internment camps. Several pro-Japan groups formed inside the camps, particularly at the Tule Lake location. When the government passed a law that made it possible for an internee to renounce American citizenship, 5,589 internees opted to do so; 5,461 of these were at Tule Lake. Of those who renounced their citizenship, 1,327 were repatriated to Japan. Many of these individuals would later face stigmatization in the Japanese-American community, after the war, for having made that choice, although even at the time they were not certain what their futures held were they to remain American, and remain interned. These renunciations of American citizenship have been highly controversial, for a number of reasons. Some apologists for internment have cited the renunciations as evidence that “disloyalty” or anti-Americanism was well represented among the interned peoples, thereby justifying the internment. Many historians have dismissed the latter argument, for its failure to consider that the small number of individuals in question were in the midst of persecution by their own government at the time of the “renunciation”: [T]he renunciations had little to do with “loyalty” or “disloyalty” to the United States, but were instead the result of a series of complex conditions and factors that were beyond the control of those involved.
Prior to discarding citizenship, most or all of the renunciants had experienced the following misfortunes: forced removal from homes; loss of jobs; government and public assumption of disloyalty to the land of their birth based on race alone; and incarceration in a “segregation center” for “disloyal” ISSEI or NISEI… Minoru Kiyota, who was among those who renounced his citizenship and swiftly came to regret the decision, has stated that he wanted only “to express my fury toward the government of the United States,” for his internment and for the mental and physical duress, as well as the intimidation, he was made to face. [M]y renunciation had been an expression of momentary emotional defiance in reaction to years of persecution suffered by myself and other Japanese Americans and, in particular, to the degrading interrogation by the FBI agent at Topaz and being terrorized by the guards and gangs at Tule Lake. Civil rights attorney Wayne M. Collins successfully challenged most of these renunciations as invalid, owing to the conditions of duress and intimidation under which the government obtained them. Many of the deportees were Issei (first generation Japanese immigrants) who often had difficulty with English and often did not understand the questions they were asked. Even among those Issei who had a clear understanding, Question 28 posed an awkward dilemma: Japanese immigrants were denied US citizenship at the time, so when asked to renounce their Japanese citizenship, answering “Yes” would have made them stateless persons.
When the government circulated a questionnaire seeking army volunteers from among the internees, 6% of military-aged male respondents volunteered to serve in the U.S. Armed Forces. Most of those who refused tempered that refusal with statements of willingness to fight if they were restored their rights as American citizens. 20,000 Japanese American men and many Japanese American women served in the U.S. Army during World War II. The famed 442nd Regimental Combat Team, which fought in Europe, was formed from those Japanese Americans who did agree to serve. This unit was the most highly decorated US military unit of its size and duration. Most notably, the 442nd was known for saving the 141st (or the “lost battalion”) from the Germans. The 1951 film Go For Broke! was a fairly accurate portrayal of the 442nd, and starred several of the RCT’s veterans.
In 1980, President Jimmy Carter opened an investigation to determine whether the government had been justified in putting Japanese Americans into internment camps. He appointed the Commission on Wartime Relocation and Internment of Civilians to investigate the camps. The commission’s report, named “Personal Justice Denied,” found little evidence of Japanese disloyalty at the time and recommended the government pay reparations to the survivors. It recommended a payment of $20,000 to each surviving internee; these reparations were enacted when President Ronald Reagan signed the Civil Liberties Act of 1988.
Manzanar – Historical Resource Study/Special History Study (Epilogue)
On the 34th anniversary of the issuance of Executive Order 9066, President Gerald R. Ford formally rescinded the presidential proclamation, stating “We know now what we should have known then: not only was evacuation wrong, but Japanese-Americans were and are loyal Americans.” On November 25, 1978, the first “Day of Remembrance” program was conducted at Camp Harmony, Washington, site of the former Puyallup Assembly Center.
In late January 1979, the JACL National Redress Committee met with Hawaii Senators Daniel Inouye and Spark Matsunaga and California Congressmen Norman Mineta and Robert Matsui to discuss strategies for obtaining redress. A study commission was proposed. Finally, on July 31, 1980, President Jimmy Carter signed into law the Commission on Wartime Relocation and Internment of Civilians (CWRIC) Act. Between July 14 and December 9, 1981, the CWRIC held twenty days of hearings in nine cities during which more than 750 witnesses testified. In December 1982, the CWRIC released its report, Personal Justice Denied, concluding that Executive Order 9066 was “not justified by military necessity” and was the result of “race prejudice, war hysteria, and a failure of political leadership.” In June 1983, the CWRIC issued five recommendations for redress to Congress. First, it called for a joint congressional resolution acknowledging and apologizing for the wrongs initiated in 1942. Second, it recommended a presidential pardon for persons who had been convicted of violating the several statutes establishing and enforcing the evacuation and relocation program. Third, it urged Congress to direct various parts of the government to deal liberally with applicants for restitution of status and entitlements lost because of wartime prejudice and discrimination, such as the less than honorable discharges that were given to many Japanese American soldiers in the weeks after Pearl Harbor. Fourth, it recommended that Congress appropriate money to establish a special foundation to sponsor research and public educational activities “so that the causes and circumstances of this and similar events may be illuminated”. For the entire article: http://www.cr.nps.gov/history/online_books/manz/hrse.htm - Gila River Relocation Camp - Granada Relocation Center NHL Nomination – nps.gov - Haiku Internment Camp - Heart Mountain Relocation Center NHL Nomination – nps.gov - Honouliuli National Monument (U.S. National Park Service) - Jerome Relocation Camp - Kalaheo Internment Camp - Kilauea Detention Center - Manzanar National Historic Site (U.S. National Park Service) - Minidoka National Historic Site (U.S. National Park Service) - Poston War Relocation Center - Rohwer Relocation Camp - Sand Island Internment Camp - Topaz Central Utah Relocation Center – Site NHL Nomination - Tule Lake Unit (U.S. National Park Service) – nps.gov Day of Remembrance: Japanese-Americans show support for Muslims, Sikhs SAN JOSE — For older Japanese-Americans, the discrimination and attacks on Muslims and Sikhs are opening afresh an old wound that never healed. To show support for Arab-Americans, the South Bay’s Japanese-American community held a somber candlelight ceremony and procession at San Jose Buddhist Church on Sunday evening, linking diverse faiths through similar fears. The “Day of Remembrance” is an annual commemoration of Feb. 19, 1942, a day that changed the lives of Japanese-Americans forever. Citing concerns about wartime sabotage and espionage, President Franklin D. Roosevelt signed an order that led to the internment of more than 110,000 people of Japanese ancestry at 10 camps scattered across seven states. But the gathering evoked memories of more recent horrors, such as the murder of the six worshipers at a Sikh temple in Oak Creek, Wisc., and the burning of a mosque in Joplin, Mo. 
“We have common issues in terms of justice, equity and fair treatment under the Constitution,” said Congressman Mike Honda, D-San Jose, who was interned in Colorado as a child. There is no justification for racism or denial of civil liberties, not in 1942 and not in 2013, said Honda. He also urged the acceptance of Latinos, gays and lesbians and others suffering from discrimination.
http://propresobama.org/2013/02/18/executive-order-9066-japanese-americans-incarceration/
4.03125
The tricolored blackbird forms the largest colonies of any North American land bird, often with breeding groups of tens of thousands of individuals. In the 19th century, some colonies contained more than a million birds — enough to make one observer exclaim over flocks darkening the sky “for some distance by their masses,” not unlike passenger pigeons. But because a small number of colonies may contain most of the population, human impacts can have devastating results. Over the past 70 years, destruction of the tricolor's marsh and grassland homes has reduced its populations to a small fraction of their former enormity.
While its big breeding colonies make the species seem abundant to casual observers, the blackbird's gregarious nesting behavior renders these colonies vulnerable to large-scale failures. In agricultural habitat the birds experience huge losses of reproductive effort to crop-harvesting; every year, thousands of nests in dairy silage fields — where grass is being fermented and preserved for fodder — are lost to mowing. In what little remains of California's native emergent-marsh habitat, tricolors are vulnerable to high levels of predation.
The species has been in decline ever since widespread land conversion took hold in California. The Center submitted state and federal listing petitions for the species in 2004, but continuing threats to tricolors were ignored for many years. California announced its refusal to protect the species, as did the U.S. Fish and Wildlife Service in 2006. But in 2014, when the bird's population reached the smallest number ever recorded, only 145,000 — and when comprehensive statewide surveys showed that an additional two-thirds of remaining tricolored blackbirds had been lost since 2008 — the Center again petitioned for an endangered listing under the California Endangered Species Act, on an emergency basis. And finally, in December 2015, the California Fish and Game Commission announced it was making the species a "candidate" for state protection — a definitive victory, since candidates for state protection enjoy actual safeguards until they receive a place on the state's endangered species list (unlike federal "candidate" species).
http://www.biologicaldiversity.org/species/birds/tricolored_blackbird/index.html
4
Cardiorespiratory fitness refers to the ability of the circulatory and respiratory systems to supply oxygen to skeletal muscles during sustained physical activity. Regular exercise makes these systems more efficient by enlarging the heart muscle, enabling more blood to be pumped with each stroke, and increasing the number of small arteries in trained skeletal muscles, which supply more blood to working muscles. Exercise improves the respiratory system by increasing the amount of oxygen that is inhaled and distributed to body tissue. There are many benefits of cardiorespiratory fitness. It can reduce the risk of heart disease, lung cancer, type 2 diabetes, stroke, and other diseases. Cardiorespiratory fitness helps improve lung and heart condition, and increases feelings of wellbeing. The American College of Sports Medicine recommends aerobic exercise 3–5 times per week for 30–60 minutes per session, at a moderate intensity, that maintains the heart rate between 65–85% of the maximum heart rate.
The cardiovascular system is responsible for a vast set of adaptations in the body throughout exercise. It must immediately respond to changes in cardiac output, blood flow, and blood pressure. Cardiac output is defined as the product of heart rate and stroke volume, which represents the volume of blood being pumped by the heart each minute. Cardiac output increases during physical activity due to an increase in both the heart rate and stroke volume. At the beginning of exercise, the cardiovascular adaptations are very rapid: “Within a second after muscular contraction, there is a withdrawal of vagal outflow to the heart, which is followed by an increase in sympathetic stimulation of the heart. This results in an increase in cardiac output to ensure that blood flow to the muscle is matched to the metabolic needs”. Both heart rate and stroke volume vary directly with the intensity of the exercise performed and many improvements can be made through continuous training.
Another important issue is the regulation of blood flow during exercise. Blood flow must increase in order to provide the working muscle with more oxygenated blood, which can be accomplished through neural and chemical regulation. Blood vessels are under sympathetic tone; therefore, the release of noradrenaline and adrenaline will cause vasoconstriction of non-essential tissues such as the liver, intestines, and kidneys, and decrease neurotransmitter release to the active muscles promoting vasodilatation. Also, chemical factors such as a decrease in oxygen concentration and an increase in carbon dioxide or lactic acid concentration in the blood promote vasodilatation to increase blood flow. As a result of increased vascular resistance, blood pressure rises throughout exercise and stimulates baroreceptors in the carotid arteries and aortic arch. “These pressure receptors are important since they regulate arterial blood pressure around an elevated systemic pressure during exercise”.
Respiratory system adaptations
Although all of the described adaptations in the body to maintain homeostatic balance during exercise are very important, the most essential factor is the involvement of the respiratory system. The respiratory system allows for the proper exchange and transport of gases to and from the lungs while being able to control the ventilation rate through neural and chemical impulses. In addition, the body is able to efficiently use the three energy systems which include the phosphagen system, the glycolytic system, and the oxidative system.
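The two quantitative statements in this passage are easy to turn into arithmetic: cardiac output is heart rate times stroke volume, and the ACSM guideline is a band of 65-85% of maximum heart rate. The Python sketch below is a minimal illustration, not part of the article; the "220 minus age" estimate of maximum heart rate and the resting and exercise values are common textbook assumptions supplied here.

```python
# Cardiac output and a training heart-rate band.
# Assumptions: max heart rate is estimated as 220 - age (a common rule of
# thumb, not stated in the article); units are beats/min and mL/beat.

def cardiac_output_l_per_min(heart_rate_bpm, stroke_volume_ml):
    """Cardiac output in litres per minute = heart rate x stroke volume."""
    return heart_rate_bpm * stroke_volume_ml / 1000.0

def training_zone(age_years, low=0.65, high=0.85):
    """Heart-rate band at 65-85% of the estimated maximum (220 - age)."""
    max_hr = 220 - age_years
    return low * max_hr, high * max_hr

# Illustrative values: rest vs. moderate exercise for a 30-year-old.
print(cardiac_output_l_per_min(70, 70))    # ~4.9 L/min at rest
print(cardiac_output_l_per_min(150, 100))  # ~15 L/min during exercise
print(training_zone(30))                   # (123.5, 161.5) beats/min
```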
In most cases, as the body is exposed to physical activity, the core temperature of the body tends to rise as heat gain becomes larger than the amount of heat lost. “The factors that contribute to heat gain during exercise include anything that stimulate metabolic rate, anything from the external environment that causes heat gain, and the ability of the body to dissipate heat under any given set of circumstances”. In response to an increase in core temperature, there are a variety of factors which adapt in order to help restore heat balance. The main physiological response to an increase in body temperature is mediated by the thermal regulatory center located in the hypothalamus of the brain which connects to thermal receptors and effectors. There are numerous thermal effectors including sweat glands, smooth muscles of blood vessels, some endocrine glands, and skeletal muscle. With an increase in the core temperature, the thermal regulatory center will stimulate the arterioles supplying blood to the skin to dilate along with the release of sweat on the skin surface to reduce temperature through evaporation. In addition to the involuntary regulation of temperature, the hypothalamus is able to communicate with the cerebral cortex to initiate voluntary control such as removing clothing or drinking cold water. With all regulations taken into account, the body is able to maintain core temperature within about two or three degrees Celsius during exercise. - Donatello, Rebeka J. (2005). Health, The Basics. San Francisco: Pearson Education, Inc. - Pollock, M.L.; Gaesser, G.A. (1998). "Acsm position stand: the recommended quantity and quality of exercise for developing and maintaining cardiorespiratory and muscular fitness, and flexibility in healthy adults". Medicine & Science in Sports & Exercise 30 (6): 975–991. doi:10.1097/00005768-199806000-00032. PMID 9624661. Retrieved 22 March 2012. - Brown, S.P.; Eason, J.M.; Miller, W.C. (2006). "Exercise Physiology: Basis of Human". Movement in Health and Disease: 75–247. - Howley, E.T., and Powers, S.K. (1990). Exercise Physiology: Theory and Application to Fitness and Performance. Dubuque, IA: Wm. C. Brown Publishers. pp. 131–267. - Shaver, L.G. (1981). Essentials of Exercise Physiology. minneapolis, MN: Burgess Publishing Company. pp. 1–132.
https://en.wikipedia.org/wiki/Cardiorespiratory_fitness
4.1875
Government of the Roman Republic - The Senate The Roman senate had much of the real power during the time of the republic. The senate was made up of 300 powerful Roman men (although it was increased to as many as 900 in the later years of the republic). Many Roman senators had held another high office before being appointed to the senate. In fact, once a Roman had served in a high office in the republic (consul, praetor, etc…), he was made a senator for life. Most members of the Roman Senate were patricians or members of wealthy landowning families. New senators were selected by other high ranking officials in the republic like consuls or tribunes. Senators were not elected by the people. They were more like what we would call the “good ole boys” club today – basically a group of noble or very wealthy men with lots of connections. The House of Lords in Britain would be similar also, although the House of Lords no longer has much real political power. The Roman Senate had plenty of power, especially during the republic. Other senators obtained office by being elected or appointed to another office. So, for example, the high priest of Rome, the pontifex maximus, automatically had a seat in the senate. These senators did not have an official vote in the senate, but they could participate in the debate and support one side or the other on a particular issue. Members of the Roman senate were not supposed to own or run businesses while they were in office. This regulation was often ignored and rarely enforced. Senators were also sometimes allowed special treatment (seating preference, etc.) at feasts, the circus, plays, or other important events or performances. Being a senator in ancient Rome was an honor, and with that honor came a lot of influence. High ranking officials (magistrates) in the government – consuls, dictators, praetors – could call a meeting of the senate for just about any purpose. During a senate meeting, magistrates could propose legislation. The senators would then debate the law and send it to the populous – the people in the assemblies – for a vote. Duties of the republican Roman senate included: • Public welfare (taking care of the people) • Overseeing Roman religious law • Debating and preparing legislation (laws) to be reviewed by the assembly – but – the senate, by itself, could not make law • Managing Rome’s affairs with other nations • Regulating the taxing and spending of money
http://project-history.blogspot.com/2008/02/government-of-roman-republic-senate.html
4.03125
Obsessive-compulsive disorder (OCD) is a psychiatric anxiety disorder most commonly characterized by a subject's obsessive, distressing, intrusive thoughts and related compulsions (tasks or "rituals") which attempt to neutralize the obsessions. To be diagnosed with obsessive-compulsive disorder, one must have the presence of obsessions, compulsions, or both, according to the Diagnostic and Statistical Manual of Mental Disorders (DSM)-V diagnostic criteria. The manual to the diagnostic criteria from DSM-V (2013) describes these obsessions and compulsions: Obsessions are defined by: - Recurrent and persistent thoughts, impulses, or images that are experienced at some time during the disturbance, as intrusive and undesirable, and that cause marked anxiety or distress. - The thoughts, impulses, or images are not simply excessive worries about real-life problems. - The person attempts to ignore or suppress such thoughts, impulses, or images, or to neutralize them with some other thought or action (for instance, by performing a compulsion). - The person recognizes that the obsessional thoughts, impulses, or images are a product of his or her own mind, and are not based in reality. Compulsions are defined by: - Repetitive behaviors or mental acts that the person feels driven to perform in response to an obsession, or according to rules that must be applied rigidly. - The behaviors or mental acts are aimed at preventing or reducing distress or preventing some dreaded event or situation; however, these behaviors or mental acts either are not connected in a realistic way with what they are designed to neutralize or prevent or are clearly excessive. In addition to these criteria, at some point during the course of the disorder, the obsessions or compulsions must be time-consuming (taking up more than one hour per day), cause distress, or cause impairment in social, occupational, or school functioning. The symptoms are not attributable to any physiological effects of a substance or other medical condition. The disturbance is also not better explained by symptoms of another mental disorder. Community studies have estimated 1-year prevalence of OCD to be 1.2% in the US and 1.1%-1.8% internationally. Research indicates that females are affected at a slightly higher rate than males in adulthood, and males are more commonly affected than females in childhood. OCD usually begins in adolescence or early adulthood, but it may also manifest in childhood. Typically, the onset of symptoms is gradual, although acute onset has also been reported. The majority of untreated individuals experience a chronic waxing and waning course, while others can experience episodic or deteriorating courses. The phrase "obsessive-compulsive" has worked its way into the wider English lexicon, and is often used in an offhand manner to describe someone who is meticulous or absorbed in a cause (see "anal retentive"). It is also important to distinguish OCD from other types of anxiety, including the routine tension and stress that appear throughout life. Although these signs are often present in OCD, a person who shows signs of infatuation or fixation with a subject/object, or displays traits such as perfectionism, does not necessarily have OCD, a specific and well-defined condition.
http://www.med.upenn.edu/ctsa/ocd_symptoms.html?6
4.09375
One of the realities of life is how so much of the world runs by mathematical rules. As one of the tools of mathematics, linear systems have multiple uses in the real world. Life is full of situations when the output of a system doubles if the input doubles, and the output cuts in half if the input does the same. That's what a linear system is, and any linear system can be described with a linear equation.
In the Kitchen
If you've ever doubled a favorite recipe, you've applied a linear equation. If one cake equals 1/2 cup of butter, 2 cups of flour, 3/4 tsp. of baking powder, three eggs and 1 cup of sugar and milk, then two cakes equal 1 cup of butter, 4 cups of flour, 1 1/2 tsp. of baking powder, six eggs and 2 cups of sugar and milk. To get twice the output, you put in twice the input. You might not have known you were using a linear equation, but that's exactly what you did.
Suppose a water district wants to know how much snowmelt runoff it can expect this year. The melt comes from a big valley, and every year the district measures the snowpack and the water supply. It gets 60 acre-feet from every 6 inches of snowpack. This year surveyors measure 6 feet and 4 inches of snow. The district put that in the linear expression (60 acre-feet/6 inches) * 76 inches. Water officials can expect 760 acre-feet of snowmelt from the water.
Just for Fun
It's springtime and Irene wants to fill her swimming pool. She doesn't want to stand there all day, but she doesn't want to waste water over the edge of the pool, either. She sees that it takes 25 minutes to raise the pool level by 4 inches. She needs to fill the pool to a depth of 4 feet; she has 44 more inches to go. She figures out her linear equation: 44 inches * (25 minutes/4 inches) is 275 minutes, so she knows she has four hours and 35 minutes more to wait.
Ralph has also noticed that it's springtime. The grass has been growing. It grew 2 inches in two weeks. He doesn't like the grass to be taller than 2 1/2 inches, but he doesn't like to cut it shorter than 1 3/4 inches. How often does he need to cut the lawn? He just puts that calculation in his linear expression, where (14 days/2 inches) * 3/4 inch tells him he needs to cut his lawn every 5 1/4 days. He just ignores the 1/4 and figures he'll cut the lawn every five days.
It's not hard to see other similar situations. If you want to buy beer for the big party and you've got $60 in your pocket, a linear equation tells you how much you can afford. Whether you need to bring in enough wood for the fire to burn overnight, calculate your paycheck, figure out how much paint you need to redo the upstairs bedrooms or buy enough gas to make it to and from your Aunt Sylvia's, linear equations provide the answers. Linear systems are, literally, everywhere.
Where They Aren't
One of the paradoxes is that just about every linear system is also a nonlinear system. Thinking you can make one giant cake by quadrupling a recipe will probably not work. If there's a really heavy snowfall year and snow gets pushed up against the walls of the valley, the water company's estimate of available water will be off. After the pool is full and starts washing over the edge, the water won't get any deeper. So most linear systems have a "linear regime" --- a region over which the linear rules apply --- and a "nonlinear regime" --- where they don't. As long as you're in the linear regime, the linear equations hold true.
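Every example in this article is the same pattern, output equals rate times input, with the rate read off from a single measurement. The Python sketch below simply restates the three worked scenarios in code; the function name is ours and the numbers are the ones given above.

```python
# output = rate * input: the one-line pattern behind all the examples above.

def linear_output(rate, x):
    """Output of a linear system with a known rate (slope) and zero intercept."""
    return rate * x

# Snowmelt: 60 acre-feet per 6 inches of snowpack, 76 inches measured.
print(linear_output(60 / 6, 76))        # 760.0 acre-feet

# Pool: 25 minutes per 4 inches, 44 inches still to fill.
print(linear_output(25 / 4, 44))        # 275.0 minutes, i.e. 4 h 35 min

# Lawn: 14 days per 2 inches of growth, 3/4 inch of allowed growth.
print(linear_output(14 / 2, 3 / 4))     # 5.25 days between mowings
```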
http://classroom.synonym.com/real-life-functions-linear-equations-2608.html
4.09375
A carry-save adder is a type of digital adder, used in computer microarchitecture to compute the sum of three or more n-bit numbers in binary. It differs from other digital adders in that it outputs two numbers of the same dimensions as the inputs, one which is a sequence of partial sum bits and another which is a sequence of carry bits. Consider the sum: 12345678 + 87654322 = 100000000 Using basic arithmetic, we calculate right to left, "8+2=0, carry 1", "7+2+1=0, carry 1", "6+3+1=0, carry 1", and so on to the end of the sum. Although we know the last digit of the result at once, we cannot know the first digit until we have gone through every digit in the calculation, passing the carry from each digit to the one on its left. Thus adding two n-digit numbers has to take a time proportional to n, even if the machinery we are using would otherwise be capable of performing many calculations simultaneously. In electronic terms, using bits (binary digits), this means that even if we have n one-bit adders at our disposal, we still have to allow a time proportional to n to allow a possible carry to propagate from one end of the number to the other. Until we have done this, - We do not know the result of the addition. - We do not know whether the result of the addition is larger or smaller than a given number (for instance, we do not know whether it is positive or negative). A carry look-ahead adder can reduce the delay. In principle the delay can be reduced so that it is proportional to logn, but for large numbers this is no longer the case, because even when carry look-ahead is implemented, the distances that signals have to travel on the chip increase in proportion to n, and propagation delays increase at the same rate. Once we get to the 512-bit to 2048-bit number sizes that are required in public-key cryptography, carry look-ahead is not of much help. The basic concept Here is an example of a binary sum: 10111010101011011111000000001101 + 11011110101011011011111011101111 Carry-save arithmetic works by abandoning the binary notation while still working to base 2. It computes the sum digit by digit, as 10111010101011011111000000001101 + 11011110101011011011111011101111 = 21122120202022022122111011102212 The notation is unconventional but the result is still unambiguous. Moreover, given n adders (here, n=32 full adders), the result can be calculated after propagating the inputs through a single adder, since each digit result does not depend on any of the others. If the adder is required to add two numbers and produce a result, carry-save addition is useless, since the result still has to be converted back into binary and this still means that carries have to propagate from right to left. But in large-integer arithmetic, addition is a very rare operation, and adders are mostly used to accumulate partial sums in a multiplication. Supposing that we have two bits of storage per digit, we can use a redundant binary representation, storing the values 0, 1, 2, or 3 in each digit position. It is therefore obvious that one more binary number can be added to our carry-save result without overflowing our storage capacity: but then what? The key to success is that at the moment of each partial addition we add three bits: - 0 or 1, from the number we are adding. - 0 if the digit in our store is 0 or 2, or 1 if it is 1 or 3. - 0 if the digit to its right is 0 or 1, or 1 if it is 2 or 3. 
To put it another way, we are taking a carry digit from the position on our right, and passing a carry digit to the left, just as in conventional addition; but the carry digit we pass to the left is the result of the previous calculation and not the current one. In each clock cycle, carries only have to move one step along, and not n steps as in conventional addition. Because signals don't have to move as far, the clock can tick much faster.
There is still a need to convert the result to binary at the end of a calculation, which effectively just means letting the carries travel all the way through the number just as in a conventional adder. But if we have done 512 additions in the process of performing a 512-bit multiplication, the cost of that final conversion is effectively split across those 512 additions, so each addition bears 1/512 of the cost of that final "conventional" addition.
At each stage of a carry-save addition,
- We know the result of the addition at once.
- We still do not know whether the result of the addition is larger or smaller than a given number (for instance, we do not know whether it is positive or negative).
This latter point is a drawback when using carry-save adders to implement modular multiplication (multiplication followed by division, keeping the remainder only). If we cannot know whether the intermediate result is greater or less than the modulus, how can we know whether to subtract the modulus? Montgomery multiplication, which depends on the rightmost digit of the result, is one solution; though rather like carry-save addition itself, it carries a fixed overhead, so that a sequence of Montgomery multiplications saves time but a single one does not. Fortunately exponentiation, which is effectively a sequence of multiplications, is the most common operation in public-key cryptography.
The carry-save unit consists of n full adders, each of which computes a single sum and carry bit based solely on the corresponding bits of the three input numbers. Given the three n-bit numbers a, b, and c, it produces a partial sum ps and a shift-carry sc:
ps_i = a_i XOR b_i XOR c_i
sc_i = (a_i AND b_i) OR (a_i AND c_i) OR (b_i AND c_i)
The entire sum can then be computed by:
- Shifting the carry sequence sc left by one place.
- Appending a 0 to the front (most significant bit) of the partial sum sequence ps.
- Using a ripple carry adder to add these two together and produce the resulting (n + 1)-bit value.
- Earle, J. G. et al., U.S. Patent 3,340,388, "Latched Carry Save Adder Circuit for Multipliers", filed July 12, 1965
- Earle, J. (March 1965), "Latched Carry-Save Adder", IBM Technical Disclosure Bulletin 7 (10): 909–910
- John von Neumann, Collected Works.
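The per-digit rules described in this article are straightforward to check in software. The Python sketch below is an illustration written for this summary rather than taken from the article: it builds the partial-sum and carry words with bitwise operations and then performs the single conventional addition at the end, with plain integers standing in for the bit vectors.

```python
# Carry-save addition of three integers a, b, c.
# Each bit of ps is the XOR of the three input bits (sum without carry);
# each bit of sc is the majority of the three input bits (the carry).

def carry_save(a, b, c):
    ps = a ^ b ^ c                      # partial sum bits
    sc = (a & b) | (a & c) | (b & c)    # carry bits, not yet shifted
    return ps, sc

def add3(a, b, c):
    ps, sc = carry_save(a, b, c)
    # Shift the carries left one place, then do one ordinary
    # (carry-propagating) addition, as in the three steps above.
    return ps + (sc << 1)

# Example with three 8-digit numbers.
a, b, c = 12345678, 87654321, 11111111
print(add3(a, b, c) == a + b + c)   # True
```

In hardware the final ps + (sc << 1) is the ripple-carry or look-ahead addition the article describes; everything before it is carry-free, which is why many such additions can be chained cheaply when accumulating partial products.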
https://en.wikipedia.org/wiki/Carry-save_adder
4.09375
Parents' Guides to Student Success
The Parents’ Guides to Student Success were developed by teachers, parents and education experts in response to the Common Core State Standards that more than 45 states have adopted. Created for grades K-8 and high school English, language arts/literacy and mathematics, the guides provide clear, consistent expectations for what students should be learning at each grade in order to be prepared for college and career. The guides include:
- Key items children should be learning in English language arts and mathematics in each grade, once Common Core Standards are fully implemented.
- Activities that parents can do at home to support their child's learning.
- Methods for helping parents build stronger relationships with their child's teacher.
- Tips for planning for college and career (high school only).
What PTAs Can Do
PTAs can play a pivotal role in how the standards are put in place at the state and district levels. PTA leaders are encouraged to meet with their school, district and/or state administrators to discuss their plans to implement the standards and how their PTA can support that work. The goal is that PTAs and education administrators will collaborate on how to share the guides with all of the parents and caregivers in their states or communities, once the Common Core Standards are fully implemented.
http://www.pta.org/parents/content.cfm?ItemNumber=2583&navItemNumber=3363
4.15625
Once a new national government had been established under a new Constitution, attention naturally
Grade Range: 4-12
Resource Type(s): Interactives & Media, Lessons & Activities
Duration: 10 minutes
Date Posted: 3/1/2012
Use short videos, mini-activities, and practice questions to explore the basic elements of the United States government in this segment of Preparing for the Oath: U.S. History and Civics for Citizenship. The ten questions included in this segment cover topics such as federalism, the Constitution, and checks and balances. This site was designed with the needs of recent immigrants in mind. It is written at a “low-intermediate” ESL level.
United States History Standards (Grades 5-12)
3: The institutions and practices of government created during the Revolution and how they were revised between 1787 and 1815 to create the foundation of the American political system based on the U.S. Constitution and the Bill of Rights
https://historyexplorer.si.edu/resource/preparing-oath-government-basics
4
Classroom Demonstrations and Lessons
Find information about formulations, what common household products can be used to represent pesticide formulations, and exercise sheets. Find some incompatibility activities as well. What you need and how to do the LD50 demonstration. This demonstration illustrates how exposure to pesticides can be significantly reduced by wearing Personal Protective Equipment (PPE). This demonstration introduces biomagnification – the accumulation of chemical residues in organisms as they eat other organisms lower in the food chain. Learn how to read a label by answering questions to compare two Hot Shot Fogger labels to see how these products are similar and different. Learn how lures and traps work to monitor and/or control pests in this lure demonstration. Learn how the EPA used a risk cup to limit the amount of aggregate exposure from all the pesticides with a common mode of action. Check out these posters to use as handouts or to post in your classroom. We have a form online to request a high-quality PDF of these posters, which will be sent to your email.
http://extension.psu.edu/pests/pesticide-education/educators/ag-and-science-teachers/classroom-demonstrations-and-lessons
4.0625
The key concept here is density. Less dense objects will rise to the top of a more dense medium while more dense objects will sink to the bottom of a less dense medium. In the case of a hot air balloon floating in the sky, we are talking about hot air versus cold air here. The hot air in the balloon is less dense than the cold air medium that surrounds it, so the hot air in the balloon will lift it higher in the air. In the case of a boat on water, we are talking about air versus water. Since air is much less dense than water, the air trapped within the hull of the boat will keep the entire boat floating on top of the water. In terms of forces, the more dense medium (cold air or water) exerts an upward force against the less dense object (hot air). If you drop a penny in water, however, it will instantly sink since copper (or metal in general) is much more dense than water.
A boat floating in water, or a hot air balloon suspended in the air, are examples of buoyancy. It was Archimedes who discovered the law of buoyancy, namely that a body immersed or suspended in a liquid or gas is buoyed up by a force equal to the weight of the liquid or gas displaced by the object. In the case of a boat, the force of buoyancy is equal to the weight of water displaced by the boat. As the boat enters the water it will sink into the water until enough water is displaced to equal the weight of the boat and its contents. Thus, as weight is added to a given boat, the water line will rise. A hot air balloon rises because hot air is less dense than cold air. This is because the molecules of air are driven farther apart from one another as the temperature increases. The air becomes less dense. The balloon displaces its entire volume. So the volume of the balloon is equal to the volume of the air it displaces. By Archimedes' Law, the balloon is buoyed up by a force equal to the weight of the displaced (cold) air, which is greater than the weight of an equal volume of the lighter hot air contained in the balloon. Hence, the balloon rises.
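Archimedes' principle as stated in the second answer converts directly into arithmetic: the upward force equals the weight of displaced fluid, and an object floats when its average density is below the fluid's. The Python sketch below is an illustration of that rule; the densities and the balloon volume are rough reference values chosen for the example, not figures from the answers.

```python
# Archimedes' principle: buoyant force = fluid density * displaced volume * g.
# Densities below are approximate illustrative values (kg per cubic metre).

G = 9.81                  # gravitational acceleration, m/s^2
WATER = 1000.0            # fresh water
COLD_AIR = 1.29           # outside air near 0 C
HOT_AIR = 0.95            # air heated to roughly 100 C inside the envelope

def buoyant_force(fluid_density, displaced_volume):
    """Upward force (newtons) on a body displacing the given volume (m^3)."""
    return fluid_density * displaced_volume * G

def floats(object_density, fluid_density):
    """An object floats when its average density is below the fluid's."""
    return object_density < fluid_density

print(floats(8960, WATER))       # copper penny (~8960 kg/m^3): False, it sinks
print(floats(500, WATER))        # boat averaged over its air-filled hull: True
print(floats(HOT_AIR, COLD_AIR)) # hot air surrounded by cold air: True, it rises

# Net lift on roughly 2800 m^3 of hot air (a typical envelope volume):
print(buoyant_force(COLD_AIR, 2800) - HOT_AIR * 2800 * G)  # ~9300 N upward
```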
http://www.enotes.com/homework-help/how-physics-boat-floating-water-same-hot-air-301727
4.375
In a familiar high-school chemistry demonstration, an instructor first uses electricity to split liquid water into its constituent gases, hydrogen and oxygen. Then, by combining the two gases and igniting them with a spark, the instructor changes the gases back into water with a loud pop. Scientists at the University of Illinois have discovered a new way to make water, and without the pop. Not only can they make water from unlikely starting materials, such as alcohols, but their work could also lead to better catalysts and less expensive fuel cells.
“We found that unconventional metal hydrides can be used for a chemical process called oxygen reduction, which is an essential part of the process of making water,” said Zachariah Heiden, a doctoral student and lead author of a paper accepted for publication in the Journal of the American Chemical Society.
A water molecule (formally known as dihydrogen monoxide) is composed of two hydrogen atoms and one oxygen atom. But you can’t simply take two hydrogen atoms and stick them onto an oxygen atom. The actual reaction to make water is a bit more complicated: 2H2 + O2 = 2H2O + Energy. In English, the equation says: To produce two molecules of water (H2O), two molecules of diatomic hydrogen (H2) must be combined with one molecule of diatomic oxygen (O2). Energy will be released in the process.
“This reaction (2H2 + O2 = 2H2O + Energy) has been known for two centuries, but until now no one has made it work in a homogeneous solution,” said Thomas Rauchfuss, a U. of I. professor of chemistry and the paper’s corresponding author.
The well-known reaction also describes what happens inside a hydrogen fuel cell. In a typical fuel cell, the diatomic hydrogen gas enters one side of the cell, and diatomic oxygen gas enters the other side. The hydrogen molecules lose their electrons and become positively charged through a process called oxidation, while the oxygen molecules gain four electrons and become negatively charged through a process called reduction. The negatively charged oxygen ions combine with positively charged hydrogen ions to form water and release electrical energy.
The “difficult side” of the fuel cell is the oxygen reduction reaction, not the hydrogen oxidation reaction, Rauchfuss said. “We found, however, that new catalysts for oxygen reduction could also lead to new chemical means for hydrogen oxidation.”
Rauchfuss and Heiden recently investigated a relatively new generation of transfer hydrogenation catalysts for use as unconventional metal hydrides for oxygen reduction. In their JACS paper, the researchers focus exclusively on the oxidative reactivity of iridium-based transfer hydrogenation catalysts in a homogeneous, non-aqueous solution. They found the iridium complex effects both the oxidation of alcohols and the reduction of the oxygen.
“Most compounds react with either hydrogen or oxygen, but this catalyst reacts with both,” Heiden said. “It reacts with hydrogen to form a hydride, and then reacts with oxygen to make water; and it does this in a homogeneous, non-aqueous solvent.”
The new catalysts could lead to eventual development of more efficient hydrogen fuel cells, substantially lowering their cost, Heiden said.
Source: University of Illinois at Urbana-Champaign
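Because the balanced equation 2H2 + O2 = 2H2O fixes the proportions exactly, a short stoichiometry check is easy to write. The Python sketch below finds the limiting gas and the mass of water produced; the molar masses and the roughly 286 kJ released per mole of liquid water are standard reference values supplied here, not numbers from the article.

```python
# Stoichiometry of 2 H2 + O2 -> 2 H2O.
# Molar masses (g/mol) and the ~286 kJ/mol heat released per mole of liquid
# water are standard reference values, used here only for illustration.

M_H2, M_O2, M_H2O = 2.016, 32.00, 18.015
KJ_PER_MOL_H2O = 286.0

def water_from(grams_h2, grams_o2):
    """Mass of water (g) and energy (kJ) from given masses of H2 and O2."""
    mol_h2, mol_o2 = grams_h2 / M_H2, grams_o2 / M_O2
    # 2 mol H2 react with 1 mol O2, so compare mol_h2 / 2 against mol_o2.
    mol_h2o = 2 * min(mol_h2 / 2, mol_o2)
    return mol_h2o * M_H2O, mol_h2o * KJ_PER_MOL_H2O

grams_water, kilojoules = water_from(4.0, 32.0)
print(round(grams_water, 1), "g of water,", round(kilojoules), "kJ released")
# 4 g of H2 (just under 2 mol) is limiting here: about 35.7 g of water, ~567 kJ
```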
http://phys.org/news/2007-10-scientists.html
4.03125
A redox titration is a type of titration based on a redox reaction between the analyte and titrant. Redox titration may involve the use of a redox indicator and/or a potentiometer. A common example of a redox titration is treating a solution of iodine with a reducing agent to produce iodide using a starch indicator to help detect the endpoint. Iodine (I2) can be reduced to iodide (I−) by e.g. thiosulfate (S2O32−), and when all iodine is spent the blue colour disappears. This is called an iodometric titration.
Most often, the reduction of iodine to iodide is the last step in a series of reactions where the initial reactions are used to convert an unknown amount of the solute (the substance being analyzed) to an equivalent amount of iodine, which may then be titrated. Sometimes halogens other than iodine (or halogenoalkanes) are used in the intermediate reactions because they are available in better measurable standard solutions and/or react more readily with the solute. The extra steps in iodometric titration may be worthwhile because the equivalence point, where the blue colour fades, is more distinct than in some other analytical or volumetric methods. The main types of redox titration are named after the titrant used.
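For the iodometric case described above, the endpoint volume of thiosulfate converts to an amount of iodine through the reaction I2 + 2 S2O3(2-) -> 2 I(-) + S4O6(2-). The Python sketch below carries out that arithmetic; the concentration and volumes are invented for illustration and are not taken from the article.

```python
# Iodometric titration: moles of I2 = moles of thiosulfate / 2,
# from the reaction I2 + 2 S2O3(2-) -> 2 I(-) + S4O6(2-).
# The titrant concentration and volumes below are illustrative values only.

def iodine_molarity(titrant_molarity, titrant_ml, sample_ml):
    """Concentration of iodine in the sample, from the endpoint volume."""
    mol_thiosulfate = titrant_molarity * titrant_ml / 1000.0
    mol_iodine = mol_thiosulfate / 2.0
    return mol_iodine / (sample_ml / 1000.0)

# 21.40 mL of 0.100 M thiosulfate to reach the endpoint for a 25.00 mL sample:
print(iodine_molarity(0.100, 21.40, 25.00))   # ~0.0428 M iodine
```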
https://en.wikipedia.org/wiki/Redox_titration
4.34375
Trail of Tears Overview
In 1830, President Andrew Jackson signed the Indian Removal Act which led to the removal of nearly 46,000 Native Americans from their homes east of the Mississippi River to lands west of Missouri. Some tribes went peacefully, knowing that they were no match for the US Army; however, some tribes were tricked into signing treaties giving up their land while others were forced to march thousands of miles to their new homes, which they were promised would never be taken by white settlers.
Students will be able to identify the five main tribes that were moved to the Permanent Indian Frontier. Students will be able to identify and explain the reasons why so many Native Americans died en route to their new homes. Students will be able to describe how well the Native Americans were treated by the US Army during the time period.
White settlers had long been moving onto land that had once been roamed by nomadic Indian tribes. Nomadic tribes moved with the seasons and the migration of the animals that they hunted; those animals were the life blood of the tribes. Animals that were hunted not only provided food, but also clothing, shelter and the tools for everyday life. When the prairie was an open space, the buffalo especially would roam in large groups which allowed the Indians to hunt in large parties and capture many animals at a time. As more and more white settlers came to America, they wanted to move onto the land to farm, which meant that the Indians could no longer roam there. This caused many problems and led to many conflicts between the white settlers and the Indians. By 1830 President Andrew Jackson believed that the Indians needed to all be moved west of the Mississippi River to lands that would be set aside for the Indians alone. There were, however, some major problems with this idea, the major one being that most of the Indians did not want to move.
By the year 1842 the US Army had established a series of forts along what they called the Permanent Indian Frontier, stretching from Fort Snelling in Minnesota to Fort Jesup in Louisiana. These forts were created to keep white settlers off of Indian lands and to keep peace between the many tribes that would be forced to live in close proximity with each other. Between the years of 1831-1842 nearly 46,000 Native Americans would be moved from their eastern homelands to lands in what is today Kansas, Oklahoma and Nebraska. During these years, some tribes moved on their own and without major problems, while still others had to be removed by the US Army using force. The most infamous of these involved the Cherokee tribe being marched from Georgia to present day Oklahoma. However, many of these types of marches took place.
The tragedies that befell the Indian tribes were atrocious; some were preventable while others happened whenever large groups of people came together in those days. For example, cholera was a common illness when large groups congregated together because of lack of knowledge about sanitation. However there were other atrocities, such as not enough food or blankets to keep the tribes from freezing to death. Some tribes were not allowed to bring any of their belongings with them, which left them at a distinct disadvantage in their ability to care for themselves. This lesson is designed for students to study the emigration of the tribes during Indian Removal as well as to get some understanding of the hardships that the tribes faced.
Materials you will need for this lesson are the associated map activity which shows the location of the Indian tribes before and after removal and the hazard cards which are used to determine who survives and who dies in the reenactment of the journey.
Before you begin: Make copies of the hazard cards that will be handed out. (There are 24 cards with this lesson; however you can change that to whatever number that you need.) Hand out maps of before relocation and after that show where tribes were marched to.
Step 1: Introduce the topic of Indian Removal. Discuss why some tribes would voluntarily leave their homeland while others refused. Discuss the Five Civilized tribes and why they may have been called that. Also make sure that students know that there were many more tribes than just those 5. Have students look at the map and talk about why some tribes were given more land area than others.
Step 2: Ask the students to make an educated guess of how many in the class would not have made it to the final destination. (The number of hazard cards is slightly off from the actual percentages that would have died en route.) Give each student a random hazard card, but ask them not to look at them yet. Discuss the techniques used by the US Army to get the tribes to march west; make sure to discuss how some tribes were not allowed to take any belongings with them.
Step 3: There are 2 ways to do this activity: 1. As the teacher continues discussing or reading about the Indian Removal the teacher can randomly call out a hazard card description and those students must either sit on the floor or put their heads on the desk. 2. If you have access to a gym or open area outside you could do this as a kinetic activity. Take students on a walk and talk to them about the tribes; as you walk, randomly call out a hazard card. (For example you might say something like "Cholera has struck the camp - if you have a cholera card, please sit down on the floor.") If the student has the hazard card then they have to sit down. Teachers will need to make sure that they call out all the hazard cards by the end except for the survivor cards; the students holding survivor cards should be the only ones still standing at the end of the activity.
After all the cards have been called, ask the students to look around: many of their classmates will either be sitting on the floor or have their heads down. They represent the number of Indians that would not have survived the trip west. Ask students to think about that number on the grand scale of how many people were displaced from their homes.
At the conclusion of the activity, have students answer the following questions: 1. Why were Native American tribes being moved west? What was happening to the land they left behind? 2. What types of things happened to the tribes as they marched west? 3. How were the tribes treated during this time frame? Students should be able to write at least 3-4 sentences on each of the 3 questions. Teachers are checking for understanding based on the discussions that took place as well as the culminating writing activity.
Fort Scott was created as a part of the Permanent Indian Frontier; the soldiers that were stationed at the fort kept peace between the tribes that had been relocated to this region. Tribes from east of the Mississippi River who had been forcibly moved to this area were promised that this would be "permanent" Indian territory. Soldiers at Fort Scott formed a "border patrol" keeping white settlers and Indian tribes separated.
Prior to the establishment of Fort Scott, a military garrison had been present at Fort Wayne in the heart of Cherokee land. The Cherokee objected to a military presence at this location and Fort Scott was established in part to placate the Cherokee tribe. There are many excellent books, both fiction and informational text, that would go right along with this lesson.
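For teachers who want to preview the odds before running the reenactment, the short Python sketch below simulates the card deal. The 18/6 split between survivor and hazard cards is only an assumption for illustration; the lesson specifies 24 cards but not the exact mix, so adjust CARD_MIX to match whatever cards you actually print.

    import random

    # Assumed card mix: the lesson provides 24 cards but does not state how many
    # are survivor cards, so this split is illustrative only.
    CARD_MIX = {"survivor": 18, "cholera": 2, "exposure": 2, "starvation": 2}

    def run_activity(class_size=24, seed=None):
        """Deal one card per student and report how many 'survive' the march."""
        rng = random.Random(seed)
        deck = [name for name, count in CARD_MIX.items() for _ in range(count)]
        rng.shuffle(deck)
        hands = deck[:class_size]                     # one card per student
        survivors = sum(card == "survivor" for card in hands)
        print(f"{survivors} of {class_size} students hold survivor cards "
              f"({100 * survivors / class_size:.0f}% 'survive' the reenactment).")

    run_activity(seed=1830)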
http://www.nps.gov/fosc/learn/education/classrooms/totlesson.htm
4.25
Fever or Chills, Age 12 and Older. Fever is the body's normal and healthy reaction to infection and other illnesses, both minor and serious. It helps the body fight infection. Fever is a symptom, not a disease. In most cases, having a fever means you have a minor illness. When you have a fever, your other symptoms will help you determine how serious your illness is. Temperatures in this topic are oral temperatures. Oral temperatures are usually taken in older children and adults. Normal body temperature Most people have an average body temperature of about 98.6°F (37°C), measured orally (a thermometer is placed under the tongue). Your temperature may be as low as 97.4°F (36.3°C) in the morning or as high as 99.6°F (37.6°C) in the late afternoon. Your temperature may go up when you exercise, wear too many clothes, take a hot bath, or are exposed to hot weather. A fever is a high body temperature. A temperature of up to 102°F (38.9°C) can be helpful because it helps the body fight infection. Most healthy children and adults can tolerate a fever as high as 103°F (39.4°C) to 104°F (40°C) for short periods of time without problems. Children tend to have higher fevers than adults. The degree of fever may not show how serious the illness is. With a minor illness, such as a cold, you may have a temperature, while a very serious infection may cause little or no fever. It is important to look for and evaluate other symptoms along with the fever. If you are not able to measure your temperature with a thermometer, you need to look for other symptoms of illness. A fever without other symptoms that lasts 3 to 4 days, comes and goes, and gradually reduces over time is usually not a cause for concern. When you have a fever, you may feel tired, lack energy, and not eat as much as usual. High fevers are not comfortable, but they rarely cause serious problems. Oral temperature taken after smoking or drinking a hot fluid may give you a false high temperature reading. After drinking or eating cold foods or fluids, an oral temperature may be falsely low. For information on how to take an accurate temperature, see the topic Body Temperature. Causes of fever Travel outside your native country can expose you to other diseases. Fevers that begin after travel in other countries need to be evaluated by your doctor. Fever and respiratory symptoms are hard to evaluate during the flu season. A fever of 102°F (38.9°C) or higher for 3 to 4 days is common with the flu. For more information, see the topic Respiratory Problems, Age 12 and Older. Recurrent fevers are those that occur 3 or more times within 6 months and are at least 7 days apart. Each new viral infection may cause a fever. It may seem that a fever is ongoing, but if 48 hours pass between fevers, then the fever is recurring. If you have frequent or recurrent fevers, it may be a symptom of a more serious problem. Talk to your doctor about your fevers. Treating a fever In most cases, the illness that caused the fever will clear up in a few days. You usually can treat the fever at home if you are in good health and do not have any medical problems or significant symptoms with the fever. Make sure that you are taking enough foods and fluids and urinating in normal amounts. Low body temperature An abnormally low body temperature (hypothermia) can be serious, even life-threatening. Low body temperature may occur from cold exposure, shock, alcohol or drug use, or certain metabolic disorders, such as diabetes or hypothyroidism.
A low body temperature may also be present with an infection, particularly in newborns, older adults, or people who are frail. An overwhelming infection, such as sepsis, may also cause an abnormally low body temperature. Check your symptoms to decide if and when you should see a doctor. Check Your Symptoms Many prescription and nonprescription medicines can trigger an allergic reaction and cause a fever. A few examples are: - Barbiturates, such as phenobarbital. - Aspirin, if you take too much. Sudden drooling and trouble swallowing can be signs of a serious problem called epiglottitis. This problem can happen at any age. The epiglottis is a flap of tissue at the back of the throat that you can't see when you look in the mouth. When you swallow, it closes to keep food and fluids out of the tube (trachea) that leads to the lungs. If the epiglottis becomes inflamed or infected, it can swell and quickly block the airway. This makes it very hard to breathe. The symptoms start suddenly. A person with epiglottitis is likely to seem very sick, have a fever, drool, and have trouble breathing, swallowing, and making sounds. In the case of a child, you may notice the child trying to sit up and lean forward with his or her jaw forward, because it's easier to breathe in this position. Call 911 Now Based on your answers, you need emergency care. Call 911 or other emergency services now. Seek Care Today Based on your answers, you may need care soon. The problem probably will not get better without medical care. - Call your doctor today to discuss the symptoms and arrange for care. - If you cannot reach your doctor or you don't have one, seek care today. - If it is evening, watch the symptoms and seek care in the morning. - If the symptoms get worse, seek care sooner. Seek Care Now Based on your answers, you may need care right away. The problem is likely to get worse without medical care. - Call your doctor now to discuss the symptoms and arrange for care. - If you cannot reach your doctor or you don't have one, seek care in the next hour. - You do not need to call an ambulance unless: - You cannot travel safely either by driving yourself or by having someone else drive you. - You are in an area where heavy traffic or other problems may slow you down. Shock is a life-threatening condition that may quickly occur after a sudden illness or injury. Symptoms of shock (most of which will be present) include: - Passing out. - Feeling very dizzy or lightheaded, like you may pass out. - Feeling very weak or having trouble standing. - Not feeling alert or able to think clearly. You may be confused, restless, fearful, or unable to respond to questions. Pain in children under 3 years It can be hard to tell how much pain a baby or toddler is in. - Severe pain (8 to 10): The pain is so bad that the baby cannot sleep, cannot get comfortable, and cries constantly no matter what you do. The baby may kick, make fists, or grimace. - Moderate pain (5 to 7): The baby is very fussy, clings to you a lot, and may have trouble sleeping but responds when you try to comfort him or her. - Mild pain (1 to 4): The baby is a little fussy and clings to you a little but responds when you try to comfort him or her. Certain health conditions and medicines weaken the immune system's ability to fight off infection and illness. Some examples in adults are: - Diseases such as diabetes, cancer, heart disease, and HIV/AIDS. - Long-term alcohol and drug problems. - Steroid medicines, which may be used to treat a variety of conditions.
- Chemotherapy and radiation therapy for cancer. - Other medicines used to treat autoimmune disease. - Medicines taken after organ transplant. - Not having a spleen. Symptoms of serious illness may include: - A severe headache. - A stiff neck. - Mental changes, such as feeling confused or much less alert. - Extreme fatigue (to the point where it's hard for you to function). - Shaking chills. If you're not sure if a fever is high, moderate, or mild, think about these issues: With a high fever: - You feel very hot. - It is likely one of the highest fevers you've ever had. High fevers are not that common, especially in adults. With a moderate fever: - You feel warm or hot. - You know you have a fever. With a mild fever: - You may feel a little warm. - You think you might have a fever, but you're not sure. Sudden tiny red or purple spots or sudden bruising may be early symptoms of a serious illness or bleeding problem. There are two types. Petechiae (say "puh-TEE-kee-eye"): - Are tiny, flat red or purple spots in the skin or the lining of the mouth. - Do not turn white when you press on them. - Range from the size of a pinpoint to the size of a small pea and do not itch or cause pain. - May spread over a large area of the body within a few hours. - Are different than tiny, flat red spots or birthmarks that are present all the time. Purpura (say "PURR-pyuh-ruh" or “PURR-puh-ruh”): - Is sudden, severe bruising that occurs for no clear reason. - May be in one area or all over. - Is different than the bruising that happens after you bump into something. Temperature varies a little depending on how you measure it. For adults and children age 12 and older, these are the ranges for high, moderate, and mild, according to how you took the temperature. Oral (by mouth) temperature - High: 104°F (40°C) and higher - Moderate: 100.4°F (38°C) to 103.9°F (39.9°C) - Mild: 100.3°F (37.9°C) and lower A forehead (temporal) scanner is usually 0.5°F (0.3°C) to 1°F (0.6°C) lower than an oral temperature. Ear or rectal temperature - High: 105°F (40.6°C) and higher - Moderate: 101.4°F (38.6°C) to 104.9°F (40.5°C) - Mild: 101.3°F (38.5°C) and lower Armpit (axillary) temperature - High: 103°F (39.5°C) and higher - Moderate: 99.4°F (37.4°C) to 102.9°F (39.4°C) - Mild: 99.3°F (37.3°C) and lower Severe trouble breathing means: - You cannot talk at all. - You have to work very hard to breathe. - You feel like you can't get enough air. - You do not feel alert or cannot think clearly. Moderate trouble breathing means: - It's hard to talk in full sentences. - It's hard to breathe with activity. Mild trouble breathing means: - You feel a little out of breath but can still talk. - It's becoming hard to breathe with activity. Symptoms of difficulty breathing can range from mild to severe. For example: - You may feel a little out of breath but still be able to talk (mild difficulty breathing), or you may be so out of breath that you cannot talk at all (severe difficulty breathing). - It may be getting hard to breathe with activity (mild difficulty breathing), or you may have to work very hard to breathe even when you’re at rest (severe difficulty breathing). Severe dehydration means: - Your mouth and eyes may be extremely dry. - You may pass little or no urine for 12 or more hours. - You may not feel alert or be able to think clearly. - You may be too weak or dizzy to stand. - You may pass out. Moderate dehydration means: - You may be a lot more thirsty than usual. - Your mouth and eyes may be drier than usual. 
- You may pass little or no urine for 8 or more hours. - You may feel dizzy when you stand or sit up. Mild dehydration means: - You may be more thirsty than usual. - You may pass less urine than usual. You can get dehydrated when you lose a lot of fluids because of problems like vomiting or fever. Symptoms of dehydration can range from mild to severe. For example: - You may feel tired and edgy (mild dehydration), or you may feel weak, not alert, and not able to think clearly (severe dehydration). - You may pass less urine than usual (mild dehydration), or you may not be passing urine at all (severe dehydration). Try Home Treatment You have answered all the questions. Based on your answers, you may be able to take care of this problem at home. - Try home treatment to relieve the symptoms. - Call your doctor if symptoms get worse or you have any concerns (for example, if symptoms are not getting better as you would expect). You may need care sooner. Many things can affect how your body responds to a symptom and what kind of care you may need. These include: - Your age. Babies and older adults tend to get sicker quicker. - Your overall health. If you have a condition such as diabetes, HIV, cancer, or heart disease, you may need to pay closer attention to certain symptoms and seek care sooner. - Medicines you take. Certain medicines, herbal remedies, and supplements can cause symptoms or make them worse. - Recent health events, such as surgery or injury. These kinds of events can cause symptoms afterwards or make them more serious. - Your health habits and lifestyle, such as eating and exercise habits, smoking, alcohol or drug use, sexual history, and travel. Fever can be a symptom of almost any type of infection. Symptoms of a more serious infection may include the following: - Skin infection: Pain, redness, or pus - Joint infection: Severe pain, redness, or warmth in or around a joint - Bladder infection: Burning when you urinate, and a frequent need to urinate without being able to pass much urine - Kidney infection: Pain in the flank, which is either side of the back just below the rib cage - Abdominal infection: Belly pain Make an Appointment Based on your answers, the problem may not improve without medical care. - Make an appointment to see your doctor in the next 1 to 2 weeks. - If appropriate, try home treatment while you are waiting for the appointment. - If symptoms get worse or you have any concerns, call your doctor. You may need care sooner. Severe trouble breathing means: - The child cannot eat or talk because he or she is breathing so hard. - The child's nostrils are flaring and the belly is moving in and out with every breath. - The child seems to be tiring out. - The child seems very sleepy or confused. Moderate trouble breathing means: - The child is breathing a lot faster than usual. - The child has to take breaks from eating or talking to breathe. - The nostrils flare or the belly moves in and out at times when the child breathes. Mild trouble breathing means: - The child is breathing a little faster than usual. - The child seems a little out of breath but can still eat or talk. Pain in adults and older children - Severe pain (8 to 10): The pain is so bad that you can't stand it for more than a few hours, can't sleep, and can't do anything else except focus on the pain. - Moderate pain (5 to 7): The pain is bad enough to disrupt your normal activities and your sleep, but you can tolerate it for hours or days. 
Moderate can also mean pain that comes and goes even if it's severe when it's there. - Mild pain (1 to 4): You notice the pain, but it is not bad enough to disrupt your sleep or activities. Shock is a life-threatening condition that may occur quickly after a sudden illness or injury. Symptoms of shock in a child may include: - Passing out. - Being very sleepy or hard to wake up. - Not responding when being touched or talked to. - Breathing much faster than usual. - Acting confused. The child may not know where he or she is. It's easy to become dehydrated when you have a fever. In the early stages, you may be able to correct mild to moderate dehydration with home treatment measures. It is important to control fluid losses and replace lost fluids. Adults and children age 12 and older If you become mildly to moderately dehydrated while working outside or exercising: - Stop your activity and rest. - Get out of direct sunlight and lie down in a cool spot, such as in the shade or an air-conditioned area. - Prop up your feet. - Take off any extra clothes. - Drink a rehydration drink, water, juice, or sports drink to replace fluids and minerals. Drink 2 qt (2 L) of cool liquids over the next 2 to 4 hours. You should drink at least 10 glasses of liquid a day to replace lost fluids. You can make an inexpensive rehydration drink at home. But do not give this homemade drink to children younger than 12. Measure all ingredients precisely. Small variations can make the drink less effective or even harmful. Mix the following: - 1 quart (1 L) purified water - ½ teaspoon (2.5 mL) salt - 6 teaspoons (30 mL) sugar Rest and take it easy for 24 hours, and continue to drink a lot of fluids. Although you will probably start feeling better within just a few hours, it may take as long as a day and a half to completely replace the fluid that you lost. Many people find that taking a lukewarm [80°F (27°C) to 90°F (32°C)] shower or bath makes them feel better when they have a fever. Do not try to take a shower if you are dizzy or unsteady on your feet. Increase the water temperature if you start to shiver. Shivering is a sign that your body is trying to raise its temperature. Do not use rubbing alcohol, ice, or cold water to cool your body. Dress lightly when you have a fever. This will help your body cool down. Wear light pajamas or a light undershirt. Do not wear very warm clothing or use heavy bed covers. Keep room temperature at 70°F (21°C) or lower. If you are not able to measure your temperature, you need to look for other symptoms of illness every hour while you have a fever and follow home treatment measures. Try a nonprescription medicine to help treat your fever or pain. Talk to your child's doctor before switching back and forth between doses of acetaminophen and ibuprofen. When you switch between two medicines, there is a chance your child will get too much medicine. Be sure to follow these safety tips when you use a nonprescription medicine. Be sure to check your temperature every 2 to 4 hours to make sure home treatment is working. Symptoms to watch for during home treatment Call your doctor if any of the following occur during home treatment: - Level of consciousness changes. - You have signs of dehydration and you are unable to drink enough to replace lost fluids. Signs of dehydration include being thirstier than usual and having darker urine than usual. - Other symptoms develop, such as pain in one area of the body, shortness of breath, or urinary symptoms.
- Symptoms become more severe or frequent. The best way to prevent fevers is to reduce your exposure to infectious diseases. Hand-washing is the single most important prevention measure for people of all ages. Immunizations can reduce the risk for fever-related illnesses, such as the flu. Although no vaccine is 100% effective, most routine immunizations are effective for 85% to 95% of the people who receive them. For more information, see the topic Immunizations. Preparing For Your Appointment To prepare for your appointment, see the topic Making the Most of Your Appointment. You can help your doctor diagnose and treat your condition by being prepared to answer the following questions: - What is the history of your fever? - When did your fever start? - How often do you have a fever? - How long does your fever last? - Does your fever have a pattern? - Are you able to measure your temperature? How high is your fever? - Have you had any other health problems over the past 3 months? - Have you recently been exposed to anyone who has a fever? - Have you recently traveled outside the country or been exposed to immigrants or other nonnative people? - Have you had any insect bites in the past 6 weeks, including tick bites? - What home treatment measures have you tried? Did they help? - What nonprescription medicines have you taken? Did they help? Keep a fever chart of what your temperature was before and after home treatment. - Do you have any health risks? Other Places To Get Help - Abdominal Pain, Age 11 and Younger - Abdominal Pain, Age 12 and Older - Ear Problems and Injuries, Age 11 and Younger - Ear Problems and Injuries, Age 12 and Older - Fever Seizures - Respiratory Problems, Age 11 and Younger - Respiratory Problems, Age 12 and Older - Sore Throat and Other Throat Problems - Urinary Problems and Injuries, Age 11 and Younger - Urinary Problems and Injuries, Age 12 and Older Primary Medical Reviewer William H. Blahd, Jr., MD, FACEP - Emergency Medicine Specialist Medical Reviewer H. Michael O'Connor, MD - Emergency Medicine Current as of: November 14, 2014 © 1995-2015 Healthwise, Incorporated.
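As a quick way to see how the temperature ranges quoted above fit together, here is a minimal Python sketch that encodes them. The function and dictionary names are illustrative only and are not part of the Healthwise material; note that a reading below the fever range also falls into the "mild" bucket in this toy version.

    # Fever ranges for age 12 and older, in degrees Fahrenheit, as quoted above.
    # route: (temperature where "moderate" starts, temperature where "high" starts)
    THRESHOLDS_F = {
        "oral":   (100.4, 104.0),
        "ear":    (101.4, 105.0),
        "rectal": (101.4, 105.0),
        "armpit": (99.4, 103.0),
    }

    def fever_category(temp_f, route="oral"):
        """Classify a temperature as 'mild', 'moderate', or 'high'."""
        moderate_at, high_at = THRESHOLDS_F[route]
        if temp_f >= high_at:
            return "high"
        if temp_f >= moderate_at:
            return "moderate"
        return "mild"  # includes normal readings below the fever range

    print(fever_category(102.0, "oral"))    # moderate
    print(fever_category(99.0, "armpit"))   # mild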
http://www.uwhealth.org/health/topic/symptom/-fever-or-chills-age-12-and-older/fevr4.html
4.125
Bone is a hard substance that makes up the skeleton, which supports the body and provides protection for the organs. Bone is composed of minerals, mainly calcium and phosphate, which it stores and provides to the body as they are needed. Bone consists of three layers: the outside covering of the bone (periosteum); the hard middle (compact) bone; and the inner spongy (cancellous) bone. The covering of the bone contains nerves and blood vessels that feed the hard bone. Holes and channels run through the hard bone to supply oxygen and nutrients to the inner bone cells. The spongy bone contains bone marrow, which produces red and white blood cells and platelets. Normal bone is constantly dissolving and being absorbed into the body and then being rebuilt in a process called remodeling. This allows bones to react to changes in body weight and structure and to increase bone strength in areas of stress. eMedicineHealth Medical Reference from Healthwise
http://www.emedicinehealth.com/script/main/art.asp?articlekey=137597&ref=128776
4.15625
Jetstream - On-line School for Weather, NOAA - National Weather Service Activity takes about 1 class period. Learn more about Teaching Climate Literacy and Energy Awareness. See how this Activity supports the Next Generation Science Standards. Middle School: 1 Performance Expectation, 2 Disciplinary Core Ideas, 8 Cross Cutting Concepts, 6 Science and Engineering Practices About Teaching Climate Literacy Other materials addressing 2b Excellence in Environmental Education Guidelines Other materials addressing: C) Systems and connections. Other materials addressing: D) Flow of matter and energy. Notes From Our Reviewers The CLEAN collection is hand-picked and rigorously reviewed for scientific accuracy and classroom effectiveness. Read what our review team had to say about this resource below or learn more about how CLEAN reviews teaching materials. Teaching Tips - Educators may wish to supplement this with background materials, see for example: http://www.srh.noaa.gov/jetstream/atmos/whatacycle_max.html. - Educators may also want each student to discuss their own pathway through the water cycle with the group to reinforce how complex the water cycle really is. - To connect to climate change introduce some "What if...?" scenarios in a post-activity discussion. e.g. "What if the temperature of the ocean sea surface increased? How might this change other elements of the cycle?" - Could use as is with elementary students; one could add complexity to it for middle school students. One concept to consider introducing is the energy gained or lost during evaporation or condensation, and students could leave or take a token at a station to represent the gain or loss of energy. Another concept to consider adding would be the flux of water molecules. About the Science - Activity gives students a visceral sense of where and how frequently water molecules move around in the water cycle. - As noted in its description, the activity is unrealistic as most water molecules are contained in the ocean. About half of the students are initially placed at the ocean station. - Comments from expert scientist: Creative way to engage students in a "game" to learn about the various interactions within the water cycle. Presents a thorough number of paths and parts of the water cycle, to illustrate water cycle complexity. The cards describe and define, in appropriate scientific terms, the process that takes place for the student (i.e. water molecule) to transition from one place in the cycle to the next. That's where the real learning can come in, in having the students learn about how those movements within the water system take place. About the Pedagogy - While the activity does not include much scientific background on the water cycle itself, it is a kinesthetic exercise that will give students a strong sense of what water molecules do within the water cycle, and the variety of pathways that a molecule can take. Technical Details/Ease of Use - The website includes printouts for both the station cards for each station in the water cycle and the water cycle worksheets for each student. These are in color but don't require a color printer. - Students must be mobile and the classroom space must be configured such that students can move around. Next Generation Science Standards See how this Activity supports: Performance Expectations: 1 MS-ESS2-4: Develop a model to describe the cycling of water through Earth's systems driven by energy from the sun and the force of gravity.
Disciplinary Core Ideas: 2 MS-ESS2.C1:Water continually cycles among land, ocean, and atmosphere via transpiration, evaporation, condensation and crystallization, and precipitation, as well as downhill flows on land. MS-ESS2.C3:Global movements of water and its changes in form are propelled by sunlight and gravity. Cross Cutting Concepts: 8 MS-C4.2: Models can be used to represent systems and their interactions—such as inputs, processes and outputs—and energy, matter, and information flows within systems. MS-C5.1:Matter is conserved because atoms are conserved in physical and chemical processes. MS-C5.2: Within a natural or designed system, the transfer of energy drives the motion and/or cycling of matter. MS-C7.3:Stability might be disturbed either by sudden events or gradual changes that accumulate over time. MS-C1.2: Patterns in rates of change and other numerical relationships can provide information about natural and human designed systems MS-C2.2:Cause and effect relationships may be used to predict phenomena in natural or designed systems. MS-C3.1:Time, space, and energy phenomena can be observed at various scales using models to study systems that are too large or too small. MS-C3.3: Proportional relationships (e.g., speed as the ratio of distance traveled to time taken) among different types of quantities provide information about the magnitude of properties and processes. Science and Engineering Practices: 6 MS-P2.1:Evaluate limitations of a model for a proposed object or tool. MS-P2.4:Develop and/or revise a model to show the relationships among variables, including those that are not observable but predict observable phenomena. MS-P2.5:Develop and/or use a model to predict and/or describe phenomena. MS-P4.2:Use graphical displays (e.g., maps, charts, graphs, and/or tables) of large data sets to identify temporal and spatial relationships. MS-P5.4:Apply mathematical concepts and/or processes (e.g., ratio, rate, percent, basic operations, simple algebra) to scientific and engineering questions and problems. MS-P6.2:Construct an explanation using models or representations.
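For teachers who want a quick preview of the kind of wandering paths students will record, the toy Python sketch below runs one "water molecule" through a set of stations. The stations and transition probabilities are invented for illustration (the printable station cards define the real moves); the only deliberate bias is keeping the ocean sticky, echoing the note above that most water molecules sit in the ocean.

    import random

    # Invented station list and move probabilities, for illustration only; the
    # printed station cards in the actual activity define the real transitions.
    MOVES = {
        "ocean":       [("atmosphere", 0.2), ("ocean", 0.8)],
        "atmosphere":  [("cloud", 0.6), ("atmosphere", 0.4)],
        "cloud":       [("ocean", 0.5), ("lake", 0.3), ("glacier", 0.2)],
        "lake":        [("river", 0.4), ("atmosphere", 0.3), ("groundwater", 0.3)],
        "river":       [("ocean", 0.7), ("lake", 0.3)],
        "glacier":     [("river", 0.3), ("glacier", 0.7)],
        "groundwater": [("river", 0.5), ("groundwater", 0.5)],
    }

    def journey(start="ocean", steps=20, seed=42):
        """Follow one 'water molecule' through the stations and return its path."""
        rng = random.Random(seed)
        path = [start]
        for _ in range(steps):
            stations, weights = zip(*MOVES[path[-1]])
            path.append(rng.choices(stations, weights=weights)[0])
        return path

    print(" -> ".join(journey()))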
http://cleanet.org/resources/44660.html
4.125
A horseshoe orbit is a type of co-orbital motion of a small orbiting body relative to a larger orbiting body (such as Earth). The orbital period of the smaller body is very nearly the same as for the larger body, and its path appears to have a horseshoe shape in a rotating reference frame as viewed from the larger object. The loop is not closed but will drift forward or backward slightly each time, so that the point it circles will appear to move smoothly along the larger body's orbit over a long period of time. When the object approaches the larger body closely at either end of its trajectory, its apparent direction changes. Over an entire cycle the center traces the outline of a horseshoe, with the larger body between the 'horns'. Asteroids in horseshoe orbits with respect to Earth include 54509 YORP, 2002 AA29, 2010 SO16, 2015 SO2 and possibly 2001 GO2. A broader definition includes 3753 Cruithne, which can be said to be in a compound and/or transition orbit, or (85770) 1998 UP1 and 2003 YN107. Explanation of horseshoe orbital cycle The following explanation relates to an asteroid which is in such an orbit around the Sun, and is also affected by the Earth. The asteroid is in almost the same solar orbit as Earth. Both take approximately one year to orbit the Sun. It is also necessary to grasp two rules of orbit dynamics: - A body closer to the Sun completes an orbit more quickly than a body further away. - If a body accelerates along its orbit, its orbit moves outwards from the Sun. If it decelerates, the orbital radius decreases. The horseshoe orbit arises because the gravitational attraction of the Earth changes the shape of the elliptical orbit of the asteroid. The shape changes are very small but result in significant changes relative to the Earth. The horseshoe becomes apparent only when mapping the movement of the asteroid relative to both the Sun and the Earth. The asteroid always orbits the Sun in the same direction. However, it goes through a cycle of catching up with the Earth and falling behind, so that its movement relative to both the Sun and the Earth traces a shape like the outline of a horseshoe. Stages of the orbit Starting out at point A on the inner ring between L5 and Earth, the satellite is orbiting faster than the Earth. It's on its way toward passing between the Earth and the Sun. But Earth's gravity exerts an outward accelerating force, pulling the satellite into a higher orbit which (per Kepler's third law) decreases its angular speed. When the satellite gets to point B, it is traveling at the same speed as Earth. Earth's gravity is still accelerating the satellite along the orbital path, and continues to pull the satellite into a higher orbit. Eventually, at C, the satellite reaches a high enough, slow enough orbit and starts to lag behind Earth. It then spends the next century or more appearing to drift 'backwards' around the orbit when viewed relative to the Earth. Its orbit around the Sun still takes only slightly more than one Earth year. Eventually the satellite comes around to point D. Earth's gravity is now reducing the satellite's orbital velocity, causing it to fall into a lower orbit, which actually increases the angular speed of the satellite. This continues until the satellite's orbit is lower and faster than Earth's orbit. It begins moving out ahead of the earth. Over the next few centuries it completes its journey back to point A. A somewhat different, but equivalent, view of the situation may be noted by considering conservation of energy. 
It is a theorem of classical mechanics that a body moving in a time-independent potential field will have its total energy, E = T + V, conserved, where E is total energy, T is kinetic energy (always non-negative) and V is potential energy, which is negative. It is apparent then, since V = -GM/R near a gravitating body of mass M, that seen from a stationary frame, V will be increasing for the region behind M, and decreasing for the region in front of it. However, orbits with lower total energy have shorter periods, and so a body moving slowly on the forward side of a planet will lose energy, fall into a shorter-period orbit, and thus slowly move away, or be "repelled" from it. Bodies moving slowly on the trailing side of the planet will gain energy, rise to a higher, slower, orbit, and thereby fall behind, similarly repelled. Thus a small body can move back and forth between a leading and a trailing position, never approaching too close to the planet that dominates the region. - See also trojan (astronomy). Figure 1 above shows shorter orbits around the Lagrangian points L4 and L5 (e.g. the lines close to the blue triangles). These are called tadpole orbits and can be explained in a similar way, except that the asteroid's distance from the Earth does not oscillate as far as the L3 point on the other side of the Sun. As it moves closer to or farther from the Earth, the changing pull of Earth's gravitational field causes it to accelerate or decelerate, causing a change in its orbit known as libration. An example of a body in a tadpole orbit is Polydeuces, a small moon of Saturn which librates around the trailing L5 point relative to a larger moon, Dione. In relation to the orbit of Earth, the 300-meter-diameter asteroid 2010 TK7 is in a tadpole orbit around the leading L4 point.
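The two orbit rules above can be checked numerically. The short Python sketch below (not part of the source article) uses Kepler's third law and the specific orbital energy E = -GM/(2a) to show that an orbit slightly inside Earth's has lower energy and a shorter period, so a body there gains on Earth, while one slightly outside falls behind; the constants are rounded and the drift rate is approximate.

    import math

    MU_SUN = 1.327e20      # GM of the Sun, m^3/s^2 (rounded)
    AU = 1.496e11          # astronomical unit, m
    YEAR = 365.25 * 86400  # seconds

    def period(a):
        """Orbital period from Kepler's third law, T = 2*pi*sqrt(a^3 / GM)."""
        return 2 * math.pi * math.sqrt(a**3 / MU_SUN)

    def specific_energy(a):
        """Specific orbital energy E = -GM/(2a): more negative energy means a
        smaller semi-major axis and a shorter period."""
        return -MU_SUN / (2 * a)

    for label, a in [("0.995 AU (inside Earth's orbit)", 0.995 * AU),
                     ("1.000 AU (Earth)", 1.000 * AU),
                     ("1.005 AU (outside Earth's orbit)", 1.005 * AU)]:
        T = period(a)
        drift = 360 * (YEAR / T - 1)   # degrees gained (+) or lost (-) per year vs. Earth
        print(f"{label}: T = {T / YEAR:.4f} yr, E = {specific_energy(a):.3e} J/kg, "
              f"drift about {drift:+.1f} deg/yr relative to Earth")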
https://en.wikipedia.org/wiki/Horseshoe_orbit
4.09375
How to identify parallel lines, a line parallel to a plane, and two parallel planes. How to write an equation for the coordinate planes or any plane that is parallel to one. How to find the angle between planes, and how to determine if two planes are parallel or perpendicular. How to find a vector normal (perpendicular) to a plane given an equation for the plane. How to form sentences with parallel structure. How resistors in parallel affect current flow. How capacitors in parallel affect current flow. How to plot complex numbers on the complex plane. How to take the converse of the parallel lines theorem. How to mark parallel lines, how to show lines are parallel, and how to compare skew and parallel lines. How to describe and label point, line, and plane. How to define coplanar and collinear. How to determine whether lines are parallel, perpendicular, or neither. How to prove two triangles are similar using a line parallel to a base. How to determine whether two lines in space are parallel or perpendicular. How to prove that opposite angles in a cyclic quadrilateral are congruent; how to prove that parallel lines create congruent arcs in a circle. How to construct parallel lines using three different methods.
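As a worked example of the normal-vector test behind several of the topics above (not part of the Brightstorm lessons themselves), the Python sketch below takes two planes written as ax + by + cz = d and classifies them as parallel, perpendicular, or neither, using the angle between their normal vectors.

    import math

    def angle_between_planes(n1, n2):
        """Acute angle (degrees) between planes whose normal vectors are n1, n2."""
        dot = sum(a * b for a, b in zip(n1, n2))
        norm1 = math.sqrt(sum(a * a for a in n1))
        norm2 = math.sqrt(sum(b * b for b in n2))
        cos_theta = abs(dot) / (norm1 * norm2)   # abs() gives the acute angle
        return math.degrees(math.acos(min(1.0, cos_theta)))

    def classify(n1, n2, tol=1e-9):
        theta = angle_between_planes(n1, n2)
        if theta < tol:
            return "parallel"
        if abs(theta - 90.0) < tol:
            return "perpendicular"
        return f"neither (angle about {theta:.2f} degrees)"

    # Normals are the (a, b, c) coefficients of each plane equation.
    print(classify((1, 2, -2), (2, 4, -4)))   # parallel
    print(classify((1, 0, 0), (0, 3, 0)))     # perpendicular
    print(classify((1, 1, 0), (1, 0, 0)))     # neither (angle about 45 degrees)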
https://www.brightstorm.com/tag/parallel-planes/
4.0625
The physical structure of the Earth, starting from the center, consists of a solid inner core surrounded by a liquid outer core. The outer core is enveloped by a highly viscous layer called the mantle, whose physical properties are effectively those of a solid. The outermost layer, the crust, is also solid and is approximately 6 km thick under the oceans and 50 km thick under the continents. Tectonic plates are contiguous segments of the crust that move independently of each other. The motion of the tectonic plates over billions of years has reshaped the surface of the Earth, breaking a single large continent called Pangaea into the continents and oceans as we see them today. The tectonic plates continue to move, though they move so slowly compared to the span of human existence that the geography of the Earth appears constant. With the help of modern technology such as satellite imagery, it is possible for geologists to see the present movement of tectonic plates and make very rough estimates of how the Earth would look in the future. It is estimated that in 30 to 50 million years, Africa will move towards Europe, and the interaction of the two plates will lead to the formation of a mountain range like the Himalayan mountain range. North and South America are estimated to move farther away, leading to an expansion of the Atlantic Ocean. Australia is estimated to move towards Asia, eventually forming a joint continent.
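A back-of-the-envelope calculation shows why tens of millions of years are enough to rearrange coastlines even at imperceptible speeds. The plate speed used in the Python sketch below is a typical figure of a few centimetres per year, chosen only for illustration; it is not taken from the answer above.

    # Rough distance a plate covers over the 30-million-year horizon mentioned above.
    RATE_CM_PER_YEAR = 2.5   # assumed typical plate speed; real plates range ~1-10 cm/yr
    YEARS = 30e6

    distance_km = RATE_CM_PER_YEAR * YEARS / 100 / 1000   # cm -> m -> km
    print(f"At {RATE_CM_PER_YEAR} cm/yr for {YEARS:.0e} years, "
          f"a plate travels roughly {distance_km:.0f} km.")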
http://www.enotes.com/homework-help/explain-how-earth-may-appear-30-million-years-from-359620
4
By attaching short sequences of single-stranded DNA to nanoscale building blocks, researchers can design structures that can effectively build themselves. The building blocks that are meant to connect have complementary DNA sequences on their surfaces, ensuring only the correct pieces bind together as they jostle into one another while suspended in a test tube. The spheres that make up the crystal follow each other in slipstreams, making some patterns more likely to form. (Ian Jenkins) Now, a University of Pennsylvania team has made a discovery with implications for all such self-assembled structures. Earlier work assumed that the liquid medium in which these DNA-coated pieces float could be treated as a placid vacuum, but the Penn team has shown that fluid dynamics play a crucial role in the kind and quality of the structures that can be made in this way. As the DNA-coated pieces rearrange themselves and bind, they create slipstreams into which other pieces can flow. This phenomenon makes some patterns within the structures more likely to form than others. The research was conducted by professors Talid Sinno and John Crocker, alongside graduate students Ian Jenkins, Marie Casey and James McGinley, all of the Department of Chemical and Biomolecular Engineering in Penn’s School of Engineering and Applied Science. It was published in the Proceedings of the National Academy of Sciences. The Penn team’s discovery started with an unusual observation about one of their previous studies, which dealt with a reconfigurable crystalline structure the team had made using DNA-coated plastic spheres, each 400 nanometers wide. These structures initially assemble into floppy crystals with square-shaped patterns, but, in a process similar to heat-treating steel, their patterns can be coaxed into more stable, triangular configurations. Surprisingly, the structures they were making in the lab were better than the ones their computer simulations predicted would result. The simulated crystals were full of defects, places where the crystalline pattern of the spheres was disrupted, but the experimentally grown crystals were all perfectly aligned. While these perfect crystals were a positive sign that the technique could be scaled up to build different kinds of structures, the fact that their simulations were evidently flawed indicated a major hurdle. “What you see in an experiment,” Sinno said, “is usually a dirtier version of what you see in simulation. We need to understand why these simulation tools aren’t working if we’re going to build useful things with this technology, and this result was evidence that we don’t fully understand this system yet. It’s not just a simulation detail that was missing; there’s a fundamental physical mechanism that we’re not including.” By process of elimination, the missing physical mechanism turned out to be hydrodynamic effects, essentially, the interplay between the particles and the fluid in which they are suspended while growing. The simulation of a system’s hydrodynamics is critical when the fluid is flowing, such as how rocks are shaped by a rushing river, but has been considered irrelevant when the fluid is still, as it was in the researchers’ experiments. While the particles’ jostling perturbs the medium, the system remains in equilibrium, suggesting the overall effect is negligible. “The conventional wisdom,” Crocker said, “was that you don't need to consider hydrodynamic effects in these systems. 
Adding them to simulations is computationally expensive, and there are various kinds of proofs that these effects don’t change the energy of the system. From there you can make a leap to saying, ‘I don’t need to worry about them at all.’” Particle systems like ones made by these self-assembling DNA-coated spheres typically rearrange themselves until they reach the lowest energy state. An unusual feature of the researchers’ system is that there are thousands of final configurations — most containing defects — that are just as energetically favorable as the perfect one they produced in the experiment. “It’s like you’re in a room with a thousand doors,” Crocker said. “Each of those doors takes you to a different structure, only one of which is the copper-gold pattern crystal we actually get. Without the hydrodynamics, the simulation is equally likely to send you through any one of those doors.” The researchers’ breakthrough came when they realized that while hydrodynamic effects would not make any one final configuration more energy-favorable than another, the different ways particles would need to rearrange themselves to get to those states were not all equally easy. Critically, it is easier for a particle to make a certain rearrangement if it’s following in the wake of another particle making the same moves. “It’s like slipstreaming,” Crocker said. “The way the particles move together, it’s like they’re a school of fish.” “How you go determines what you get,” Sinno said. “There are certain paths that have a lot more slipstreaming than others, and the paths that have a lot correspond to the final configurations we observed in the experiment.” The researchers believe that this finding will lay the foundation for future work with these DNA-coated building blocks, but the principle discovered in their study will likely hold up in other situations where microscopic particles are suspended in a liquid medium. “If slipstreaming is important here, it’s likely to be important in other particle assemblies,” Sinno said. It’s not just about these DNA-linked particles; it’s about any system where you have particles at this size scale. To really understand what you get, you need to include the hydrodynamics.” The research was supported by the National Science Foundation through its Chemical, Bioengineering, Environmental and Transport Systems Division. Evan Lerner | EurekAlert!
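To make the "thousand doors" point concrete, here is a toy Monte Carlo sketch; it is not the Penn group's simulation, and the rates, step counts, and slipstream factor are invented. Two final structures, A and B, are given identical energies, but the pathway to A is assumed to proceed faster because particles can follow in each other's wake; a plain energy argument would predict a 50/50 split, while the biased kinetics pick A almost every time.

    import random

    STEPS = 10               # rearrangement steps needed to reach either structure
    RATE_B = 1.0             # baseline rate per step (arbitrary units)
    SLIPSTREAM_FACTOR = 3.0  # assumed speed-up along the "slipstreamed" pathway
    RATE_A = RATE_B * SLIPSTREAM_FACTOR

    def first_to_finish(rng):
        """First-passage race: whichever pathway completes all its steps first wins."""
        time_a = sum(rng.expovariate(RATE_A) for _ in range(STEPS))
        time_b = sum(rng.expovariate(RATE_B) for _ in range(STEPS))
        return "A" if time_a < time_b else "B"

    rng = random.Random(0)
    trials = 10_000
    wins_a = sum(first_to_finish(rng) == "A" for _ in range(trials))
    print(f"Pathway A selected in {100 * wins_a / trials:.1f}% of {trials} trials, "
          f"despite the two final structures having equal energy.")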
http://www.innovations-report.com/html/reports/life-sciences/the-motion-of-the-medium-matters-for-self-assembling-particles-penn-research-shows.html
4.34375
The Germanic umlaut (more usually called i-umlaut or i-mutation) is a type of linguistic umlaut in which a back vowel changes to the associated front vowel (fronting) or a front vowel becomes closer to /i/ (raising) when the following syllable contains /i/, /iː/, or /j/. It took place separately in various Germanic languages starting around 450 or 500 AD and affected all of the early languages except Gothic. An example of the resulting vowel alternation is the English plural foot ~ feet (from Germanic */fōts/, pl. */fōtiz/). Germanic umlaut, as covered in this article, does not include other historical vowel phenomena that operated in the history of the Germanic languages such as Germanic a-mutation and the various language-specific processes of u-mutation, as well as the earlier Indo-European ablaut (vowel gradation), which is observable in the declension of Germanic strong verbs such as sing/sang/sung. Umlaut is a form of assimilation or vowel harmony, the process by which one speech sound is altered to make it more like another adjacent sound. If a word has two vowels with one far back in the mouth and the other far forward, more effort is required to pronounce the word than if the vowels were closer together; therefore, one possible linguistic development is for these two vowels to be drawn closer together. Germanic umlaut is a specific historical example of this process that took place in the unattested earliest stages of Old English and Old Norse and apparently later in Old High German, and some other old Germanic languages. The precise developments varied from one language to another, but the general trend was this: - Whenever a back vowel (/a/, /o/ or /u/, whether long or short) occurred in a syllable and the front vowel /i/ or the front glide /j/ occurred in the next, the vowel in the first syllable was fronted (usually to /æ/, /ø/, and /y/ respectively). Thus, for example, West Germanic *mūsiz "mice" shifted to proto-Old English *mȳsiz, which eventually developed to modern mice, while the singular form *mūs lacked a following /i/ and was unaffected, eventually becoming modern mouse. - When a low or mid-front vowel occurred in a syllable and the front vowel /i/ or the front glide /j/ occurred in the next, the vowel in the first syllable was raised. This happened less often in the Germanic languages, partly because of earlier vowel harmony in similar contexts. For example, proto-Old English /æ/ became /e/ in */bæddj-/ > /bedd/ 'bed'. The fronted variant caused by umlaut was originally allophonic (a variant sound automatically predictable from the context), but it later became phonemic (a separate sound in its own right) when the context was lost but the variant sound remained.
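As a toy illustration of the fronting rule just described (and only of that rule, not of any one language's full development), the Python sketch below fronts a back vowel whenever the next syllable contains i or j. The syllable-list representation and the vowel mapping are simplified assumptions for demonstration purposes.

    # Simplified fronting map: a > æ, o > ø, u > y, with long-vowel counterparts.
    FRONTING = {"a": "æ", "ā": "ǣ", "o": "ø", "ō": "ø̄", "u": "y", "ū": "ȳ"}

    def i_umlaut(syllables):
        """Front a back vowel when the following syllable contains i, ī, or j."""
        out = list(syllables)
        for i in range(len(out) - 1):
            if any(trigger in out[i + 1] for trigger in ("i", "ī", "j")):
                for back, front in FRONTING.items():
                    out[i] = out[i].replace(back, front)
        return out

    print(i_umlaut(["mūs"]))        # ['mūs']       -- no trigger, no change
    print(i_umlaut(["mū", "siz"]))  # ['mȳ', 'siz'] -- *mūsiz > *mȳsiz
    print(i_umlaut(["fō", "ti"]))   # ['fø̄', 'ti']  -- *fōti > *fø̄ti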
The following examples show how, when final -i was lost, the variant sound -ȳ- became a new phoneme in Old English: |Process||Stage||"mouse" (sg.)||"mice" (pl.)||"foot" (sg.)||"feet" (pl.)| |Loss of final -z||West Germanic||*mūs||*mūsi||*fōt||*fōti| |Germanic umlaut||Pre-Old English||*mūs||*mȳsi||*fōt||*fø̄ti| |Loss of i after a heavy syllable||Pre-Old English||mūs||mȳs||fōt||fø̄t| |Unrounding of ø̄ (> ē)||Most Old English dialects||mūs||mȳs||fōt||fēt| |Unrounding of ȳ (> ī)||Early Middle English||mūs||mīs||fōt||fēt| |Great Vowel Shift||Early Modern and Modern English||/maʊs/||/maɪs/||/fʊt/||/fiːt/| Although umlaut was not a grammatical process, umlauted vowels often serve to distinguish grammatical forms (and thus show similarities to ablaut when viewed synchronically), as can be seen in the English word man. In ancient Germanic, it and some other words had the plural suffix -iz, with the same vowel as the singular. As it contained an i, this suffix caused fronting of the vowel, and when the suffix later disappeared, the mutated vowel remained as the only plural marker: men. In English, such plurals are rare: man, woman, tooth, goose, foot, mouse, louse, brother (archaic or specialized plural in brethren), and cow (poetic and dialectal plural in kine). It also can be found in a few fossilized diminutive forms, such as kitten from cat and kernel from corn, and the feminine vixen from fox. Umlaut is conspicuous when it occurs in one of such a pair of forms, but there are many mutated words without an unmutated parallel form. Germanic actively derived causative weak verbs from ordinary strong verbs by applying a suffix, which later caused umlaut, to a past tense form. Some of these survived into modern English as doublets of verbs, including fell and set vs. fall (older past *fefall) and sit. Umlaut could occur in borrowings as well if the stressed vowel was coloured by a subsequent front vowel, such as German Köln, "Cologne", from Latin Colonia, or Käse, "cheese", from Latin caseus. Parallel umlauts in some modern Germanic languages |Proto-Germanic||German||English||Dutch||Swedish||Faroese| |*fallaną - *fallijaną||fallen - fällen||to fall - fell||vallen - vellen||falla - fälla||falla - fella| |*fōts - *fōtiz||Fuß - Füße||foot - feet||voet - voeten (no umlaut)||fot - fötter||fótur - føtur| |*aldaz - *alþizô - *alþistaz||alt - älter - am ältesten||old - elder - eldest||oud - ouder - oudst (no umlaut)||gammal - äldre - äldst (irregular)||gamal - eldri - elstur (irregular)| |*fullaz - *fullijaną||voll - füllen||full - fill||vol - vullen||full - fylla||fullur - fylla| |*langaz - *langīn/*langiþō||lang - Länge||long - length||lang - lengte||lång - längd||langur - longd| |*lūs - *lūsiz||Laus - Läuse||louse - lice||luis - luizen (no umlaut)||lus - löss||lús - lýs| German orthography is generally consistent in its representation of i-umlaut. The umlaut diacritic, consisting of two dots above the vowel, is used for the fronted vowels, making the historical process much more visible in the modern language than is the case in English: a>ä, o>ö, u>ü, au>äu. Sometimes a word has a vowel affected by i-umlaut, but the vowel is not marked with the umlaut diacritic. Usually, the word with an umlauted vowel comes from an original word without umlaut, but the two are not recognized as a pair because the meaning of the umlauted word has changed. The adjective fertig ("ready", "finished"; originally "ready to go") contains an umlaut mutation, but it is spelled with e rather than ä as its relationship to Fahrt (journey) has, for most speakers of the language, been lost from sight.
Likewise, alt (old) has the comparative älter (older), but the noun from this is spelled Eltern (parents). Aufwand (effort) has the verb aufwenden (to spend, to dedicate) and the adjective aufwendig (requiring effort) though the 1996 spelling reform now permits the alternative spelling aufwändig (but not aufwänden). For denken, see below. On the other hand, some foreign words have umlaut diacritics that do not mark a vowel produced by the sound change of umlaut. Notable examples are Känguru from English kangaroo, and Büro from French bureau. In the latter case, the diacritic is a pure phonological marker, with no regard to etymology; in case of the kangaroo (identical in sound to *Kenguru), it somewhat etymologically marks the fact that the sound is written with an a in English. Similarly, Big Mac can be spelt Big Mäc in German, which even used to be the official spelling used by McDonald's in Germany. In borrowings from Latin and Greek, Latin ae, oe, or Greek ai, oi, are rendered in German as ä and ö respectively (Ägypten, "Egypt", or Ökonomie, "economy"). However, Latin/Greek y is written y in German instead of ü (Psychologie); y ended up being used entirely instead of ü in Scandinavia for native words as well. Für "for" is a special case; it is an umlauted form of vor "before", but other historical developments changed the expected ö into ü. In this case, the ü marks a genuine but irregular umlaut. Other special cases are fünf "five" (expected form *finf) and zwölf "twelve" (expected form *zwälf/zwelf), in which modern umlauted vowel arose from a different process:rounding an unrounded front vowel (possibly from the labial consonants w/f occurring on both sides). Orthography and design history The German phonological umlaut is present in the Old High German period and continues to develop in Middle High German. From the Middle High German, it was sometimes denoted in written German by adding an e to the affected vowel, either after the vowel or, in the small form, above it. This can still be seen in some names:Goethe, Goebbels, Staedtler. In blackletter handwriting, as used in German manuscripts of the later Middle Ages and also in many printed texts of the early modern period, the superscript ⟨e⟩ still had a form that would now be recognisable as an ⟨e⟩, but in manuscript writing, umlauted vowels could be indicated by two dots since the late medieval period. Unusual umlaut designs are sometimes also created for graphic design purposes, such as to fit an umlaut into tightly-spaced lines of text. It may include umlauts placed vertically or inside the body of the letter. False ablaut in verbs Two interesting examples of umlaut involve vowel distinctions in Germanic verbs and often are subsumed under the heading "ablaut" in descriptions of Germanic verbs, giving them the name false ablaut. The German word Rückumlaut ("reverse umlaut") is the slightly misleading term given to the vowel distinction between present and past tense forms of certain Germanic weak verbs. Examples in English are think/thought, bring/brought, tell/told, sell/sold. (These verbs have a dental -t or -d as a tense marker; therefore, they are weak and the vowel change cannot be conditioned by ablaut.) The presence of umlaut is possibly more obvious in German denken/dachte ("think/thought"), especially if it is remembered that in German the letters <ä> and <e> are usually phonetically equivalent. 
The Proto-Germanic verb would have been *þankijaną; the /j/ caused umlaut in all the forms that had the suffix; subsequently, the /j/ disappeared. The term "reverse umlaut" indicates that if, following traditional grammar, one takes the infinitive and the present tense as the starting point, there is an illusion of a vowel shift towards the back of the mouth (so to speak, <ä>→<a>) in the past tense, but of course, the historical development was simply umlaut in the present tense forms.

A variety of umlaut occurs in the second and third person singular forms of the present tense of some Germanic strong verbs. For example, German fangen ("to catch") has the present tense ich fange, du fängst, er fängt. The verb geben ("give") has the present tense ich gebe, du gibst, er gibt, but the shift e→i would not be a normal result of umlaut in German. There are, in fact, two distinct phenomena at play here; the first is indeed umlaut as it is best known, but the second is older and occurred already in Proto-Germanic itself. In both cases, a following i triggered a vowel change, but in Proto-Germanic, it affected only e. The effect on back vowels did not occur until hundreds of years later, after the Germanic languages had already begun to split up: *fą̄haną, *fą̄hidi with no umlaut of a, but *gebaną, *gibidi with umlaut of e.

West Germanic languages

Although umlaut operated the same way in all the West Germanic languages, the exact words in which it took place and the outcomes of the process differ between the languages. Of particular note is the loss of word-final -i after heavy syllables. In the more southern languages (Old High German, Old Dutch, Old Saxon), forms that lost -i often show no umlaut, but in the more northern languages (Old English, Old Frisian), the forms do. Compare Old English ġiest "guest", which shows umlaut, and Old High German gast, which does not, both from Proto-Germanic *gastiz. That may mean that there was dialectal variation in the timing and spread of the two changes, with final loss happening before umlaut in the south but after it in the north. On the other hand, umlaut may have still been partly allophonic, and the loss of the conditioning sound may have triggered an "un-umlauting" of the preceding vowel. Nevertheless, medial -ij- consistently triggers umlaut, although its subsequent loss is universal in West Germanic except for Old Saxon and early Old High German.

I-mutation in Old English

I-mutation generally affected Old English vowels as follows in each of the main dialects. It led to the introduction into Old English of the new sounds /y(:)/, /ø(:)/ (which, in most varieties, soon turned into /e(:)/) and a sound written in Early West Saxon manuscripts as ie but whose phonetic value is debated.

| original | i-mutated (three main dialects) | | | examples and notes |
| a | æ, e | æ, e > e | æ, e | bacan "to bake", bæcþ "(he/she) bakes". a > e particularly before nasal consonants: mann "person", menn "people" |
| ā | ǣ | ǣ | ǣ | lār "teaching" (cf. "lore"), lǣran "to teach" |
| æ | e | e | e | þæc "covering" (cf. "thatch"), þeccan "to cover" |
| e | i | i | i | not clearly attested due to earlier Germanic e > i before i, j |
| o | oe > e | oe > e | oe > e | Latin oleum, Old English oele, ele. Early forms in oe, representing /ø/, later unrounded to e |
| ō | oe > ē | oe > ē | oe > ē | fōt "foot", foet, fēt "feet". Early forms in oe, representing /ø/, later unrounded to ē |
| u | y | y > e | y | murnan "to mourn", myrnþ "(he/she) mourns" |
| ū | ȳ | ȳ > ē | ȳ | mūs "mouse", mȳs "mice" |
| ea | ie > y | e | e | eald "old", ieldra, eldra "older" (cf. "elder") |
| ēa | īe > ȳ | ē | ē | nēah "near" (cf. "nigh"), nīehst "nearest" (cf. "next") |
| eo | io > eo | io > eo | io > eo | examples are rare due to earlier Germanic e > i before i, j. io became eo in most later varieties of Old English |
| ēo | īo > ēo | īo > ēo | īo > ēo | examples are rare due to earlier Germanic e > i before i, j. īo became ēo in most later varieties of Old English |
| io | ie > y | io, eo | io, eo | *fiohtan "to fight", fieht "(he/she) fights". io became eo in most later varieties of Old English, giving alternations like beornan "to burn", biernþ "(he/she) burns" |
| īo | īe > ȳ | īo, ēo | īo, ēo | līoht "light", līehtan "illuminate". īo became ēo in most later varieties of Old English, giving alternations like sēoþan "to boil" (cf. "seethe"), sīeþþ "(he/she) boils" |

I-mutation is particularly visible in the inflectional and derivational morphology of Old English since it affected so many of the Old English vowels. Of 16 basic vowels and diphthongs in Old English, only the four vowels ǣ, ē, i, ī were unaffected by i-mutation. Although i-mutation was originally triggered by an /i(:)/ or /j/ in the syllable following the affected vowel, by the time of the surviving Old English texts, the /i(:)/ or /j/ had generally changed (usually to /e/) or been lost entirely, with the result that i-mutation generally appears as a morphological process that affects a certain (seemingly arbitrary) set of forms. These are the most common forms affected:
- The plural, and genitive/dative singular, forms of consonant-declension nouns (Proto-Germanic (PGmc) *-iz), as compared to the nominative/accusative singular – e.g., fōt "foot", fēt "feet"; mūs "mouse", mȳs "mice". Many more words were affected by this change in Old English vs. modern English – e.g., bōc "book", bēc "books"; frēond "friend", frīend "friends".
- The second and third person present singular indicative of strong verbs (Pre-Old-English (Pre-OE) *-ist, *-iþ), as compared to the infinitive and other present-tense forms – e.g. helpan "to help", helpe "(I) help", hilpst "(you sg.) help" (cf. archaic "thou helpest"), hilpþ "(he/she) helps" (cf. archaic "he helpeth"), helpaþ "(we/you pl./they) help".
- The comparative form of some adjectives (Pre-OE *-ira < PGmc *-izǭ, Pre-OE *-ist < PGmc *-istaz), as compared to the base form – e.g. eald "old", ieldra "older", ieldest "oldest" (cf. "elder, eldest").
- Throughout the first class of weak verbs (original suffix -jan), as compared to the forms from which the verbs were derived – e.g. fōda "food", fēdan "to feed" < Pre-OE *fōdjan; lār "lore", lǣran "to teach"; feallan "to fall", fiellan "to fell".
- In the abstract nouns in þ(u) (PGmc *-iþō) corresponding to certain adjectives – e.g., strang "strong", strengþ(u) "strength"; hāl "whole/hale", hǣlþ(u) "health"; fūl "foul", fȳlþ(u) "filth".
- In female forms of several nouns with the suffix -enn (PGmc *-injō) – e.g., god "god", gydenn "goddess" (cf. German Gott, Göttin); fox "fox", fyxenn "vixen".
- In i-stem abstract nouns derived from verbs (PGmc *-iz) – e.g. cyme "a coming", cuman "to come"; byre "a son (orig., a being born)", beran "to bear"; fiell "a falling", feallan "to fall"; bend "a bond", bindan "to bind". Note that in some cases the abstract noun has a different vowel than the corresponding verb, due to Proto-Indo-European ablaut.
- The phonologically expected umlaut of /a/ is /æ/. However, in many cases /e/ appears. Most instances of /a/ in Old English stem from earlier /æ/ because of a change called a-restoration.
This change was blocked when /i/ or /j/ followed, leaving /æ/, which subsequently mutated to /e/. For example, in the case of talu "tale" vs. tellan "to tell", the forms at one point in the early history of Old English were *tælu and *tælljan, respectively. A-restoration converted *tælu to talu, but left *tælljan alone, and it subsequently evolved to tellan by i-mutation. The same process "should" have led to *becþ instead of bæcþ. That is, the early forms were *bæcan and *bæciþ. A-restoration converted *bæcan to bacan but left alone *bæciþ, which would normally have evolved by umlaut to *becþ. In this case, however, once a-restoration took effect, *bæciþ was modified to *baciþ by analogy with bacan, and then later umlauted to bæcþ.
- A similar process resulted in the umlaut of /o/ sometimes appearing as /e/ and sometimes (usually, in fact) as /y/. In Old English, /o/ generally stems from a-mutation of original /u/. A-mutation of /u/ was blocked by a following /i/ or /j/, which later triggered umlaut of the /u/ to /y/, which is why alternations between /o/ and /y/ are common. Umlaut of /o/ to /e/ occurs only when an original /u/ was modified to /o/ by analogy before umlaut took place. For example, dohtor comes from late Proto-Germanic *dohter, from earlier *duhter. The plural in Proto-Germanic was *duhtriz, with /u/ unaffected by a-mutation due to the following /i/. At some point prior to i-mutation, the form *duhtriz was modified to *dohtriz by analogy with the singular form, which then allowed it to be umlauted to a form that resulted in dehter.

A few hundred years after i-umlaut began, another similar change called double umlaut occurred. It was triggered by an /i/ or /j/ in the third or fourth syllable of a word and mutated all previous vowels, but it worked only when the vowel directly preceding the /i/ or /j/ was /u/. This /u/ typically appears as e in Old English or is deleted:
- hægtess "witch" < PGmc *hagatusjō (cf. Old High German hagazussa)
- ǣmerge "embers" < Pre-OE *āmurja < PGmc *aimurjǭ (cf. Old High German eimurja)
- ǣrende "errand" < PGmc *ǣrundijaz (cf. Old Saxon ārundi)
- efstan "to hasten" < archaic œfestan < Pre-OE *ofustan
- ȳmest "upmost" < PGmc *uhumistaz (cf. Gothic áuhumists)
As shown by the examples, affected words typically had /u/ in the second syllable and /a/ in the first syllable. The /æ/ developed too late to break to ea or to trigger palatalization of a preceding velar.

I-mutation in High German

I-mutation is visible in Old High German (OHG), c. 800 AD, only on /a/, which was mutated to /e/. By then, it had already become partly phonologized, since some of the conditioning /i/ and /j/ sounds had been deleted or modified. The later history of German, however, shows that /o/ and /u/ were also affected; starting in Middle High German, the remaining conditioning environments disappear, and /o/ and /u/ appear as /ø/ and /y/ in the appropriate environments. That has led to a controversy over when and how i-mutation appeared on these vowels. Some (for example, Herbert Penzl) have suggested that the vowels must have been modified without being indicated, for lack of proper symbols and/or because the difference was still partly allophonic. Others (such as Joseph Voyles) have suggested that the i-mutation of /o/ and /u/ was entirely analogical and pointed to the lack of i-mutation of these vowels in certain places where it would be expected, in contrast to the consistent mutation of /a/.
Perhaps the answer is somewhere in between: i-mutation of /o/ and /u/ was indeed phonetic, occurring late in OHG, but later spread analogically to the environments where the conditioning had already disappeared by OHG (this is where failure of i-mutation is most likely). It must also be kept in mind that it is an issue of relative chronology: already early in the history of attested OHG, some umlauting factors are known to have disappeared (such as word-internal j after geminates and clusters), and depending on the age of OHG umlaut, that could explain some cases where expected umlaut is missing.

In modern German, umlaut as a marker of the plural of nouns is a regular feature of the language, and although umlaut itself is no longer a productive force in German, new plurals of this type can be created by analogy. Likewise, umlaut marks the comparative of many adjectives and other kinds of derived forms. Because of the grammatical importance of such pairs, the German umlaut diacritic was developed, making the phenomenon very visible. The result in German is that the vowels written as <a>, <o>, and <u> become <ä>, <ö>, and <ü>, and the diphthong <au> becomes <äu>: Mann/Männer ("man/men"), lang/länger ("long/longer"), Fuß/Füße ("foot/feet"), Maus/Mäuse ("mouse/mice"), Haus/Häuser ("house/houses"). On the phonetic realisation of these, see German phonology.

I-mutation in Old Saxon

In Old Saxon, umlaut is much less apparent than in Old Norse. The only vowel that is regularly fronted before an /i/ or /j/ is short /a/: gast – gesti, slahan – slehis. It must have had a greater effect than the orthography shows, since all later dialects have a regular umlaut of both long and short vowels.

I-mutation in Dutch

The situation in Old Dutch is similar to the situation found in Old Saxon and Old High German. Late Old Dutch saw a merger of /u/ and /o/, causing their umlauted results to merge as well, giving /ʏ/. The lengthening in open syllables in early Middle Dutch then lengthened and lowered this short /ʏ/ to long /øː/ (spelled eu) in some words. This is parallel to the lowering of /i/ in open syllables to /eː/, as in schip ("ship") – schepen ("ships"). Later developments in Middle Dutch show that long vowels and diphthongs were not affected by umlaut in the more western dialects, including those in western Brabant and Holland that were most influential for standard Dutch. Thus, for example, where modern German has fühlen /ˈfyːlən/ and English has feel /fiːl/ (from Proto-Germanic *fōlijaną), standard Dutch retains a back vowel in the stem in voelen /ˈvulə(n)/. Thus, only two of the original Germanic vowels were affected by umlaut at all in western/standard Dutch: /a/, which became /ɛ/, and /u/, which became /ʏ/ (spelled u). As a result of this relatively sparse occurrence of umlaut, standard Dutch does not use umlaut as a grammatical marker. An exception is the noun stad "city", which has the irregular umlauted plural steden. The more eastern dialects of Dutch, including eastern Brabantian and all of Limburgish, have umlaut of long vowels, however. Consequently, these dialects also make grammatical use of umlaut to form plurals and diminutives, much as most other modern Germanic languages do. Compare vulen /vylə(n)/ and menneke "little man" from man.

North Germanic languages

I-mutation in Old Norse

The situation in Old Norse is complicated as there are two forms of i-mutation.
Of these two, only one is phonologized. I-mutation in Old Norse is phonological:
- In Proto-Norse, if the syllable was heavy and followed by vocalic i (*gastiʀ > gestr, but *staði > *stað) or, regardless of syllable weight, if followed by consonantal i (*skunja > skyn). The rule is not perfect, as some light syllables were still umlauted: *kuni > kyn, *komiʀ > kømr.
- In Old Norse, if the following syllable contains a remaining Proto-Norse i. For example, the root of the dative singular of u-stems is i-mutated, as the desinence contains a Proto-Norse i, but the dative singular of a-stems is not, as their desinence stems from P-N ē.
I-mutation is not phonological if the vowel of a long syllable is i-mutated by a syncopated i. I-mutation does not occur in short syllables.

| original | i-mutated | examples |
| a | e (ę) | fagr (fair) / fegrstr (fairest) |
| au | ey | lauss (loose) / leysa (to loosen) |
| á | æ | Áss / Æsir |
| jú | ý | ljúga (to lie) / lýgr (lies) |
| o | ø | koma (to come) / kømr (comes) |
| ó | œ | róa (to row) / rœr (rows) |
| u | y | upp (up) / yppa (to lift up) |
| ú | ý | fúll (foul) / fýla (stink, foulness) |
| ǫ | ø | sǫkk (sank) / søkkva (to sink) |

Notes and references
- Cercignani, Fausto (1980). "Early "Umlaut" Phenomena in the Germanic Languages". Language 56 (1): 126–136. doi:10.2307/412645.
- Cercignani, Fausto (1980). "Alleged Gothic Umlauts". Indogermanische Forschungen 85: 207–213.
- Campbell, A. 1959. Old English Grammar. Oxford: Clarendon Press. §§624-27.
- Hogg, Richard M., 'Phonology and Morphology', in The Cambridge History of the English Language, Volume 1: The Beginnings to 1066, ed. by Richard M. Hogg (Cambridge: Cambridge University Press, 1992), pp. 67–167 (p. 113).
- Table adapted from Campbell, Historical Linguistics (2nd edition), 2004, p. 23. See also Malmkjær, The Linguistics Encyclopedia (2nd Edition), 2002, pp. 230-233.
- Ringe 2006, pp. 274, 280
- Duden, Die deutsche Rechtschreibung, 21st edition, p. 133.
- Isert, Jörg. "Fast Food: McDonald's schafft "Big Mäc" und "Fishmäc" ab" [Fast food: McDonald's abolishes "Big Mäc" and "Fishmäc"]. Welt Online (in German). Axel Springer AG. Retrieved 21 April 2012.
- In medieval German manuscripts, other digraphs could also be written using superscripts: in bluome 'flower', for example, the ⟨o⟩ was frequently placed above the ⟨u⟩, although this letter survives now only in Czech. Compare also the development of the tilde as a superscript n.
- Hardwig, Florian. "Unusual Umlauts (German)". Typojournal. Retrieved 15 July 2015.
- Hardwig, Florian. "Jazz in Town". Fonts in Use. Retrieved 15 July 2015.
- "Flickr collection: vertical umlauts". Flickr. Retrieved 15 July 2015.
- Hardwig, Florian. "Compact umlaut". Fonts in Use. Retrieved 15 July 2015.
- Campbell, A. 1959. Old English Grammar. Oxford: Clarendon Press. §§112, 190–204, 288.
- Penzl, H. (1949). "Umlaut and Secondary Umlaut in Old High German". Language 25 (3): 223–240. JSTOR 410084.
- Voyles, Joseph (1992). "On Old High German i-umlaut". In Rauch, Irmengard; Carr, Gerald F.; Kyes, Robert L. On Germanic linguistics: issues and methods.
- Malmkjær, Kirsten (Ed.) (2002). The linguistics encyclopedia (2nd ed.). London: Routledge, Taylor & Francis Group. ISBN 0-415-22209-5.
- Campbell, Lyle (2004). Historical Linguistics: An Introduction (2nd ed.). Edinburgh University Press.
- Cercignani, Fausto, Early "Umlaut" Phenomena in the Germanic Languages, in «Language», 56/1, 1980, pp. 126–136.
- Cercignani, Fausto, Alleged Gothic Umlauts, in «Indogermanische Forschungen», 85, 1980, pp. 207–213.
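To make the modern German orthographic alternation described above (a → ä, o → ö, u → ü, au → äu, as in Mann/Männer, Maus/Mäuse, Haus/Häuser, lang/länger) concrete, here is a minimal sketch. The function name, the toy "rewrite the rightmost back vowel" rule, and the small word list are illustrative assumptions only; real German plural and comparative formation is far less regular than this suggests.

```python
# Toy sketch of the spelling alternation a/o/u -> ä/ö/ü (and au -> äu) described above.
# This illustrates the orthographic substitution, not an actual morphological rule.

UMLAUT_MAP = {"a": "ä", "o": "ö", "u": "ü"}

def umlaut_last_stem_vowel(stem: str) -> str:
    """Return the stem with its rightmost umlaut-able vowel written with a diacritic."""
    lower = stem.lower()
    for i in range(len(lower) - 1, -1, -1):
        # Treat the diphthong "au" as a unit: au -> äu.
        if lower[i] == "u" and i > 0 and lower[i - 1] == "a":
            repl = "Äu" if stem[i - 1].isupper() else "äu"
            return stem[:i - 1] + repl + stem[i + 1:]
        if lower[i] in UMLAUT_MAP:
            repl = UMLAUT_MAP[lower[i]]
            if stem[i].isupper():
                repl = repl.upper()
            return stem[:i] + repl + stem[i + 1:]
    return stem  # no a/o/u found; nothing to umlaut

if __name__ == "__main__":
    # Plural and comparative examples taken from the text above.
    for stem, ending in [("Mann", "er"), ("Maus", "e"), ("Haus", "er"), ("lang", "er")]:
        print(f"{stem} -> {umlaut_last_stem_vowel(stem)}{ending}")
```

Running the sketch prints Mann -> Männer, Maus -> Mäuse, Haus -> Häuser, lang -> länger, matching the pairs cited in the article.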
https://en.wikipedia.org/wiki/I-umlaut
4
Names of Korea |This article needs additional citations for verification. (January 2007)| There are various names of Korea in use today, derived from ancient kingdoms and dynasties. The modern English name Korea is an exonym derived from the Goryeo period and is used by both North Korea and South Korea in international contexts. In the Korean language, the two Koreas use different terms to refer to the nominally unified nation: Chosŏn (조선) in North Korea, and Hanguk (한국) in South Korea. - 1 History - 2 Current usage - 3 Sobriquets for Korea - 4 See also - 5 Notes The earliest records of Korean history are written in Chinese characters called hanja. Even after the invention of hangul, Koreans generally recorded native Korean names with hanja, by translation of meaning, transliteration of sound, or even combinations of the two. Furthermore, the pronunciations of the same character are somewhat different in Korean and the various Korean dialects, and have changed over time. For all these reasons, in addition to the sparse and sometimes contradictory written records, it is often difficult to determine the original meanings or pronunciations of ancient names. Until 108 BC, northern Korea and Manchuria were controlled by Gojoseon. In contemporaneous Chinese records, it was written as 朝鮮, which is pronounced in modern Korean as Joseon (조선). Go (古), meaning "ancient", distinguishes it from the later Joseon Dynasty. The name Joseon is also now still used by North Koreans and Koreans living in China to refer to the peninsula, and as the official Korean form of the name of Democratic People's Republic of Korea. The word is also used in many Eurasian languages to refer to Korea, such as Japanese, Vietnamese, and Chinese. Possibly the Chinese characters phonetically transcribed a native Korean name, perhaps pronounced something like "Jyusin". Some speculate that it also corresponds to Chinese references to 肅愼 (숙신, suksin), 稷愼 (직신, jiksin) and 息愼 (식신, siksin), although these latter names probably describe the ancestors of the Jurchen. Other scholars believe 朝鮮 was a translation of the native Korean Asadal (아사달), the capital of Gojoseon: asa being a hypothetical Altaic root word for "morning", and dal meaning "mountain", a common ending for Goguryeo place names. An early attempt to translate these characters into English gave rise to the expression "The Land of the Morning Calm" for Korea, which parallels the expression "The Land of the Rising Sun" for Japan. While the wording is fanciful, the essence of the translation is valid. Around the time of Gojoseon's fall, various chiefdoms in southern Korea grouped into confederacies, collectively called the Samhan (삼한, "Three Han"). Han is a native Korean root for "leader" or "great", as in maripgan ("king", archaic), hanabi ("grandfather", archaic), and Hanbat ("Great Field", archaic name for Daejeon). It may be related to the Mongol/Turkic title Khan. Han was transliterated in Chinese records as 韓 (한, han), 幹 (간, gan), 刊 (간, gan), 干 (간, gan), or 漢 (한, han), but is unrelated to the Chinese people and states also called Han, which is a different character pronounced with a different tone. (See: Transliteration into Chinese characters). The Han dynasty did have an influence on the three Han as Chinese characters were eventually used called Han-ja. Around the beginning of the Common Era, remnants of the fallen Gojoseon were re-united and expanded by the kingdom of Goguryeo, one of the Three Kingdoms of Korea. 
It, too, was a native Korean word, probably pronounced something like "Guri", transcribed with various hanja characters: 高句麗, 高勾麗, or 高駒麗 (고구려, Goguryeo), 高麗 (고려, Goryeo), 高離 (고리, Gori), or 句麗 (구려, Guryeo). The source native name is thought to be either *Guru ("walled city, castle, fortress"; attested in Chinese historical documents, but not in native Korean sources) or Gauri (가우리, "center"; cf. Middle Korean *gaβɔndɔy and Standard Modern Korean gaunde 가운데). The theory that Goguryeo referenced the founder's surname has been largely discredited (the royal surname changed from Hae to Go long after the state's founding). Revival of the names In the south, the Samhan resolved into the kingdoms of Baekje and Silla, constituting, with Goguryeo, the Three Kingdoms of Korea. In 668, Silla unified the three kingdoms, and reigned as Unified Silla until 935. The succeeding dynasty called itself Goryeo (고려, 高麗), in reference to Goguryeo. Through the Silk Road trade routes, Muslim merchants brought knowledge about Silla and Goryeo to India and the Middle East. Goryeo was transliterated into Italian as "Cauli", the name Marco Polo used when mentioning the country in his Travels, derived from the Chinese form Gāolí. From "Cauli" eventually came the English names "Corea" and the now standard "Korea" (see English usage below). In 1392, a new dynasty established by a military coup revived the name Joseon (조선, 朝鮮). The hanja were often translated into English as "morning calm/sun", and Korea's English nickname became "The Land of the Morning Calm"; however, this interpretation is not often used in the Korean language, and is more familiar to Koreans as a back-translation from English. This nickname was coined by Percival Lowell in his book, "Choson, the Land of the Morning Calm," published in 1885. In 1897, the nation was renamed Daehan Jeguk (대한제국, 大韓帝國, literally, "Great Han Empire", known in English as Korean Empire). Han had been selected in reference to Samhan (Mahan, Jinhan, Byeonhan), which was synonymous with Samkuk, Three Kingdoms of Korea (Goguryeo, Silla, Baekje), at that time. So, Daehan Jeguk (대한제국, 大韓帝國) means it is an empire that rules the area of Three Kingdoms of Korea. This name was used to emphasize independence of Korea, because an empire can't be a subordinate country. When the Korean Empire came under Japanese rule in 1910, the name reverted to Joseon (officially, the Japanese pronunciation Chōsen). During this period, many different groups outside of Korea fought for independence, the most notable being the Daehan Minguk Imsi Jeongbu (대한민국 임시정부, 大韓民國臨時政府), literally the "Provisional Government of the Great Han People's Nation", known in English as the Provisional Government of the Republic of Korea (民國 = 民 ‘people’ + 國 country/nation’ = ‘republic’ in East Asian capitalist societies). In 1948, the South adopted the provisional government's name of Daehan Minguk (대한민국, 大韓民國; see above), known in English as the Republic of Korea. Meanwhile, the North became the Chosŏn Minjujuŭi Inmin Konghwaguk (조선민주주의인민공화국, 朝鮮民主主義人民共和國) literally the "Chosŏn Democratic People Republic", known in English as the Democratic People's Republic of Korea. The name itself was adopted from the short-lived People's Republic of Korea (PRK) formed in Seoul after liberation and later added the word "democratic" to its title. Today, South Koreans use Hanguk to refer to just South Korea or Korea as a whole, Namhan (남한, 南韓; "South Han") for South Korea, and Bukhan (북한, 北韓; "North Han") for North Korea. 
South Korea less formally refers to North Korea as Ibuk (이북, 以北; "The North"). In addition the official name for the Republic of Korea in the Korean language is "Dae Han Minguk" (대한민국; "The Republic of Korea"). North Koreans use Chosŏn, Namjosŏn (남조선, 南朝鮮; "South Chosŏn"), and Bukchosŏn (북조선, 北朝鮮; "North Chosŏn") when referring to Korea, South Korea, and North Korea, respectively. The term Bukchosŏn, however, is rarely used in the north, although it may be found in the Song of General Kim Il-sung. In the tourist regions in North Korea and the official meetings between South Korea and North Korea, Namcheuk (남측, 南側) and Bukcheuk (북측, 北側), or "Southern Side" and "Northern Side", are used instead of Namhan and Bukhan. The Korean language is called Hangugeo (한국어, 韓國語) or Hangukmal (한국말) in the South and Chosŏnmal (조선말, 朝鮮말) or Chosŏnŏ (조선어, 朝鮮語) in the North. The Korean script is called hangeul (한글) in South Korea and Chosŏn'gŭl (조선글) in North Korea. The Korean Peninsula is called Hanbando (한반도, 韓半島) in the South and Chosŏn Pando (조선반도, 朝鮮半島) in the North. In Chinese-speaking areas such as mainland China, Hong Kong, Macau and Taiwan, different naming conventions on several terms have been practiced according to their political proximity to whichever Korean government although there is a growing trend for convergence. In the Chinese language, the Korean Peninsula is usually called Cháoxiǎn Bàndǎo (simplified Chinese: 朝鲜半岛; traditional Chinese: 朝鮮半島) and in rare cases called Hán Bàndǎo (simplified Chinese: 韩半岛; traditional Chinese: 韓半島). Ethnic Koreans are also called Cháoxiǎnzú (朝鲜族), instead of Dàhán mínzú (大韓民族). However, the term Hánguó ren (韩国人) may be used to specifically refer to South Koreans. Before establishing diplomatic relations with South Korea, the People's Republic of China tended to use the historic Korean name Cháoxiǎn (朝鲜 "Joseon"), by referring to South Korea as Nán Cháoxiǎn (南朝鲜 ("South Joseon"). Since diplomatic ties were restored, China has used the names that each of the two sides prefer, by referring to North Korea as Cháoxiǎn and to South Korea as Hánguó (韩国 "Hanguo"). The Korean language can be referred to as either Cháoxiǎnyǔ (朝鲜语) or Hányǔ (韩语). The Korean War is also referred as Cháoxiǎn Zhànzhēng (朝鲜战争) in official documents but it is also popular to use hánzhàn (韓战) colloquially. Taiwan, on the other hand, uses the South Korean names, referring to North Korean as Běihán (北韓 "North Han") and South Korean as Nánhán (南韓 "South Han"). The Republic of China previously maintained diplomatic relations with South Korea, but has never had relations with North Korea. As a result, in the past, Hánguó (韓國) had been used to refer to the whole Korea, and Taiwanese textbooks treated Korea as a unified nation (like mainland China). The Ministry of Foreign Affairs of the Republic of China under the Democratic Progressive Party Government considered North and South Koreas two separate countries. However, general usage in Taiwan is still to refer to North Korea as Běihán (北韓 "North Han[guk]") and South Korea as Nánhán (南韓 "South Han[guk]") while use of Cháoxiǎn (朝鮮) is generally limited to ancient Korea. The Korean language is usually referred to as Hányǔ (韓語). Similarly, general usage in Hong Kong and Macau has traditionally referred to North Korea as Bak Hon (北韓 "North Han") and South Korea as Nam Hon (南韓 "South Han"). 
Under the influence of official usage, which is itself influenced by the official usage of the People's Republic of China government, the mainland practice of naming the two Koreas differently has become more common. In the Chinese language used in Singapore and Malaysia, North Korea is usually called Cháoxiǎn (朝鲜 "Chosŏn") with Běi Cháoxiǎn (北朝鲜 "North Chosŏn") and Běihán (北韩 "North Han") less often used, while South Korea is usually called Hánguó (韩国 "Hanguk") with Nánhán (南韩 "South Han[guk]") and Nán Cháoxiǎn (南朝鲜 "South Chosŏn") less often used. The above usage pattern does not apply for Korea-derived words. For example, Korean ginseng is commonly called Gāolì shēn (高麗參). In Japan, the name preferred by each of the two sides for itself is used, so that North Korea is called Kita-Chōsen (北朝鮮; "North Chosŏn") and South Korea Kankoku (韓国 "Hanguk"). However, North Koreans claim the name Kita-Chōsen is derogatory, as it only refers to the northern part of Korean Peninsula, whereas the government claims the sovereignty over its whole territory. Pro-North people such as Chongryon use the name Kyōwakoku (共和国; "the Republic") instead, but the ambiguous name is not popular among others. In 1972 Chongryon campaigned to get the Japanese media to stop referring to North Korea as Kita-Chōsen. This effort was not successful, but as a compromise most media companies agreed to refer to the nation with its full official title at least once in every article, thus they used the lengthy Kita-Chōsen (Chōsen Minshu-shugi Jinmin Kyōwakoku) (北朝鮮(朝鮮民主主義人民共和国) "North Chosŏn (The People's Democratic Republic of Chosŏn)"). By January 2003, this policy started to be abandoned by most newspapers, starting with Tokyo Shimbun, which announced that it would no longer write out the full name, followed by Asahi, Mainichi, and Nikkei For Korea as a whole, Chōsen (朝鮮; "Joseon") is commonly used. The term Chōsen, which has a longer usage history, continues to be used to refer to the Korean peninsula, the Korean ethnic group, and the Korean language, which are use cases that won't cause confusion between Korea and North Korea. When referring to both North Korean and South Korean nationals, the transcription of phonetic English Korean (コリアン, Korian) may be used because a reference to a Chōsen national may be interpreted as a North Korean national instead. The Korean language is most frequently referred to in Japan as Kankokugo (韓国語) or Chōsengo (朝鮮語). While academia mostly prefers Chōsengo, Kankokugo became more and more common in non-academic fields, thanks to the economic and cultural presence of South Korea. The language is also referred to as various terms, such as "Kankokuchōsengo" (韓国朝鮮語), "Chōsen-Kankokugo" (朝鮮・韓国語), "Kankokugo (Chōsengo)" (韓国語(朝鮮語)), etc. Some people refer to the language as Koriago (コリア語; "Korean Language"). This term is not used in ordinary Japanese, but was selected as a compromise to placate both nations in a euphemistic process called kotobagari. Likewise, when NHK broadcasts a language instruction program for Korean, the language is referred to as hangurugo (ハングル語; "hangul language"); although it's technically incorrect since hangul itself is a writing system, not a language. Some argue that even Hangurugo is not completely neutral, since North Korea calls the letter Chosŏn'gŭl, not hangul. Urimaru (ウリマル), a direct transcription of uri mal (우리 말, "our language") is sometimes used by Korean residents in Japan, as well as by KBS World Radio. 
This term, however, may not be suitable to ethnic Japanese whose "our language" is not necessarily Korean. In Japan, those who moved to Japan usually maintain their distinctive cultural heritages (such as the Baekje-towns or Goguryeo-villages). Ethnic Korean residents of Japan have been collectively called Zainichi Chōsenjin (在日朝鮮人 "Joseon People in Japan"), regardless of nationality. However, for the same reason as above, the euphemism Zainichi Korian (在日コリアン; "Koreans in Japan") is increasingly used today. Zainichi (在日; "In Japan") itself is also often used colloquially. People with North Korean nationality are called Zainichi Chōsenjin, while those with South Korean nationality, sometimes including recent newcomers, are called Zainichi Kankokujin (在日韓国人 "Hanguk People in Japan"). Mongols have their own word for Korea: Солонгос (Solongos). In Mongolian, solongo means "rainbow." And another theory is probably means derived from Solon tribe living in Manchuria, a tribe culturally and ethnically related to the Korean people. North and South Korea are, accordingly, Хойд Солонгос (Hoid Solongos) and Өмнөд Солонгос (Ömnöd Solongos). The name of either Silla or its capital Seora-beol was also widely used throughout Northeast Asia as the ethnonym for the people of Silla, appearing [...] as Solgo or Solho in the language of the medieval Jurchens and their later descendants, the Manchus respectively. In Vietnam, people call North Korea Triều Tiên (朝鮮; "Chosŏn") and South Korea Hàn Quốc (韓國; "Hanguk"). Prior to unification, North Vietnam used Bắc Triều Tiên (北朝鮮; Bukchosŏn) and Nam Triều Tiên (南朝鮮; Namjoseon) while South Vietnam used Bắc Hàn (北韓; Bukhan) and Nam Hàn (南韓; Namhan) for North and South Korea, respectively. After unification, the northern Vietnamese terminology persisted until the 1990s. When South Korea reestablished diplomatic relations with Vietnam in 1993, it requested that Vietnam use the name that it uses for itself, and Hàn Quốc gradually replaced Nam Triều Tiên in usage. In the Vietnamese language used in the United States, Bắc Hàn and Nam Hàn are most common used. Outside East Asia Both South and North Korea use the name "Korea" when referring to their countries in English. North Korea is sometimes referred to as "Korea DPR" (PRK) and South Korea is sometimes referred to as the "Korea Republic" (KOR), especially in international sporting competitions, such as FIFA football. As with other European languages, English historically had a variety of names for Korea derived from Marco Polo's rendering of Goryeo, "Cauli" (see Revival of the names above). These included Caule, Core, Cory, Caoli, and Corai as well as two spellings that survived into the 19th century, Corea and Korea. (The modern spelling, "Korea", first appeared in late 17th century in the travel writings of the Dutch East India Company's Hendrick Hamel.) Despite the coexistence of the spellings "Corea" and "Korea" in 19th-century English publications, some Koreans believe that Japan, around the time of the Japanese occupation, intentionally standardised the spelling on "Korea", so that "Japan" would appear first alphabetically. Both major English-speaking governments of the time (i.e. the United States and the United Kingdom and its Empire) used both "Korea" and "Corea" until the early part of the colonial period. English-language publications in the 19th century generally used the spelling Corea, which was also used at the founding of the British embassy in Seoul in 1890. However, the U.S. 
minister and consul general to Korea, Horace Newton Allen, used "Korea" in his works published on the country. At the official Korean exhibit at the World's Columbian Exhibition in Chicago in 1893 a sign was posted by the Korean Commissioner saying of his country's name that "'Korea' and 'Corea' are both correct, but the former is preferred." This may have had something to do with Allen's influence, as he was heavily involved in the planning and participation of the Korean exhibit at Chicago. A shift can also be seen in Korea itself, where postage stamps issued in 1884 used the name "Corean Post" in English, but those from 1885 and thereafter used "Korea" or "Korean Post". By the first two decades of the 20th century, "Korea" began to be seen more frequently than "Corea" - a change that coincided with Japan's consolidation of its grip over the peninsula. Most evidence of a deliberate name change orchestrated by Japanese authorities is circumstantial, including a 1912 memoir by a Japanese colonial official that complained of the Koreans' tendency "to maintain they are an independent country by insisting on using a C to write their country's name." However, the spelling "Corea" was occasionally used even under full colonial rule and both it and "Korea" were largely eschewed in favor of the Japanese-derived "Chosen", which itself was derived from "Joseon". European languages use variations of the name "Korea" for both North and South Korea. In general, Celtic and Romance languages spell it "Corea" (or variations) since "c" represents the /k/ sound in most Romance and Celtic orthographies. However, Germanic and Slavic languages largely use variants of "Korea" since, in many of these languages, "c" represents other sounds such as /ts/. In languages using other alphabets such as Russian (Cyrillic), variations phonetically similar to "Korea" are also used for example the Russian name for Korea is Корея, romanization Koreya. Outside of Europe, most languages also use variants of "Korea", often adopted to local orthographies. "Korea" in the Jurchen Jin's national language (Jurchen) is "Sogo". Emigrants who moved to Russia and Central Asia call themselves Goryeoin or Koryo-saram (고려인; 高麗人; literally "person or people of Goryeo"), or корейцы in Russian. Many Goryeoin are living in the CIS, including an estimated 106,852 in Russia, 22,000 in Uzbekistan, 20,000 in Kyrgyzstan, 17,460 in Kazakhstan, 8,669 in Ukraine, 2,000 in Belarus, 350 in Moldova, 250 in Georgia, 100 in Azerbaijan, and 30 in Armenia. As of 2005, there are also 1.9 million ethnic Koreans living in China who hold Chinese citizenship and a further 560,000 Korean expatriates from both North and South living in China. South Korean expatriates living in the United States, around 1.7 million, will refer to themselves as Jaemi(-)gyopo (재미교포; 在美僑胞, or "temporary residents in America"), or sometimes simply "gyopo" for short. Sobriquets for Korea In traditional Korean culture, as well as in the cultural tradition of East Asia, the land of Korea has assumed a number of sobriquets over the centuries, including: - 계림 (鷄林) Gyerim, "Rooster Forest", in reference to an early name for Silla. - 군자지국 (君子之國) Gunjaji-guk, or "Land of Scholarly Gentlemen". - 금수강산 (錦繡江山) Geumsu gangsan, "Land of Embroidered (or Splendid) Rivers and Mountains". - 단국 (檀國) Danguk, "Country of Dangun". - 대동 (大東) Daedong, "Great East". - 동국 (東國) Dongguk, "Eastern Country". - 동방 (東邦) Dongbang, literally "an Eastern Country" referring to Korea. 
- 동방예의지국 (東方禮義之國, 東方禮儀之國) Dongbang yeuiji-guk, "Eastern Country of Courtesy". - 동야 (東野) Dongya, "Eastern Plains". - 동이 (東夷) Dong-yi, or "Eastern Foreigners". - 구이 (九夷) Gu-yi, "Nine-yi", refers to ancient tribes in the Korean peninsula. - 동토 (東土) Dongto, "Eastern Land". - 배달 (倍達) Baedal, an ancient reference to Korea. - 백의민족 (白衣民族) Baeguiminjok, "The white-clad people". - 삼천리 (三千里) Three-thousand Ri, a reference to the length traditionally attributed to the country from its northern to southern tips plus eastern to western tips. - 소중화 (小中華) Sojunghwa, "Small China" or "Little Sinocentrism" was used by the Joseon Court. It is nowadays considered degrading and is not used. - 아사달 (阿斯達) Asadal, apparently an Old Korean term for Joseon. - 청구 (靑丘) Cheonggu, or "Azure Hills". The color Azure is associated with the East. - 팔도강산 (八道江山) Paldo gangsan, "Rivers and Mountains of the Eight Provinces", referring to the traditional eight provinces of Korea. - 근화향 (槿花鄕) Geunhwahyang, "Country of Mugunghwa" refer to Silla Kingdom. - 근역 (槿域) Geunyeok, "Hibiscus Territory", or Land of Hibiscus - 삼한 (三韓) Samhan, or "Three Hans", refers to Samhan confederacy that ruled Southern Korea. - 해동 (海東) Haedong, "East of the Sea" (here being the West Sea separating from Korea). - 해동삼국 (海東三國) Haedong samguk, "Three Kingdoms East of the Sea" refers to Three Kingdoms of Korea - 해동성국 (海東盛國) Haedong seongguk, literally "Flourishing Eastern Sea Country", historically refers to Balhae Kingdom of North-South period. - 진국 (震國,振國) Jinguk, "Shock Country", old name of Balhae Kingdom. - 진역 (震域) Jinyeok, "Eastern Domain". - 진단 (震檀,震壇) Jindan, "Eastern Country of Dangun". - 진국 (辰國) Jinguk, "Country of Early Morning", refer to the Jin state of Gojoseon period. - Kyu Chull Kim (8 March 2012). Rootless: A Chronicle of My Life Journey. AuthorHouse. p. 128. ISBN 978-1-4685-5891-3. Retrieved 19 September 2013. - [땅이름] 태백산과 아사달 / 허재영 (Korean) - Korea原名Corea? 美國改的名. United Daily News website. 5 July 2008. Retrieved 28 March 2014. (Chinese) - Actually Republic is 共和國 공화국 (″Mutually peaceful country″) as can be seen in the names of China and North Korea but Taiwan and South Korea coined the latter 民國 민국 - The North Korean Revolution, 1945–1950 By Charles K. Armstrong - Shane Green, Treaty plan could end Korean War, The Age, November 6, 2003 - Tokyo Shimbun, December 31, 2002 - Asahi, Mainichi, and Nikkei - In the program, however, teachers avoid the name Hangurugo, by always saying this language. They would say, for instance, "In this language, Annyeong haseyo means 'Hello' ". - Barbara Demick. "Breaking the occupation spell: Some Koreans see putdown in letter change in name." Boston Globe. 18 September 2003. Retrieved 5 July 2008. - "Korea versus Corea". 14 May 2005. Retrieved 12 November 2013. - Korea from around 1913 using the spelling "Corean" - H N Allen, MD Korean Tales: Being a Collection of Stories Translated from the Korean Folk Lore. New York: G. P. Putnam's Sons, 1889. - "Korea in the White City: Korea at the World's Columbian Exhibition (1893)." Transactions of the Korea Branch of the Royal Asiatic Society 77 (2002), 27. - KSS-Korbase's Korean Stamp Issuance Schedules - Commonwealth of Independent States Report, 1996. - 재외동포현황 Current Status of Overseas Compatriots, South Korea: Ministry of Foreign Affairs and Trade, 2009, retrieved 2009-05-21 - "The Korean Ethnic Group", China.org.cn, 2005-06-21, retrieved 2009-02-06 - Huang, Chun-chieh (2014). Humanism in East Asian Confucian Contexts. Verlag. p. 54. ISBN 9783839415542. 
Retrieved 23 July 2015. - Ancient History of the Manchuria By Lee Mosol, MD, MPH
https://en.wikipedia.org/wiki/Names_of_Korea
4.46875
The Treaty of Guadalupe Hidalgo, which brought an official end to the Mexican-American War (1846-1848) was signed on February 2, 1848, at Guadalupe Hidalgo, a city north of the capital where the Mexican government had fled with the advance of U.S. forces. To explore the circumstances that led to this war with Mexico, visit, "Lincoln's Spot Resolutions." With the defeat of its army and the fall of the capital, Mexico City, in September 1847 the Mexican government surrendered to the United States and entered into negotiations to end the war. The peace talks were negotiated by Nicholas Trist, chief clerk of the State Department, who had accompanied General Winfield Scott as a diplomat and President Polk's representative. Trist and General Scott, after two previous unsuccessful attempts to negotiate a treaty with Santa Anna, determined that the only way to deal with Mexico was as a conquered enemy. Nicholas Trist negotiated with a special commission representing the collapsed government led by Don Bernardo Couto, Don Miguel Atristain, and Don Luis Gonzaga Cuevas of Mexico. In The Mexican War, author Otis Singletary states that President Polk had recalled Trist under the belief that negotiations would be carried out with a Mexican delegation in Washington. In the six weeks it took to deliver Polk's message, Trist had received word that the Mexican government had named its special commission to negotiate. Against the president's recall, Trist determined that Washington did not understand the situation in Mexico and negotiated the peace treaty in defiance of the president. In a December 4, 1847, letter to his wife, he wrote, "Knowing it to be the very last chance and impressed with the dreadful consequences to our country which cannot fail to attend the loss of that chance, I decided today at noon to attempt to make a treaty; the decision is altogether my own." In Defiant Peacemaker: Nicholas Trist in the Mexican War, author Wallace Ohrt described Trist as uncompromising in his belief that justice could be served only by Mexico's full surrender, including surrender of territory. Ignoring the president's recall command with the full knowledge that his defiance would cost him his career, Trist chose to adhere to his own principles and negotiate a treaty in violation of his instructions. His stand made him briefly a very controversial figure in the United States. Under the terms of the treaty negotiated by Trist, Mexico ceded to the United States Upper California and New Mexico. This was known as the Mexican Cession and included present-day Arizona and New Mexico and parts of Utah, Nevada, and Colorado (see Article V of the treaty). Mexico relinquished all claims to Texas and recognized the Rio Grande as the southern boundary with the United States (see Article V). The United States paid Mexico $15,000,000 to compensate for war-related damage to Mexican property (see Article XII of the treaty) and agreed to pay American citizens debts owed to them by the Mexican government (see Article XV). Other provisions included protection of property and civil rights of Mexican nationals living within the new boundaries of the United States (see Articles VIII and IX), the promise of the United States to police its boundaries (see Article XI), and compulsory arbitration of future disputes between the two countries (see Article XXI). Trist sent a copy to Washington by the fastest means available, forcing Polk to decide whether or not to repudiate the highly satisfactory handiwork of his discredited subordinate. 
Polk chose to forward the treaty to the Senate. When the Senate reluctantly ratified the treaty (by a vote of 34 to 14) on March 10, 1848, it deleted Article X guaranteeing the protection of Mexican land grants. Following the ratification, U.S. troops were removed from the Mexican capital. To carry the treaty into effect, commissioner Colonel John Weller and surveyor Andrew Gray were appointed by the United States government, and General Pedro Conde and Sr. Jose Illarregui were appointed by the Mexican government, to survey and set the boundary. A subsequent treaty of December 30, 1853, altered the border from the initial one by adding 47 more boundary markers to the original six. Of the 53 markers, the majority were rude piles of stones; a few were of durable character with proper inscriptions. Over time, markers were moved or destroyed, resulting in two subsequent conventions (1882 and 1889) between the two countries to more clearly define the boundaries. Photographers were brought in to document the location of the markers. These photographs are in Record Group 77, Records of the Office of the Chief of Engineers, in the National Archives. An example of one of these photographs, taken in the 1890s, is available online through the Archival Research Catalog (ARC) database, identifier: 519681.
National Archives and Records Administration, General Records of the United States, Record Group 11, ARC Identifier: 299809
http://www.roebuckclasses.com/201/regional/treatyguadalupehildalgo.htm
4.21875
Today’s blog post comes from National Archives social media intern Anna Fitzpatrick. Nine months before President Lincoln signed the Emancipation Proclamation, he signed a bill on April 16, 1862, that ended slavery in the District of Columbia. The act finally concluded many years of disagreements over ending ”the national shame” of slavery in the nation’s capital. The law provided for immediate emancipation, compensation to loyal Unionist masters of up to $300 for each freed slave, voluntary colonization of former slaves to colonies outside the United States, and payments of up to $100 to each person choosing emigration. Although this three-way approach of immediate emancipation, compensation, and colonization did not serve as a model for the future, it pointed toward slavery’s death. Emancipation was greeted with great joy by the District’s African American community. The white population of DC took advantage of the act’s promise of compensation. One month after the act was issued, Margaret Barber presented a claim to the Board of Commissioners for the Emancipation of Slaves in the District of Columbia, saying that she wanted to be compensated by the Federal Government, which had freed her 34 slaves. Margaret Barber estimated that her slaves were worth a total of $23,400. On June 16, 1862, slave trader Bernard Campbell examined 28 of Barber’s slaves to assess their value for the Commission. In the end, Barber received $9,351.30 in compensation for their emancipation. But five of the 34 did not await the Commission’s deliberations. ”[S]ince the United States troops came here,” said Barber, they had ”absented themselves and went off and are believed still to be in some of the Companies and in their service.” Although the final Emancipation Proclamation did not allow for compensation such as Margaret Barber received, this earlier act proved to be an important step towards the final emancipation of the slaves. Less than a year later, on New Year’s Day of 1863, President Lincoln signed the Emancipation Proclamation into effect, and two years later the 13th Amendment finished the process of freeing all the slaves. To learn more about the compensation of owners and the personal information you can find in the commission records at the National Archives, you can read “Slavery and Emancipation in the Nation’s Capital: Using Federal Records to Explore the Lives of African American Ancestors” by Damani Davis. The story of Margaret Barber’s claims for compensation and the District of Columbia Emancipation Act is based on the article ”Teaching with Online Primary Sources: Documents from the National Archives: The Demise of Slavery in the District of Columbia, April 16, 1862,” written by Michael Hussey. It’s also featured in The Meaning and Making of Emancipation, an eBook created by the National Archives. Be sure to stop by the National Archives for the special display of the original document from Sunday, December 30, to Tuesday, January 1. The commemoration will include extended viewing hours, inspirational music, a dramatic reading of the Emancipation Proclamation, and family activities and entertainment for all ages.
http://prologue.blogs.archives.gov/2012/12/26/emancipation-proclamation-freedom-in-washington-dc/
4.03125
M.Ed., Stanford University Winner of multiple teaching awards Patrick has been teaching AP Biology for 14 years and is the winner of multiple teaching awards. M.Ed., Stanford University Winner of multiple teaching awards Patrick has been teaching AP Biology for 14 years and is the winner of multiple teaching awards. Although we don't like to think about it that much, we, like everything else in this universe, that's made up of or not made up of energy, we're made out of molecules. Scientists realised this a long time ago, but still wanted to think we're special. So they decided that we have 'organic molecules' while things like dirt, air, they're made out of metal, those are inorganic and we're special. Oops! Turns out we're not. We still have to follow the same rules as all those inorganic molecules. But we're still kind of stuck with this nomenclature so that we still talk about organic Chemistry and inorganic Chemistry. And we also wind up with some weirdly arbitrary definitions where things like carbon dioxide gas, even though it has got carbon in it, it's declared just an inorganic molecule. Because it turned out that all those organic molecules, what's the basic thing behind them, is that they're made out of carbon. And it's because carbon can form these long chains or these rings that give the great diversity of the organic molecules. So you really need to know about organic molecules for the AP biology test. Both because, they will ask questions on it and because their properties underlie a lot of the basic properties or things that go on in Biology like proteins, specificity or membrane function. So I'm going to begin with the simplest of the organic molecules, the carbohydrates and continue then with the fats and lipids. Third, I'll go through the proteins, very important topic and finish off with the nucleic acids. As I go through these groups of organic molecules, I'll begin by mentioning some of the common examples to give you a context for what I'm talking about. Then I'll describe the monomers, the basic building blocks that can be joined together to form the larger molecules called polymers. Then I'll finish off by going through some of the major functions of each of the groups of organic molecules. So to begin with the carbohydrates, you can probably already hopefully mention off some common examples of carbohydrates. These are things like glucose, fructose, lactose, cellulose, starch. You may notice that a lot of them all end with the word 'ose'. That's one of the tips that you can use to fake you're way through some parts of the AP Biology exam. Because if you see something that ends with 'ose' then it's a carbohydrate. You don't even need to know what it is. If you see the word amylose, it's a carbohydrate. So let's look at the monomers, the basic building blocks that are used to build the rest of the carbohydrates. The monomers of carbohydrates are called monosaccharides, where mono means one, saccha means sugar. So this means one sugar, or simple sugars. The monosaccharides of the carbohydrates are typically groups of three to eight carbons joined together with a bunch of hydrogens, and OH groups, or hydroxyl groups they're called. So we can see here an example of ribose which is a five carbon sugar, and glucose which is a six carbon sugar. Now these five carbon or six carbon sugars, can very often not only be in these what are called straight chains, but they can manoeuvre and join to form ring structures. Let's take a look at what happens with glucose. 
So you could see glucose has a group of six carbons and a linear form or straight chain form, or forming this ring structure, the hexagon shape. So if you are looking at an AP-Bio multiple choice test question and you see this hexagon shape, or pentagon shape for a five carbon sugar, you know it's a monosaccharide, a carbohydrate. So that's a big clue, just look for these multiple hexagons formed in long chains. When you're trying to form a disaccharide that's when you put two of them together. And here we see glucose plus fructose to form sucrose, a disaccharide. You know 'di' means two, di-sugar, two sugars put together. So glucose plus a fructose, forms a sucrose. Now you'll notice to get them to join, we need to pull off a hydroxyl group, an OH from one, a hydrogen from the other, and that forms water. And we're left with this oxygen here forming the bond between those two. Because we're removing water to do this, this is called dehydration synthesis. So this is putting things together, 'de' remove, hydro-water, dehydration synthesis. Can you guess what would happen if we were breaking it apart? You've got it, hydro means water. You may recall in other videos I've talked about '-lysis' meaning to split or break. Hydrolysis is if we ran this backwards, where we split this in two, consuming a water. All of us, you, me, everybody, every human, has the enzyme that can break this bond to do that hydrolysis. There are some other sugars: if, instead of fructose, we put a glucose together with a monosaccharide called galactose, we would form a molecule called lactose. You may have heard of that. Lactose is a sugar commonly found in milk. Again you need an enzyme to break that because while monosaccharides can be absorbed easily in your small intestine, disaccharides are too large to fit through the walls of the small intestine. So if you don't have that enzyme to break lactose, that lactose winds up in your large intestine. You may have heard of people who are lactose intolerant, which means they lack the enzyme, the lactase enzyme, that is needed to break up lactose. So, some poor guy wanted to just eat some ice cream and later on he winds up having issues because instead of him absorbing the lactose sugar, all the wee beasties that are living inside of his large intestine they go 'uh yummy!' and they start going to town and he starts having issues. Now what if we put a whole bunch of them together, a whole bunch of these molecules together? You know from Math, poly means many, like polygons. Well a whole bunch of sugars put together is called a polysaccharide. Now, a couple of common polysaccharides include starch, which is made of a group of glucoses joined together. Well there is another one called cellulose, which again is made out of a bunch of glucoses joined together. The difference is exactly how the glucoses are joined together. Notice here how this CH2OH group is always above the plane? Here it alternates, left right. We can break down starch. So if I sit here... my body can break down the starches that make up things like apples or potatoes, because I've got an enzyme called amylase. The proper name for starch is amylose. But to break cellulose, I would need a different enzyme. And it turns out almost nothing on this planet has the enzyme to break apart cellulose. So if I sat here and tried to do this, instead of getting yummy nutrition, I'd get splinters in places I don't want splinters. Who does have it?
It's a few bacteria and a few single-celled creatures called protista. You may realise there is lots of things that eat cellulose, that eat wood. Well, what are those creatures? They are things like cows and termites. How do they get any nutrition from it? Inside their guts, they actually have colonies of these bacteria or these protista breaking down the cellulose for them and sharing the glucose that's coming out of that. Now, another example of a polysaccharide is a special polysaccharide called chitin, which is used only in fungi and in arthropods to make up their exoskeleton shells. And that leads into one of the major functions of carbohydrates. Carbohydrates are used for a lot of cellular structures. The cellulose I mentioned before forms the structural outer cell wall of plant cells, while chitin is used to make the cell walls of fungi. It's also used to make the shells, like I said, of arthropods, things like the lobsters or bugs. The other major function of the carbohydrates is energy storage, whether it is starch or the simpler sugars glucose or the nice sugary sweet sucrose that we love on our cereal in the morning. Carbohydrates aren't the only energy storage molecules, obviously fats are a big player in that. The lipids are a big group, ranging from the things that we may know like the triglycerides, which are fat. Which we're all familiar with, and some of us are a little bit too familiar. To things like the testosterone, which is a steroid hormone that has been screwing with your body since puberty. To the not so familiar, phospholipids. Now, as a group, unfortunately the fats and lipids don't have a common monomer like the carbohydrates do with the monosaccharides or the proteins do with the amino acids. So unfortunately, they are all grouped together not because of some common structure, but because of a common behavior. They're all the rejects. That is because they don't dissolve well in water. They're called hydrophobic and that's because they don't have the ability to do what's called hydrogen bonding. Let's take a look first at one of the two major groups, which are the triglycerides and phospholipids, and they will have a common structure that we can see here. You see this three carbon molecule there, that's a molecule called glycerol. The triglycerides, you can hear the 'glyc', and the phospholipids share this carbon-carbon-carbon chain. Now in the triglycerides, the fats, attached to each of the carbons in the glycerol you'll have these long chains of carbons with hydrogens on them. These are called fatty acids. Whereas with the phospholipids, you'll see the wiggly fatty acids here, but on that third carbon, instead of having a fatty acid, you'll have a phosphate ion attached to it. Now, notice how each of these is kind of lined up and this little wiggly line there represents one of these carbon chains. Notice how this one is bent, that's because this is what's known as an unsaturated fatty acid, while this is a saturated fatty acid. You may have seen food labels where they have to list the saturated versus unsaturated fats. And there is two kinds of unsaturated fats. There is trans fatty acids and cis fatty acids. All you need to know really on that is see how this one is bent, that's a cis fatty acid, that's good. If it was one of these triglycerides that had a bent one here, it couldn't stack as easily, because it's the trans fatty acids that stay in a straight line like these guys. And they can easily make big stacks of triglycerides in your bloodstream, clogging them.
The cis fatty acids, with their bend, can't form big clumps. So the cis fatty acids, those are good, because they can help dissolve the big chunks of fat into smaller chunks of fat. A little bit of a health issue there.

The other big group, or structural group, of the fats and lipids are the steroid molecules. They have lots of different things attached to them, but all the steroids share this 1, 2, 3, 4 ring structure. And whatever is attached to the outside of that makes each one different from the next. In your body, all of the steroid hormones are made using the steroid core that you get from the cholesterol in your diet. So we always talk about cholesterol being bad. It's bad at high levels, but if you didn't have a certain level of cholesterol in your diet, you couldn't make testosterone and oestrogen and all the other steroid hormones that you need in order to maintain homeostasis and be healthy.

So what are the functions of fats and lipids? Well, they are pretty wide-ranging. They range from, obviously, the energy storage of the triglycerides, the fats. But fat does more than just store energy. It also provides insulation, both against heat loss and, in your brain, electrical insulation provided by a special fat called myelin. They also help protect and cushion against shock, whether it's the fat in your derrière or the pads of fat behind your eyes, so that when you go jogging your eyeballs aren't bouncing around and popping. There are also, of course, those steroid hormones, testosterone and oestrogen, that I mentioned before, forming signalling compounds in your body. And then those phospholipids that I mentioned previously, they form the cell membrane. And it's their chemical behavior that gives the cell membrane a lot of its properties.

The last fat that you may see mentioned on the AP exam would be the waxes. Again, because the waxes are hydrophobic, they prevent the movement of water across them. And that's why we have waxes in our ears to help prevent our eardrums from drying out, or plants will have a waxy cuticle. That's a word to remember, to get that extra little point in the essay about leaf structure: the waxy cuticle prevents water loss at the surface of leaves.

So while carbohydrates and fats are really good at storing energy and they form some important structural parts of the cell, it's the proteins that are the real workhorses of the cell. What are some proteins you may have heard of? Well, there is keratin, the stuff that makes up fingernails or hair. There are enzymes like the lactase enzyme that we mentioned before, and amylase. There is myosin, one of the two major contractile proteins found in muscles. So those are some common proteins. Another one that you may see on the AP exam would be albumen, which is that egg white protein.

So what are the basic building blocks, or monomers, of proteins? They are structures called amino acids. Let's take a look at one amino acid. All amino acids, and there are roughly 20 of them, share a common structure. They have a central carbon that's often called the alpha carbon. On one end, you'll have an amino group, and in some textbooks, depending on the pH that the solution is at, you may see 3 hydrogens instead of the two here. On the other end you'll have the carboxyl group, which again may have lost that hydrogen depending on the pH. Below the alpha carbon, you'll find a single hydrogen by itself. And then up here you'll find one of 20 different possible R-groups.
I'm not going to have you take a look at all of them; you can easily find those in your textbook. But let's take a quick look at some of the various R-groups, because it's the R-groups that make each amino acid unique. Now, I've often thought of amino acids kind of like train cars. There are boxcars, flatcars, passenger cars, dining cars. But all train cars share a common structure: the wheel base, the axles, and so on. And that's kind of like the amino and carboxyl groups with the alpha carbon and its hydrogen. What makes each train car unique is what's on top: whether it's a cabin full of seats, that's a passenger car, or if it's a big flat platform with some tie-downs, that's a flatcar. And it's these R-groups here that make each one unique.

To join them together, much like how you join train cars together, what you do is take an OH group off the carboxyl of one peptide, or amino acid. Peptide is an old name for amino acids. And you'll rip off a hydrogen from the amino group of the next amino acid. And by doing that, we pull out a water, and now we've joined together our two amino acids. They call the bond that holds the two amino acids together, between the amino of one and the carboxyl group of the other, a peptide bond. Again, because the old name for amino acids was peptides, that's also why an amino acid chain, which is a group of these all hooked together, is sometimes called a polypeptide.

Now, when scientists were first starting to study proteins, they ran into a problem. Remember, those R-groups are extremely variable, and they have different chemistries. Some of them are negatively charged, some of them are positively charged. Some of them are non-polar or hydrophobic R-groups. And so they all start to form up into these really complicated, tangled-up masses. And initially, when scientists were first studying this, they just couldn't make sense of it. It'd be kind of like if I handed one of the Lord of the Rings books to a kindergartner.

So what I'm going to do is take you through the different levels of structure. Because initially, the only thing that scientists could figure out was the sequence of the different amino acids in the chain that makes up the protein. They couldn't figure out anything beyond that. Just like the kindergartner with the Lord of the Rings book: all he could figure out was the sequence of letters. If you asked him what it's about? No idea.

So let's take a quick look at a video from YouTube that takes us through all four levels of structure of a protein. Here is that video I was talking about on YouTube. Let's make it bigger so it's easier to see. Now, all these little balls here, those are amino acids. So let's go ahead and we'll put them together, and that's what a ribosome does: it builds a peptide bond between each one. Now let's pause it here. This sequence of amino acids, and these are all just the abbreviations of the real amino acids' names, if I went along and rattled off each amino acid in sequence, that's what's called primary structure, or the first level of structure. That's the simplest thing. That's again like that kindergartner who's saying, "The first letter is T, the next letter is H, then E, then a space." Again, it doesn't tell you what the protein can do, but it does tell you some information. If we let it continue, though, let's go ahead and start the video up again, you'll see that the R-groups of the amino acids start to interact with each other, and they start bending and warping portions of it in space.
Well, let's pause it here. This is called the secondary structure. The secondary structure is: what's going on? What are the interactions? Well, I see this part of the chain here is in parallel with that part of the chain over there, and again over here and here. And then this area here starts to spiral up. After some hard work, scientists started being able to figure out, "Okay, we know that all these areas here, all the R-groups, are all, say, negative and positive. They'll start curling towards each other, and that may form a spiral." That's called the secondary, or second, level of structure. And that's kind of like if a third grader read Lord of the Rings. He might say, "Look, in this chapter they are fighting, oh it's cool, and then in this chapter Frodo's whining again." Again, he doesn't really know the grand sweeping themes of the book that he's reading, but he can tell what's going on in small sections.

Let's start that up again, and we'll look at the third level, or the tertiary structure, of a protein. So again, this is the alpha helix, and that's called a beta pleated sheet, that parallel ripple effect. Now you can see some of these lines here represent the hydrogen bonds, and other forms of bonds, that are helping hold it in its shape. Let's pause it here: the tertiary structure. The tertiary, or third, level of structure is the 3D shape of the protein. That's kind of like: what the heck happened in that novel? If you read The Fellowship of the Ring, then you know what happened. And that's something that a high school kid can do with a novel; they can get a great idea of what's going on in the book. This took scientists a lot of time to figure out. And it's only after investing years or decades of effort that they could figure out the 3D or tertiary shape of a protein. Nowadays, however, with modern computers, this kind of work can be done pretty quickly. Instead of a year or a decade, it could take as short as a couple of months, or even faster.

Now, with a lot of proteins, that's it. Once you've figured out its three-dimensional shape, that tells you what kinds of molecules it can fit to, and how it interacts with other things. But some proteins are actually made out of more than one chain of amino acids. And again, we can still see, here's the chain. Now, some proteins are made out of multiple chains, and here we see those ones coming in. Let's pause it real quick before it goes to the end. And we can see the yellow chain has to fit onto this blue guy, and the red one has to go in there. And that's, again, if I go back to the Lord of the Rings, that is not a single novel. It's actually a trilogy. And if you just read The Fellowship of the Ring, you'd think, "That's it? What the heck, they are just going off in different directions. What's up with that?" You need to know how The Fellowship of the Ring fits in amongst the other two books for you to really understand what's going on.

So again, there are the four levels of structure within a protein. There is the primary structure: that's the sequence of amino acids from start to end of a chain. As you relax the tension, you'll find that some parts will spiral and other parts will bend, and that's the secondary level of structure. When you finally let go of the chain, it'll wrap itself into some complicated shape, and that's the tertiary structure. If that chain happens to be put together with other polypeptide, or amino acid, chains, how they fit together is what's called the quaternary structure. So now you know how proteins are put together.
What are some of their functions? It's actually a lot easier to just ask what they don't do. Some proteins obviously make up those enzymes that I've mentioned before: the lactase enzyme, the amylase enzyme. There is a very important enzyme in photosynthesis called RuBisCO, or ribulose bisphosphate carboxylase. They make up structural things; like I mentioned, hair is made out of keratin. They form important components of the cytoskeleton. They form protein hormones, some of the signals that are used between your various cells. They form channels in the cell membrane to allow stuff to go in or out of the cell. They form antibodies to help your immune system. They form receptor proteins, also embedded in the membrane, to help your cells communicate with one another. So that gives you a sense of the broad range of what proteins can do.

So how do your cells know how to build those incredibly complex proteins? That's where nucleic acids come in. Now, you may have heard of some of the nucleic acids, such as DNA of course, but there is also RNA and ATP. The monomers of nucleic acids are molecules called nucleotides. Let's take a look. All nucleotides share a common structure. They have a phosphate group attached to a central five-carbon sugar, and then they have some kind of nitrogenous, or nitrogen-carrying, base over here. And that five-carbon sugar can be deoxyribose in the case of DNA, or ribose in RNA. Now, in another separate episode, I went far more in depth into the structure of DNA, and I recommend that you watch it if you're kind of unclear on that. But I want to make sure that we go over the basics of it.

Now, the nitrogenous base that I mentioned before comes in four varieties. With DNA, if you take a look at that, you'll see that the four kinds are thymine and cytosine, which each have only a single ring, and adenine and guanine, which are double-ring structures. The double-ring ones are called purines; the single-ring ones are called pyrimidines. DNA and RNA use pretty much the same bases. The one difference is that instead of thymine, RNA uses a molecule called uracil. How do you remember that? Just think, 'You are correct.' Again, in case you missed that, that means uracil in RNA is the correct answer.

Now, to join the nucleotides, again you do that dehydration synthesis process, and what will happen is you'll wind up forming long chains. What you're doing is popping off an OH group from the corner of the sugar here, and joining it to a hydrogen that you rip off of the phosphate of the next nucleotide. So you'll start to form a long strand, with the sugars and the phosphates forming the backbone of this and the nitrogenous bases sticking out like this. With RNA, that's it; RNA is generally a single-stranded molecule. But DNA, of course, you've heard of the double helix. With DNA, you actually form a second strand. Notice how the pentagon is now pointing downwards; that's called anti-parallel. Mention that during an AP Biology essay on the structure of DNA, and you've got yourself another point. You'll see that between adenine and thymine there are two hydrogen bonds, whereas between guanine and cytosine, those dashed lines are the three hydrogen bonds that form between them. And if you can remember two hydrogen bonds between A and T, three hydrogen bonds between guanine and cytosine, you've got yourself another point. So it's always A to T, G to C. Remember that and you're pretty good to go for DNA structure. Now, what are the functions of the nucleic acids?
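As a small illustrative sketch, not part of the transcript, the A-to-T and G-to-C pairing rule and the hydrogen-bond counts just mentioned can be captured in a few lines of Python. The function and variable names here are our own invention.

```python
# Base-pairing rule and hydrogen-bond counts as stated above: A-T has 2, G-C has 3.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}
H_BONDS = {frozenset("AT"): 2, frozenset("GC"): 3}

def complement_strand(strand: str) -> str:
    """Return the complementary DNA strand (ignoring the anti-parallel direction)."""
    return "".join(COMPLEMENT[base] for base in strand)

def total_hydrogen_bonds(strand: str) -> int:
    """Count the hydrogen bonds holding this strand to its complement."""
    return sum(H_BONDS[frozenset({base, COMPLEMENT[base]})] for base in strand)

if __name__ == "__main__":
    strand = "ATGC"
    print(complement_strand(strand))     # TACG
    print(total_hydrogen_bonds(strand))  # 2 + 2 + 3 + 3 = 10
```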
Well, everybody hopefully knows that DNA is the holder of your genetic information, and RNA helps in the transmission of that genetic information, those inheritance abilities. But the one thing that a lot of people forget is that ATP, the energy currency of the cell, is also a nucleotide. How is it different? It just has three phosphates in it instead of the normal one in the other nucleotides.

Here's a memory trick that will help you learn these four different kinds of organic molecules. And it's a memory trick that you can also use to help learn any kind of categorical knowledge. It takes advantage of a form of memory that you already have and use all the time. It's the one that you use to remember, for example, where did you park your car? Very few of you will pull out your flashcards and cram to remember that. No. You just walk into the mall, and an hour later you walk out, and there you are at your car. What you do is visualise each of these different categories in a different location, and when you study them, turn and look in that area and visualise them being there. Then during a test, all you have to do is just turn and look. Now, make sure that you're not memorising your location on your partner's or tablemate's paper, because your teacher may not like that.

So what you do is look over here and think: that's where those carbohydrates are, those energy-storing and structural molecules made out of monosaccharides getting joined together into disaccharides, or even longer chains called polysaccharides. Next to them is that hydrophobic reject group of the organic molecules called the fats and lipids. Those include, remember, the triglycerides and phospholipids, which use that glycerol-and-fatty-acid structure and are involved in things like storing fat or making the phospholipids in the membrane, or the steroid-core fats that are used for hormones, and things like the waxes. Over here, we have the proteins. Now remember, the proteins are the ones with that incredibly complex structure, where you hook the individual monomers, called amino acids, together to form long chains. The simple sequence of the chain is called the primary structure; as you let it begin to coil up a little bit, that's the secondary structure; its three-dimensional shape is its tertiary structure. And if an actual protein is made out of multiple chains, how those multiple chains fit together is called its quaternary structure. And again, those proteins form enzymes, they form membrane channels and hormones; they make all sorts of things in the cell. The last category, way over here, is the nucleic acids: the DNA and RNA. And you recall, of course, they are made of nucleotides joined together in strands, with DNA requiring two strands to form the double helix.

Use these tricks and you'll be better able to put this stuff together. And remember, it's not how hard you study, it's how smart you study. You put your time into being efficient, and that allows you to spend the time doing the things you really want to do, like watching Desperate Housewives.

AP Biology Videos
https://www.brightstorm.com/test-prep/ap-biology/ap-biology-videos/organic-molecules/
4.03125
- For the first definition, a monomial, also called a power product, is a product of powers of variables with nonnegative integer exponents, or, in other words, a product of variables, possibly with repetitions. The constant 1 is a monomial, being equal to the empty product and to x^0 for any variable x. If only a single variable x is considered, this means that a monomial is either 1 or a power x^n of x, with n a positive integer. If several variables are considered, say x, y and z, then each can be given an exponent, so that any monomial is of the form x^a y^b z^c with a, b, c non-negative integers (taking note that any exponent 0 makes the corresponding factor equal to 1).
- For the second definition, a monomial is a monomial in the first sense multiplied by a nonzero constant, called the coefficient of the monomial. A monomial in the first sense is also a monomial in the second sense, because multiplication by 1 is allowed. For example, in this interpretation −7x^5 and (3 − 4i)x^4yz^13 are monomials (in the second example, the variables are x, y, z and the coefficient is a complex number).

Since the word "monomial", as well as the word "polynomial", comes from the late Latin word "binomium" (binomial), by changing the prefix "bi" (two in Latin), a monomial should theoretically be called a "mononomial". "Monomial" is a syncope of "mononomial".

Comparison of the two definitions

With either definition, the set of monomials is a subset of all polynomials that is closed under multiplication. Both uses of this notion can be found, and in many cases the distinction is simply ignored; see for instance examples for the first and second meaning. In informal discussions the distinction is seldom important, and the tendency is towards the broader second meaning. When studying the structure of polynomials, however, one often definitely needs a notion with the first meaning. This is for instance the case when considering a monomial basis of a polynomial ring, or a monomial ordering of that basis. An argument in favor of the first meaning is also that no obvious other notion is available to designate these values (the term power product is in use, in particular when monomial is used with the first meaning, but it does not make the absence of constants clear either), while the notion of a term of a polynomial unambiguously coincides with the second meaning of monomial. The remainder of this article assumes the first meaning of "monomial".

The most obvious fact about monomials (first meaning) is that any polynomial is a linear combination of them, so they form a basis of the vector space of all polynomials, called the monomial basis - a fact of constant implicit use in mathematics.

The number of monomials of degree d in n variables is the number of multicombinations of d elements chosen among the n variables (a variable can be chosen more than once, but order does not matter), which is given by the multiset coefficient, equal to the binomial coefficient C(n + d − 1, d). This expression can also be given as a polynomial expression in d, using a rising factorial power of d + 1:

C(n + d − 1, d) = (d + 1)(d + 2) ⋯ (d + n − 1) / (n − 1)!

The latter form is particularly useful when one fixes the number of variables and lets the degree vary. From these expressions one sees that for fixed n, the number of monomials of degree d is a polynomial expression in d of degree n − 1 with leading coefficient 1/(n − 1)!. For example, the number of monomials in three variables (n = 3) of degree d is (d + 1)(d + 2)/2; these numbers form the sequence 1, 3, 6, 10, 15, ... of triangular numbers.
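As a quick, informal check of the counting formula above (this snippet is ours, not part of the article), Python's math.comb reproduces both the multiset-coefficient count and the triangular-number pattern for three variables:

```python
from math import comb

def monomials_of_degree(n: int, d: int) -> int:
    """Number of degree-d monomials in n variables: the multiset coefficient C(n + d - 1, d)."""
    return comb(n + d - 1, d)

print([monomials_of_degree(3, d) for d in range(5)])        # [1, 3, 6, 10, 15]
print(monomials_of_degree(3, 4) == (4 + 1) * (4 + 2) // 2)  # True: matches (d+1)(d+2)/2
```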
The Hilbert series is a compact way to express the number of monomials of a given degree: the number of monomials of degree d in n variables is the coefficient of degree d of the formal power series expansion of

1 / (1 − t)^n.

The number of monomials of degree at most d in n variables is C(n + d, n) = C(n + d, d). This follows from the one-to-one correspondence between the monomials of degree d in n + 1 variables and the monomials of degree at most d in n variables, which consists in substituting the extra variable by 1 (a short numerical check of both formulas appears at the end of this article).

Notation for monomials is constantly required in fields like partial differential equations. If the variables being used form an indexed family like x_1, x_2, x_3, ..., then multi-index notation is helpful: if we write α = (a, b, c), we can define

x^α = x_1^a x_2^b x_3^c

and save a great deal of space.

The degree of a monomial is defined as the sum of all the exponents of the variables, including the implicit exponents of 1 for the variables which appear without an exponent; e.g., in the example of the previous section, the degree of x^a y^b z^c is a + b + c. The degree of x y z^2 is 1 + 1 + 2 = 4. The degree of a nonzero constant is 0. For example, the degree of −7 is 0. The degree of a monomial is sometimes called order, mainly in the context of series. It is also called total degree when it is needed to distinguish it from the degree in one of the variables.

Monomial degree is fundamental to the theory of univariate and multivariate polynomials. Explicitly, it is used to define the degree of a polynomial and the notion of homogeneous polynomial, as well as for graded monomial orderings used in formulating and computing Gröbner bases. Implicitly, it is used in grouping the terms of a Taylor series in several variables.

In algebraic geometry the varieties defined by monomial equations x^α = 0 for some set of α have special properties of homogeneity. This can be phrased in the language of algebraic groups, in terms of the existence of a group action of an algebraic torus (equivalently by a multiplicative group of diagonal matrices). This area is studied under the name of torus embeddings.

- Monomial representation
- Monomial matrix
- Homogeneous polynomial
- Homogeneous function
- Multilinear form
- Log-log plot
- Power law
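To tie the two statements above together, here is a small numerical sketch (ours, not from the article): it expands 1/(1 − t)^n as a power series by repeated prefix sums, checks that the degree-d coefficient matches C(n + d − 1, d), and checks that the partial sums give C(n + d, n), the number of monomials of degree at most d.

```python
from math import comb

def hilbert_series_coeffs(n, max_deg):
    """Coefficients of 1/(1 - t)^n up to degree max_deg, by multiplying
    the constant series 1 by the geometric series 1 + t + t^2 + ... n times."""
    coeffs = [1] + [0] * max_deg
    for _ in range(n):
        running = 0
        for d in range(max_deg + 1):
            running += coeffs[d]       # multiplying by 1/(1 - t) = taking prefix sums
            coeffs[d] = running
    return coeffs

n, max_deg = 3, 6
coeffs = hilbert_series_coeffs(n, max_deg)
print(coeffs)                                                             # [1, 3, 6, 10, 15, 21, 28]
print(all(coeffs[d] == comb(n + d - 1, d) for d in range(max_deg + 1)))   # True
print(sum(coeffs[:4 + 1]) == comb(n + 4, n))                              # degree <= 4: True (35)
```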
https://en.wikipedia.org/wiki/Monomial
4.28125
Bullying is when one or more people repeatedly attempt to hurt, intimidate, or torment another person. Bullying can be either physical or emotional. Bullying is most common among youths and young adults, but it can also occur in adulthood. Bullying is a common concern for school-aged youths. It has potentially serious consequences.

Physical bullying includes any attempts to cause harm to another person. Emotional attacks include name-calling, teasing, threatening, or publicly humiliating the victim. Both physical and emotional attacks can be either direct or indirect. Direct bullying involves an actual confrontation between the bully and victim. Indirect attacks include the spread of rumors or attempts to humiliate the victim when he or she is not present. Cyberbullying, or bullying that occurs online or in social media forums, is a form of indirect bullying.

People who are bullied are often physically smaller or perceived as weaker than the bully. This weakness can be real or imagined. Other people who are bullied are targeted for their differences. They may have a disability, or have developed differently from their immediate peers. They may have a different sexual orientation, be of a different socio-economic class, or possess traits that others are jealous of. Bullying can occur between peers or between adults and youths. It can occur between people of the same gender or people of different genders. A victim of bullying often does nothing to cause the attacks. Bullying occurs when one person, the attacker, has some sort of power over his or her victim, and acts on it.

Bullies may be popular and powerful in their social circles. Sometimes they are isolated and not accepted by their peers. Children with mental health conditions, little parental involvement, violent tendencies, aggressive personalities, or issues at home are more likely to bully others. A person may bully another in order to:
- increase his or her self-esteem
- feel powerful
- get his or her way
- get respect from others
- become more popular
- make others laugh
- fit in

Victims of bullying may experience significant physical and emotional stress. Some effects of bullying include:
- hurt feelings
- depression or anxiety
- low self-esteem
- poor performance in school
- physical pain (headaches or stomachaches)

Ongoing abuse can lead to long-term stress or fear. Some victims of bullying end up taking matters into their own hands with violence. Bullying can also lead to suicide. The effects of bullying can last well into adulthood. For adults, bullying in the workplace can lead to frequent missed days and poor work performance. Employers can attempt to stop bullying with new policies, training, education, or by determining the root cause of the bullying.

Victims of bullying often don't report the abuse to teachers or parents out of fear of embarrassment or reprisals. They often feel isolated. They may feel that they will not be believed. They might also be afraid of backlash from the bullies or rejection from classmates. Educators can prevent bullying by talking openly about issues related to respect, by looking for signs of bullying, and by making sure students are aware they can come to educators with problems. Educators should intervene in and mediate bullying situations. Parents should discuss concerns with their children and, if applicable, with school authorities; behavioral changes can be a sign of bullying (either being bullied or being a bully).
To stop bullying of yourself or another person, it usually helps to inform a trusted adult. Victims can learn to stand up for themselves. This may cause them to be targeted at first, because the bully will not expect this change in behavior. Victims should remain confident, tell their bullies to stop, and remove themselves from the situation. They should not use violence or reciprocate bullying. People who see others being bullied can help them by standing up for them and telling a trusted adult about the incident.
http://www.healthline.com/health/bullying
4
How to teach division

So your child has mastered addition, subtraction and multiplication. Well done, there is only division left to learn of the basic numeric operations.

Division for kids aged 5-6

If your child is aged 5-6 years, the focus is on using concrete materials to learn the concept of equal shares. This is usually easily learnt, as most parents have familiarised their children with the language of 'sharing' through play experiences with other children. Ideas to use at home are:
- Equal pouring – fill a jug with water and let your child fill smaller cups/glasses with the same amount of water.
- Ask your child, while wrapping presents, to cut sticky tape or ribbon so that there are two lengths the same.
- Drawing games – 'lots of legs' is a great one that can be done with drawing or with toothpicks and play dough. Show them 20 toothpicks and tell them you need to share the legs evenly between the monsters. Talk about what happens when the monsters have two legs, when they have 3, when they have 10.

Division for kids aged 7-8

Children aged 7-8 years are recognising the division sign and understand that division "undoes" the effects of multiplication, just as subtraction "undoes" or is the reverse of addition. Your child may also understand division as repeated subtraction. Try these games to reinforce the concepts:
- Animal paddocks – give your child an A4 piece of paper which has been divided into different sized segments. Give your child plastic animals and ask them to place the animals into paddocks so that they each have the same amount of space. This is working on division as well as being a lead-in to fractions.
- Dividing food is always a strong motivator. When cutting a birthday cake or slicing a pizza, have children count the number of people and tell you how many pieces you will need for everyone to have an equal share.
- Pegging clothes – explain that you need help with hanging the washing. Each piece of clothing takes 2 pegs and you have 20 pegs – let them guess how many items they will be able to hang and then have a go! Alter this depending on how well they know their multiplication facts.

Division for kids aged 9-10

Children aged 9-10 years are relating their division facts; for example, they know 36 divided by 4 is the same as halving 36 and halving again. They recognise the signs for short and long division and can list multiples and factors. They know that when dividing there is often a remainder and can explain why. Try these learning games to help with division:
- Dice games – take three dice (you can use numbers written on cards if you don't have dice to roll) and roll two. These two are multiplied to become the total. Then roll the 3rd dice and divide the total by this number. Model the number sentence with concrete materials or draw it if needed. Discuss why there are remainders.
- Real life situations – children of this age are usually getting pocket money. Discuss real life situations that involve money and remainders, for example "share $7 between 3 people". Alternatively, pose questions such as "fifty eggs are packed into half-dozen lots (groups of 6). How many cartons would a farmer need?"

Division for kids aged 11-12

For children aged 11-12, division becomes complex. Children are now doing long division, dividing a number by two- and three-digit numbers. Children are also learning to write remainders as fractions and decimals.
Further, children have to apply their knowledge of division to problem solving.
- Value for Money – when going shopping, ask kids to determine which item is the best value for money. Questions such as "which is the better value: 4 toilet rolls for $2.95 or 6 toilet rolls for $3.95?" are suitable for kids this age (a worked example appears at the end of this article).
- Dividing with Place Value – use problems such as 'On the way to school 4 children found a $50 note. They handed it in to the principal. They will get a share of the $50 if no one claims it after a week.' Then ask: how much would each child get? How much would each child get if $5 was found? How much would each child get if 50c was found?
- Division Webs – this is an alternative to worksheets and algorithms. Children create web patterns using three- or four-digit numbers. They draw the web with the divisor in the middle and the numbers to be divided around the web. To make this more difficult, you write in the numbers around the web and they find the common divisor in the centre.
- Averages – determining the average, whether it be the weather average for the week or their favourite cricket player's batting average, is a great way to practise division. To find the average, add the values together and divide by how many values you added.

Helping your child does not have to involve worksheets, flashcards or expensive maths programs. It only requires patience, enthusiasm and links to real life experiences.

Find more teaching tricks to inspire learning:
- Teaching kids to tell time
- Teaching left vs. right
- Tips for teaching addition
- Tips for teaching subtraction
- Tips for teaching multiplication
- Tips for teaching division
- The importance of music lessons
- Mathematical milestones for pre-kinder children
- Mathematical milestones for 5-6 year old children
- Mathematical milestones for 7-8 year old children
- Mathematical milestones for 9-10 year old children
- Mathematical milestones for 11-12 year old children

Find more articles about learning games:
- Reading games for fun
- Host your own spelling bee
- Learning games with Kidspot's spelling scrambler
- Handwriting with printable mazes
- Handwriting fun with dot-to-dots
- Fun teaching ideas to learn left from right
- Addition facts and learning games
- Subtraction learning games
- Multiplication facts and learning ideas
- How to teach division
- What cooking will teach our kids
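For readers who want to check the arithmetic, here is a minimal sketch in Python of the Value for Money comparison and the averaging rule described above. The toilet-roll prices come from the article; the cricket scores are made-up numbers purely for illustration.

```python
# Value for money: divide the price by the number of items to get a unit price.
pack_a = (4, 2.95)   # 4 toilet rolls for $2.95
pack_b = (6, 3.95)   # 6 toilet rolls for $3.95

price_per_roll_a = pack_a[1] / pack_a[0]   # about $0.74 per roll
price_per_roll_b = pack_b[1] / pack_b[0]   # about $0.66 per roll
print(f"Pack A: ${price_per_roll_a:.2f}/roll, Pack B: ${price_per_roll_b:.2f}/roll")
# Pack B is the better value.

# Average: add the values together, then divide by how many values there are.
runs = [12, 40, 7, 31]          # e.g. a batter's scores (hypothetical numbers)
average = sum(runs) / len(runs)
print(average)                  # 22.5
```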
http://www.kidspot.com.au/schoolzone/Maths-&-science-Learning-games-How-to-teach-division+4253+316+article.htm