Dataset Viewer

Columns:
- text: string (length 174 to 655k)
- id: string (length 47)
- score: float64 (range 2.52 to 5.25)
- tokens: int64 (range 39 to 148k)
- format: string (24 classes)
- topic: string (2 classes)
- fr_ease: float64 (range -483.68 to 157)
- __index__: int64 (range 0 to 1.48M)
Session 40 - The Interstellar Medium. Display session, Tuesday, June 09. Gamma Ray Burst (GRB) explosions can make kpc-size shells and holes in the interstellar media (ISM) of spiral galaxies if much of the energy heats the local gas to above 10^7 K. Disk blowout is probably the major cause of energy loss in this case, but the momentum acquired during the pressurized expansion phase can be large enough that the bubble still snowplows to a kpc diameter. This differs from the standard model for the origin of such shells by multiple supernovae, which may have problems with radiative cooling, evaporative losses, and disk blowout. Evidence for giant shells with energies of ~10^53 ergs is summarized. Some contain no obvious central star clusters and may be GRB remnants, although sufficiently old clusters would be hard to detect. The expected frequency of GRBs in normal galaxies can account for the number of such shells.
<urn:uuid:e2300ad5-01dd-4e80-92b3-7ec88785cc9d>
2.765625
208
Content Listing
Science & Tech.
47.385488
0
Wikipedia on particle physics. A quick one: I was told that Wikipedia's definition of particle physics was very bad. And indeed, this is how it read: Particle physics is a branch of physics that studies the elementary particle|elementary subatomic constituents of matter and radiation, and their interactions. The field is also called high energy physics, because many elementary particles do not occur under ambient conditions on Earth. They can only be created artificially during high energy collisions with other particles in particle accelerators. Particle physics has evolved out of its parent field of nuclear physics and is typically still taught in close association with it. Scientific research in this area has produced a long list of particles. Say what? Particles that can only be created in accelerators? Particle physics taught together with nuclear physics? Research that produces particles (that one is great!)? What world does this person live in? So I rewrote it: Particle Physics is a branch of physics that studies the existence and interactions of particles, which are the constituents of what is usually referred to as matter or radiation. In our current understanding, particles are excitations of quantum fields and interact following their dynamics. Most of the interest in this area is in fundamental fields, those that cannot be described as a bound state of other fields. The set of fundamental fields and their dynamics are summarized in a model called the Standard Model and, therefore, Particle Physics is largely the study of the Standard Model particle content and its possible extensions. I think it is much better now. Let's see how long it takes some hot-headed Wikipedia editor to revert it. These days, contributing to Wikipedia is a pain because of people like that.
<urn:uuid:e7f0a003-07f1-4148-a77c-6e0cb215fc0e>
3
419
Comment Section
Science & Tech.
30.5235
1
[Photo caption: Belgian physicist Francois Englert, left, speaks with British physicist… (Fabrice Coffrini / AFP/Getty…)] For physicists, it was a moment like landing on the moon or the discovery of DNA. The focus was the Higgs boson, a subatomic particle that exists for a mere fraction of a second. Long theorized but never glimpsed, the so-called God particle is thought to be key to understanding the existence of all mass in the universe. The revelation Wednesday that it -- or some version of it -- had almost certainly been detected amid hundreds of trillions of high-speed collisions in a 17-mile track near Geneva prompted a group of normally reserved scientists to erupt with joy. Peter Higgs, one of the scientists who first hypothesized the existence of the particle, reportedly shed tears as the data were presented in a jam-packed and applause-heavy seminar at CERN, the European Organization for Nuclear Research. "It's a gigantic triumph for physics," said Frank Wilczek, an MIT physicist and Nobel laureate. "It's a tremendous demonstration of a community dedicated to understanding nature." The achievement, nearly 50 years in the making, confirms physicists' understanding of how mass -- the stuff that makes stars, planets and even people -- arose in the universe, they said. It also points the way toward a new path of scientific inquiry into the mass-generating mechanism that was never before possible, said UCLA physicist Robert Cousins, a member of one of the two research teams that have been chasing the Higgs boson at CERN. "I compare it to turning the corner and walking around a building -- there's a whole new set of things you can look at," he said. "It is a beginning, not an end." Leaders of the two teams reported independent results that suggested the existence of a previously unseen subatomic particle with a mass of about 125 to 126 billion electron volts. Both groups got results at a "five sigma" level of confidence -- the statistical requirement for declaring a scientific "discovery." "The chance that either of the two experiments had seen a fluke is less than three parts in 10 million," said UC San Diego physicist Vivek Sharma, a former leader of one of the Higgs research groups. "There is no doubt that we have found something." But he and others stopped just shy of saying that this new particle was indeed the long-sought Higgs boson. "All we can tell right now is that it quacks like a duck and it walks like a duck," Sharma said. In this case, quacking was enough for most. "If it looks like a duck and quacks like a duck, it's probably at least a bird," said Wilczek, who stayed up past 3 a.m. to watch the seminar live over the Web while vacationing in New Hampshire. Certainly CERN leaders in Geneva, even as they referred to their discovery simply as "a new particle," didn't bother hiding their excitement. The original plan had been to present the latest results on the Higgs search at the International Conference on High Energy Physics, a big scientific meeting that began Wednesday in Melbourne.
But as it dawned on CERN scientists that they were on the verge of "a big announcement," Cousins said, officials decided to honor tradition and instead present the results on CERN's turf. The small number of scientists who theorized the existence of the Higgs boson in the 1960s -- including Higgs of the University of Edinburgh -- were invited to fly to Geneva. For the non-VIP set, lines to get into the auditorium began forming late Tuesday. Many spent the night in sleeping bags. All the hubbub was due to the fact that the discovery of the Higgs boson is the last piece of the puzzle needed to complete the so-called Standard Model of particle physics -- the big picture that describes the subatomic particles that make up everything in the universe, and the forces that work between them. Over the course of the 20th century, as physicists learned more about the Standard Model, they struggled to answer one very basic question: Why does matter exist? Higgs and others came up with a possible explanation: that particles gain mass by traveling through an energy field. One way to think about it is that the field sticks to the particles, slowing them down and imparting mass. That energy field came to be known as the Higgs field. The particle associated with the field was dubbed the Higgs boson. Higgs published his theory in 1964. In the 48 years since, physicists have eagerly chased the Higgs boson. Finding it would provide the experimental confirmation they needed to show that their current understanding of the Standard Model was correct. On the other hand, ruling it out would mean a return to the drawing board to look for an alternative Higgs particle, or several alternative Higgs particles, or perhaps to rethink the Standard Model from the bottom up. Either outcome would be monumental, scientists said.
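A note on the statistics quoted above, added for context (standard normal-distribution arithmetic, not from the article): a one-sided five-sigma fluctuation has probability p = 1 - Φ(5) ≈ 2.9 x 10^-7, where Φ is the standard normal cumulative distribution function -- about three parts in ten million, matching Sharma's figure.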
<urn:uuid:fb237ffb-9cc0-4077-99d5-56c6fce1ca5f>
2.59375
1,134
Truncated
Science & Tech.
48.351553
2
By Jason Kohn, Contributing Columnist Like many of us, scientific researchers tend to be creatures of habit. This includes research teams working for the National Oceanic and Atmospheric Administration (NOAA), the U.S. government agency charged with measuring the behavior of oceans, atmosphere, and weather. Many of these climate scientists work with massive amounts of data – for example, the National Weather Service collects up-to-the-minute temperature, humidity, and barometric readings from thousands of sites across the United States to help forecast weather. Research teams then rely on some of the largest, most powerful high-performance computing (HPC) systems in the world to run models, forecasts, and other research computations. Given the reliance on HPC resources, NOAA climate researchers have traditionally worked onsite at major supercomputing facilities, such as Oak Ridge National Laboratory in Tennessee, where access to supercomputers is just steps away. As researchers create ever more sophisticated models of ocean and atmospheric behavior, however, the HPC requirements have become truly staggering. Now, NOAA is using a super-high-speed network called “n-wave” to connect research sites across the United States with the computing resources they need. The network has been operating for several years, and today transports enough data to fill a 10-Gbps network to full capacity, all day, every day. NOAA is now upgrading this network to allow even more data traffic, with the goal of ultimately supporting 100-Gbps data rates. “Our scientists were really used to having a computer in their basement,” says Jerry Janssen, manager, n-wave Network, NOAA, in a video about the project. “When that computer moved a couple thousand miles away, we had to give them a lot of assurances that, one, the data would actually move at the speed they needed it to move, but also that they could rely on it to be there. The amount of data that will be generated under this model will exceed 80-100 Terabits per day.” The n-wave project means much more than just a massive new data pipe. It represents a fundamental shift in the way that scientists can conduct their research, allowing them to perform hugely demanding supercomputer runs of their data from dozens of remote locations. As a result, it gives NOAA climate scientists much more flexibility in where and how they work. “For the first time, NOAA scientists and engineers in completely separate parts of the country, all the way to places like Alaska and Hawaii and Puerto Rico, will have the bandwidth they need, without restriction,” says Janssen. “NOAA will now be able to do things it never thought it could do before.” In addition to providing fast, stable access to HPC resources, n-wave is also allowing NOAA climate scientists to share resources much more easily with scientists in the U.S. Department of Energy and other government agencies. Ideally, this level of collaboration and access to supercomputing resources will help climate scientists continue to develop more effective climate models, improve weather forecasts, and allow us to better understand our climate. Powering Vital Climate Research The high-speed nationwide HPC connectivity capability provided by n-wave is now enabling a broad range of NOAA basic science and research activities.
Examples include:
- Basic data dissemination, allowing research teams to collect up-to-the-minute data on ocean, atmosphere, and weather from across the country, and make that data available to other research teams and agencies nationwide.
- Ensemble forecasting, where researchers run multiple HPC simulations using different initial conditions and modeling techniques, in order to refine their atmospheric forecasts and minimize errors.
- Severe weather modeling, where scientists draw on HPC simulations, real-time atmospheric data, and archived storm data to better understand and predict the behavior of storms.
- Advancing understanding of the environment to be able to better predict short-term and long-term environmental changes, mitigate threats, and provide the most accurate data to inform policy decisions.
All of this work is important, and will help advance our understanding of Earth’s climate. And it is all a testament to the amazing networking technologies and infrastructure that scientists now have at their disposal, which puts the most powerful supercomputing resources in the world at their fingertips – even when they are thousands of miles away.
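One back-of-envelope check on the bandwidth figures quoted earlier (our arithmetic, not NOAA's): a 10-Gbps link saturated around the clock carries 10 Gb/s x 86,400 s/day = 864,000 Gb/day, or 864 terabits (roughly 108 terabytes) per day – a useful yardstick for the 80-100 terabits per day that Janssen cites.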
<urn:uuid:c23e3842-a002-4f6b-9554-bafecec0beed>
3.3125
899
News (Org.)
Science & Tech.
28.323894
3
Tornadoes are the most intense storms on the planet, and they’re never discussed without at least some mention of the term wind shear. Many of us sitting at home, though, have no idea what wind shear is, or if we do, how it affects tornado production. What is Wind Shear Wind shear, although it might sound complex, is a simple concept. Wind shear is merely the change in wind with height, in terms of wind direction and speed. I think that we all understand that the wind is generally stronger in the atmosphere over our heads than it is here on the ground, and if we think of the atmosphere in terms of the three dimensions that it has, it should not be surprising that the wind above us might also be blowing from a different direction than the wind at the ground. When that happens–the wind speed and direction vary with height–wind shear is occurring. Wind Shear and Supercell Thunderstorms This wind shear is an important part of the process in the development of a supercell thunderstorm, from which the vast majority of strong tornadoes form. All thunderstorms are produced by a powerful updraft–a surge of air that rises from the ground into the upper levels of the atmosphere–and when this updraft forms in an area where wind shear is present, the updraft is influenced by the different speed and direction of the wind above, which pushes the column of air in the updraft into a more vertical alignment. Rain’s Influence on Tornado Production Needless to say, thunderstorms typically produce very heavy rain, and rain-cooled air is much heavier than the warm air of the updraft, so the rain-cooled air produces a compensating downdraft (what comes up must come down). This downdraft pushes the part of the rotating air that was forced in its direction by the stronger wind aloft downward, and the result is a horizontal column of rotating air. That’s Not a Tornado! I know what you’re thinking: you’ve seen enough TLC or Discovery Channel shows to know that a horizontal column of air is NOT a tornado; you need a vertical column of air. This Can Be a Tornado You’re right, but remember that the updraft driving the thunderstorm is still working, and it’s able to pull the horizontal, spinning column of air into the thunderstorm, resulting in a vertical column of spinning air. (NOAA image showing vertical column of air in a supercell thunderstorm) The result is a rotating thunderstorm capable of producing a tornado, and it would not be possible without wind shear. (NOAA image showing tornado formation in supercell thunderstorm)
<urn:uuid:7400301c-e625-46d5-be90-1020cf8d52f8>
4.15625
573
Personal Blog
Science & Tech.
45.080294
4
Using the Moon as a High-Fidelity Analogue Environment to Study Biological and Behavioural Effects of Long-Duration Space Exploration Goswami, Nandu and Roma, Peter G. and De Boever, Patrick and Clément, Gilles and Hargens, Alan R. and Loeppky, Jack A. and Evans, Joyce M. and Stein, T. Peter and Blaber, Andrew P. and Van Loon, Jack J.W.A. and Mano, Tadaaki and Iwase, Satoshi and Reitz, Guenther and Hinghofer-Szalkay, Helmut G. (2012) Using the Moon as a High-Fidelity Analogue Environment to Study Biological and Behavioural Effects of Long-Duration Space Exploration. Planetary and Space Science, Epub ahead of print (in press). Elsevier. DOI: 10.1016/j.pss.2012.07.030. Due to its proximity to Earth, the Moon is a promising candidate for the location of an extra-terrestrial human colony. In addition to being a high-fidelity platform for research on reduced gravity, radiation risk, and circadian disruption, the Moon qualifies as an isolated, confined, and extreme (ICE) environment, and is thus suitable as an analogue for studying the psychosocial effects of long-duration human space exploration missions. In contrast, the various Antarctic research outposts such as Concordia and McMurdo serve as valuable platforms for studying biobehavioral adaptations to ICE environments, but are still Earth-bound, and thus lack the low-gravity and radiation risks of space. The International Space Station (ISS), itself now considered an analogue environment for long-duration missions, better approximates the habitable infrastructure limitations of a lunar colony than most Antarctic settlements in an altered gravity setting. However, the ISS is still protected against cosmic radiation by the Earth's magnetic field, which prevents high exposures due to solar particle events and reduces exposures to galactic cosmic radiation. On the Moon, the ICE conditions are intensified: radiation of all energies is present and capable of inducing performance degradation, alongside reduced gravity and lunar dust. The interaction of reduced gravity, radiation exposure, and ICE conditions may affect biology and behavior--and ultimately mission success--in ways the scientific and operational communities have yet to appreciate. A long-term or permanent human presence on the Moon would therefore provide invaluable high-fidelity opportunities for integrated multidisciplinary research and for preparation of a manned mission to Mars.
Title: Using the Moon as a High-Fidelity Analogue Environment to Study Biological and Behavioural Effects of Long-Duration Space Exploration
Journal or Publication Title: Planetary and Space Science
In Open Access: No
In ISI Web of Science: Yes
Volume: Epub ahead of print (in press)
Keywords: Physiology, Orthostatic tolerance, Muscle deconditioning, Behavioural health, Psychosocial adaptation, Radiation, Lunar dust, Genes, Proteomics
HGF - Research field: Aeronautics, Space and Transport
HGF - Program: Space, Raumfahrt
HGF - Program Themes: W EW - Erforschung des Weltraums, R EW - Erforschung des Weltraums
DLR - Research area: Space, Raumfahrt
DLR - Program: W EW - Erforschung des Weltraums, R EW - Erforschung des Weltraums
DLR - Research theme (Project): W - Vorhaben MSL-Radiation (old), R - Vorhaben MSL-Radiation
Institutes and Institutions: Institute of Aerospace Medicine > Radiation Biology
Deposited By: Kerstin Kopp
Deposited On: 27 Aug 2012 08:05
Last Modified: 07 Feb 2013 20:40
<urn:uuid:25dbfda6-18d6-4e04-9bf5-fe7dcc73d69b>
3.09375
887
Academic Writing
Science & Tech.
24.740737
5
Science -- Asher et al. 307 (5712): 1091: We describe several fossils referable to Gomphos elkema from deposits close to the Paleocene-Eocene boundary at Tsagan Khushu, Mongolia. Gomphos shares a suite of cranioskeletal characters with extant rabbits, hares, and pikas but retains a primitive dentition and jaw compared to its modern relatives. Phylogenetic analysis supports the position of Gomphos as a stem lagomorph and excludes Cretaceous taxa from the crown radiation of placental mammals. Our results support the hypothesis that rodents and lagomorphs radiated during the Cenozoic and diverged from other placental mammals close to the Cretaceous-Tertiary boundary. Lagomorphs are rabbits, hares, and pikas. This might be referred to as a "missing link" of the rodents. Why do we care? Most mammals are rodents, and this tells us about the evolution of the most successful group of mammals. Cool!
<urn:uuid:fa9d11c3-ad57-40a6-8915-a8b1cd687729>
2.921875
220
Personal Blog
Science & Tech.
36.115
6
Basic Use
To make a new number, a simple initialization suffices:
var foo = 0; // or whatever number you want
foo = 1;  // foo = 1
foo += 2; // foo = 3 (the two gets added on)
foo -= 2; // foo = 1 (the two gets removed)
Number literals define the number value. In particular:
- They appear as a set of digits of varying length.
- Negative literal numbers have a minus sign before the set of digits.
- Floating point literal numbers contain one decimal point, and may optionally use the E notation with the character e.
- An integer literal may be prefixed with "0" to indicate that the number is in base-8. (8 and 9 are not octal digits; if either is found, the integer is read in the normal base-10.)
- An integer literal may be prefixed with "0x" to indicate a hexadecimal number.
The Math Object
Unlike strings, arrays, and dates, numbers aren't objects. The Math object provides numeric functions and constants as methods and properties. The methods and properties of the Math object are referenced using the dot operator in the usual way, for example:
var varOne = Math.ceil(8.5);
var varPi = Math.PI;
var sqrt3 = Math.sqrt(3);
Methods
random() Generates a pseudo-random number in the range 0 (inclusive) to 1 (exclusive).
var myInt = Math.random();
max(int1, int2) Returns the higher of the two numbers passed as arguments.
var myInt = Math.max(8, 9);
document.write(myInt); // 9
min(int1, int2) Returns the lower of the two numbers passed as arguments.
var myInt = Math.min(8, 9);
document.write(myInt); // 8
floor(float) Returns the greatest integer less than or equal to the number passed as an argument.
var myInt = Math.floor(90.8);
document.write(myInt); // 90
ceil(float) Returns the least integer greater than or equal to the number passed as an argument.
var myInt = Math.ceil(90.8);
document.write(myInt); // 91
round(float) Returns the closest integer to the number passed as an argument.
var myInt = Math.round(90.8);
document.write(myInt); // 91
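As a short sketch of how these methods combine (an illustration, not part of the original reference; randInt is a hypothetical helper name), the usual way to get a random whole number in an inclusive range is to scale Math.random() and truncate with Math.floor:
// Random integer between min and max, inclusive.
// Assumes Math.random() returns a value in the range [0, 1).
function randInt(min, max) {
  return Math.floor(Math.random() * (max - min + 1)) + min;
}
var roll = randInt(1, 6);
document.write(roll); // a simulated six-sided die: 1, 2, 3, 4, 5, or 6
The + 1 is what makes max reachable: without it, Math.floor would top out at max - 1.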
<urn:uuid:eecdd55e-49d8-40e4-9834-6f3dce28fa4c>
3.96875
508
Documentation
Software Dev.
72.693517
7
Data structures for manipulating (biological) sequences. Generally supports both nucleotide and protein sequences; some functions, like revcompl, only make sense for nucleotides.
A sequence consists of a header, the sequence data itself, and optional quality data. Sequences are type-tagged to identify them as nucleotide, amino acid, or unknown type; the type parameter is a phantom type that separates nucleotide and amino acid sequences. All items are lazy bytestrings.
Types:
- Offset: an offset, index, or length of a SeqData; can be used for indexing.
- SeqData: the basic data type used in sequences.
- Qual: the basic type for quality data, normally associated with nucleotide sequences. Range 0..255; typical Phred output is in the range 6..50, with 20 as the line in the sand separating good from bad. Quality data is a Qual vector, currently implemented as a ByteString.
Accessors:
- Read the character at the specified position in the sequence.
- Return the sequence length.
- Return the sequence label (first word of the header).
- Return the full header.
- Return the sequence data.
- Check whether the sequence has associated quality data.
- Return the quality data, or error if none exists (use hasqual if in doubt).
Adding information to the header: modify the header by appending text, or by replacing all but the sequence label (i.e. the first word).
Converting to and from [Char]: convert a String to SeqData, and a SeqData to a String.
defragSeq returns a sequence with all internal storage freshly copied and with sequence and quality data present as a single chunk. By freshly copying internal storage, defragSeq allows garbage collection of the original data source whence the sequence was read; otherwise, use of just a short sequence name can cause an entire sequence file buffer to be retained. By compacting sequence data into a single chunk, defragSeq avoids linear-time traversal of sequence chunks during random access into the sequence.
A map over sequences treats them as a sequence of (char, word8) pairs. This will work on sequences without quality data, as long as the function doesn't try to examine it. The current implementation is not very efficient.
Phantom-type functionality allows unchecked conversion between sequence types. Nucleotide sequences contain the alphabet [A,C,G,T]. IUPAC specifies an extended nucleotide alphabet with wildcards, but it is not supported at this point.
- Complement a single character, i.e. identify the nucleotide it can hybridize with. Note that for multiple nucleotides you usually want the reverse complement (see revcompl for that).
- revcompl calculates the reverse complement. This is only relevant for the nucleotide alphabet, and it leaves other characters unmodified. A variant calculates the reverse complement for SeqData only.
- For type-tagging sequences, protein sequences use Amino (below). Proteins are chains of amino acids, represented by the IUPAC alphabet.
- Translate a nucleotide sequence into the corresponding protein sequence. This works rather blindly, with no attempt to identify ORFs or otherwise QA the result.
- Convert a sequence in IUPAC format to a list of amino acids, and a list of amino acids to a sequence in IUPAC format.
Display functions: display a nicely formatted sequence; a simple display function generates the sequence string and calls putStrLn; another returns a properly formatted and possibly highlighted string representation of a sequence, with highlighting done using ANSI escapes. A default type for sequences is also provided.
<urn:uuid:0811e322-860e-4f42-9263-ac9ca9ec229a>
2.59375
838
Documentation
Software Dev.
35.095195
8
The Javan rhinoceros is one of the rarest animals in the world, and it was just spotted on video tape. Seamen have long reported miraculous sightings of luminous, glowing seawater. You know how animals are supposed to be able to sense disasters before they happen? Some believe it's a myth, though there are lots of reports of animals behaving strangely in the days before the tsunamis hit in Indonesia. Hundreds of thousands of ants were seen scurrying away from the beach. Elephants, dogs, and zoo animals were all reported to have been acting strangely. What can explain it? Learn more on this Moment of Science.
<urn:uuid:19ce4a7d-7ae3-489c-8be3-7b90046f895d>
2.53125
134
Content Listing
Science & Tech.
49.290268
9
Chinese researchers have turned to the light-absorbing properties of butterfly wings to significantly increase the efficiency of solar hydrogen cells, using biomimetics to copy the nanostructure that allows for incredible light and heat absorption. Butterflies are known to use heat from the sun to warm themselves beyond what their bodies can provide, and this new research takes a page from their evolution to improve hydrogen fuel generation. Analyzing the wings of Papilio helenus, the researchers found scales that are described as having: [...] Ridges running the length of the scale with very small holes on either side that opened up onto an underlying layer. The steep walls of the ridges help funnel light into the holes. The walls absorb longer wavelengths of light while allowing shorter wavelengths to reach a membrane below the scales. Using the images of the scales, the researchers created computer models to confirm this filtering effect. The nano-hole arrays change from wave guides for short wavelengths to barriers and absorbers for longer wavelengths, acting just like a high-pass filtering layer. So, what does this have to do with fuel cells? Splitting water into hydrogen and oxygen takes energy, and is a drain on the amount you can get out of a cell. To split the water, the process uses a catalyst, and certain catalysts — say, titanium dioxide — function by exposure to light. The researchers synthesized a titanium dioxide catalyst using the pattern from the butterfly's wings, and paired it with platinum nanoparticles to make it more efficient at splitting water. The result? A 230% uptick in the amount of hydrogen produced. The structure of the butterfly's wing means that it's better at absorbing light — so you might see the same technique on solar panels, too.
<urn:uuid:9a374252-df3c-4004-8693-6678182914d9>
3.765625
355
News Article
Science & Tech.
45.392983
10
By Irene Klotz CAPE CANAVERAL, Florida (Reuters) - Despite searing daytime temperatures, Mercury, the planet closest to the sun, has ice and frozen organic materials inside permanently shadowed craters in its north pole, NASA scientists said on Thursday. Earth-based telescopes have been compiling evidence for ice on Mercury for 20 years, but the finding of organics was a surprise, say researchers with NASA's MESSENGER spacecraft, the first probe to orbit Mercury. Both ice and organic materials, which are similar to tar or coal, were believed to have been delivered millions of years ago by comets and asteroids crashing into the planet. "It's not something we expected to see, but then of course you realize it kind of makes sense because we see this in other places," such as icy bodies in the outer solar system and in the nuclei of comets, planetary scientist David Paige, with the University of California, Los Angeles, told Reuters. Unlike NASA's Mars rover Curiosity, which will be sampling rocks and soils to look for organic materials directly, the MESSENGER probe bounces laser beams, counts particles, measures gamma rays and collects other data remotely from orbit. The discoveries of ice and organics, painstakingly pieced together for more than a year, are based on computer models, laboratory experiments and deduction, not direct analysis. "The explanation that seems to fit all the data is that it's organic material," said lead MESSENGER scientist Sean Solomon, with Columbia University in New York. Added Paige, "It's not just a crazy hypothesis. No one has got anything else that seems to fit all the observations better." Scientists believe the organic material, which is about twice as dark as most of Mercury's surface, was mixed in with comet- or asteroid-delivered ice eons ago. The ice vaporized, then re-solidified where it was colder, leaving dark deposits on the surface. Radar imagery shows the dark patches subside at the coldest parts of the crater, where ice can exist on the surface. The areas where the dark patches are seen are not cold enough for surface ice without the overlying layer of what is believed to be organics. So remote was the idea of organics on Mercury that MESSENGER got a relatively easy pass by NASA's planetary protection protocols that were established to minimize the chance of contaminating any indigenous life-potential material with hitchhiking microbes from Earth. Scientists don't believe Mercury is or was suitable for ancient life, but the discovery of organics on an inner planet of the solar system may shed light on how life got started on Earth and how life may evolve on planets beyond the solar system. "Finding a place in the inner solar system where some of these same ingredients that may have led to life on Earth are preserved for us is really exciting," Paige said. MESSENGER, which stands for Mercury Surface, Space Environment, Geochemistry and Ranging, is due to complete its two-year mission at Mercury in March. Scientists are seeking NASA funding to continue operations for at least part of a third year. The probe will remain in Mercury's orbit until the planet's gravity eventually causes it to crash onto the surface. Whether the discovery of organics now prompts NASA to select a crash zone rather than leave it up to chance remains to be seen. Microbes that may have hitched a ride on MESSENGER likely have been killed off by the harsh radiation environment at Mercury. The research is published in this week's edition of the journal Science. 
(Editing by Kevin Gray and Vicki Allen)
<urn:uuid:954bdf7e-7951-42c2-a6f6-6f7912bad693>
3.234375
753
News Article
Science & Tech.
30.158056
11
Jim Lake and Maria Rivera, at the University of California-Los Angeles (UCLA), report their finding in the Sept. 9 issue of the journal Nature. Scientists refer to both bacteria and Archaea as "prokaryotes"--a cell type that has no distinct nucleus to contain the genetic material, DNA, and few other specialized components. More-complex cells, known as "eukaryotes," contain a well-defined nucleus as well as compartmentalized "organelles" that carry out metabolism and transport molecules throughout the cell. Yeast cells are some of the most-primitive eukaryotes, whereas the highly specialized cells of human beings and other mammals are among the most complex. "A major unsolved question in biology has been where eukaryotes came from, where we came from," Lake said. "The answer is that we have two parents, and we now know who those parents were." Further, he added, the results provide a new picture of evolutionary pathways. "At least 2 billion years ago, ancestors of these two diverse prokaryotic groups fused their genomes to form the first eukaryote, and in the process two different branches of the tree of life were fused to form the ring of life," Lake said. The work is part of an effort supported by the National Science Foundation--the federal agency that supports research and education across all disciplines of science and engineering--to re-examine historical schemes for classifying Earth's living creatures, a process that was once based on easily observable traits.
<urn:uuid:baf824b2-7e06-471a-8510-efd5abab1567>
3.796875
335
News Article
Science & Tech.
30.012417
12
Refraction and Acceleration
Name: Christopher S.
Why is it that when light travels from a more dense to a less dense medium, its speed is higher? I've read answers to this question in your archives but, sadly, still don't get it. One answer (Jasjeet S Bagla) says that we must not ask the question because light is massless, hence questions of acceleration don't make sense. It does, however, seem to be OK to talk about different speeds of light. If you start at one speed and end at a higher one, why is one not allowed to talk about acceleration? Bagla goes on to say that it depends on how the em fields behave in a given medium. That raises the question: what is it about, say, Perspex and air that makes light accelerate, oops, travel at different speeds? If you're dealing with the same ray of light, one is forced to speak of acceleration, no? What other explanation is there for final velocity > initial velocity? Arthur Smith mentioned a very small "evanescent" component that travels ahead at c. Where can I learn more about this? Sorry for the long question. I understand that F = ma and if there is no m, you cannot talk about a, but, again, you have one velocity higher than another for the same thing. I need to know more than "that's just the way em fields are!"
An explanation that satisfies me relates to travel through an interactive medium. When light interacts with an atom, the photon of light is absorbed and then emitted. For a moment, the energy of the light is within the atom. This causes a slight delay. Light travels at the standard speed of light until interacting with another atom. It is absorbed and emitted, causing another slight delay. The average effect is taking more time to travel a meter through glass than through air. This works like a slower speed. An individual photon does not actually slow down. It gets delayed repeatedly by the atoms of the medium. A denser medium simply has more atoms per meter to delay the light.
Dr. Ken Mellendorf
Illinois Central College
Congratulations on not being willing to accept "that is just the way em fields are!" The answer to your inquiry is not all that simple (my opinion), and I won't try to give one in the limited space allowed here, not to mention my own limitations of knowledge. Like so many "simple" physics questions, I find the most lucid, but accurate, explanation in Richard Feynman's "Lectures on Physics," which most libraries will have: Volume I, Chapters 31-1 through 31-6, which describe refraction, dispersion, and diffraction. The "answer" has to do with how matter alters the electric field of incident radiation, but I won't pretend to be able to do a better job than Feynman.
The answer is that you are not dealing with the same ray of light. In vacuum a photon just keeps going at the speed of light. In a medium, however, it interacts with the atoms, often being absorbed while bumping an atomic or molecular motion into a higher energy state. The excited atom/molecule then can jump to a lower energy state, emitting a photon while doing so. This can obviously make light appear to travel slower in a medium. In detail, it is a very complicated question, requiring at least a graduate course in electromagnetism to begin to understand. Why, for example, do the emitted photons tend to travel in the same direction?
Best, Richard J. Plano
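For a quantitative handle on both answers (standard optics, not part of the original exchange): the refractive index n is the ratio of c to the average speed v of light in the medium, so for Perspex (n is about 1.49):
v = c/n ≈ (3.00 x 10^8 m/s) / 1.49 ≈ 2.0 x 10^8 m/s
No individual photon ever accelerates: between interactions it always moves at c, and the lower average speed simply reflects the accumulated absorption-emission delays described above.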
<urn:uuid:d2b35c16-35c7-477e-80c7-8dded3739ec4>
3.03125
794
Q&A Forum
Science & Tech.
58.858511
13
Attempts to relay mail by issuing a predefined combination of SMTP commands. The goal of this script is to tell if an SMTP server is vulnerable to mail relaying. An SMTP server that works as an open relay is an email server that does not verify whether the user is authorised to send email from the specified email address; users would therefore be able to send email originating from any third-party email address they want. The checks are done based on combinations of MAIL FROM and RCPT TO commands; the list is hardcoded in the source file. The script will output all the working combinations that the server allows if nmap is in verbose mode; otherwise the script will print the number of successful tests. The script will not output if the server requires authentication. If debug is enabled and an error occurs while testing the target host, the error will be printed with the list of any combinations that were found prior to the error.
Script arguments:
- smtp-open-relay.ip: Use this to change the IP address to be used (default is the target IP address).
- smtp-open-relay.to: Define the destination email address to be used (without the domain, default is relaytest).
- smtp-open-relay.domain: Define the domain to be used in the anti-spam tests and EHLO command (default is nmap.scanme.org).
- smtp-open-relay.from: Define the source email address to be used (without the domain, default is antispam).
- smbdomain, smbhash, smbnoguest, smbpassword, smbtype, smbusername: See the documentation for the smbauth library.
Example usage:
nmap --script smtp-open-relay.nse [--script-args smtp-open-relay.domain=<domain>,smtp-open-relay.ip=<address>,...] -p 25,465,587 <host>
Example output:
Host script results:
| smtp-open-relay: Server is an open relay (1/16 tests)
|_MAIL FROM:<email@example.com> -> RCPT TO:<firstname.lastname@example.org>
Author: Arturo 'Buanzo' Busleiman
License: Same as Nmap--See http://nmap.org/book/man-legal.html
<urn:uuid:2fc62870-a21f-42bb-90f1-0b5d5c8d75a5>
2.71875
483
Documentation
Software Dev.
63.215
14
Giant Manta Ray (Manta birostris)
Divers often describe the experience of swimming beneath a manta ray as like being overtaken by a huge flying saucer. This ray is the biggest in the world, but like the biggest shark, the whale shark, it is a harmless consumer of plankton. When feeding, it swims along with its cavernous mouth wide open, beating its huge triangular wings slowly up and down. On either side of the mouth, which is at the front of the head, there are two long paddles, called cephalic lobes. These lobes help funnel plankton into the mouth. A stingerless whiplike tail trails behind. Giant manta rays tend to be found over high points like seamounts, where currents bring plankton up to them. Small fish called remoras often travel attached to these giants, feeding on food scraps along the way. Giant mantas are ovoviviparous, so the eggs develop and hatch inside the mother. These rays can leap high out of the water, to escape predators, clean their skin of parasites, or communicate.
<urn:uuid:f3984201-a44a-42d6-802f-de566b1e8a6e>
3.09375
238
Knowledge Article
Science & Tech.
55.646214
15
Topics covered: Ideal solutions
Instructor/speaker: Moungi Bawendi, Keith Nelson
PROFESSOR: So. In the meantime, you've started looking at two phase equilibrium. So now we're starting to look at mixtures. And so now we have more than one constituent. And we have more than one phase present. Right? So you've started to look at things that look like this, where you've got, let's say, two components. Both in the gas phase. And now to try to figure out what the phase equilibria look like. Of course it's now a little bit more complicated than what you went through before, where you can get pressure temperature phase diagrams with just a single component. Now we want to worry about what's the composition. Of each of the components. In each of the phases. And what's the temperature and the pressure. Total and partial pressures and all of that. So you can really figure out everything about both phases. And there are all sorts of important reasons to do that, obviously lots of chemistry happens in liquid mixtures. Some in gas mixtures. Some where they're in equilibrium. All sorts of chemical processes. Distillation, for example, takes advantage of the properties of liquid and gas mixtures. Where one of them might be richer, will be richer, in the more volatile of the components. That can be used as a basis for purification. You mix ethanol and water together so you've got a liquid with a certain composition of each. The gas is going to be richer in the more volatile of the two, the ethanol. So in a distillation, where you put things up in the gas, more of the ethanol comes up. You could then collect that gas, right? And re-condense it, and make a new liquid. Which is much richer in ethanol than the original liquid was. Then you could put some of that up into the gas phase. Where it will be still richer in ethanol. And then you could collect that and repeat the process. So the point is that properties of liquid gas, two-component or multi-component mixtures like this can be exploited. Basically, the different volatilities of the different components can be exploited for things like purification. Also if you want to calculate chemical equilibria in the liquid and gas phase, of course, now you've seen chemical equilibrium, so the amount of reaction depends on the composition. So of course if you want reactions to go, then this also can be exploited by looking at which phase might be richer in one reactant or another. And thereby pushing the equilibrium toward one direction or the other. OK. So we've got some total temperature and pressure. And we have compositions. So in the gas phase, we've got mole fractions yA and yB. In the liquid phase we've got mole fractions xA and xB. So that's our system. One of the things that you established last time is that, so there are the total number of variables including the temperature and the pressure. And let's say the mole fraction of A in each of the liquid and gas phases, right? But then there are constraints. Because the chemical potentials have to be equal, right? Chemical potential of A has to be equal in the liquid and gas. Same with B. Those two constraints reduce the number of independent variables.
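[Note: this constraint counting is the Gibbs phase rule, which the lecture does not name here: F = C - P + 2; with C = 2 components and P = 2 phases it gives F = 2 independent variables, as stated next.]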
So there'll be two in this case rather than four independent variables. If you control those, then everything else will follow. What that means is if you've got a, if you control, if you fix the temperature and the total pressure, everything else should be determinable. No more free variables. And then, what you saw is that in simple or ideal liquid mixtures, a result called Raoult's law would hold. Which just says that the partial pressure of A is equal to the mole fraction of A in the liquid times the pressure of pure A over the liquid. And so what this gives you is a diagram that looks like this. If we plot this versus xB, this is mole fraction of B in the liquid going from zero to one. Then we could construct a diagram of this sort. So this is the total pressure of A and B. The partial pressures are given by these lines. So this is our pA star and pB star. The pressures over the pure liquid A and B at the limits of mole fraction of B being zero and one. So in this situation, for example, A is the more volatile of the components. So it's partial pressure over its pure liquid. At this temperature. Is higher than the partial pressure of B over its pure liquid. A would be the ethanol, for example and B the water in that mixture. OK. Then you started looking at both the gas and the liquid phase in the same diagram. So this is the mole fraction of the liquid. If you look and see, well, OK now we should be able to determine the mole fraction in the gas as well. Again, if we note total temperature and pressure, everything else must follow. And so, you saw this worked out. Relation between p and yA, for example. The result was p is pA star times pB star over pA star plus pB star minus pA star times yA. And the point here is that unlike this case, where you have a linear relationship, the relationship between the pressure and the liquid mole fraction isn't linear. We can still plot it, of course. So if we do that, then we end up with a diagram that looks like the following. Now I'm going to keep both mole fractions, xB and yB, I've got some total pressure. I still have my linear relationship. And then I have a non-linear relationship between the pressure and the mole fraction in the gas phase. So let's just fill this in. Here is pA star still. Here's pB star. Of course, at the limits they're still, both mole fractions they're zero and one. OK. I believe this is this is where you ended up at the end of the last lecture. But it's probably not so clear exactly how you read something like this. And use it. It's extremely useful. You just have to kind of learn how to follow what happens in a diagram like this. And that's what I want to spend some of today doing. Is just, walking through what's happening physically, with a container with a mixture of the two. And how does that correspond to what gets read off the diagram under different conditions. So. Let's just start somewhere on a phase diagram like this. Let's start up here at some point one, so we're in the pure - well, not pure, you're in the all liquid phase. It's still a mixture. It's not a pure substance. pA star, pB star. There's the gas phase. So, if we start at one, and now there's some total pressure. And now we're going to reduce it. What happens? We start with a pure - with an all-liquid mixture. No gas. And now we're going to bring down the pressure. Allowing some of the liquid to go up into the gas phase. So, we can do that. And once we reach point two, then we find a coexistence curve. Now the liquid and gas are going to coexist. 
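[Note: written out, the relations spoken above are Raoult's law, pA = xA pA*; the linear total pressure, p = pA* + (pB* - pA*) xB; and the nonlinear relation between total pressure and gas-phase composition, p = pA* pB* / (pA* + (pB* - pA*) yA).]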
So this is the liquid phase. And that means that this must be xB. And it's xB at one, but it's also xB at two, and I want to emphasize that. So let's put our pressure for two. And if we go over here, this is telling us about the mole fraction in the gas phase. That's what these curves are, remember. So this is the one that's showing us the mole fraction in the liquid phase. This nonlinear one in the gas phase. So that means just reading off it, this is xB, that's the liquid mole fraction. Here's yB. The gas mole fraction. They're not the same, right, because of course the components have different volatility. A's more volatile. So that means that the mole fraction of B in the liquid phase is higher than the mole fraction of B in the gas phase. Because A is the more volatile component. So more, relatively more, of A, the mole fraction of A is going to be higher up in the gas phase. Which means the mole fraction of B is lower in the gas phase. So, yB less than xB if A is more volatile. OK, so now what's happening physically? Well, we started at a point where we only had the liquid present. So at our initial pressure, we just have all liquid. There's some xB at one. That's all there is, there isn't any gas yet. Now, what happened here? Well, now we lowered the pressure. So you could imagine, well, we made the box bigger. Now, if the liquid was under pressure, being squeezed by the box, right then you could make the box a little bit bigger. And there's still no gas. That's moving down like this. But then you get to a point where there's just barely any pressure on top of the liquid. And then you keep expanding the box. Now some gas is going to form. So now we're going to go to our case two. We've got a bigger box. And now, right around where this was, this is going to be liquid. And there's gas up here. So up here is yB at pressure two. Here's xB at pressure two. Liquid and gas. So that's where we are at point two here. Now, what happens if we keep going? Let's lower the pressure some more. Well, we can lower it and do this. But really if we want to see what's happening in each of the phases, we have to stay on the coexistence curves. Those are what tell us what the pressures are. What the partial pressure are going to be in each of the phases. In each of the two, in the liquid and the gas phases. So let's say we lower the pressure a little more. What's going to happen is, then we'll end up somewhere over here. In the liquid, and that'll correspond to something over here in the gas. So here's three. So now we're going to have, that's going to be xB at pressure three. And over here is going to be yB at pressure three. And all we've done, of course, is we've just expanded this further. So now we've got a still taller box. And the liquid is going to be a little lower because some of it has evaporated, formed the gas phase. So here's xB at three. Here's yB at three, here's our gas phase. Now we could decrease even further. And this is the sort of thing that you maybe can't do in real life. But I can do on a blackboard. I'm going to give myself more room on this curve, to finish this illustration. There. Beautiful. So now we can lower a little bit further, and what I want to illustrate is, if we keep going down, eventually we get to a pressure where now if we look over in the gas phase, we're at the same pressure, mole fraction that we had originally in the liquid phase. So let's make four even lower pressure. What does that mean? What it means is, we're running out of liquid. 
So what's supposed to happen is A is the more volatile component. So as we start opening up some room for gas to form, you get more of A in the gas phase. But of course, and the liquid is richer in B. But of course, eventually you run out of liquid. You make the box pretty big, and you run out, or you have the very last drop of liquid. So what's the mole fraction of B in the gas phase? It has to be the same as what it started in in the liquid phase. Because after all the total number of moles of A and B hasn't changed any. So if you take them all from the liquid and put them all up into the gas phase, it must be the same. So yB of four. Once you just have the last drop. So then yB of four is basically equal to xB of one. Because everything's now up in the gas phase. So in principle, there's still a tiny, tiny bit of xB at pressure four. Well, we could keep lowering the pressure. We could make the box a little bigger. Then the very last of the liquid is going to be gone. And what'll happen then is, we're all here. There's no more liquid. We're not going down on the coexistence curve any more. We don't have a liquid gas coexistence any more. We just have a gas phase. Of course, we can continue to lower the pressure. And then what we're doing is just going down here. So there's five. And five is the same as this only bigger. And so forth. OK, any questions about how this works? It's really important to just gain facility in reading these things and seeing, OK, what is it that this is telling you. And you can see it's not complicated to do it, but it takes a little bit of practice. OK. Now, of course, we could do exactly the same thing starting from the gas phase. And raising the pressure. And although you may anticipate that it's kind of pedantic, I really do want to illustrate something by it. So let me just imagine that we're going to do that. Let's start all in the gas phase. Up here's the liquid. pA star, pB star. And now let's start somewhere here. So we're down somewhere in the gas phase with some composition. So it's the same story, except now we're starting here. It's all gas. And we're going to start squeezing. We're increasing the pressure. And eventually here's one, will reach two, so of course here's our yB. We started with all gas, no liquid. So this is yB of one. It's the same as yB of two, I'm just raising the pressure enough to just reach the coexistence curve. And of course, out here tells us xB of two, right? So what is it saying? We've squeezed and started to form some liquid. And the liquid is richer in component B. Maybe it's ethanol water again. And we squeeze, and now we've got more water in the liquid phase than in the gas phase. Because water's the less volatile component. It's what's going to condense first. So the liquid is rich in the less volatile of the components. Now, obviously, we can continue in doing exactly the reverse of what I showed you. But all I want to really illustrate is, this is a strategy for purification of the less volatile component. Once you've done this, well now you've got some liquid. Now you could collect that liquid in a separate vessel. So let's collect the liquid mixture with xB of two. So it's got some mole fraction of B. So we've purified that. But now we're going to start, we've got pure liquid. Now let's make the vessel big. So it all goes into the gas phase. Then lower p. All gas. So we start with yB of three, which equals xB of two. In other words, it's the same mole fraction. So let's reconstruct that. So here's p of two. 
And now we're going to go to some new pressure. And the point is, now we're going to start, since the mole fraction in the gas phase that we're starting from is the same number as this was. So it's around here somewhere. That's yB of three equals xB of two. And we're down here. In other words, all we've done is make the container big enough so the pressure's low and it's all in the gas phase. That's all we have, is the gas. But the composition is whatever the composition is that we extracted here from the liquid. So this xB, which is the liquid mole fraction, is now yB, the gas mole fraction. Of course, the pressure is different. Lower than it was before. Great. Now let's increase. So here's three. And now let's increase the pressure to four. And of course what happens, now we've got coexistence. So here's liquid. Here's gas. So, now we're over here again. There's xB at pressure four. Pure still in component B. We can repeat the same procedure. Collect it. All liquid, put it in a new vessel. Expand it, lower the pressure, all goes back into the gas phase. Do it all again. And the point is, what you're doing is walking along here. Here to here. Then you start down here, and go from here to here. From here to here. And you can purify. Now, of course, the optimal procedure, you have to think a little bit. Because if you really do precisely what I said, you're going to have a mighty little bit of material each time you do that. So yes it'll be the little bit you've gotten at the end is going to be really pure, but there's not a whole lot of it. Because, remember, what we said is let's raise the pressure until we just start being on the coexistence curve. So we've still got mostly gas. Little bit of liquid. Now, I could raise the pressure a bit higher. So that in the interest of having more of the liquid, when I do that, though, the liquid that I have at this higher pressure won't be as enriched as it was down here. Now, I could still do this procedure. I could just do more of them. So it takes a little bit of judiciousness to figure out how to optimize that. In the end, though, you can continue to walk your way down through these coexistence curves and purify repeatedly the component B, the less volatile of them, and end up with some amount of it. And there'll be some balance between the amount that you feel like you need to end up with and how pure you need it to be. Any questions about how this works? So purification of less volatile components. Now, how much of each of these quantities in each of these phases? So, pertinent to this discussion, of course we need to know that. If you want to try to optimize a procedure like that, of course it's going to be crucial to be able to understand and calculate for any pressure that you decide to raise to, just how many moles do you have in each of the phases? So at the end of the day, you can figure out, OK, now when I reach a certain degree of purification, here's how much of the stuff I end up with. Well, that turns out to be reasonably straightforward to do. And so what I'll go through is a simple mathematical derivation. And it turns out that it allows you to just read right off the diagram how much of each material you're going to end up with. So, here's what happens. This is something called the lever rule. How much of each component is there in each phase? So let's consider a case like this. Let me draw yet once again, just to get the numbering consistent. With how we'll treat this. So we're going to start here. 
And I want to draw it right in the middle, so I've got plenty of room. And we're going to go up to some pressure. And somewhere out there, now I can go to my coexistence curves. Liquid. And gas. And I can read off my values. So this is the liquid xB. So I'm going to go up to some point two, here's xB of two. Here's yB of two. Great. Now let's get these written in. So let's just define terms a little bit. nA, nB, or just our total number of moles, n total. And ng and n liquid, of course, the total number of moles in the gas and liquid phases. So let's just do the calculation for each of these two cases. We'll start with one. That's the easier case. Because then we have only the gas. So at one, all gas. It says pure gas in the notes, but of course that isn't the pure gas. It's the mixture of the two components. So. How many moles of A? Well it's the mole fraction of A in the gas. Times the total number of moles in the gas. Let me put one in here. Just to be clear. And since we have all gas, the number of moles in the gas is just the total number of moles. So this is just yA at one times n total. Let's just write that in. And of course n total is equal to nA plus nB. So now let's look at condition two. Now we have to look a little more carefully. Because we have a liquid-gas mixture. So nA is equal to yA at pressure two. Times the number of moles of gas at pressure two. Plus xA, at pressure two, times the number of moles of liquid at pressure two. Now, of course, these things have to be equal. The total number of moles of A didn't change, right? So those are equal. Then yA of two times ng of two. Plus xA of two times n liquid of two, that's equal to yA of one times n total. Which is of course equal to yA of one times n gas at two plus n liquid at two. I suppose I could add that equality. Of course, it's an obvious one. But let me do it anyway. The total number of moles is equal to nA plus nB. But it's also equal to n liquid plus n gas. And that's all I'm taking advantage of here. And now I'm just going to rearrange the terms. So I'm going to write yA at one minus yA at two, times ng at two, is equal to, and I'm going to take the other terms, the xA term. xA of two minus yA of one times n liquid at two. So I've just rearranged the terms. And I've done that because now, I think I omitted something here. yA of one times ng. No, I forgot a bracket, is what I did. yA of one there. And I did this because now what I want to do is look at the ratio of liquid to gas at pressure two. So, the ratio of, I'll put it, gas to liquid, that's ng of two over n liquid at two. And that's just equal to xA at two minus yA at one, divided by yA at one minus yA at two. So what does it mean? It's the ratio of these lever arms. That's what it's telling me. I can look, so I raise the pressure up to two. And so here's xB at two, here's yB at two. And I'm here somewhere. And this little amount and this little amount, that's that difference. And it's just telling me that the ratio of those arms is the ratio of the total number of moles of gas to liquid. And that's great. Because now when I go back to the problem that we were just looking at, where I say, well I'm going to purify the less volatile component by raising the pressure until I'm at coexistence starting in the gas phase. Raise the pressure, I've got some liquid. But I also want some finite amount of liquid. I don't want to stop at the very, very first drop of liquid; of course it's enriched in the less volatile component, but there may be a minuscule amount, right? 
So I'll raise the pressure a bit more. I'll go up in pressure. And now, of course, when I do that the amount of enrichment of the liquid isn't as big as it was if I just raised it up enough to barely have any liquid. Then I'd be out here. But I've got more material in the liquid phase to collect. And that's what this allows me to calculate. Is how much do I get in the end. So it's very handy. You can also see, if I go all the way to the limit where the mole fraction in the liquid at the end is equal to what it was in the gas when I started, what that says is that there's no more gas left any more. In other words, these two things are equal. If I go all the way to the point where I've got all the, this is the amount I started with, in the pure gas phase, now I keep raising it all the way. Until I've got the same mole fraction in the liquid. Of course, we know what that really means. That means that I've gone all the way from pure gas to pure liquid. And the mole fraction in that case has to be the same. And what this is just telling us mathematically is, when that happens this is zero. That means I don't have any gas left. Yeah. PROFESSOR: No. Because, so it's the mole fraction in the gas phase. But you've started with some amount that it's only going to go down from there. PROFESSOR: Yeah. Yeah. Any other questions? OK. Well, now what I want to do is just put up a slightly different kind of diagram, but different in an important way. Namely, instead of showing the mole fractions as a function of the pressure. And I haven't written it in, but all of these are at constant temperature, right? I've assumed the temperature is constant in all these things. Now let's consider the other possibility, the other simple possibility, which is, let's hold the pressure constant and vary the temperature. Of course, you know in the lab, that's usually what's easiest to do. Now, unfortunately, the arithmetic gets more complicated. It's not monumentally complicated, but here in this case, where you have one linear relationship, which is very convenient. From Raoult's law. And then you have one non-linear relationship there for the mole fraction of the gas. In the case of temperature, they're both, neither one is linear. Nevertheless, we can just sketch what the diagram looks like. And of course it's very useful to do that, and see how to read off it. And I should say the derivation of the curves isn't particularly complicated. It's not particularly more complicated than what I think you saw last time to derive this. There's no complicated math involved. But the point is, the derivation doesn't yield a linear relationship for either the gas or the liquid part of the coexistence curve. OK, so we're going to look at temperature and mole fraction phase diagrams. Again, a little more complicated mathematically but more practical in real use. And this is T. And here is the, sort of, form that these things take. So again, neither one is linear. Up here, now, of course if you raise the temperatures, that's where you end up with gas. If you lower the temperature, you condense and get the liquid. So, this is TA star. TB star. So now I want to stick with A as the more volatile component. At constant temperature, that meant that pA star is bigger than pB star. In other words, the vapor pressure over pure liquid A is higher than the vapor pressure over pure liquid B. Similarly, now I've got constant pressure and really what I'm looking at, let's say I'm at the limit where I've got the pure liquid. Or the pure A. 
And now I'm going to, let's say, raise the temperature until I'm at the liquid-gas equilibrium. That's just the boiling point. So if A is the more volatile component, it has the lower boiling point. And that's what this reflects. So higher pA star corresponds to lower TA star, which is just the boiling point of pure A. So, this is called the bubble line. That's called the dew line. All that means is, let's say I'm at high temperature. I've got all gas. Right now, no coexistence, no liquid yet. And I start to cool things off. Just to where I just barely start to get liquid. What you see that as is, dew starts forming. A little bit of condensation. If you're outside, it means on the grass a little bit of dew is forming. Similarly, if I start at low temperature, all liquid, now I start raising the temperature until I just start to boil. I just start to see the first bubbles forming. And so that's why these things have those names. So now let's just follow along what happens when I do the same sort of thing that I illustrated there. I want to start at one point in this phase diagram. And then start changing the conditions. So let's start here. So I'm going to start all in the liquid phase. That is, the temperature is low. Here's xB. And my original temperature. Now I'm going to raise it. So if I raise it a little bit, I reach a point at which I first start to boil. Start to find some gas above the liquid. And if I look right here, that'll be my composition. Let me raise it a little farther, now that we've already seen the lever rule and so forth. I'll raise it up to here. And that means that out here, I suppose I should do it here. So, here is the liquid mole fraction at temperature two. xB at temperature two. This is yB at temperature two. The gas mole fraction. So as you should expect, what's going to happen here is that the gas is going to be lower in B. That means that the mole fraction of A must be higher in the gas phase. That is, yA, which is one minus yB, is higher in the gas phase than xA, which is one minus xB, is in the liquid. In other words, the more volatile component is enriched in the gas phase. Now, what does that mean? That means I could follow the same sort of procedure that I indicated before when we looked at the pressure mole fraction phase diagram. Namely, I could do this and now I could take the gas phase. Which has less of B. It has more of A. And I can collect it. And then I can reduce the temperature. So it liquefies. So I can condense it, in other words. So now I'm going to start with, let's say I lower the temperature enough so I've got basically all liquid. But its composition is the same as the gas here. Because of course that's what that liquid is formed from. I collected the gas and separated it. So now I could start all over again. Except instead of being here, I'll be down here. And then I can raise the temperature again. To some place where I choose. I could choose here, and go all the way to here. A great amount of enrichment. But I know from the lever rule that if I do that, I'm going to have precious little material over here. So I might prefer to raise the temperature a little more. Still get a substantial amount of enrichment. And now I've got, in the gas phase, something further enriched in component A. And again I can collect the gas. Condense it. Now I'm out here somewhere, I've got all liquid and I'll raise the temperature again. And I can again keep walking my way over. And that's what happens during an ordinary distillation. 
Each step of the distillation walks along in the phase diagram at some selected point. And of course what you're doing is, you're always condensing the gas. And starting with fresh liquid that now is enriched in the more volatile of the components. So of course if you're really purifying, say, ethanol from an ethanol water mixture, that's how you do it. Ethanol is the more volatile component. So a still is set up. It will boil the stuff and collect the gas and condense it. And boil it again, and so forth. And the whole thing can be set up in a very efficient way. So you have essentially continuous distillation. Where you have a whole sequence of collection and condensation and reheating and so forth events. So then, in a practical way, it's possible to walk quite far along the distillation, the coexistence curve, and distill to really a high degree of purification. Any questions about how that works? OK. I'll leave till next time the discussion of the chemical potentials. But what we'll do, just to foreshadow a little bit, what I'll do at the beginning of the next lecture is what's at the end of your notes here. Which is just to say OK, now if we look at Raoult's law, it's straightforward to say what is the chemical potential for each of the substances in the liquid and the gas phase. Of course, it has to be equal between the phases. Given that, that's for an ideal solution. We can gain some insight from that. And then look at real solutions, non-ideal solutions, and understand a lot of their behavior as well. Just from starting from our understanding of what the chemical potential does even in a simple ideal mixture. So we'll look at the chemical potentials. And then we'll look at non-ideal solution mixtures next time. See you then.
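To make the lever rule concrete, here is a small numeric sketch in Python. The mole fractions are invented for illustration, not read off the lecture's diagram; the expression is the same ratio derived above, written with the overall composition z (what was called yA of one) and the tie-line compositions at the new pressure, with the signs of numerator and denominator both flipped.

def lever_rule(z, x_liq, y_gas):
    # n_gas / n_liquid for a component with overall mole fraction z,
    # given its liquid (x_liq) and gas (y_gas) mole fractions on the tie line
    return (z - x_liq) / (y_gas - z)

# Start all gas at yA(1) = 0.50; squeeze until the tie line reads
# xA(2) = 0.40 in the liquid and yA(2) = 0.60 in the gas (made-up values).
ratio = lever_rule(0.50, 0.40, 0.60)
print("n_gas/n_liquid =", ratio)                   # 1.0: equal moles in each phase
print("fraction of moles in gas =", ratio / (1 + ratio))

Raising the pressure further shrinks the gas arm and grows the liquid arm, which is exactly the trade-off between yield and enrichment described above.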
<urn:uuid:246f9a12-fd35-40fa-8257-b07bf8d92857>
3.921875
7,164
Audio Transcript
Science & Tech.
77.794819
16
We had a running joke in science ed that kids get so overexposed to discrepant events involving density and air pressure that they tend to try to explain anything and everything they don't understand with respect to science in terms of those two concepts. Why do we have seasons? Ummm... air pressure? Why did Dr. Smith use that particular research design? Ummm... density? I think we need another catch-all explanation. I suggest index of refraction. To simplify greatly, index of refraction describes the amount of bending a light ray will undergo as it passes from one medium to another (it's also related to the velocity of light in both media, but I do want to keep this simple). If the two media have significantly different indices, light passing from one to the other at an angle (not perpendicularly, in which case there is no bending) will be bent more than if the indices of the two are similar. These values are from Hyperphysics and Wikipedia... glass has a wide range of compositions and thus indices of refraction. Water at 20 C: 1.33. Glycerine: about 1.47. Typical soda-lime glass: close to 1.5. Since glycerine and glass have similar IoR, light passing from one to the other isn't bent; as long as both are transparent and similarly colored, each will be effectively "invisible" against the other. So, why does it rain? Umm... index of refraction?
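A rough Snell's-law sketch in Python shows why the glycerine-glass pair is special. The refraction_angle helper and the 45-degree incidence are illustrative choices of mine; the indices are the round values quoted above.

import math

def refraction_angle(n1, n2, incidence_deg):
    # Snell's law: n1 * sin(t1) = n2 * sin(t2), solved for t2
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    return math.degrees(math.asin(s))

for name, n in [("water", 1.33), ("glycerine", 1.47)]:
    t2 = refraction_angle(n, 1.5, 45.0)   # into typical soda-lime glass
    print(name, "-> glass:", round(t2, 1), "deg; deviation", round(45.0 - t2, 1), "deg")

Water into glass bends the ray by about six degrees; glycerine into glass bends it by about one, which is why the boundary all but disappears.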
<urn:uuid:7eeb7ef3-3122-42f0-86c8-01da8f3d7396>
3.1875
313
Comment Section
Science & Tech.
62.924413
17
Gallium metal is silver-white and melts at approximately body temperature (Wikipedia image).

Atomic Number: 31
Atomic Symbol: Ga
Atomic Weight: 69.72
Electron Configuration: [Ar]4s2 3d10 4p1
Atomic Radius: 187 pm (Van der Waals)
Melting Point: 29.76 °C
Boiling Point: 2204 °C
Oxidation States: 3

From the Latin word Gallia, France; also from Latin, gallus, a translation of "Lecoq," a cock. Predicted and described by Mendeleev as ekaaluminum, and discovered spectroscopically by Lecoq de Boisbaudran in 1875, who in the same year obtained the free metal by electrolysis of a solution of the hydroxide in KOH. Gallium is often found as a trace element in diaspore, sphalerite, germanite, bauxite, and coal. Some flue dusts from burning coal have been shown to contain as much as 1.5 percent gallium. It is one of four metals -- mercury, cesium, and rubidium being the others -- which can be liquid near room temperature and, thus, can be used in high-temperature thermometers. It has one of the longest liquid ranges of any metal and has a low vapor pressure even at high temperatures. There is a strong tendency for gallium to supercool below its freezing point. Therefore, seeding may be necessary to initiate solidification. Ultra-pure gallium has a beautiful, silvery appearance, and the solid metal exhibits a conchoidal fracture similar to glass. The metal expands 3.1 percent on solidifying; therefore, it should not be stored in glass or metal containers, because they may break as the metal solidifies. High-purity gallium is attacked only slowly by mineral acids. Gallium wets glass or porcelain and forms a brilliant mirror when it is painted on glass. It is widely used in doping semiconductors and producing solid-state devices such as transistors. Magnesium gallate containing divalent impurities, such as Mn+2, is finding use in commercial ultraviolet-activated powder phosphors. Gallium arsenide is capable of converting electricity directly into coherent light. Gallium readily alloys with most metals, and has been used as a component in low-melting alloys. Its toxicity appears to be of a low order, but it should be handled with care until more data is available.
<urn:uuid:317a0fc8-b8f1-4147-a9ac-f69a1f176048>
3.46875
546
Knowledge Article
Science & Tech.
38.890701
18
If superparticles were to exist, the decay would happen far more often. This test is one of the "golden" tests for supersymmetry, and it is one that, on the face of it, this hugely popular theory among physicists has failed. Prof Val Gibson, leader of the Cambridge LHCb team, said that the new result was "putting our supersymmetry theory colleagues in a spin". The results are in fact completely in line with what one would expect from the Standard Model. There is already concern that the LHCb's sister detectors might have been expected to detect superparticles by now, yet none have been found so far. This certainly does not rule out SUSY, but it is getting to the same level as cold fusion if a positive experimental result does not come soon.
<urn:uuid:72def0d3-296d-49d8-bdf5-73c351dd6672>
2.6875
163
Personal Blog
Science & Tech.
46.709545
19
Major Section: BREAK-REWRITE

Example:
(brr@ :target)      ; the term being rewritten
(brr@ :unify-subst) ; the unifying substitution

General Form:
(brr@ :symbol)

where :symbol is one of the following keywords. Those marked with * probably require an implementor's knowledge of the system to use effectively. They are supported but not well documented. More is said on this topic following the table. In general, (brr@ :symbol) returns:

:target
    the term to be rewritten. This term is an instantiation of the left-hand side of the conclusion of the rewrite-rule being broken. This term is in translated form! Thus, if you are expecting (equal x nil) -- and your expectation is almost right -- you will see (equal x 'nil); similarly, instead of (cadr a) you will see (car (cdr a)). In translated forms, all constants are quoted (even nil, t, strings and numbers) and all macros are expanded.

:unify-subst
    the substitution that, when applied to :target, produces the left-hand side of the rule being broken. This substitution is an alist pairing variable symbols to translated (!) terms.

:wonp
    t or nil indicating whether the rune was successfully applied. (brr@ :wonp) returns nil if evaluated before :EVALing the rule.

:rewritten-rhs
    the result of successfully applying the rule or else nil if (brr@ :wonp) is nil. The result of successfully applying the rule is always a translated (!) term and is never nil.

:failure-reason
    some non-nil lisp object indicating why the rule was not applied or else nil. Before the rule is :EVALed, (brr@ :failure-reason) is nil. After :EVALing the rule, (brr@ :failure-reason) is nil if (brr@ :wonp) is t. Rather than document the various non-nil objects returned as the failure reason, we encourage you simply to evaluate (brr@ :failure-reason) in the contexts of interest. Alternatively, study the ACL2 function tilde-@-failure-reason-phrase.

:lemma *
    the rewrite rule being broken. For example, (access rewrite-rule (brr@ :lemma) :lhs) will return the left-hand side of the conclusion of the rule.

:type-alist *
    a display of the type-alist governing :target. Elements on the displayed list are of the form (term type), where term is a term and type describes information about term assumed to hold in the current context. The type-alist may be used to determine the current assumptions, e.g., whether A is a CONSP.

:ancestors *
    a stack of frames indicating the backchain history of the current context. The theorem prover is in the process of trying to establish each hypothesis in this stack. Thus, the negation of each hypothesis can be assumed false. Each frame also records the rules on behalf of which this backchaining is being done and the weight (function symbol count) of the hypothesis. All three items are involved in the heuristic for preventing infinite backchaining. Exception: Some frames are ``binding hypotheses'' (equal var term) or (equiv var (double-rewrite term)) that bind variable var to the result of rewriting term.

:gstack *
    the current goal stack. The gstack is maintained by rewrite and is the data structure printed as the current ``path.'' Thus, any information derivable from the :path brr command is derivable from gstack. For example, from gstack one might determine that the current term is the second hypothesis of a certain rewrite rule.

brr@-expressions are used in break conditions, the expressions that determine whether interactive breaks occur when monitored runes are applied. See monitor. 
For example, you might want to break only those attempts in which one particular term is being rewritten or only those attempts in which the binding for the variable a is known to be a consp. Such conditions can be expressed using ACL2 system functions and the information provided by brr@. Unfortunately, digging some of this information out of the internal data structures may be awkward or may, at least, require intimate knowledge of the system functions. But since conditional expressions may employ arbitrary functions and macros, we anticipate that a set of convenient primitives will gradually evolve within the ACL2 community. It is to encourage this evolution that brr@ provides access to these internal data structures.
<urn:uuid:460fe123-8906-4320-9cc8-f581b79ced1f>
2.6875
976
Documentation
Software Dev.
47.978
20
May 16, 2011 If you fuel your truck with biodiesel made from palm oil grown on a patch of cleared rainforest, you could be putting into the atmosphere 10 times more greenhouse gasses than if you’d used conventional fossil fuels. It’s a scenario so ugly that, in its worst case, it makes even diesel created from coal (the “coal to liquids” fuel dreaded by climate campaigners the world over) look “green.” The biggest factor determining whether or not a biofuel ultimately leads to more greenhouse-gas emissions than conventional fossil fuels is the type of land used to grow it, says a new study from researchers at MIT. The carbon released when you clear a patch of rainforest is the reason that palm oil grown on that patch of land leads to 55 times the greenhouse-gas emissions of palm oil grown on land that had already been cleared or was not located in a rainforest, said the study’s lead author. The solution to this biofuels dilemma is more research. Unlike solar and wind, it’s truly an area in which the world is desperate for scientific breakthroughs, such as biofuels from algae or salt-tolerant salicornia.
<urn:uuid:15d19448-aa73-495a-802e-5b1e68a460f3>
3.484375
253
News Article
Science & Tech.
46.947667
21
This work is licensed under the GPLv2 license. See License.txt for details.

Autobuild imports, configures, builds and installs various kinds of software packages. It can be used in software development to make sure that nothing is broken in the build process of a set of packages, or it can be used as an automated installation tool.

Autobuild config files are Ruby scripts which configure rake so that it:
- imports the package from an SCM, or (optionally) updates it
- configures it; this phase can handle code generation, configuration (for instance for autotools-based packages), ...

It takes the dependencies between packages into account in its build process, and updates the needed environment variables.
<urn:uuid:d4c570b0-6a4e-47fd-afe7-15b6daac7169>
2.84375
144
Documentation
Software Dev.
27.461
22
Let f and g be two differentiable functions. We will say that f and g are proportional if and only if there exists a constant C such that g = C f. Clearly any function is proportional to the zero-function. If the constant C is not important in nature and we are only interested in the proportionality of the two functions, then we would like to come up with an equivalent criterion. The following statements are equivalent:
(1) f and g are proportional;
(2) f g' - g f' = 0 for all x.
Therefore, we have the following: define the Wronskian of f and g to be W(f,g), that is
W(f,g) = f g' - g f'.
The following formula is very useful (see reduction of order technique):
(g/f)' = W(f,g) / f^2.
Remark: Proportionality of two functions is equivalent to their linear dependence. Following the above discussion, we may use the Wronskian to determine the dependence or independence of two functions. In fact, the above discussion cannot be reproduced as is for more than two functions while the Wronskian does....
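As a quick check of the criterion, one can compute f g' - g f' symbolically. This sketch uses sympy with two example functions chosen arbitrarily.

import sympy as sp

x = sp.symbols('x')
f = sp.exp(x) * sp.sin(x)
g = 3 * f                                    # proportional to f

W = sp.simplify(f * sp.diff(g, x) - g * sp.diff(f, x))
print(W)                                     # 0, so f and g are proportional

h = sp.exp(x) * sp.cos(x)                    # not proportional to f
print(sp.simplify(f * sp.diff(h, x) - h * sp.diff(f, x)))   # -exp(2*x), never zero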
<urn:uuid:b7bc34b8-0f1f-4df8-8e8d-e56fc9c8fec5>
2.6875
180
Knowledge Article
Science & Tech.
38.502318
23
Forecast Texas Fire Danger (TFD)

The Texas Fire Danger (TFD) map is produced by the National Fire Danger Rating System (NFDRS). Weather information is provided by remote, automated weather stations and then used as an input to the Weather Information Management System (WIMS). The NFDRS processor in WIMS produces a fire danger rating based on fuels, weather, and topography. Fire danger maps are produced daily. In addition, the Texas A&M Forest Service, along with the SSL, has developed a five-day running average fire danger rating map.

Daily RAWS information is derived from an experimental project - DO NOT DISTRIBUTE
<urn:uuid:a789fd8d-b873-45cf-b01d-af6eca242a5d>
3.015625
136
Knowledge Article
Science & Tech.
31.717
24
The Gram-Schmidt Process

Now that we have a real or complex inner product, we have notions of length and angle. This lets us define what it means for a collection of vectors to be "orthonormal": each pair of distinct vectors is perpendicular, and each vector has unit length. In formulas, we say that the collection e_1, ..., e_n is orthonormal if ⟨e_i, e_j⟩ = δ_ij. These can be useful things to have, but how do we get our hands on them? It turns out that if we have a linearly independent collection of vectors v_1, ..., v_n then we can come up with an orthonormal collection e_1, ..., e_n spanning the same subspace of V. Even better, we can pick it so that the first k vectors e_1, ..., e_k span the same subspace as v_1, ..., v_k. The method goes back to Laplace and Cauchy, but gets its name from Jørgen Gram and Erhard Schmidt. We proceed by induction on the number of vectors in the collection. If n = 1, then we simply set e_1 = v_1 / ||v_1||. This "normalizes" the vector to have unit length, but doesn't change its direction. It spans the same one-dimensional subspace, and since it's alone it forms an orthonormal collection. Now, let's assume the procedure works for collections of size n - 1 and start out with a linearly independent collection of n vectors. First, we can orthonormalize the first n - 1 vectors using our inductive hypothesis. This gives a collection e_1, ..., e_{n-1} which spans the same subspace as v_1, ..., v_{n-1} (and so on down, as noted above). But v_n isn't in the subspace spanned by the first n - 1 vectors (or else the original collection wouldn't have been linearly independent). So it points at least somewhat in a new direction. To find this new direction, we define w = v_n - ⟨e_1, v_n⟩ e_1 - ... - ⟨e_{n-1}, v_n⟩ e_{n-1}. This vector will be orthogonal to all the vectors from e_1 to e_{n-1}, since for any such e_j we can check ⟨e_j, w⟩ = ⟨e_j, v_n⟩ - ⟨e_1, v_n⟩⟨e_j, e_1⟩ - ... - ⟨e_{n-1}, v_n⟩⟨e_j, e_{n-1}⟩ = ⟨e_j, v_n⟩ - ⟨e_j, v_n⟩ = 0, where we use the orthonormality of the collection to show that most of these inner products come out to be zero. So we've got a vector orthogonal to all the ones we collected so far, but it might not have unit length. So we normalize it: e_n = w / ||w||, and we're done.
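The induction above translates almost line for line into code. This is a sketch for the real case only (for a complex inner product the order of the arguments matters), with an example collection chosen arbitrarily.

import numpy as np

def gram_schmidt(vectors):
    # Orthonormalize a linearly independent list of real vectors.
    # The first k outputs span the same subspace as the first k inputs.
    basis = []
    for v in vectors:
        w = v.astype(float).copy()
        for e in basis:
            w -= np.dot(e, v) * e              # subtract the component along e
        basis.append(w / np.linalg.norm(w))    # normalize the new direction
    return np.array(basis)

V = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
E = gram_schmidt(V)
print(np.round(E @ E.T, 10))                   # identity matrix: orthonormal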
<urn:uuid:4a2ad899-7ba0-4bfc-9276-c5c5c0845fe6>
3.625
447
Tutorial
Science & Tech.
55.786307
25
x^(2/3) + y^(2/3) = a^(2/3)
x = a cos^3(t), y = a sin^3(t)

The astroid only acquired its present name in 1836 in a book published in Vienna. It has been known by various names in the literature, even after 1836, including cubocycloid and paracycle.

The length of the astroid is 6a and its area is 3πa^2/8.

The gradient of the tangent T from the point with parameter p is -tan(p). The equation of this tangent T is

x sin(p) + y cos(p) = a sin(2p)/2

Let T cut the x-axis and the y-axis at X and Y respectively. Then the length XY is a constant and is equal to a.

It can be formed by rolling a circle of radius a/4 on the inside of a circle of radius a. It can also be formed as the envelope produced when a line segment is moved with each end on one of a pair of perpendicular axes. It is therefore a glissette.
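The constant-length tangent property is easy to verify numerically. A small sketch, with a = 2 and a few parameter values picked arbitrarily:

import math

a = 2.0
for p in (0.3, 0.7, 1.1):
    # Tangent at parameter p: x sin(p) + y cos(p) = a sin(2p)/2
    X = a * math.sin(2 * p) / (2 * math.sin(p))   # x-axis intercept
    Y = a * math.sin(2 * p) / (2 * math.cos(p))   # y-axis intercept
    print(p, math.hypot(X, Y))                    # always 2.0, i.e. a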
<urn:uuid:367a0525-d005-4467-93f1-a7ac123614d1>
2.71875
409
Knowledge Article
Science & Tech.
54.846538
26
Arctic meltdown not caused by nature Rapid loss of Arctic sea ice - 80 per cent has disappeared since 1980 - is not caused by natural cycles such as changes in the Earth's orbit around the Sun, says Dr Karl. The situation is getting rather messy with regard to the ice melting in the Arctic. Now the volume of the ice varies throughout the year, rising to its peak after midwinter, and falling to its minimum after midsummer, usually in the month of September. Over most of the last 1,400 years, the volume of ice remaining each September has stayed pretty constant. But since 1980, we have lost 80 per cent of that ice. Now one thing to appreciate is that over the last 4.7 billion years, there have been many natural cycles in the climate — both heating and cooling. What's happening today in the Arctic is not a cycle caused by nature, but something that we humans did by burning fossil fuels and dumping slightly over one trillion tonnes of carbon into the atmosphere over the last century. So what are these natural cycles? There are many, many of them, but let's just look at the Milankovitch cycles. These cycles relate to the Earth and its orbit around the Sun. There are three main Milankovitch cycles. They each affect how much solar radiation lands on the Earth, and whether it lands on ice, land or water, and when it lands. The first Milankovitch cycle is that the orbit of the Earth changes from mostly circular to slightly elliptical. It does this on a predominantly 100,000-year cycle. When the Earth is close to the Sun it receives more heat energy, and when it is further away it gets less. At the moment the orbit of the Earth is about halfway between "nearly circular" and "slightly elliptical". So the change in the distance to the Sun in each calendar year is currently about 5.1 million kilometres, which translates to about 6.8 per cent difference in incoming solar radiation. But when the orbit of the Earth is at its most elliptical, there will be a 23 per cent difference in how much solar radiation lands on the Earth. The second Milankovitch cycle affecting the solar radiation landing on our planet is the tilt of the north-south spin axis compared to the plane of the orbit of the Earth around the Sun. This tilt rocks gently between 22.1 degrees and 24.5 degrees from the vertical. This cycle has a period of about 41,000 years. At the moment we are roughly halfway in the middle — we're about 23.44 degrees from the vertical and heading down to 22.1 degrees. As we head to the minimum around the year 11,800, the trend is that the summers in each hemisphere will get less solar radiation, while the winters will get more, and there will be a slight overall cooling. The third Milankovitch cycle that affects how much solar radiation lands on our planet is a little more tricky to understand. It's called 'precession'. As our Earth orbits the Sun, the north-south spin axis does more than just rock gently between 22.1 degrees and 24.5 degrees. It also — very slowly, just like a giant spinning top — sweeps out a complete 360-degree circle, and it takes about 26,000 years to do this. So on January 4, when the Earth is at its closest to the Sun, it's the South Pole (yep, the Antarctic) that points towards the Sun. So at the moment, everything else being equal, it's the southern hemisphere that has a warmer summer because it's getting more solar radiation, but six months later it will have a colder winter. And correspondingly, the northern hemisphere will have a warmer winter and a cooler summer. 
But of course, "everything else" is not equal. There's more land in the northern hemisphere but more ocean in a southern hemisphere. The Arctic is ice that is floating on water and surrounded by land. The Antarctic is the opposite — ice that is sitting on land and surrounded by water. You begin to see how complicated it all is. We have had, in this current cycle, repeated ice ages on Earth over the last three-million years. During an ice age, the ice can be three kilometres thick and cover practically all of Canada. It can spread through most of Siberia and Europe and reach almost to where London is today. Of course, the water to make this ice comes out of the ocean, and so in the past, the ocean level has dropped by some 125 metres. From three million years ago to one million years ago, the ice advanced and retreated on a 41,000-year cycle. But from one million years ago until the present, the ice has advanced and retreated on a 100,000-year cycle. What we are seeing in the Arctic today — the 80 per cent loss in the volume of the ice since 1980 — is an amazingly huge change in an amazingly short period of time. But it seems as though the rate of climate change is accelerating, and I'll talk more about that, next time … Published 27 November 2012 © 2013 Karl S. Kruszelnicki Pty Ltd
<urn:uuid:3a4ac59c-d59d-470b-adad-88e5e1c8a45a>
3.5625
1,065
News Article
Science & Tech.
65.159255
27
Black holes growing faster than expected Black hole find Existing theories on the relationship between the size of a galaxy and its central black hole are wrong according to a new Australian study. The discovery by Dr Nicholas Scott and Professor Alister Graham, from Melbourne's Swinburne University of Technology, found smaller galaxies have far smaller black holes than previously estimated. Central black holes, millions to billions of times more massive than the Sun, reside in the core of most galaxies, and are thought to be integral to galactic formation and evolution. However astronomers are still trying to understand this relationship. Scott and Graham combined data from observatories in Chile, Hawaii and the Hubble Space Telescope, to develop a data base listing the masses of 77 galaxies and their central supermassive black holes. The astronomers determined the mass of each central black hole by measuring how fast stars are orbiting it. Existing theories suggest a direct ratio between the mass of a galaxy and that of its central black hole. "This ratio worked for larger galaxies, but with improved technology we're now able to examine far smaller galaxies and the current theories don't hold up," says Scott. In a paper to be published in the Astrophysical Journal, they found that for each ten-fold decrease in a galaxy's mass, there was a one hundred-fold decrease in its central black hole mass. "That was a surprising result which we hadn't been anticipating," says Scott. The study also found that smaller galaxies have far denser stellar populations near their centres than larger galaxies. According to Scott, this also means the central black holes in smaller galaxies grow much faster than their larger counterparts. Black holes grow by merging with other black holes when their galaxies collide. "When large galaxies merge they double in size and so do their central black holes," says Scott. "But when small galaxies merge their central black holes quadruple in size because of the greater densities of nearby stars to feed on." Somewhere in between The findings also solve the long standing problem of missing intermediate mass black holes. For decades, scientists have been searching for something in between stellar mass black holes formed when the largest stars die, and supermassive black holes at the centre of galaxies. "If the central black holes in smaller galaxies have lower mass than originally thought, they may represent the intermediate mass black hole population astronomers have been hunting for," says Graham. "Intermediate sized black holes are between ten thousand and a few hundred thousand times the mass of the Sun, and we think we've found several good candidates." "These may be big enough to be seen directly by the new generation of extremely large telescopes now being built," says Graham.
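The quoted scaling, a hundred-fold drop in black hole mass for every ten-fold drop in galaxy mass, amounts to a roughly quadratic relation, M_bh proportional to M_gal squared. A sketch with an invented normalization, purely to show the shape of the relation:

for m_gal in (1e11, 1e10, 1e9):             # galaxy masses in solar masses
    m_bh = 1e8 * (m_gal / 1e11) ** 2        # the 1e8 Msun anchor at 1e11 is made up
    print("M_gal = %.0e -> M_bh = %.0e" % (m_gal, m_bh))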
<urn:uuid:e617c5fd-d556-4d43-be1f-042e7e7f2c60>
4.25
552
News Article
Science & Tech.
38.051734
28
Hoodoos may be seismic gurus Hoodoo prediction Towering chimney-like sedimentary rock spires known as hoodoos may provide an indication of an area's past earthquake activity. The research by scientists including Dr Rasool Anooshehpoor, from the United States Nuclear Regulatory Commission, may provide scientists with a new tool to test the accuracy of current hazard models. Hoodoo formations are often found in desert regions, and are common in North America, the Middle East and northern Africa. They are caused by the uneven weathering of different layers of sedimentary rocks, that leave boulders or thin caps of hard rock perched on softer rock. By knowing the strengths of different types of sedimentary layers, scientists can determine the amount of stress needed to cause those rocks to fracture. The United States Geological Survey (USGS) use seismic hazard models to predict the type of ground motion likely to occur in an area during a seismic event. But, according to Anooshehpoor, these models lack long term data. "Existing hazard maps use models based on scant data going back a hundred years or so," says Anooshehpoor. "But earthquakes have return periods lasting hundreds or thousands of years, so there is nothing to test these hazard models against." The researchers examined two unfractured hoodoos within a few kilometres of the Garlock fault, which is an active strike-slip fault zone in California's Red Rock Canyon. Their findings are reported in the Bulletin of the Seismological Society of America. "Although we can't put a precise age on hoodoos because of their erosion characteristics, we can use them to provide physical limits on the level of ground shaking that could potentially have occurred in the area," says Anooshehpoor. The researchers developed a three-dimensional model of each hoodoo and determined the most likely place where each spire would fail in an earthquake. They then tested rock samples similar to the hoodoo pillars to measure their tensile strength and compared their results with previously published data. USGS records suggest at least one large magnitude earthquake occurred along the fault in the last 550 years, resulting in seven metres of slip, yet the hoodoos are still standing. This finding is consistent with a median level of ground motion associated with the large quakes in this region, says Anooshehpoor. "If an earthquake occurred with a higher level of ground motion, the hoodoos would have collapsed," he says. "Nobody can predict earthquakes, but this will help predict what ground motions are associated with these earthquakes when they happen." Dr Juan Carlos Afonso from the Department of Earth and Planetary Sciences at Sydney's Macquarie University says it's an exciting development. "In seismic hazard studies, it's not just difficult to cover the entire planet, it's hard to cover even small active regions near populated areas," says Afonso. "You need lots of instruments, so it's great if you can rely on nature and natural objects to help you." He says while the work is still very new and needs to be proven, the physics seems sound.
<urn:uuid:85a979cb-9571-4e06-b38a-2f79912abb44>
4.3125
644
News Article
Science & Tech.
37.919371
29
Science Fair Project Encyclopedia

The chloride ion is formed when the element chlorine picks up one electron to form the anion (negatively charged ion) Cl−. The salts of hydrochloric acid HCl contain chloride ions and are also called chlorides. An example is table salt, which is sodium chloride with the chemical formula NaCl. In water, it dissolves into Na+ and Cl− ions. The word chloride can also refer to a chemical compound in which one or more chlorine atoms are covalently bonded in the molecule. This means that chlorides can be either inorganic or organic compounds. The simplest example of an inorganic covalently bonded chloride is hydrogen chloride, HCl. A simple example of an organic covalently bonded chloride is chloromethane (CH3Cl), often called methyl chloride. Other examples of inorganic covalently bonded chlorides which are used as reactants are:
- phosphorus trichloride, phosphorus pentachloride, and thionyl chloride - all three are reactive chlorinating reagents which have been used in a laboratory.
- Disulfur dichloride (S2Cl2) - used for vulcanization of rubber.
Chloride ions have important physiological roles. For instance, in the central nervous system the inhibitory action of glycine and some of the action of GABA relies on the entry of Cl− into specific neurons. The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
<urn:uuid:4e76b8fd-c479-45d7-8ee7-faf61495aecb>
4.59375
320
Knowledge Article
Science & Tech.
27.864975
30
Convective heat flux is a flux depending on the temperature difference between the body and the adjacent fluid (liquid or gas) and is triggered by the *FILM card. It takes the form

q = h (T - T0),

where q is the flux normal to the surface, h is the film coefficient, T is the body temperature and T0 is the environment fluid temperature (also called sink temperature). Generally, the sink temperature is known. If it is not, it is an unknown in the system. Physically, the convection along the surface can be forced or free. Forced convection means that the mass flow rate of the adjacent fluid (gas or liquid) is known and its temperature is the result of heat exchange between body and fluid. This case can be simulated by CalculiX by defining network elements and using the *BOUNDARY card for the first degree of freedom in the midside node of the element. Free convection, for which the mass flow rate is an unknown too and a result of temperature differences, cannot be simulated.
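A one-line numeric example of the film condition, with made-up values:

h = 25.0     # film coefficient, W/(m^2 K), assumed
T = 350.0    # body surface temperature, K, assumed
T0 = 300.0   # sink (fluid) temperature, K, assumed
print("q =", h * (T - T0), "W/m^2")   # 1250.0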
<urn:uuid:47d24057-e332-41de-bbe6-0338e16b49a6>
3.3125
249
Tutorial
Science & Tech.
41.094375
31
RR Lyrae star

RR Lyrae star, any of a group of old giant stars of the class called pulsating variables (see variable star) that pulsate with periods of about 0.2–1 day. They belong to the broad Population II class of stars (see Populations I and II) and are found mainly in the thick disk and halo of the Milky Way Galaxy and often in globular clusters. There are several subclasses—designated RRa, RRb, RRc, and RRd—based on the manner in which the light varies with time. The intrinsic luminosities of RR Lyrae stars are relatively well-determined, which makes them useful as distance indicators.
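Because their intrinsic luminosities are known, an observed apparent magnitude converts directly to a distance via the distance modulus. In this sketch the absolute magnitude M_V = +0.6 is a commonly used round value assumed here, not a figure from the article.

def distance_pc(apparent_mag, absolute_mag=0.6):
    # m - M = 5 * log10(d / 10 pc), solved for d
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

print(round(distance_pc(15.0)), "pc")   # an RR Lyrae observed at m = 15.0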
<urn:uuid:ca821097-b750-4e33-85da-b6754420e0dc>
2.921875
171
Knowledge Article
Science & Tech.
63.468978
32
NOAA scientists agree the risks are high, but say Hansen overstates what science can really say for sure Jim Hansen at the University of Colorado’s World Affairs Conference (Photo: Tom Yulsman) Speaking to a packed auditorium at the University of Colorado’s World Affairs Conference on Thursday, NASA climatologist James Hansen found a friendly audience for his argument that we face a planetary emergency thanks to global warming. Despite the fact that the temperature rise has so far been relatively modest, “we do have a crisis,” he said. With his characteristic under-stated manner, Hansen made a compelling case. But after speaking with two NOAA scientists today, I think Hansen put himself in a familiar position: out on a scientific limb. And after sifting through my many pages of notes from two days of immersion in climate issues, I’m as convinced as ever that journalists must be exceedingly careful not to overstate what we know for sure and what is still up for scientific debate. Crawling out on the limb, Hansen argued that global warming has already caused the levels of water in Lake Powell and Lake Mead — the two giant reservoirs on the Colorado River than insure water supplies for tens of millions of Westerners — to fall to 50 percent of capacity. The reservoirs “probably will not be full again unless we decrease CO2 in the atmosphere,” he asserted. Hansen is arguing that simply reducing our emissions and stabilizing CO2 at about 450 parts per million, as many scientists argue is necessary, is not nearly good enough. We must reduce the concentration from today’s 387 ppm to below 35o ppm. “We have already passed into the dangerous zone,” Hansen said. If we don’t reduce CO2 in the atmosphere, “we would be sending the planet toward an ice free state. We would have a chaotic journey to get there, but we would be creating a very different planet, and chaos for our children.” Hansen’s argument (see a paper on the subject here) is based on paleoclimate data which show that the last time atmospheric CO2 concentrations were this high, the Earth was ice free, and sea level was far higher than it is today. “I agree with the sense of urgency,” said Peter Tans, a carbon cycle expert at the National Oceanic and Atmospheric Administration here in Boulder, in a meeting with our Ted Scripps Fellows in Environmental Journalism. “But I don’t agree with a lot of the specifics. I don’t agree with Jim Hansen’s naming of 350 ppm as a tipping point. Actually we may have already gone too far, except we just don’t know.” A key factor, Tans said, is timing. “If it takes a million years for the ice caps to disappear, no problem. The issue is how fast? Nobody can give that answer.” Martin Hoerling, a NOAA meteorologist who is working on ways to better determine the links between climate change and regional impacts, such as drought in the West, pointed out that the paleoclimate data Hansen bases his assertions on are coarse. They do not record year-to-year events, just big changes that took place over very long time periods. So that data give no indication just how long it takes to de-glaciate Antarctica and Greenland. Hoerling also took issue with Hansen’s assertions about lakes Powell and Mead. While it is true that “the West has had the most radical change in temperature in the U.S.,” there is no evidence yet that this is a cause of increasing drought, he said. Flows in the Colorado River have been averaging about 12 million acre feet each year, yet we are consuming 14 million acre feet. 
“Where are we getting the extra from? Well, we’re tapping into our 401K plan,” he said. That would be the two giant reservoirs, and that’s why their water levels have been declining. “Why is there less flow in the river?” Hoerling said. “Low precipitation — not every year, but in many recent years, the snow pack has been lower.” And here’s his almost counter-intuitive point: science shows that the reduced precipitation “is due to natural climate variability . . . We see little indication that the warming trend is affecting the precipitation.” In my conversation with Tans and Hoerling today, I saw a tension between what they believe and what they think they can demonstrate scientifically. “I like to frame the issue differently,” Tans said. “Sure, we canot predict what the climate is going to look like in a couple of dcades. There are feedbacks in the system we don’t understand. In fact, we don’t even know all the feedbacks . . . To pick all this apart is extremely difficult — until things really happen. So I’m pessimistic.” There is, Tans said, “a finite risk of catastrophic climate change. Maybe it is 1 in 6, or maybe 1 in 20 or 1 in 3. Yet if we had a risk like that of being hit by an asteroid, we’d know what to do. But the problem here is that we are the asteroid.” Tans argues that whether or not we can pin down the degree of risk we are now facing, one thing is obvious: “We have a society based on ever increasing consumption and economic expectations. Three percent growth forever is considered ideal. But of course it’s a disaster.” Hoerling says we are living like the Easter Islanders, who were faced with collapse from over consumption of resources but didn’t see it coming. Like them, he says, we are living in denial. “I think we are in that type of risk,” Tans said. “But is that moving people? It moves me. But I was already convinced in 1972.”
<urn:uuid:f9441dcc-dc2a-4077-aac8-1b49394182e2>
2.546875
1,273
News Article
Science & Tech.
58.102556
33
Study promoter activity using the Living Colors Fluorescent Timer, a fluorescent protein that shifts color from green to red over time (1). This color change provides a way to visualize the time frame of promoter activity, indicating where in an organism the promoter is active and also when it becomes inactive. Easily detect the red and green emissions indicating promoter activity with fluorescence microscopy or flow cytometry.

Easily Characterize Promoter Activity

The Fluorescent Timer is a mutant form of the DsRed fluorescent reporter, containing two amino acid substitutions which increase its fluorescence intensity and endow it with a distinct spectral property: as the Fluorescent Timer matures, it changes color, in a matter of hours, depending on the expression system used. Shortly after its synthesis, the Fluorescent Timer begins emitting green fluorescence but as time passes, the fluorophore undergoes additional changes that shift its fluorescence to longer wavelengths. When fully matured the protein is bright red. The protein's color shift can be used to follow the on and off phases of gene expression (e.g., during embryogenesis and cell differentiation).

Fluorescent Timer under the control of the heat shock promoter hsp16-41 in a transgenic C. elegans embryo. The embryo was heat-shocked in a 33°C water bath. Promoter activity was studied during the heat shock recovery period. Green fluorescence was observed in the embryo as early as two hr into the recovery period. By 50 hr after heat shock, promoter activity had ceased, as indicated by the lack of green color.

pTimer (left) is primarily intended to serve as a convenient source of the Fluorescent Timer cDNA. Use pTimer-1 (right) to monitor transcription from different promoters and promoter/enhancer combinations inserted into the MCS located upstream of the Fluorescent Timer coding sequence. Without the addition of a functional promoter, this vector will not express the Fluorescent Timer.

Detecting Timer Fluorescent Protein

You can detect the Fluorescent Timer with the DsRed Polyclonal Antibody. You can use the DsRed1-C Sequencing Primer to sequence wild-type DsRed1 C-terminal gene fusions, including Timer fusions.

Terskikh, A., et al. (2000) Science 290(5496):1585–1588.
<urn:uuid:fee85558-8ff7-41a4-9a52-a042d84e5f3a>
2.6875
499
Knowledge Article
Science & Tech.
36.829775
34
Download source - 8 Kb

This tutorial is based on the MSDN Article #ID: Q194873. But, for a beginner, following these MSDN articles can be intimidating to say the least. One of the most often asked questions I see as a Visual C++ and Visual Basic programmer is how to call a VB DLL from VC++. Well, I am hoping to show you exactly that today. I am not going to go over the basic details of COM as this would take too long, so I am assuming you have an understanding of VB, VC++ and a little COM knowledge. It's not too hard to learn; just takes a little time. So let's get started. The first thing you need to do is fire up Visual Basic 6 (VB 5 should work as well). With VB running, create a new "ActiveX DLL" project. Rename the project to "vbTestCOM" and the class to "clsVBTestClass". You can do this by clicking in the VB Project Explorer Window on the Project1 item (Step 1), then clicking in the Properties window and selecting the name property (Step 2). Do the same for the Class. Click on the class (Step 3), then the name property and enter the name mentioned above (Step 4). Your project so far should look like the following right hand side picture: Ok, now we are ready to add some code to the VB Class. Click on the "Tools" menu, then select the "Add Procedure" menu item. The Add Procedure window will open up. In this window we need to add some information. First (Step 1) make sure the type is set to Function. Second (Step 2) enter a Function name called "CountStringLength". Finally hit the Ok button and VB will generate the new function in the class. You should have an empty function with which to work. The first thing we will do is specify a return type and an input parameter. Edit your code to look like this:

Public Function CountStringLength(ByVal strValue As String) As Long

What are we doing here? We are taking one parameter, as a String type in this case, then returning the length through the return type, which is a Long. We specify the input parameter as ByVal, meaning VB will make a copy of this variable and use the copy in the function, rather than the default ByRef, which passes the variable by reference. This way we can be sure that we do not modify the string by accident that was passed to us by the calling program. Let's add the code now.

Public Function CountStringLength(ByVal strValue As String) As Long
    If strValue = vbNullString Then
        CountStringLength = 0
    Else
        CountStringLength = Len(strValue)
    End If
End Function

In the first line of code we are checking to see if the calling program passed us an empty, or NULL, string. If so we return 0 as the length. If the user did pass something other than an empty string, then we count its length and return the length back to the calling program. Now would be a good time to save your project. Accept the default names and put it in a safe directory. We need to compile this project now. Go to the File menu and select the "Make vbTestCOM.dll..." menu item. The compiler will produce a file called, surprisingly enough: vbTestCOM.dll. The compiler will also do us the favor of entering this new DLL into the system registry. We have finished the VB side of this project, so let's start the VC++ side of it. Fire up a copy of VC++, then select from the menu, "New Project". The New Project window should appear. Select a "Win32 Console Application" (Step 1), then give it a name of "TestVBCOM" (Step 2). Finally, enter a directory you want to build this project in (Step 3 - your directory will vary from what I have entered). 
Click on the "OK" button and the "Win32 Console Application - Step 1 of 1" window will appear. Leave everything on this page as the default, and click the "Finish" button. One final window will appear after this titled "New Project Information". Simply click the "Ok" button here. You should now have an empty Win32 Console project. Press the "Ctrl" and hit the "N" key. Another window titled "New" will appear. Select the "C++ Source File" (Step 1), then enter the new name for this file called, "TestVBCOM.cpp" (Step 2 - make sure the Add to Project checkbox is checked and the correct project name is in the drop down combo box), then click the "Ok" button to finish. Now we are going to get fancy! You need to go to your Start Menu in Windows and navigate to the "Visual Studio 6" menu and go into the "Microsoft Visual Studio 6.0 Tools" sub-menu. In here you will see an icon with the name "OLE View". Click on it. The OLE View tool will open up. You will see a window similar to this one: Collapse all the trees, if they are not already. This will make it easier to navigate to where we want to go. Highlight the "Type Libraries" (Step 1) and expand it. You should see a fairly massive listing. We need to locate our VB DLL. Now, remember what we named the project? Right, we need to look for vbTestCOM. Scroll down until you find this. Once you have found it, double click on it. A new window should appear - the "ITypeLib Viewer" window. We are only interested in the IDL (Interface Definition Language) code on the right side of the window. Select the entire IDL text and hit the "Ctrl" and "C" buttons to copy it to the clipboard. You can close this window and the OLE View window now as we are done with the tool. We need to add the contents of the IDL file into our VC++ project folder. Go to the folder you told VC++ to create your project in and create a new text file there (If you are in Windows Explorer, you can right click in the directory and select "New" then scroll over following the arrow and select "Text Document"). Rename the text document to "vbTestCOM.idl". Then double click on the new IDL file (VC++ should open it if you named it correctly with an IDL extension). Now paste the code in the file by pressing the "Ctrl" and "V" keys. The IDL text should be pasted into the file. So far, so good. Now, this IDL file is not going to do us much good until we compile it. That way, VC++ can use the files it generates to talk to the VB DLL. Let's do that now. Open a DOS window and navigate to the directory you created your VC++ project in. Once in that directory, at the prompt you need to type the following to invoke the MIDL compiler:

E:\VCSource\TestVBCOM\TestVBCOM> midl vbTestCOM.idl /h vbTestCOM.h

Hit the "Enter" key and let MIDL do its magic. You should see results similar to the following: Close the DOS window and head back into VC++. We need to add the newly generated vbTestCOM.h and vbTestCOM_i.c files to the project. You can do this by going to the "Project" menu, then selecting the "Add to Project" item, and scrolling over to the "Files" menu item and clicking on it. A window titled, "Insert Files into Project" will open. Select the two files highlighted in the next picture, then select the "Ok" button. These two files were generated by MIDL for us, and VC++ needs them in order to talk to the VB DLL (actually VC++ does not need the "vbTestCOM_i.c" file in the project, but it is handy to have in the project to review). 
We are going to add the following code to the "TestVBCOM.cpp" file now, so navigate to that file in VC++ using the "Workspace" window. Open the file by double clicking it and VC++ will display the empty file for editing. Now add the following code to the "TestVBCOM.cpp" file (the interface and CLSID names below follow the clsTestClass name we gave the VB class; check your generated vbTestCOM.h if you used different names):

#include "vbTestCOM.h" // MIDL-generated header; the vbTestCOM_i.c file in the project supplies the GUIDs
#include <comdef.h>    // _bstr_t support
#include <iostream.h>  // cout support (old-style iostream, as VC++ 6 used)

int main()
{
    _clsTestClass *IVBTestClass = NULL;
    HRESULT hr = CoInitialize(0);

    // Create an instance of the VB class in-process and get its default interface.
    hr = CoCreateInstance(CLSID_clsTestClass, NULL, CLSCTX_INPROC_SERVER,
                          IID__clsTestClass, (void **)&IVBTestClass);
    if (SUCCEEDED(hr))
    {
        _bstr_t bstrValue("Hello World");
        long ReturnValue = 0;

        // The VB function's As Long return comes back through the second argument.
        hr = IVBTestClass->CountStringLength(bstrValue, &ReturnValue);
        cout << "The string is: " << ReturnValue << " characters in length." << endl;
        IVBTestClass->Release();
    }
    else
        cout << "CoCreateInstance Failed." << endl;

    CoUninitialize();
    return 0;
}

If all the code is entered correctly, press the "F7" key to compile the project. Once the project has compiled cleanly, press the "Ctrl" and "F5" keys to run it. In the C++ code, we include the MIDL-created "vbTestCOM.h" file, the "comdef.h" file for the _bstr_t class support and the "iostream.h" file for the "cout" support. The rest of the comments should speak for themselves as to what's occurring. This simple tutorial shows how well a person can integrate VB and VC++ apps together using COM. Not too tough, actually.
<urn:uuid:a5fbb498-1ce4-4861-9a4d-ac9d31394472>
2.796875
2,064
Tutorial
Software Dev.
74.372644
35
Hold the salt: UCLA engineers develop revolutionary new desalination membrane
Process uses atmospheric pressure plasma to create filtering 'brush layer'
Desalination can become more economical and used as a viable alternate water resource.
By Wileen Wong Kromhout
Originally published in UCLA Newsroom

Researchers from the UCLA Henry Samueli School of Engineering and Applied Science have unveiled a new class of reverse-osmosis membranes for desalination that resist the clogging which typically occurs when seawater, brackish water and waste water are purified. The highly permeable, surface-structured membrane can easily be incorporated into today's commercial production system, the researchers say, and could help to significantly reduce desalination operating costs. Their findings appear in the current issue of the Journal of Materials Chemistry. Reverse-osmosis (RO) desalination uses high pressure to force polluted water through the pores of a membrane. While water molecules pass through the pores, mineral salt ions, bacteria and other impurities cannot. Over time, these particles build up on the membrane's surface, leading to clogging and membrane damage. This scaling and fouling places higher energy demands on the pumping system and necessitates costly cleanup and membrane replacement. The new UCLA membrane's novel surface topography and chemistry allow it to avoid such drawbacks. "Besides possessing high water permeability, the new membrane also shows high rejection characteristics and long-term stability," said Nancy H. Lin, a UCLA Engineering senior researcher and the study's lead author. "Structuring the membrane surface does not require a long reaction time, high reaction temperature or the use of a vacuum chamber. The anti-scaling property, which can increase membrane life and decrease operational costs, is superior to existing commercial membranes." The new membrane was synthesized through a three-step process. First, researchers synthesized a polyamide thin-film composite membrane using conventional interfacial polymerization. Next, they activated the polyamide surface with atmospheric pressure plasma to create active sites on the surface. Finally, these active sites were used to initiate a graft polymerization reaction with a monomer solution to create a polymer "brush layer" on the polyamide surface. This graft polymerization is carried out for a specific period of time at a specific temperature in order to control the brush layer thickness and topography. "In the early years, surface plasma treatment could only be accomplished in a vacuum chamber," said Yoram Cohen, UCLA professor of chemical and biomolecular engineering and a corresponding author of the study. "It wasn't practical for large-scale commercialization because thousands of meters of membranes could not be synthesized in a vacuum chamber. It's too costly. But now, with the advent of atmospheric pressure plasma, we don't even need to initiate the reaction chemically. It's as simple as brushing the surface with plasma, and it can be done for almost any surface." In this new membrane, the polymer chains of the tethered brush layer are in constant motion. The chains are chemically anchored to the surface and are thus more thermally stable, relative to physically coated polymer films. Water flow also adds to the brush layer's movement, making it extremely difficult for bacteria and other colloidal matter to anchor to the surface of the membrane.
"If you've ever snorkeled, you'll know that sea kelp move back and forth with the current or water flow," Cohen said. "So imagine that you have this varied structure with continuous movement. Protein or bacteria need to be able to anchor to multiple spots on the membrane to attach themselves to the surface — a task which is extremely difficult to attain due to the constant motion of the brush layer. The polymer chains protect and screen the membrane surface underneath." Another factor in preventing adhesion is the surface charge of the membrane. Cohen's team is able to choose the chemistry of the brush layer to impart the desired surface charge, enabling the membrane to repel molecules of an opposite charge. The team's next step is to expand the membrane synthesis into a much larger, continuous process and to optimize the new membrane's performance for different water sources. "We want to be able to narrow down and create a membrane selection system for different water sources that have different fouling tendencies," Lin said. "With such knowledge, one can optimize the membrane surface properties with different polymer brush layers to delay or prevent the onset of membrane fouling and scaling. "The cost of desalination will therefore decrease when we reduce the cost of chemicals [used for membrane cleaning], as well as process operation [for membrane replacement]. Desalination can become more economical and used as a viable alternate water resource." Cohen's team, in collaboration with the UCLA Water Technology Research (WaTeR) Center, is currently carrying out specific studies to test the performance of the new membrane's fouling properties under field conditions. "We work directly with industry and water agencies on everything that we're doing here in water technology," Cohen said. "The reason for this is simple: If we are to accelerate the transfer of knowledge technology from the university to the real world, where those solutions are needed, we have to make sure we address the real issues. This also provides our students with a tremendous opportunity to work with industry, government and local agencies." A paper providing a preliminary introduction to the new membrane also appeared in the Journal of Membrane Science last month. Published: Thursday, April 08, 2010
<urn:uuid:c0b175bb-65fb-420e-a881-a80b91d00ecd>
2.8125
1,115
News Article
Science & Tech.
24.364388
36
Killing Emacs means ending the execution of the Emacs process. If you started Emacs from a terminal, the parent process normally resumes control. The low-level primitive for killing Emacs is the command kill-emacs, which takes an optional argument, exit-data. This command calls the hook kill-emacs-hook, then exits the Emacs process and kills it. If exit-data is an integer, that is used as the exit status of the Emacs process. (This is useful primarily in batch operation; see Batch Mode.) If exit-data is a string, its contents are stuffed into the terminal input buffer so that the shell (or whatever program next reads input) can read them. The kill-emacs function is normally called via the higher-level command C-x C-c (save-buffers-kill-terminal). See Exiting. It is also called automatically if Emacs receives a SIGHUP operating system signal (e.g., when the controlling terminal is disconnected), or if it receives a SIGINT signal while running in batch mode (see Batch Mode). The normal hook kill-emacs-hook is run by kill-emacs before it kills Emacs. Because kill-emacs can be called in situations where user interaction is impossible (e.g., when the terminal is disconnected), functions on this hook should not attempt to interact with the user. If you want to interact with the user when Emacs is shutting down, use kill-emacs-query-functions, described below. When Emacs is killed, all the information in the Emacs process, aside from files that have been saved, is lost. Because killing Emacs inadvertently can lose a lot of work, the save-buffers-kill-terminal command queries for confirmation if you have buffers that need saving or subprocesses that are running. It also runs the abnormal hook kill-emacs-query-functions: when save-buffers-kill-terminal is killing Emacs, it calls the functions in this hook, after asking the standard questions and before calling kill-emacs. The functions are called in order of appearance, with no arguments. Each function can ask for additional confirmation from the user. If any of them returns nil, save-buffers-kill-emacs does not kill Emacs, and does not run the remaining functions in this hook. Calling kill-emacs directly does not run this hook.
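A minimal sketch of how this hook is typically used from Lisp follows; the function name my-confirm-kill is illustrative, not part of Emacs:

;; Ask one extra question before Emacs is allowed to exit.
;; Returning nil from a kill-emacs-query-functions member cancels the exit.
(defun my-confirm-kill ()
  (yes-or-no-p "Really kill Emacs? "))

(add-hook 'kill-emacs-query-functions #'my-confirm-kill)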
<urn:uuid:af93ad35-c5de-4297-a667-afc7347bbc6c>
2.6875
488
Documentation
Software Dev.
51.422678
37
In fact, the United States apparently just emerged from the hottest spring on record. The period between June 2011 and May of this year was the warmest on record since NOAA record keeping began in 1985. Aside from Washington, every state experienced higher-than-average temperatures during that period, which also featured the second-warmest summer and fourth-warmest winter in almost 28 years. The nation's average temperature during those 12 months hovered at 56 degrees Fahrenheit, reportedly 3.2 degrees above the long-term average, surpassing the previous record, which was just set in April, in an analysis of temperatures between May 2011 and April 2012. The warmer-than-average conditions persisted through the winter and spring, resulting in a limited snowfall that the Rutgers Global Snow Lab reports was the third-smallest on record for the contiguous U.S. The rising temperatures may have altered precipitation patterns as well, according to NOAA. While the country as a whole actually experienced a drier spring than usual, the West Coast, Northern Plains and Upper Midwest regions were simultaneously wetter than average. On a more concerning note, the prevalence of natural disasters, such as the disastrous tornado in Joplin, Mo., and the massive, hurricane-caused flooding in Vermont, that plagued the country over the past year was also far from usual. The U.S. Climate Extreme Index, which tracks extremes in temperatures, precipitation, drought and tropical cyclones, reached 44 percent this past spring. That's twice the average value. The NOAA report is not the only recent analysis to note the prevalence, and consequences, of rising temperatures. On Thursday, NASA reported that scientists have discovered unprecedented blooms of plant life beneath the waters of the Arctic Ocean. While that certainly does not seem like cause for concern, NASA noted it was likely caused by a thinning of the Arctic Ocean's three-foot thick layer of ice, allowing the sun to penetrate that ice to foster plant life under the sea. A continuous rise in summer temperatures is expected to triple the number of heat-related deaths in the U.S. by the end of the century, the Natural Resources Defense Council reported last month. In an analysis of peer-reviewed data, the organization said summer temperatures could rise by 4 to 11 degrees Fahrenheit by that time due to human-induced climate change, which could increase the number of summer heat-related deaths from 1,300 to 4,600 a year.
<urn:uuid:628e935a-7678-4d56-8179-04a384233ade>
3.625
499
News Article
Science & Tech.
42.651575
38
Scientific Name: Phoebastria albatrus
Species Authority: (Pallas, 1769)
Red List Category & Criteria: Vulnerable D2 ver 3.1
Reviewer/s: Butchart, S. & Taylor, J.
Contributor/s: Balogh, G., Chan, S., Hasegawa, H., Peet, N., Rivera, K. & Suryan, R.
This species is listed as Vulnerable because, although conservation efforts have resulted in a steady population increase, it still has a very small breeding range, limited to Torishima and Minami-kojima (Senkaku Islands), rendering it susceptible to stochastic events and human impacts. Phoebastria albatrus breeds on Torishima (Japan) and Minami-kojima (Senkaku Islands), which are claimed jointly by Japan, mainland China and Chinese Taipei. Historically there are believed to have been at least nine colonies south of Japan and in the East China Sea (Piatt et al. 2006). Its marine range covers most of the northern Pacific Ocean, but it occurs in highest densities in areas of upwelling along shelf waters of the Pacific Rim, particularly along the coasts of Japan, eastern Russia, the Aleutians and Alaska (Piatt et al. 2006, Suryan et al. 2007). During breeding (December - May) it is found in highest densities around Japan. Satellite tracking has indicated that during the post-breeding period, females spend more time offshore of Japan and Russia, while males and juveniles spend greater time around the Aleutian Islands, Bering Sea and off the coast of North America (Suryan et al. 2007). Juveniles have been shown to travel twice the distances per day and spend more time within continental shelf habitat than adult birds (Suryan et al. 2008). The species declined dramatically during the 19th and 20th centuries owing to exploitation for feathers, and was believed extinct in 1949, but was rediscovered in 1951. The current population is estimated, via direct counts and modelling based on productivity data, to be 2,364 individuals, with 1,922 birds on Torishima and 442 birds on Minami-kojima (G.R. Balogh in litt. 2008). In 1954, 25 birds (including at least six pairs) were present on Torishima. Given that there are now c.426 breeding pairs on Torishima (G.R. Balogh in litt. 2008), the species has undergone an enormous increase since its rediscovery and the onset of conservation efforts. In addition, in 2010, one nesting pair was observed on Kure Atoll (Hawaii, USA), but was probably female-female and unsuccessful, and one chick was produced on Midway Atoll (M. Naughton pers. comm. 2011). A tsunami which hit Midway Atoll in March 2011 did not impact the single pair nesting on Eastern Island (U.S. Fish & Wildlife Service 2008).
Native: Canada; China; Japan; Korea, Republic of; Mexico; Russian Federation; Taiwan, Province of China; United States; United States Minor Outlying Islands
Present - origin uncertain: Northern Mariana Islands; Philippines
Population: At the end of the 2006-2007 breeding season, the global population was estimated to be 2,364 individuals, with 1,922 birds on Torishima and 442 birds on Minami-kojima (Senkaku Islands). This estimate is based on: direct observation of breeding pairs on Torishima; an assumption on numbers of non-breeding birds; and an estimate for the Minami-kojima population that is based upon a 2002 estimate and an assumption of population growth rate (which, together, puts the Minami-kojima population at about 15% of the global population [G.R. Balogh in litt. 2008]).
More recently, Brazil (2009) estimates the population in Japan at c.100-10,000 breeding pairs and c.50-1,000 individuals on migration. The population is taken here as likely to number 2,200-2,500 individuals based on these estimates, roughly equating to 1,500-1,700 mature individuals.
Habitat and Ecology: Behaviour: Phoebastria albatrus is a colonial, annually breeding species, with each breeding cycle lasting about 8 months. Birds begin to arrive at the main colony on Torishima Island in early October. A single egg is laid in late October to late November and incubation lasts 64 to 65 days. Hatching occurs in late December through January. Chicks begin to fledge in late May into June. There is little information on timing of breeding on Minami-kojima. First breeding sometimes occurs when birds are five years old, but more commonly when birds are aged six. It forages diurnally and potentially nocturnally, either singly or in groups, primarily taking prey by surface-seizing (ACAP 2009). During the breeding season, individuals nesting off Japan forage over the continental shelf (Kiyota and Minami 2008). Habitat: Breeding: Historically, it preferred level, open areas adjacent to tall clumps of the grass Miscanthus sinensis for nesting. Diet: It feeds mainly on squid, but also takes shrimp, fish, flying fish eggs and other crustaceans (ACAP 2009). It has been recorded following ships to feed on scraps and fish offal.
Major Threat(s): Its historical decline was caused by exploitation. Today, the key threats are the instability of soil on its main breeding site (Torishima), the threat of mortality and habitat loss from the active volcano on Torishima, and mortality caused by fisheries. Torishima is also vulnerable to other natural disasters, such as typhoons. Introduced predators are a potential threat at colonies. Environmental contaminants at sea (oil based compounds) may also be a threat (G.R. Balogh in litt. 2008). Threats at sea (fisheries, oil pollution) are exacerbated by the fact that birds concentrate into predictable hotspots (Piatt et al. 2006). Modelling work has shown that even a small increase in low level chronic mortality (such as fisheries bycatch) has more of an impact on population growth rates than stochastic and theoretically catastrophic events, such as volcanic eruptions (Finkelstein et al. 2010). Phoebastria albatrus has the greatest potential overlap with fisheries that occur in the shallower waters along continental shelf break and slope regions, e.g., sablefish and Pacific halibut longline fisheries off the coasts of Alaska and British Columbia. However, overlap between the distribution of birds and fishery effort does not mean that interactions between birds and boats necessarily occur; P. albatrus are known to have been killed in U.S. and Russian longline fisheries for Pacific cod and Pacific halibut. In addition, birds on Torishima have been observed with hooks in their mouths of the style used in Japanese fisheries near the island (ACAP 2009).
Conservation Actions Underway: It is legally protected in Japan, Canada and the USA. A draft recovery plan has been developed (USFWS 2005). Mitigation measures have been established in the Alaska demersal longline fishery and in the Hawaii-based pelagic longline fishery (NOAA 2008).
Streamer lines (both heavy weight lines for large boats and lightweight lines for smaller vessels) have been designed to keep birds from longline hooks as they are set, and these are being distributed free to the Alaskan longline fleet (USFWS 2005), though they are not deployed in near-shore waters. In 2006, the Western and Central Pacific Fisheries Commission passed a measure which requires large tuna and swordfish longline vessels (>24 m long) to use a combination of two seabird bycatch mitigation measures when fishing north of 23 degrees North. Torishima has been established as a National Wildlife Protection Area. In 1981-1982, native plants were transplanted into the Torishima nesting colony in order to stabilise the nesting habitat and the nest structures. This has enhanced breeding success, with over 60% of eggs now resulting in fledged young. Decoys have been used to attract birds to nest at another site on Torishima since 1993, and the first pair started breeding at this new site in November 1995. The number of chicks fledged from this new colony has increased from one chick in 2004 to four chicks in 2005, 13 chicks in 2006 and 16 chicks in 2007. In October-November 2007, 35 eggs were laid at this new site (Sato 2009). In 2007, the Japanese government approved a project to translocate chicks from Torishima to Mukojima, 300 km away. All ten chicks of the first translocations in March 2008 fledged (Jacobs 2009). If successful, this project will translocate at least ten chicks per year for five years.
Conservation Actions Proposed: Continue to promote measures designed to protect this species from becoming hooked or entangled by commercial fishing gear. Re-establish birds within the historic range as insurance against natural disasters on the primary breeding colony. Promote conservation measures for the Minami-kojima population. Continue research into the at-sea distribution and marine habitat use through satellite telemetry studies. Continue land-based management and population monitoring.
Citation: BirdLife International 2012. Phoebastria albatrus. In: IUCN 2012. IUCN Red List of Threatened Species. Version 2012.2. <www.iucnredlist.org>. Downloaded on 18 May 2013.
<urn:uuid:573c77f2-d484-430d-94d7-05417faf55af>
3.203125
2,094
Knowledge Article
Science & Tech.
46.976922
39
Boulder trails are common to the interior of Menelaus crater as materials erode from higher topography and roll toward the crater floor. Downhill is to the left, image width is 500 m, LROC NAC M139802338L [NASA/GSFC/Arizona State University]. Most boulder trails are relatively high reflectance, but running through the center of this image is a lower reflectance trail. This trail is smaller than the others, and its features may be influenced by factors such as mass of the boulder, boulder speed as it traveled downhill, and elevation from which the boulder originated. For example, is the boulder trail less distinct than the others because the boulder was smaller? What about the spacing of boulder tracks? The spacing of bounce-marks along boulder trails may say something about boulder mass and boulder speed. But why is this boulder trail low reflectance when all of the surrounding trails are higher reflectance? Perhaps this boulder trail is lower reflectance because the boulder gently bounced as it traveled downhill, and barely disturbed a thin layer of regolith? The contrast certainly appears similar to the astronauts' footprints and paths around the Apollo landing sites. Or, maybe the boulder fell apart during its downhill travel and the trail is simply made up of pieces of the boulder - we just don't know yet. LROC WAC context of Menelaus crater at the boundary between Mare Serenitatis and the highlands (dotted line). The arrow marks the location of today's featured image at contact between the crater floor and NE crater wall [NASA/GSFC/Arizona State University]. What do you think? Why don't you follow the trail to its source in the full LROC NAC frame and see if you can find any other low reflectance trails.
<urn:uuid:ce50e516-2229-404a-b328-7d80cdfd0d33>
3.25
362
Comment Section
Science & Tech.
50.615374
40
Ulva spp. on freshwater-influenced or unstable upper eulittoral rock
Ecological and functional relationships
The community predominantly consists of algae which cover the rock surface and create a patchy canopy. In doing so, the algae provide an amenable habitat in an otherwise hostile environment, exploitable on a temporary basis by other species. For instance, Ulva intestinalis provides shelter for the orange harpacticoid copepod, Tigriopus brevicornis, and the chironomid larva of Halocladius fucicola (McAllen, 1999). The copepod and chironomid species utilize the hollow thalli of Ulva intestinalis as a moist refuge from desiccation when rockpools completely dry. Several hundred individuals of Tigriopus brevicornis have been observed in a single thallus of Ulva intestinalis (McAllen, 1999). The occasional grazing gastropods that survive in this biotope no doubt graze Ulva.
Seasonal and longer term change
- During the winter, elevated levels of freshwater runoff would be expected owing to seasonal rainfall. Also, winter storm action may disturb the relatively soft substratum of chalk and firm mud, or boulders may be overturned.
- Seasonal fluctuation in the abundance of Ulva spp. would therefore be expected, with the biotope thriving in winter months. Porphyra also tends to be regarded as a winter seaweed, abundant from late autumn to the succeeding spring, owing to the fact that the blade-shaped fronds of the gametophyte develop in early autumn, whilst the microscopic filamentous stages of the spring and summer are less apparent (see recruitment processes, below).
Habitat structure and complexity
Habitat complexity in this biotope is relatively limited in comparison to other biotopes. The upper shore substrata, consisting of chalk, firm mud, bedrock or boulders, will probably offer a variety of surfaces for colonization, whilst the patchy covering of ephemeral algae provides a refuge for faunal species and an additional substratum for colonization. However, species diversity in this biotope is poor owing to disturbance and changes in the prevailing environmental factors, e.g. desiccation, salinity and temperature. Only species able to tolerate changes/disturbance or those able to seek refuge will thrive. The biotope is characterized by primary producers. Rocky shore communities are highly productive and are an important source of food and nutrients for neighbouring terrestrial and marine ecosystems (Hill et al., 1998). Macroalgae exude considerable amounts of dissolved organic carbon, which is taken up readily by bacteria and may even be taken up directly by some larger invertebrates. Dissolved organic carbon, algal fragments and microbial film organisms are continually removed by the sea. This may enter the food chain of local, subtidal ecosystems, or be exported further offshore. Rocky shores make a contribution to the food of many marine species through the production of planktonic larvae and propagules which contribute to pelagic food chains. The life histories of common algae on the shore are generally complex and varied, but follow a basic pattern, whereby there is an alternation of a haploid, gamete-producing phase (a gametophyte producing eggs and sperm) and a diploid spore-producing (sporophyte) phase. All have dispersive phases which are circulated around in the water column before settling on the rock and growing into a germling (Hawkins & Jones, 1992). Ulva intestinalis is generally considered to be an opportunistic species, with an 'r-type' strategy for survival.
The r-strategists have a high growth rate and high reproductive rate. For instance, the thalli of Ulva intestinalis, which arise from spores and zygotes, grow within a few weeks into thalli that reproduce again, and the majority of the cell contents are converted into reproductive cells. The species is also capable of dispersal over a considerable distance. For instance, Amsler & Searles (1980) showed that 'swarmers' of a coastal population of Ulva reached exposed artificial substrata on a submarine plateau 35 km away. The life cycle of Porphyra involves a heteromorphic (of different form) alternation of generations that are either blade-shaped or filamentous. Two kinds of reproductive bodies (male and female (carpogonium)) are found on the blade-shaped frond of Porphyra that is abundant during winter. On release these fuse and, thereafter, division of the fertilized carpogonium is mitotic, and packets of diploid carpospores are formed. The released carpospores develop into the 'conchocelis' phase (the diploid sporophyte consisting of microscopic filaments), which bores into shells (and probably the chalk rock) and grows vegetatively. The conchocelis filaments reproduce asexually. In the presence of decreasing day length and falling temperatures, terminal cells of the conchocelis phase produce conchospores inside conchosporangia. Meiosis occurs during the germination of the conchospore and produces the macroscopic gametophyte (blade-shaped phase), and the cycle is repeated (Cole & Conway, 1980).
Time for community to reach maturity
Disturbance is an important factor structuring the biotope; consequently, the biotope is characterized by ephemeral algae able to rapidly exploit newly available substrata and tolerant of changes in the prevailing conditions, e.g. temperature, salinity and desiccation. For instance, following the Torrey Canyon tanker oil spill in mid March 1967, which bleached filamentous algae such as Ulva and adhered to the thin fronds of Porphyra, which after a few weeks became brittle and were washed away, regeneration of Porphyra and Ulva was noted by the end of April at Marazion, Cornwall. Similarly, at Sennen Cove, where rocks had completely lost their cover of Porphyra and Ulva during April, occasional blade-shaped fronds of Porphyra sp. up to 15 cm long were present by mid-May. These had either regenerated from basal parts of the 'Porphyra' phase or from the 'conchocelis' phase on the rocks (see recruitment processes). By mid-August these regenerated specimens were common and well grown, but darkly pigmented and reproductively immature. Besides the Porphyra, a very thick coating of Ulva (as Enteromorpha) was recorded in mid-August (Smith 1968). Such evidence suggests that the community would reach maturity relatively rapidly and probably be considered mature, in terms of the species present and ability to reproduce, well within six months.
This review can be cited as follows: Ulva spp. on freshwater-influenced or unstable upper eulittoral rock. Marine Life Information Network: Biology and Sensitivity Key Information Sub-programme [on-line]. Plymouth: Marine Biological Association of the United Kingdom. Available from: <http://www.marlin.ac.uk/habitatecology.php?habitatid=104&code=2004>
<urn:uuid:13da434f-f140-49e3-8fdb-67019653693a>
3.625
1,520
Knowledge Article
Science & Tech.
19.802367
41
This is the beak of the Giant Squid, scientifically known as Architeuthis dux, the largest of all invertebrates. Scientists believe it can be as long as 18 metres (60 feet). This specimen was collected by Dr Gordon Williamson, who worked as the resident ship's biologist for the whaling company Salvesons. He examined the stomach contents of 250 Sperm Whales (Physeter macrocephalus), keeping the largest squid beak and discarding the smaller ones until he ended up with this magnificent specimen.
<urn:uuid:03dc2cd4-80be-4c32-8ff8-4b196542656b>
3.03125
105
Knowledge Article
Science & Tech.
43.41975
42
You're using more water than you think
A water footprint is the total volume of freshwater used to produce the goods and services consumed. Here are some ways to lighten your water footprint.
Fri, Aug 31 2012 at 11:28 AM

Prodded by environmental consciousness — or penny pinching — you installed low-flow showerheads and fixed all the drippy faucets. Knowing that your manicured lawn was sucking down an unnatural amount of water — nearly 7 billion gallons of water is used to irrigate home landscaping, according to the U.S. Environmental Protection Agency — you ripped up the turf and replaced it with native plants. You're still using a lot more water than you think. The drought of 2012 has generated images of parched landscapes and sun-baked lakebeds. At least 36 states are projecting water shortages between now and 2013, according to a survey by the federal General Accounting Office. Water supplies are finite, and fickle. Water, we all know, is essential to life. It is also essential to agriculture, industry, energy and the production of trendy T-shirts. We all use water in ways that go way beyond the kitchen and bathroom. The measure of both direct and indirect water use is known as the water footprint. Your water footprint is the total volume of freshwater used to produce the goods and services consumed, according to the Water Footprint Network, an international nonprofit foundation based in the Netherlands. The Water Footprint Network has crunched the numbers and developed an online calculator to help you determine the size of your footprint. You'll be astonished to know how much water you're using … once you've converted all those metric measurements into something you can understand. The average American home uses about 260 gallons of water per day, according to the EPA. That quarter-pound burger you just gobbled down? More than 600 gallons of water. That Ramones T-shirt? More than 700 gallons. So, adjustments to your diet and buying habits can have a much greater impact on the size of your water footprint than taking 40-second showers. A pound of beef, for example, takes nearly 1,800 gallons of water to produce, with most of that going to irrigate the grains and grass used to feed the cattle. A pound of chicken demands just 468 gallons. If you really want to save water, eat more goat. A pound of goat requires 127 gallons of water. We've been told to cut down on our use of paper to save the forests, but going paperless also saves water. It takes more than 1,300 gallons of water to produce a ream of copy paper. Even getting treated water to your house requires electricity. Letting your faucet run for five minutes, the EPA says, uses about as much energy as burning a 60-watt light bulb for 14 hours. Reducing your water footprint also reduces your carbon footprint, the amount of greenhouse gases your lifestyle contributes to the atmosphere and global warming. So, you could say that conserving water is more than hot air. It's connected to almost everything you do.
<urn:uuid:cca5126a-d443-4b80-89a5-01bc0108a268>
2.640625
660
News Article
Science & Tech.
54.947142
43
PHP, while originally designed and built to run on Unix, has had the ability since version 3 to run on Windows. That includes 9x, ME, NT, and 2000. In this article I'm going to go through the process of installing PHP on Windows and explain what you should look out for. On Windows, as on Unix, you have two options for installing PHP: as a CGI or as an ISAPI module. The obvious benefit of the latter is speed. The downside is that this is still somewhat new and may not be as stable. But, before you do anything, you have to do some prep work, which is pretty simple. Once you've downloaded and unzipped the Windows binary version of PHP, you have to copy php4isapi.dll from the sapi/ directory to the WINNT/system or WINDOWS/system directory. You'll also probably want to move php.ini-dist from your installation directory to the WINDOWS/ directory and rename it to php.ini, if you plan on changing any of the precompiled defaults. Now you're ready to go, regardless of whether you use PHP as a CGI or ISAPI module. For NT/2000, you'll need to tell IIS how to recognize PHP. Thanks to the wonders of GUI, this can easily be accomplished with a few mouse clicks. First, fire up the Microsoft Management Console or Internet Service Manager, depending on whether you're using NT or 2000. Click on the Properties button of the web node you'll be working with. In this example we'll use Default Web Server. Click the ISAPI Filters tab and then click Add. Use php as the name, and in the path put the location of php4isapi.dll, which should be C:\WINNT\system\ in this example. Configuring IIS to recognize PHP. Under the Home Directory tab, click the Configuration button, then click Add for Application Mappings. Use the same location of php4isapi.dll as you did with ISAPI Filters and use .php as the extension. Here comes the caveat!! As a test, I tried using as my path the temporary location of php4isapi.dll, which was in the install directory on my desktop. Windows 2000 popped up a wonderfully annoying little box telling me I'm stupid and I should go dunk my head in the sand. OK, so it wasn't quite that wordy, but that's how I took it. Apparently Windows 2000 and IIS require App Mappings to be under the WINDOWS dir at least. So keep that in mind if you don't like sand up your nose. Now, click OK on the Properties dialog. The next thing to do is stop and start IIS. This isn't the same as pushing the stop and start buttons on the Management Console. You should go to a command (or cmd, as it were) window and type net stop iisadmin. Wait for it to tell you what it's doing, as if you were nosy enough to care, and then type net start w3svc. Why they didn't make it net start iisadmin is beyond me. But I'm a Unix guy, and logic seems to be my downfall here. But I digress... So now you have PHP installed as an ISAPI module on Windows! Aren't you happy? Probably not, because you probably don't believe me. Well, if you need proof just go to C:\inetpub\wwwroot\ and put a file called test.php in there. Inside that file put the following code:

<?php phpinfo(); ?>

Then pull up test.php in your browser. If you see the PHP Info page, it worked! Now this probably goes without saying, so I'll say it: you have to pull up test.php as a web page in the browser using http://. If you give the path for the file using file://, then you'll get a raw output of the code, which is no fun. The PHP Info page indicates success. On Windows 9x/ME, you're somewhat restricted to PWS (Personal Web Server).
Of course there are alternatives such as OmniHTTPd, but we don't have all day here! Anyway, this is far simpler. It's mainly just a matter of a registry entry. After you've moved php4ts.dll to the appropriate directories, you should open up your favorite registry editor. At this point I note the obligatory warnings about fooling around with the registry, blah, blah, blah... In HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\w3svc\parameters\Script Map, you'll want to add an entry with the name .php and, as its value, the path to your PHP executable (for example, C:\php\php.exe). Next, go start up the PWS Manager. For each of the directories that you want to make PHP-aware, you have to right-click on that directory and check the executable box. Now you're ready. Perform the same test (test.php) as I described above, and have fun making all those ASP cronies jealous at the guru-like glow that surrounds you! Darrell Brogdon is a web developer for SourceForge at VA Linux Systems and has been using PHP since 1996. Copyright © 2009 O'Reilly Media, Inc.
<urn:uuid:26746c26-10ba-46d1-9738-1e79ec76d82b>
2.609375
1,155
Tutorial
Software Dev.
75.591953
44
SMOP is a C-based interpreter (runloop) that executes what different compilers (like Mildew) produce. If you want to help SMOP, you can just take on one of the low-level S1P implementations and write it. If you have any questions, ask ruoso or pmurias at #perl6 @ irc.freenode.org. The slides for the talk "Perl 6 is just a SMOP" are available; the talk introduces a bit of the reasoning behind SMOP. A newer version of the talk, presented at YAPC::EU 2008, is also available. SMOP is an alternative implementation of a C engine to run Perl 6. It is focused on taking the most pragmatic approach possible while still being able to support all Perl 6 features. Its core resembles Perl 5 in some ways, and it differs from Parrot in many ways, including the fact that SMOP is not a Virtual Machine. SMOP is simply a runtime engine that happens to have an interpreter run loop. The main difference between SMOP and Parrot (besides the not-being-a-VM thing) is that SMOP is, from the bottom up, an implementation of the Perl 6 OO features, such that SMOP should be able to do a full bootstrap of the Perl 6 type system. Parrot, on the other hand, has a much more static low-level implementation (the PMC). In the same way that PGE is a project on top of Parrot, SMOP will need a grammar engine for itself. SMOP is the implementation that is stressing the meta object protocol more than any other implementation, and so far that has been a very fruitful exercise, with Larry making many clarifications on the object system thanks to SMOP.
Important topics on SMOP
- SMOP doesn't recurse in the C stack, and it doesn't actually define a mandatory paradigm (stack-based or register-based). SMOP has a Polymorphic Eval, which allows you to switch from one interpreter loop to another using Continuation Passing Style. See SMOP Stackless.
- SMOP doesn't define an object system of its own. The only thing it defines is the concept of the SMOP Responder Interface, which then encapsulates whatever object system is used. This feature is fundamental to implementing the SMOP Native Types.
- SMOP is intended to bootstrap itself from the low level to the high level. This is achieved by the fact that everything in SMOP is an object. This way, even the low-level objects can be exposed to the high-level runtime. See SMOP OO Bootstrap.
- SMOP won't implement a parser of its own; it will use STD or whatever parser gets ported to its runtime first.
- In order to enable the bootstrap, the runtime has a set of SMOP Constant Identifiers that are available for the sub-language compilers to use.
- There are some special SMOP Values Not Subject to Garbage Collection.
- A new interpreter implementation, SMOP Mold, replaced SLIME.
- The "official" SMOP Perl 6 compiler is mildew - it lives in v6/mildew.
- Currently there exists an old Elf backend which targets SMOP - it lives in misc/elfish/elfX.
SMOP GSoC 2009
See the Old SMOP Changelog
<urn:uuid:9ef4d308-fa15-4196-86db-2db8b4c54358>
2.875
694
Knowledge Article
Software Dev.
53.614756
45
The process of accretion is important in the formation of planets, stars, and black holes; it is also believed to power some of the most energetic phenomena in the universe. In an accretion disc, the accretion rate is controlled by the outward transport of angular momentum. Collisional processes like friction or viscosity are typically too weak to account for the observed rates, and it is universally believed that astrophysical accretion discs are turbulent. However, the origin of this turbulence is not clear, since discs with velocity profiles close to Keplerian are stable to infinitesimal perturbations. In the early nineties it was realized that the stability picture changes dramatically if magnetic fields or nonlinear effects are present. In this talk I will describe how some of these issues can be discussed within the framework of the flow of a conducting fluid between coaxial rotating cylinders. I will also describe some of the experiments that are under way to study these flows, as well as the computational efforts to clarify the nature of the ensuing turbulent transport.
ANL Physics Division Colloquium Schedule
<urn:uuid:9bf53c84-d3fd-428b-bc4b-63892ad85de5>
2.625
215
Academic Writing
Science & Tech.
18.868453
46
Titan's Ethane Lake This artist concept shows a mirror-smooth lake on the surface of the smoggy moon Titan. Cassini scientists have concluded that at least one of the large lakes observed on Saturn's moon Titan contains liquid hydrocarbons, and have positively identified ethane. This result makes Titan the only place in our solar system beyond Earth known to have liquid on its surface. The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the mission for NASA's Science Mission Directorate, Washington, D.C. The Cassini orbiter was designed, developed and assembled at JPL. For more information about the Cassini-Huygens mission visit http://saturn.jpl.nasa.gov.
<urn:uuid:36c0c102-e78a-494d-9b3b-78d6003c8994>
3.34375
185
Knowledge Article
Science & Tech.
35.831515
47
Oct. 9, 1998 COLUMBIA, Mo.--Ducks, geese and bald eagles soaring over areas the size of small towns are envisioned when talking about federally protected wetlands, not areas that are maybe as big as a small swimming pool and apparently devoid of life. University of Missouri-Columbia Professor Ray Semlitsch is trying to change that view and explain the importance of smaller wetlands before they are managed out of existence. "Large wetlands are beautiful and need to be protected, but for some animal species such as frogs, toads and salamanders, it is small wetlands that support greater species diversity," said Semlitsch, who along with his graduate research assistant, Russ Bodie, recently published their research in Conservation Biology. "These smaller, temporary wetlands--because they are dry at certain times during the year--are much harder to appreciate than vast marsh areas. But without these smaller wetlands, it is very possible that much of the animal and plant life that make wetlands rich, productive habitats would not survive. We need to worry about the conservation of smaller wetlands as well as the larger ones." Small wetlands currently are defined as being less than 4 hectares, or about 8 to 9 acres. The majority of the nation's wetlands are much smaller than might be imagined, closer to 1 to 2 acres and sometimes as small as several square yards. These small wetlands may comprise the majority of wetlands in the United States and help support a vast diversity of wetland species. However, unlike the large wetlands, these smaller areas are not protected to the same extent. Recently, the Army Corps of Engineers, which manages wetlands of all sizes throughout the United States, drafted regulations that will change the way wetlands are managed in the future. They have put off any change in management regulations until April, but the MU researchers argue that the changes in the regulations could manage these smaller wetlands out of existence. "Right now we can't detect losses of small wetlands by satellite imagery, a technique used to assess environmental change," Bodie said. "We lose thousands of acres each year in wetlands and these smaller ones are not even taken into account. Yet, they play a vital role in the ecosystem and support a great variety of organisms." Research done by Semlitsch and Bodie has indicated that when some individuals of a species move between wetlands, this increases their chances of survival. By populating many different wetlands, various species thrive, even during drought years when some wetlands are dry. When smaller wetlands are destroyed, the chances of survival for many species' populations may decrease dramatically because distances between individual wetlands become longer, making movement between wetlands more difficult. These small wetland breeding sites for amphibians are especially critical in light of purported world-wide declines, Semlitsch said. Wetlands in general also have direct benefits to humans as they filter out chemicals and silt, buffer lands from flooding, and are a favorite of hunters and fishers. They also are very costly and difficult to develop for construction or other purposes. The above story is reprinted from materials provided by University of Missouri, Columbia.
<urn:uuid:33275736-ac37-49fe-a13a-d130e6ad29c6>
3.125
674
Truncated
Science & Tech.
33.419667
48
Nov. 27, 2009 Physicists from the Japanese-led multi-national T2K neutrino collaboration have just announced that over the weekend they detected the first neutrino events generated by their newly built neutrino beam at the J-PARC (Japan Proton Accelerator Research Complex) accelerator laboratory in Tokai, Japan. Protons from the 30-GeV Main Ring synchrotron were directed onto a carbon target, where their collisions produced charged particles called pions. These pions travelled through a helium-filled volume where they decayed to produce a beam of the elusive particles called neutrinos. These neutrinos then flew 200 metres through the earth to a sophisticated detector system capable of making detailed measurements of their energy, direction, and type. The data from the complex detector system is still being analysed, but the physicists have seen at least 3 neutrino events, in line with the expectation based on the current beam and detector performance. This detection therefore marks the beginning of the operational phase of the T2K experiment, a 474-physicist, 13-nation collaboration to measure new properties of the ghostly neutrino. Neutrinos interact only weakly with matter, and thus pass effortlessly through the earth (and mostly through the detectors!). Neutrinos exist in three types, called electron, muon, and tau; linked by particle interactions to their more familiar charged cousins like the electron. Measurements over the last few decades, notably by the Super Kamiokande and KamLAND neutrino experiments in western Japan, have shown that neutrinos possess the strange property of neutrino oscillations, whereby one type of neutrino will turn into another as they propagate through space. Neutrino oscillations, which require neutrinos to have mass and therefore were not allowed in our previous theoretical understanding of particle physics, probe new physical laws and are thus of great interest in the study of the fundamental constituents of matter. They may even be related to the mystery of why there is more matter than anti-matter in the universe, and thus are the focus of intense study worldwide. Precision measurements of neutrino oscillations can be made using artificial neutrino beams, as pioneered by the K2K neutrino experiment where neutrinos from the KEK laboratory were detected using the vast Super Kamiokande neutrino detector near Toyama. T2K is a more powerful and sophisticated version of the K2K experiment, with a more intense neutrino beam derived from the newly-built Main Ring synchrotron at the J-PARC accelerator laboratory. The beam was built by physicists from KEK in cooperation with other Japanese institutions and with assistance from the US, Canadian, UK and French T2K institutes. Prof. Chang Kee Jung of Stony Brook University, Stony Brook, New York, leader of the US T2K project, said "I am somewhat stunned by this seemingly effortless achievement considering the complexity of the machinery, the operation and international nature of the project. This is a result of a strong support from the Japanese government for basic science, which I hope will continue, and hard work and ingenuity of all involved. I am excited about more ground breaking findings from this experiment in the near future." The beam is aimed once again at Super-Kamiokande, which has been upgraded for this experiment with new electronics and software. 
Before the neutrinos leave the J-PARC facility their properties are determined by a sophisticated "near" detector, partly based on a huge magnet donated from CERN where it had earlier been used for neutrino experiments (and for the UA1 experiment, which won the Nobel Prize for the discovery of the W and Z bosons which are the basis of neutrino interactions), and it is this detector which caught the first events. The first neutrino events were detected in a specialized detector, called the INGRID, whose purpose is to determine the neutrino beam's direction and profile. Further tests of the T2K neutrino beam are scheduled for December, and the experiment plans to begin production running in mid-January. Another major milestone should be observed soon after -- the first observation of a neutrino event from the T2K beam in the Super-Kamiokande experiment. Running will continue until the summer, by which time the experiment hopes to have made the most sensitive search yet achieved for a so-far unobserved critical neutrino oscillation mode dominated by oscillations between all three types of neutrinos. In the coming years this search will be improved even further, with the hope that the 3-mode oscillation will be observed, allowing measurements to begin comparing the oscillations of neutrinos and anti-neutrinos, probing the physics of matter/anti-matter asymmetry in the neutrino sector.
<urn:uuid:73f94bf7-72a9-431b-90ac-37db05302858>
3.34375
1,033
News Article
Science & Tech.
22.25657
49
June 22, 2010 Millions of years before humans began battling it out over beachfront property, a similar phenomenon was unfolding in a diverse group of island lizards. Often mistaken for chameleons or geckos, Anolis lizards fight fiercely for resources, responding to rivals by doing push-ups and puffing out their throat pouches. But anoles also compete in ways that shape their bodies over evolutionary time, says a new study in the journal Evolution. Anolis lizards colonized the Caribbean from South America some 40 million years ago and quickly evolved a wide range of shapes and sizes. "When anoles first arrived in the islands there were no other lizards quite like them, so there was abundant opportunity to diversify," said author Luke Mahler of Harvard University. Free from rivals in their new island homes, Anolis lizards evolved differences in leg length, body size, and other characteristics as they adapted to different habitats. Today, the islands of Cuba, Hispaniola, Jamaica and Puerto Rico -- collectively known as the Greater Antilles -- are home to more than 100 Anolis species, ranging from lanky lizards that perch in bushes, to stocky, long-legged lizards that live on tree trunks, to foot-long 'giants' that roam the upper branches of trees. "Each body type is specialized for using different parts of a tree or bush," said Mahler. Alongside researchers from the University of Rochester, Harvard University, and the National Evolutionary Synthesis Center, Mahler wanted to understand how and when this wide range of shapes and sizes came to be. To do that, the team used DNA and body measurements from species living today to reconstruct how they evolved in the past. In addition to measuring the head, limbs, and tail of over a thousand museum specimens representing nearly every Anolis species in the Greater Antilles -- including several Cuban species that were previously inaccessible to North American scientists -- they also used the Anolis family tree to infer what species lived on which islands, and when. By doing so, they discovered that the widest variety of anole shapes and sizes arose among the evolutionary early-birds. Then as the number of anole species on each island increased, the range of new body types began to fizzle. Late-comers in lizard evolution underwent finer and finer tinkering as time went on. As species proliferated on each island, their descendants were forced to partition the remaining real estate in increasingly subtle ways, said co-author Liam Revell of the National Evolutionary Synthesis Center in Durham, NC. "Over time there were fewer distinct niches available on each island," said Revell. "Ancient evolutionary changes in body proportions were large, but more recent evolutionary changes have been more subtle." The researchers saw the same trend on each island. "The islands are like Petri dishes where species diversification unfolded in similar ways," said Mahler. "The more species there were, the more they put the brakes on body evolution." The study sheds new light on how biodiversity comes to be. "We're not just looking at species number, we're also looking at how the shape of life changes over time," said Mahler. The team's findings are published in the journal Evolution. Richard Glor of the University of Rochester and Jonathan Losos of Harvard University were also authors on this study. - D. Luke Mahler, Liam J. Revell, Richard E. Glor, Jonathan B. Losos.
Ecological opportunity and the rate of morphological evolution in the diversification of Greater Antillean anoles. Evolution, 2010; DOI: 10.1111/j.1558-5646.2010.01026.x
<urn:uuid:d0b67315-27de-4788-9b3c-46132eef151f>
3.640625
791
News Article
Science & Tech.
36.878159
50
The word vivisection was first coined in the 1800s to denote the experimental dissection of live animals - or humans. It was created by activists who opposed the practice of experimenting on animals. The Roman physician Celsus claimed that in Alexandria in the 3rd century BCE physicians had performed vivisections on sentenced criminals, but vivisection on humans was generally outlawed. Experimenters frequently used living animals. Most early modern researchers considered this practice acceptable, believing that animals felt no pain. Even those who opposed vivisection in the early modern period did not usually do so out of consideration for the animals, but because they thought that this practice would coarsen the experimenter, or because they were concerned that animals stressed under experimental conditions did not represent the normal state of the body. Prompted by the rise of experimental physiology and the increasing use of animals, an anti-vivisection movement started in the 1860s. Its driving force, the British journalist Frances Power Cobbe (1822-1904), founded the British Victoria Street Society in 1875, which gave rise to the British government's Cruelty to Animals Act of 1876. This law regulated the use of live animals for experimental purposes.
<urn:uuid:302a84f1-d0b1-4e14-8e71-b2ded9ee5190>
3.71875
392
Knowledge Article
Science & Tech.
36.06538
51
by I. Peterson Unlike an ordinary, incandescent bulb, a laser produces light of a single wavelength. Moreover, the emitted light waves are coherent, meaning that all of the energy peaks and troughs are precisely in step. Now, a team at the Massachusetts Institute of Technology has demonstrated experimentally that a cloud consisting of millions of atoms can also be made coherent. Instead of flying about and colliding randomly, the atoms display coordinated behavior, acting as if the entire assemblage were a single entity. According to quantum mechanics, atoms can behave like waves. Thus, two overlapping clouds made up of atoms in coherent states should produce a zebra-striped interference pattern of dark and light fringes, just like those generated when two beams of ordinary laser light overlap. By detecting such a pattern, the researchers proved that the clouds' atoms are coherent and constitute an "atom laser," says physicist Wolfgang Ketterle, who heads the MIT group. These matter waves, in principle, can be focused just like light. Ketterle and his coworkers describe their observations in the Jan. 31 Science. The demonstration of coherence involving large numbers of atoms is the latest step in a series of studies of a remarkable state of matter called a Bose-Einstein condensate. Chilled to temperatures barely above absolute zero, theory predicted, the atoms would collectively enter the same quantum state and behave like a single unit, or superparticle, with a specific wavelength. First created in the laboratory in 1995 by Eric A. Cornell and his collaborators at the University of Colorado and the National Institute of Standards and Technology, both in Boulder, Bose-Einstein condensates have been the subject of intense investigation ever since (SN: 7/15/95, p. 36; 5/25/96, p. 327). At MIT, Ketterle and his colleagues cool sodium atoms to temperatures below 2 microkelvins. The frigid atoms are then confined in a special magnetic trap inside a vacuum chamber. To determine whether the atoms in the resulting condensate are indeed as coherent as photons in a laser beam, the researchers developed a novel method of extracting a clump of atoms from the trap. In effect, they manipulate the magnetic states of the atoms to expel an adjustable fraction of the original cloud; under the influence of gravity, the released clump falls. The method can produce a sequence of descending clumps, with each containing 100,000 to several million coherent atoms. The apparatus acts like a dripping faucet, Ketterle says. He and his colleagues describe the technique in the Jan. 27 Physical Review Letters. To demonstrate interference, the MIT group created a double magnetic trap so that two pulses of coherent atoms could be released at the same time. As the two clumps fell, they started to spread and overlap. The researchers could then observe interference between the atomic waves of the droplets. "The signal was almost too good to be true," Ketterle says. "We saw a high-contrast, very regular pattern." "It's a beautiful result," Cornell remarks. "This work really shows that Bose-Einstein condensation is an atom laser." From the pattern, the MIT researchers deduced that the condensate of sodium atoms has a wavelength of about 30 micrometers, considerably longer than the 0.04-nanometer wavelength typical of individual atoms at room temperature. 
Ketterle and his colleagues are already planning several improvements to their primitive atom laser, including getting more atoms into the emitted pulses and going from pulses to a continuous beam. Practical use of an atom laser for improving the precision of atomic clocks and for manipulating atoms is still distant, however, Cornell notes.
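The wavelength comparison above follows from the thermal de Broglie wavelength, lambda = h / sqrt(2 * pi * m * kB * T), which grows as atoms are cooled. The back-of-the-envelope calculation below is a sketch: conventions for the numerical prefactor vary by small factors, and the 30-micrometer figure measured at MIT reflects the size of the coherent condensate itself rather than this formula alone.

```python
import math

H = 6.626e-34            # Planck constant, J*s
KB = 1.381e-23           # Boltzmann constant, J/K
M_NA = 23 * 1.661e-27    # approximate mass of a sodium atom, kg

def thermal_de_broglie(temp_kelvin):
    """Thermal de Broglie wavelength h / sqrt(2*pi*m*kB*T), in meters."""
    return H / math.sqrt(2.0 * math.pi * M_NA * KB * temp_kelvin)

for temp in (300.0, 2e-6):   # room temperature vs. the 2-microkelvin trap
    print(f"T = {temp:g} K -> wavelength = {thermal_de_broglie(temp):.2e} m")
```

At 300 K this gives roughly 2e-11 m, the same sub-0.1-nanometer scale quoted above; at 2 microkelvins the wavelength grows by a factor of about 12,000, into the sub-micrometer range where the waves of neighboring atoms overlap and condensation into a single quantum state becomes possible.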
<urn:uuid:5a667bf7-c324-483a-8231-ce8448d754f3>
4
769
News Article
Science & Tech.
35.487766
52
The Weekly Newsmagazine of Science Volume 155, Number 19 (May 8, 1999) By J. Raloff Canadian scientists have identified the likely culprit behind some historic, regional declines in Atlantic salmon. The researchers find that a near-ubiquitous water pollutant can render young, migrating fish unable to survive a life at sea. Heavy, late-spring spraying of forests with a pesticide laced with nonylphenol during the 1970s and '80s was the clue that led the biologists to unmask that chemical's role in the transitory decline of salmon in eastern Canada. Though these sprays have ended, concentrations of nonylphenols in forest runoff then were comparable to those in the effluent of some pulp mills, industrial facilities, and sewage-treatment plants today. Downstream of such areas, the scientists argue, salmon and other migratory fish may still be at risk. Nonylphenols are surfactants used in products from pesticides to dishwashing detergents, cosmetics, plastics, and spermicides. Because waste-treatment plants don't remove nonylphenols well, these chemicals can build up in downstream waters (SN: 1/8/94, p. 24). When British studies linked ambient nonylphenol pollution to reproductive problems in fish (SN: 2/26/94, p. 142), Wayne L. Fairchild of Canada's Department of Fisheries and Oceans in Moncton, New Brunswick, became concerned. He recalled that an insecticide used on local forests for more than a decade had contained large amounts of nonylphenols. They helped aminocarb, the oily active ingredient in Matacil 1.8D, dissolve in water for easier spraying. Runoff of the pesticide during rains loaded the spawning and nursery waters of Atlantic salmon with nonylphenols. Moreover, this aerial spraying had tended to coincide with the final stages of smoltification - the fish's transformation for life at sea. To probe for effects of forest spraying, Fairchild and his colleagues surveyed more than a decade of river-by-river data on fish. They overlaid these numbers with archival data on local aerial spraying with Matacil 1.8D or either of two nonylphenol-free pesticides. One contained the same active ingredient, aminocarb, as Matacil 1.8D does. Most of the lowest adult salmon counts between 1973 and 1990 occurred in rivers where smolts would earlier have encountered runoff of Matacil 1.8D, Fairchild's group found. In 9 of 19 cases of Matacil 1.8D spraying for which they had good data, salmon returns were lower than in the 5 years before and the 5 years after the spraying, they report in the May Environmental Health Perspectives. No population declines were associated with the other two pesticides. The researchers have now exposed smolts in the laboratory to various nonylphenol concentrations, including some typical of Canadian rivers during the 1970s. The fish remained healthy until they entered salt water, at which point they exhibited a failure-to-thrive syndrome. "They looked like they were starving," Fairchild told Science News. Within 2 months, he notes, 20 to 30 percent died. Untreated smolts adjusted normally to salt water and fattened up. Steffen S. Madsen, a fish ecophysiologist at Odense University in Denmark, is not surprised, based on his own experiments. To move from fresh water to the sea, a fish must undergo major hormonal changes that adapt it for pumping out excess salt. A female preparing to spawn in fresh water must undergo the opposite change. 
Since estrogen triggers her adaptation, Madsen and a colleague decided to test how smolts would respond to estrogen or nonylphenol, an estrogen mimic. In the lab, they periodically injected salmon smolts with estrogen or nonylphenol over 30 days, and at various points placed them in seawater for 24 hours. Salt in the fish's blood skyrocketed during the day-long trials, unlike salt in untreated smolts. "Our preliminary evidence indicates that natural and environmental estrogens screw up the pituitary," Madsen says. The gland responds by making prolactin, a hormone that drives freshwater adaptation. Judging by Fairchild's data, Madsen now suspects that any fish that migrates between fresh and salt water may be similarly vulnerable to high concentrations of pollutants that mimic estrogen. From Science News, Vol. 155, No. 19, May 8, 1999, p. 293. Copyright © 1999, Science Service.
<urn:uuid:3ac50003-34df-4326-9ff5-f4278ff44a0b>
3.109375
978
Truncated
Science & Tech.
47.450967
53
Gaia theory is a class of scientific models of the geo-biosphere in which life as a whole fosters and maintains suitable conditions for itself by helping to create an environment on Earth suitable for its continuity. The first such theory was created by the atmospheric scientist and chemist Sir James Lovelock, who developed his hypotheses in the 1960s before formally publishing the concept, first in the New Scientist (February 13, 1975) and then in the 1979 book "Gaia: A New Look at Life on Earth". He hypothesized that the living matter of the planet functioned like a single organism and named this self-regulating living system after the Greek goddess Gaia, at the suggestion of novelist William Golding. Gaia "theories" have non-technical predecessors in the ideas of several cultures. Today, "Gaia theory" is sometimes used among non-scientists to refer to hypotheses of a self-regulating Earth that are non-technical but take inspiration from scientific models. Among some scientists, "Gaia" carries connotations of a lack of scientific rigor and of quasi-mystical thinking about the planet Earth, and Lovelock's hypothesis was therefore initially received with much antagonism by much of the scientific community. No controversy exists, however, that life and the physical environment significantly influence one another. Gaia theory today is a spectrum of hypotheses, ranging from the undeniable (Weak Gaia) to the radical (Strong Gaia). At one end of this spectrum is the undeniable statement that the organisms on the Earth have radically altered its composition. A stronger position is that the Earth's biosphere effectively acts as if it is a self-organizing system, which works in such a way as to keep its systems in some kind of meta-equilibrium that is broadly conducive to life. The history of evolution, ecology and climate shows that the exact characteristics of this equilibrium have intermittently undergone rapid changes, which are believed to have caused extinctions and felled civilisations. Biologists and earth scientists usually view the factors that stabilize the characteristics of a period as an undirected emergent property or entelechy of the system; as each individual species pursues its own self-interest, for example, their combined actions tend to have counterbalancing effects on environmental change. Opponents of this view sometimes point to examples of life's actions that have resulted in dramatic change rather than stable equilibrium, such as the conversion of the Earth's atmosphere from a reducing environment to an oxygen-rich one. However, proponents will point out that those atmospheric composition changes created an environment even more suitable to life. Some go a step further and hypothesize that all lifeforms are part of a single living planetary being called Gaia. In this view, the atmosphere, the seas and the terrestrial crust would be results of interventions carried out by Gaia through the coevolving diversity of living organisms. While it is arguable that the Earth as a unit does not match the generally accepted biological criteria for life itself (Gaia has not yet reproduced, for instance), many scientists would be comfortable characterising the Earth as a single "system". The most extreme form of Gaia theory is that the entire Earth is a single unified organism; in this view the Earth's biosphere is consciously manipulating the climate in order to make conditions more conducive to life. 
Scientists contend that there is no evidence at all to support this last point of view, and it has come about because many people do not understand the concept of homeostasis. Many non-scientists instinctively see homeostasis as an activity that requires conscious control, although this is not so. Much more speculative versions of Gaia theory, including all versions in which it is held that the Earth is actually conscious or part of some universe-wide evolution, are currently held to be outside the bounds of science. This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Gaia".
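A standard way to show that this kind of regulation requires no conscious control is Watson and Lovelock's Daisyworld model: black daisies warm themselves, white daisies cool themselves, each species simply grows fastest near its preferred temperature, and out of that purely local behavior the planet's temperature ends up stabilized against a brightening sun. The sketch below is a simplified discrete-time version; the constants follow common textbook presentations of the model, and the integration scheme is a rough approximation chosen for brevity, not a reproduction of the published equations.

```python
SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 917.0          # solar flux on Daisyworld, W m^-2
Q = 2.06e9          # heat-transfer parameter between surface patches, K^4
ALB = {"bare": 0.5, "white": 0.75, "black": 0.25}
T_OPT, K_GROW, DEATH = 295.5, 0.003265, 0.3

def beta(t_local):
    """Parabolic growth response: peaks at T_OPT, zero below ~278 K or above ~313 K."""
    return max(0.0, 1.0 - K_GROW * (t_local - T_OPT) ** 2)

def settle(lum, white, black, iters=2000, dt=0.05):
    """Iterate daisy cover at fixed solar luminosity until populations settle."""
    t_planet = 0.0
    for _ in range(iters):
        bare = max(0.0, 1.0 - white - black)
        a_planet = (bare * ALB["bare"] + white * ALB["white"]
                    + black * ALB["black"])
        t4 = S0 * lum * (1.0 - a_planet) / SIGMA             # planetary T^4
        t_planet = t4 ** 0.25
        t_white = (Q * (a_planet - ALB["white"]) + t4) ** 0.25   # runs cooler
        t_black = (Q * (a_planet - ALB["black"]) + t4) ** 0.25   # runs warmer
        white += dt * white * (bare * beta(t_white) - DEATH)
        black += dt * black * (bare * beta(t_black) - DEATH)
        white, black = max(white, 0.001), max(black, 0.001)      # tiny seed stock
    return white, black, t_planet

white = black = 0.01
for lum in (0.8, 0.9, 1.0, 1.1, 1.2):    # a slowly brightening sun
    white, black, temp = settle(lum, white, black)
    print(f"L = {lum:.1f}  white = {white:.2f}  black = {black:.2f}  T = {temp:.1f} K")
```

As the sun brightens, cover shifts from mostly black daisies toward mostly white ones, and the planetary temperature stays near the daisies' optimum over a wide range of luminosities, with no daisy "knowing" anything: exactly the unconscious homeostasis at issue.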
<urn:uuid:7a3fa081-9c60-42a7-8ec4-1d8c386b4009>
3.4375
794
Knowledge Article
Science & Tech.
23.657602
54
Giant Water Scavenger Beetle. Geographical Range: North America. Scientific Name: Hydrophilus triangularis. Conservation Status: Not listed by IUCN. The name says it all. This large beetle lives in water, where it scavenges vegetation and insect parts. The insect can store a supply of air within its silvery belly, much like a deep-sea diver stores air in a tank.
<urn:uuid:469863a4-9f80-47c2-ad04-ee7f0adecfd5>
3.078125
91
Knowledge Article
Science & Tech.
34.880113
55
WAKING the GIANT Bill McGuire While we transmit more than two million tweets a day and nearly one hundred trillion emails each year, we're also emitting record amounts of carbon dioxide (CO2). Bill McGuire, professor of geophysical and climate hazards at University College London, expects our continued rise in greenhouse gas emissions to awaken a slumbering giant: the Earth's crust. In Waking the Giant: How a Changing Climate Triggers Earthquakes, Tsunamis and Volcanoes (Oxford University Press), he explains that when the Earth's crust (or geosphere) is disrupted by rising temperatures and a CO2-rich atmosphere, natural disasters strike more frequently and with catastrophic force. Applying a "straightforward presentation of what we know about how climate and the geosphere interact," the book links previous warming periods 20,000 to 5,000 years ago with a greater abundance of tsunamis, landslides, seismic activity and volcanic eruptions. McGuire urgently warns of the "tempestuous future of our own making" as we progressively inch toward a similar climate. Despite his scientific testimony to Congress stating "what is going on in the Arctic now is the biggest and fastest thing that Nature has ever done" and the "incontrovertible" data that the Earth's climate draws a lively response from the geosphere, brutal weather events are still not widely seen as being connected to human influence. Is our global population sleepwalking toward imminent destruction, he asks, until "it is obvious, even to the most entrenched denier, that our climate is being transformed?"
<urn:uuid:46ed79e4-97dd-492f-bf29-99304e01f4ee>
3.046875
330
Nonfiction Writing
Science & Tech.
28.729356
56