id: stringlengths (30-34)
text: stringlengths (0-71.3k)
industry_type: stringclasses (1 value)
2016-40/3983/en_head.json.gz/5517
Dr. Bob Bowman & Alex Jones: DARPA’s Secret Little Air Force in Space (Aug 19, 2011). Alex welcomes back to the show Bob Bowman, a former Director of Advanced Space Programs Development for the U.S. Air Force in the Ford and Carter administrations. Mr. Bowman and Alex will talk about the supposed loss of the Defense Advanced Research Projects Agency’s hypersonic test vehicle (HTV-2) on its second and final flight. The government claims the experimental vehicle was lost in the ocean.
科技
2016-40/3983/en_head.json.gz/5580
Shadow Between Moons. The shadow of the moon Mimas is cast on Saturn's outer A ring in this image, which also shows a couple of moons and a collection of stars. Atlas (30 kilometers, or 19 miles across) can be seen in the top right of the image, between the A ring and thin F ring. Pan (28 kilometers, or 17 miles across) can be seen orbiting in the Encke Gap in the lower left of the image. Mimas is not shown. The bright object between the A ring and F ring on the left of the image is a star. Other smaller, bright specks in the image are also background stars. The novel illumination geometry created as Saturn approaches its August 2009 equinox allows moons orbiting in or near the plane of Saturn's equatorial rings to cast shadows onto the rings. These scenes are possible only during the few months before and after Saturn's equinox, which occurs only once in about 15 Earth years. To learn more about this special time and to see movies of moons' shadows moving across the rings, see PIA11651 and PIA11660. This view looks toward the unilluminated side of the rings from about 59 degrees above the ringplane. The image was taken in visible light with the Cassini spacecraft narrow-angle camera on May 30, 2009. The view was obtained at a distance of approximately 1.6 million kilometers (994,000 miles) from Saturn and at a Sun-Saturn-spacecraft, or phase, angle of 76 degrees. Image scale is 9 kilometers (6 miles) per pixel. The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the mission for NASA's Science Mission Directorate, Washington, D.C. The Cassini orbiter and its two onboard cameras were designed, developed and assembled at JPL. The imaging operations center is based at the Space Science Institute in Boulder, Colo. For more information about the Cassini-Huygens mission visit http://saturn.jpl.nasa.gov/. The Cassini imaging team homepage is at http://ciclops.org. Mission: Cassini-Huygens. Target: Pan. Spacecraft: Cassini Orbiter. Instrument: Imaging Science Subsystem - Narrow Angle. Views: 3,850. Image credit: NASA/JPL/Space Science Institute
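The quoted image scale is essentially the spacecraft's range multiplied by the camera's per-pixel viewing angle. Below is a minimal sketch of that arithmetic in Python; the per-pixel angle is back-calculated from the caption's own range and scale figures rather than taken from Cassini camera documentation, so treat it as an inferred, illustrative value.

```python
# Back-of-the-envelope check of the caption's image scale.
# Small-angle approximation: scale ≈ range × (angular size of one pixel).
# The pixel angle below is inferred from the caption's own numbers,
# not taken from official Cassini ISS specifications.

range_km = 1.6e6        # distance from Saturn quoted in the caption
scale_km_per_px = 9.0   # image scale quoted in the caption

pixel_angle_rad = scale_km_per_px / range_km
print(f"implied pixel angle: {pixel_angle_rad * 1e6:.1f} microradians")

# Reusing that angle for a different range, e.g. the 1.4-million-kilometer
# F-ring observation in the next record, predicts that image's scale.
other_range_km = 1.4e6
print(f"predicted scale at {other_range_km:.1e} km: "
      f"{other_range_km * pixel_angle_rad:.1f} km per pixel")
```

The predicted value, roughly 8 kilometers per pixel, matches the scale quoted in the following caption, which is a useful sanity check on both numbers.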
科技
2016-40/3983/en_head.json.gz/5581
The Subtle Jet. A single jet feature appears to leap from the F ring of Saturn in this image from the Cassini spacecraft. A closer inspection suggests that in reality a few smaller jets make up this feature, pointing to a slightly more complex origin process. These "jets," like much of the dynamic and changing F ring, are believed by scientists to be caused by the ring's particles interacting with small moons orbiting nearby. This view looks toward the unilluminated side of the rings from about 45 degrees below the ringplane. The image was taken in visible light with the Cassini spacecraft narrow-angle camera on June 20, 2013. The view was obtained at a distance of approximately 870,000 miles (1.4 million kilometers) from Saturn and at a Sun-Saturn-spacecraft, or phase, angle of 77 degrees. Image scale is 5 miles (8 kilometers) per pixel. The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the mission for NASA's Science Mission Directorate, Washington, D.C. The Cassini orbiter and its two onboard cameras were designed, developed and assembled at JPL. The imaging operations center is based at the Space Science Institute in Boulder, Colo. Mission: Cassini-Huygens. Target: S Rings. Spacecraft: Cassini Orbiter. Instrument: Imaging Science Subsystem - Narrow Angle. Views: 1,764. Image credit: NASA/JPL-Caltech/Space Science Institute
科技
2016-40/3983/en_head.json.gz/5694
Limerick company Teckro raises $6m in funding round. Reporter: Alan Owens, 8 Aug 2016. [Photo: Staff at work in Teckro's offices in Limerick] LIMERICK company Teckro has announced that it has raised $6m via a Silicon Valley-based venture capital fund. The life sciences technology company, which operates out of the Bank Building in the city centre, secured the investment in a funding round led by the San Francisco-based Founders Fund, which has previously invested in Facebook, Airbnb and other companies. It has developed specialist software for clinical drug trials. This is the fund’s first investment in an Irish-based company. It brings the total amount of funding raised by Teckro to date to $7.8m. Teckro, which has developed a new software platform to streamline the process of clinical trials, has already seen its technology adopted by a number of leading pharmaceutical companies around the world. Co-founded by brothers Gary and Nigel Hughes along with Jacek Skryzpiec, the Limerick tech company uses information retrieval and machine learning technologies to improve the speed and accuracy of trial conduct. Gary Hughes said the funding, which was supported by Enterprise Ireland, “validates our unique approach, and the growth potential of the company”. Teckro is currently recruiting staff for its Limerick office, which is the company’s headquarters. “We set out to make a difference, to make it easier for drug developers, research staff and patients to connect, and to simplify every interaction in the conduct of a clinical trial,” he explained. “The funding will be used to expand our product development team as we continue to digitise clinical research services. “Fundamentally, the clinical trial landscape has changed and physicians struggle with the current method of conducting clinical trials,” he added. Scott Nolan, Partner at Founders Fund, which was also an early backer of companies such as SpaceX and Palantir, said the fund was “really impressed by the Teckro team and their vision for clinical research. “There’s a clear opportunity in modernizing how clinical trials are run, and Teckro’s mobile-first solution is informed by a firsthand understanding of the challenges involved in conducting global clinical trials,” he said. Teckro bought and refurbished the landmark Bank Bar building in the city to develop a ‘world-class’ company, its founders previously told the Limerick Leader. The ever-expanding firm believes it has created a unique, if not revolutionary, office environment in the city, one intended to encourage and not limit interactions between staff. “Our aim is to be very ambitious with the company. We feel we know how to create a world-class, high-performance culture where you get really good people together, you work on big problems, build a lot of momentum around the brand and the idea and you can make it successful,” said Gary. “The scale of that success is the unknown factor. We are very ambitious, we would like to create a world-class company based out of Limerick.”
科技
2016-40/3983/en_head.json.gz/5715
TGS: Namco Bandai announces Eternal Sonata (updated 04:50 pm EDT, Thu September 21, 2006). Namco Bandai has announced Eternal Sonata, a new RPG for the Xbox 360. The game's plot is set in the reveries of a dying composer, who sees in his visions the story of a young boy trying to save a girl from a doomed existence. The outcome of the boy's struggle will determine the fate of the world. In terms of gameplay, Namco is touting features such as "a unique monster morphing system" and "a hybrid of a turn-based and action battle system with a special attack feature dependent on light and shadow." The game will ship to North America sometime in 2007.
科技
2016-40/3983/en_head.json.gz/5717
Apple WWDC statistics claim over 800M iOS devices sold (updated 03:30 pm EDT, Mon June 2, 2014). Mavericks is installed on 51 percent of Mac desktops, with 40M copies installed. The number of developers registered with Apple has increased 47 percent since last year to 9 million, according to one of a number of statistics brought up in the Worldwide Developers Conference keynote address. The conference itself has grown over the 25 years it has run, with over 1,000 Apple engineers in attendance, and the youngest developer in the audience is 13 years old. The sale of Macs beat the industry trend by a significant margin, with the PC industry as a whole declining 5 percent year-on-year and Macs increasing by 12 percent, bringing the total Mac install base to 80 million. Over 40 million copies of Mavericks have been installed, with Tim Cook claiming it to be the fastest adoption of any PC operating system. User adoption rates of Mavericks are at 51 percent, dwarfing Windows 8 and the 14 percent share of the entire Windows install base it currently has, despite Windows 8 shipping a year before Mavericks. OS X is not the only aspect of Apple's empire that has enjoyed high sales. Sales of iOS devices have now surpassed 800 million, consisting of 500 million iPhones, 200 million iPads, and more than 100 million units of the iPod touch, with 98 percent of Fortune 500 companies also said to use iOS. Over 130 million iOS device buyers in the last 12 months alone were entirely new to Apple, with a large contingent being switchers from Android. It is claimed there are large numbers of smartphone owners in China switching from Android to iOS. Customer satisfaction for iOS 7 lies at 97 percent, with 89 percent of the entire iOS install base running the latest version. By comparison, only 9 percent of Android users are using the latest iteration, KitKat, with more than a third running on versions of Android released four years ago. "They can't get security updates, which is particularly important for these users because Android dominates the mobile malware market," claimed Cook, accompanied by a chart showing a 99-percent share of malware for Android. The App Store has grown to offer over 1.2 million apps to iOS users, with 300 million visitors every week, and over 75 billion apps downloaded since the store opened.
科技
2016-40/3983/en_head.json.gz/5765
Manhunt 2 ban erodes civil liberties. Tuesday, July 3rd 2007 at 12:47PM BST. I find myself in a rather unique position as I’ve played Manhunt 2 and I really don’t like it that much. However, having mulled over this now for a while, I vehemently disagree with the BBFC’s decision to refuse certification. The reason I don’t like the game so much is not because of the violence, but because I’m not really so much a fan of the horror genre. I rarely watch horror movies and the brief (incredibly gruesome) clips I’ve seen of Saw III and Hostel II were enough for me. I have less inclination to see those movies than I do to play through Manhunt 2. However, I strongly believe we need to communicate to the games industry that we should be rallying round and supporting Rockstar in any attempts the company might make to have the BBFC’s decision overturned by the Video Appeals Committee. I also think that ELSPA’s move to take sides and agree with the BBFC’s position was, while not very surprising, deeply misjudged. Manhunt 2 should clearly have an 18-rating and not be sold to minors. Most adult gamers and people in the games industry I’ve spoken to are pretty much agreed on this. Most, in private, are also pretty shocked by the erosion of civil liberties that the BBFC (and now in the US, the ESRB) effectively banning the game represents. However, many, for whatever reasons, are less happy to voice these opinions in public. Here is the bottom line. I have played Manhunt 2 and I don’t really like it. I won’t be recommending it to friends (and certainly not to younger family members). But I can see that it works as a game and that fans of the horror genre will get some enjoyment out of it. As responsible adult gamers and as an industry we need to work with and encourage ELSPA, PEGI and the BBFC to better educate parents (and retailers) about age-ratings and to improve the ways in which they enforce and police sales of 18-rated games in particular. So let’s help ELSPA prove to renegade retailers that selling 18-rated games to minors is against the law. Clearly no adult gamer in their right mind wants to see a child or a young teenager playing Manhunt 2. Just as no adult horror movie fan would want to see a child watching Saw III or Hostel II. If I found out that somebody had sold Manhunt 2 to my 11-year-old nephew (who, of course, desperately wants to play it now after all the recent adverse publicity) then I would be tempted to smash the shop owner’s face in. And I’m a pacifist. Even though I’ve played Manhunt 2. Perhaps my new-found anger is a result of me being exposed to this morally destructive game. I doubt it. It’s more to do with our industry’s apparent refusal to recognise that the banning of Manhunt 2 sets a dangerous precedent for our future freedoms as gamers and indeed as game-creators. “I may disagree with what you say, but I’ll defend to the death your right to say it” goes the popular quote. I urge everybody to log on to the Prime Minister’s website and sign the online petition. It sums up the argument neatly: “The BBFC have recently refused to rate the videogame Manhunt 2. Adults in this country will never be allowed to play this game. Adults should be allowed to make their own decisions with regard to what video games they want to play.” Sign up here.
科技
2016-40/3983/en_head.json.gz/5838
This $1 Billion Startup Handles 5% Of All Web Traffic And Is Ready To Take On Cisco Over the last three years, CloudFlare has grown 450% annually, and is currently adding about 5,000 new clients a day, Matthew Prince, its programmer/lawyer-turned-founder, tells us It is now handling the equivalent of roughly 5% of the entire web's traffic through its servers and, just as we predicted in 2012, CloudFlare is indeed a monster company in the making. Prince is on a mission to ‘build a better Internet.’ To do so, CloudFlare’s technology serves as what many call the "digital bouncer" for its 2 million-plus client websites. That means it fights off malicious hacker attacks. But CloudFlare is bigger than just a cyber-security firm. It also improves a website's performance by offering classic networking services like routing and switching (helping computers connect over the Internet), load balancing (making sure computer servers don't get overloaded), and performance acceleration (helping websites run faster) among other things. It is essentially creating a cloud service that parallels the features of many of Cisco’s hardware products. Prince wrote his college thesis on "Why the Internet was a fad" (which he now calls "embarrassing") but his experience working on the Internet from its early days made him realize the "incredibly disruptive" potential of it. Now he wants to help the Internet live up to its promise - a place he calls, “where anyone, anywhere can publish information and get access to that information.” We had a chance to catch up with Prince and ask more about his company's explosive growth. The below interview has been edited for clarity. Business Insider: Why is your company growing so fast? Matthew Prince: The honest answer is we don’t entirely know. We have done almost no marketing, we have a nascent sales team, but we have a large client base from Fortune 50 financial services to national governments across the world. Just this week, we added Reddit, the 17th largest site in the U.S. If we added up all the page views of our customers’ sites, then we would have over 400 billion page views a month. BI: What's drawing people to your service? MP: We made resources that were previously only used by big companies like Google available to everyone online. The initial interest in CloudFlare is for security, but that’s only for about 25% of our customers. Another 25% want to get more out of their infrastructure, and the rest of our users sign up because they want access to our analytics and other ancillary services. BI: For those who don’t fully understand your service, what’s a good comparison? MP: If you asked, “What company are you disrupting?” and if you made me just pick one, I think it would be Cisco. If you look at the entire Cisco line – routing and switching, load balancing, security, DDoS mitigation, performance acceleration, all of these functionalities – that’s what CloudFlare is doing. But instead of selling you a box to do that, we’re providing you a service, which is extremely easy to provision and deploy. It’s the same as Amazon Web Services taking, for example, what HP used to sell you as a box and deploying it as a cloud service. BI: I’m still confused. So your service basically builds a cloud firewall in front of your clients’ websites and protects it from attacks? MP: That’s actually only for security, which is a meaningful part of the business. But you first have to understand how the current Internet business is changing. 
There are three core tiers to any Internet infrastructure. The base at the bottom is the store and compute level. In the past, HP, Dell, EMC, IBM and thousands of different companies would sell you a box and you would store your data there. Now, we’ve got AWS and god willing someone else, but we’re not going to have thousands of different vendors anymore. It’s going to be a relatively limited number of vendors dealing with enormous amounts of data. The next level above that is application. Once upon a time, there were really three companies that controlled most of the software application revenue in this space: SAP, Oracle, and Microsoft. Great fortunes were made for these companies. They had these bundles of applications that were used to perform a whole bunch of different functions. The enterprise would buy the Oracle suite and then use its database, CRM system, and accounting features in a single package. And it took forever to employ all of these applications, but you kind of got locked in to whatever bundle you bought. What’s happening now is that these bundles are getting unbundled to smaller parts. So you got Salesforce, which is doing CRM, Box for collaboration, NetSuite for financial reporting, and a million others. So instead of a really small universe of companies that control the applications, now it’s getting blown up to thousands and thousands of companies specializing in particular niche of all things. Q: And what does that have to do with CloudFlare? MP: On top of all of this, which I call the ‘edge,’ a whole bunch of [specialized computer] boxes used to exist. So think about what Cisco or Juniper makes, that has a firewall in it. There’s a whole bunch of companies that are making those boxes that sit on the top ‘edge.’ That’s what we’re building - but in the cloud. We’re taking all of this functionality, and a big part of it is firewall and security, but it’s also performance and load balancing and switching and routing, doing all of that. You can take CloudFlare’s service and stick it in front of Salesforce or Workday or Box’s servers. CloudFlare can sit in front of it and do all the functionalities that you used to have with the hardware companies. We’ve built data centers all around the world that sit between users of websites and the actual services, so instead of you having to actually buy a box for this, you could just deploy our service. BI: Tell me more about the security part. How effective is CloudFlare against hackers? MP: About 1 out of every 20 web-requests pass through CloudFlare. So you probably have used our network hundreds of times in the last 24 hours, without even knowing it. If you’re an attacker, we’re an incredibly frustrating thing because we help stop those attacks from ever hitting our clients’ infrastructure. We actually see hackers who advertise services to launch DDoS attacks on behalf of others say they’ll charge 20 to 30 times more or not even do it if it’s against CloudFlare. BI: What attack or client do you remember the most? MP: We work a lot with an organization called the Committee to Protect Journalists. This is an organization that protects journalists from being kidnapped or something, or helps those at risk when publishing controversial stories. So the director came in to our office with three African bloggers. One was from Ethiopia, the other was from Angola, but they wouldn’t tell us the third one’s name or where he came from because death squads were hunting him down in his home country. 
All three of them came up to me and hugged me, saying, "We couldn't be doing what we're doing without you, because the government's trying to silence us. They're trying to shut us down and launch attacks, hack our servers. CloudFlare stands in front of it and is able to make sure we stay online." That's a pretty powerful thing. BI: What are some of the trends you see in the cyber attack space? MP: There's definitely an uptick in the number of attacks targeting the DNS infrastructure. DNS is what takes Amazon.com or Businessinsider.com and turns it into an IP address. If you can shut that down, then it can shut down the effective ability for anyone to get to a website. And I think we're very good at dealing with it. There's also a lot of creativity in how hackers use other people's resources to help launch an attack. We're seeing old Internet protocols, like NTP, which stands for Network Time Protocol, exploited to launch these very high-scale attacks. The attacks we see have gotten over 400 gigs/second, and those are some of the biggest attacks that we've seen. BI: In 2012, rumor was some VCs were offering to invest in CloudFlare at a $1 billion+ valuation but you accepted a $50 million investment at a lower valuation. What can you tell us about that? MP: We've actually never maximized our valuation. Our last valuation, if it wasn't the lowest valuation, it was close to it. But we really wanted to have Brad Burnham from Union Square Ventures, because he's just a really deep thinker on the future of the Internet and how things work. Maximizing valuation has never really been the primary driver for us, as much as making sure we could find the right people. It's better to be long-term greedy than short-term greedy. BI: What's next? An IPO? MP: We're cash flow positive now. So we get to choose our own destiny and figure out what it is that we want to do next. But if you want to see what our roadmap product line is, just kind of look at the entire Cisco catalogue. And say how could this become a service? And if we don't have it already, then that is probably something that we are working on. We still admire Cisco, and we think they are making great hardware. But we just want to become the service version of that. We think that's a really attractive business.
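Conceptually, putting a site behind a service like this is a DNS-plus-reverse-proxy arrangement: the site's hostname resolves to the provider's edge, the edge filters traffic, and only acceptable requests are forwarded to the origin. The sketch below is a toy illustration of that pattern using only Python's standard library; it is not CloudFlare's implementation, and the origin address and blocking rule are invented for the example.

```python
# Toy "edge" reverse proxy: accept requests, apply a trivial filter, and
# forward anything allowed to a hypothetical origin server. This illustrates
# the interposition pattern described in the interview, nothing more.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

ORIGIN = "http://127.0.0.1:9000"   # placeholder origin server for the demo
BLOCKED_PREFIXES = ("/admin",)     # stand-in for a real security rule set

class EdgeProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # "Digital bouncer" step: drop unwanted requests at the edge so they
        # never consume the origin's bandwidth or compute.
        if self.path.startswith(BLOCKED_PREFIXES):
            self.send_error(403, "blocked at the edge")
            return
        # Otherwise fetch the resource from the origin and relay it back.
        with urlopen(ORIGIN + self.path) as upstream:
            body = upstream.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), EdgeProxy).serve_forever()
```

In the real service the same interposition happens at the DNS level: the customer's hostname resolves to the provider's edge addresses around the world, which is also why attacks on DNS itself, as Prince notes, are such an effective way to take a site offline.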
科技
2016-40/3983/en_head.json.gz/5869
Programmed DNA forms fractal (April 7th, 2005). A decade after the idea became the topic of his doctoral dissertation, a researcher at the California Institute of Technology has shown that it is possible to coax short strands of artificial DNA to spontaneously assemble into a Sierpinski triangle. The DNA Sierpinski triangles show that there is no theoretical barrier to using molecular self-assembly to carry out any kind of computing and nanoscale fabrication, according to Erik Winfree, an assistant professor of computer science at the California Institute of Technology. If someone comes up with the right rules, the right set of molecules should be able to carry out the instructions, he said. Source: TRN
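The tile set in this work computes a simple exclusive-or rule: each new tile takes the XOR of the two tiles it binds to, and iterating that rule yields the Sierpinski pattern. As an illustration of the computation the self-assembly performs, here is the same rule in a few lines of Python; this is a sketch of the cellular-automaton idea, not a simulation of the DNA chemistry.

```python
# Sierpinski triangle from an XOR rule, the computation the DNA tiles carry
# out during self-assembly (equivalently, Pascal's triangle modulo 2).
def sierpinski(rows: int) -> None:
    row = [1]  # seed cell, playing the role of the nucleating strand
    for _ in range(rows):
        print("".join("#" if bit else "." for bit in row).center(2 * rows))
        # Next row: XOR of each pair of horizontal neighbours, edges padded with 0.
        padded = [0] + row + [0]
        row = [padded[i] ^ padded[i + 1] for i in range(len(padded) - 1)]

sierpinski(16)
```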
科技
2016-40/3983/en_head.json.gz/5871
New nanoscale imaging method finds application in plasmonics (Nanowerk News) Researchers from the National Institute of Standards and Technology (NIST) and the University of Maryland have shown how to make nanoscale measurements of critical properties of plasmonic nanomaterials: the specially engineered nanostructures that modify the interaction of light and matter for a variety of applications, including sensors, cloaking (invisibility), photovoltaics and therapeutics. Their technique is one of the few that allows researchers to make actual physical measurements of these materials at the nanoscale without affecting the nanomaterial's function ("Nanoscale Imaging of Plasmonic Hot Spots and Dark Modes with the Photothermal-Induced Resonance Technique"). Image caption: Infrared laser light (purple) from below a sample (blue) excites ring-shaped nanoscale plasmonic resonator structures (gold). Hot spots (white) form in the rings' gaps. In these hot spots, infrared absorption is enhanced, allowing for more sensitive chemical recognition. A scanning AFM tip detects the expansion of the underlying material in response to absorption of infrared light. Plasmonic nanomaterials contain specially engineered conducting nanoscale structures that can enhance the interaction between light and an adjacent material, and the shape and size of such nanostructures can be adjusted to tune these interactions. Theoretical calculations are frequently used to understand and predict the optical properties of plasmonic nanomaterials, but few experimental techniques are available to study them in detail. Researchers need to be able to measure the optical properties of individual structures and how each interacts with surrounding materials directly in a way that doesn't affect how the structure functions. "We want to maximize the sensitivity of these resonator arrays and study their properties," says lead researcher Andrea Centrone. "In order to do that, we needed an experimental technique that we could use to verify theory and to understand the influence of nanofabrication defects that are typically found in real samples. Our technique has the advantage of being extremely sensitive spatially and chemically, and the results are straightforward to interpret." The research team turned to photothermal induced resonance (PTIR), an emerging chemically specific materials analysis technique, and showed it can be used to image the response of plasmonic nanomaterials excited by infrared (IR) light with nanometer-scale resolution. The team used PTIR to image the absorbed energy in ring-shaped plasmonic resonators. The nanoscale resonators focus the incoming IR light within the rings' gaps to create "hot spots" where the light absorption is enhanced, which makes for more sensitive chemical identification. For the first time, the researchers precisely quantified the absorption in the hot spots and showed that for the samples under investigation, it is approximately 30 times greater than in areas away from the resonators. The researchers also showed that plasmonic materials can be used to increase the sensitivity of IR and PTIR spectroscopy for chemical analysis by enhancing the local light intensity, and thereby, the spectroscopic signal. Their work further demonstrated the versatility of PTIR as a measurement tool that allows simultaneous measurement of a nanomaterial's shape, size, and chemical composition: the three characteristics that determine a nanomaterial's properties.
Unlike many other methods for probing materials at the nanoscale, PTIR doesn't interfere with the material under investigation; it doesn't require the researcher to have prior knowledge about the material's optical properties or geometry; and it returns data that is more easily interpretable than other techniques that require separating the response of the sample from response of the probe. Source: NIST
科技
2016-40/3983/en_head.json.gz/5951
Physicists Quantify Temperature Changes in Metal Nanowires. Findings in a field that affects cancer treatment and solar energy. Released: 21-Jan-2014 10:00 AM EST. Source Newsroom: University of Arkansas, Fayetteville. Journal: Nano Letters. Keywords: Plasmonics, Physics. Credit: University of Arkansas (Joseph B. Herzog). FAYETTEVILLE, Ark. (Newswise): Using the interaction between light and charge fluctuations in metal nanostructures called plasmons, physicists have demonstrated the capability of measuring temperature changes in very small 3-D regions of space. Plasmons can be thought of as waves of electrons in a metal surface, said Joseph B. Herzog, visiting assistant professor of physics at the University of Arkansas, who co-authored a paper detailing the findings that was published Jan. 1 by the journal Nano Letters, a publication of the American Chemical Society. The paper, titled "Thermoplasmonics: Quantifying Plasmonic Heating in Single Nanowires," was co-written by Rice University researchers Mark W. Knight and Douglas Natelson. In the experiments, Herzog fabricated plasmonic nanostructures with electron beam lithography and precisely focused a laser onto a gold nanowire with a scanning optical setup. "This work measures the change in electrical resistance of a single gold nanowire while it is illuminated with light," Herzog said. "The change in resistance is related to the temperature change of the nanowire. Being able to measure temperature changes at small nanoscale volumes can be difficult, and determining what portion of this temperature change is due to plasmons can be even more challenging. By varying the polarization of the light incident on the nanostructures, the plasmonic contribution of the optical heating has been determined and confirmed with computational modeling," he said. Herzog's publication is in a rapidly growing, specialized area called thermoplasmonics, a sub-field of plasmonics that studies the effects of heat due to plasmons and has been used in applications ranging from cancer treatment to solar energy harvesting. Herzog combines his research on plasmons with his expertise in nano-optics, which is the nanoscale study of light. "It's a growing field," he said. "Nano-optics and plasmonics allow you to focus light into smaller regions that are below the diffraction limit of light. A plasmonic nanostructure is like an optical antenna. The plasmon-light interaction makes plasmonics fascinating."
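The measurement rests on resistance thermometry: a metal's resistance rises roughly linearly with temperature, so a measured resistance change converts to a temperature change once the wire's temperature coefficient of resistance is known. The sketch below shows that conversion; the coefficient is a typical bulk value for gold and the resistance numbers are invented for illustration, since the article does not quote the paper's actual values.

```python
# Convert a measured resistance change of a gold wire into an estimated
# temperature rise, using R(T) ≈ R0 * (1 + alpha * dT), i.e. dT ≈ (dR / R0) / alpha.
# alpha here is an approximate bulk value for gold; nanoscale wires typically
# have a lower coefficient, so treat this as an order-of-magnitude illustration.

ALPHA_GOLD_PER_K = 3.4e-3  # approximate temperature coefficient of resistance, 1/K

def estimated_delta_T(r_dark_ohm: float, r_illuminated_ohm: float,
                      alpha_per_K: float = ALPHA_GOLD_PER_K) -> float:
    """Temperature rise implied by the fractional change in resistance."""
    return (r_illuminated_ohm - r_dark_ohm) / (r_dark_ohm * alpha_per_K)

# Hypothetical numbers: a 100-ohm wire whose resistance rises by 0.17 ohm under the laser.
print(f"estimated temperature rise: {estimated_delta_T(100.0, 100.17):.2f} K")
```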
科技
2016-40/3983/en_head.json.gz/5973
Missing plane search: 122 objects spotted by French satellite. [Image: A picture showing the location of 122 objects in the southern Indian Ocean. The search for a missing Malaysia Airlines flight continued today. (EPA)] PERTH, Australia — A French satellite scanning the Indian Ocean for remnants of a missing jetliner found a possible plane debris field containing 122 objects, a top Malaysian official said Wednesday, calling it "the most credible lead that we have." Defense Minister Hishammuddin Hussein said the objects were more than 2,500 kilometers (1,550 miles) southwest of Australia, in the area where a desperate, multinational hunt has been going on since other satellites detected possible jet debris. Clouds obscured the latest satellite images, but dozens of objects could be seen in the gaps, ranging in length from about one meter (one yard) to 23 meters (25 yards). Hishammuddin said some of them "appeared to be bright, possibly indicating solid materials." The images were taken Sunday and relayed by French-based Airbus Defence and Space, a division of Europe's Airbus Group; its businesses include the operation of satellites and satellite communications. Various floating objects have been spotted by planes and satellites over the last week, including today, when the Australian Maritime Safety Authority sent a tweet saying three more objects were seen. The authority said two objects seen from a civil aircraft appeared to be rope, and that a New Zealand military plane spotted a blue object. None of the objects were seen on a second pass, a frustration that has been repeated several times in the hunt for Malaysia Airlines Flight 370, missing since March 8 with 239 people aboard. It remains uncertain whether any of the objects came from the plane; they could have come from a cargo ship or something else. "If it is confirmed to be MH370, at least then we can move on to the next phase of deep sea surveillance search," Hishammuddin said. The search resumed today after fierce winds and high waves forced crews to take a break Tuesday. A total of 12 planes and five ships from the United States, China, Japan, South Korea, Australia and New Zealand were participating in the search, hoping to find even a single piece of the jet that could offer tangible evidence of a crash and provide clues to find the rest of the wreckage. Malaysia announced Monday that a mathematical analysis of the final known satellite signals from the plane showed that it had crashed in the sea, killing everyone on board. The new data greatly reduced the search zone, but it remains huge — an area estimated at 622,000 square miles, about the size of Alaska. "We're throwing everything we have at this search," Australian Prime Minister Tony Abbott told Nine Network television on Wednesday. "This is about the most inaccessible spot imaginable. It's thousands of kilometers from anywhere," he later told Seven Network television. He vowed that "we will do what we can to solve this riddle." In Beijing, some families held out a glimmer of hope their loved ones might somehow have survived. About two-thirds of the missing were Chinese, and their relatives have lashed out at Malaysia for essentially declaring their family members dead without any physical evidence of the plane's remains. Many also believe Malaysia has not been transparent or swift in communicating information with them about the status of the search. Wang Chunjiang, whose brother was on the plane, said he felt "very conflicted."
"We want to know the truth, but we are afraid the debris of the plane should be found," he said while waiting at a hotel near the Beijing airport for a meeting with Malaysian officials. "If they find debris, then our last hope would be dashed. We will not have even the slightest hope." China dispatched a special envoy to Kuala Lumpur, Vice Foreign Minister Zhang Yesui, who met Malaysian Prime Minister Najib Razak and other top officials Wednesday, the official Xinhua News Agency reported. China, which now has Chinese warships and an icebreaker in the search zone, has been intent on supporting the interests of the Chinese relatives of passengers, backing their demands for detailed information on how Malaysia concluded the jet went down in the southern Indian Ocean. That also is the likely reason why Chinese authorities — normally extremely wary of any spontaneous demonstrations that could undermine social stability — permitted a rare protest Tuesday outside the Malaysian embassy in Beijing, during which relatives chanted slogans, threw water bottles and briefly tussled with police who kept them separated from a swarm of journalists. The plane's bizarre disappearance shortly after it took off from Kuala Lumpur en route to Beijing has proven to be one of the biggest mysteries in aviation. Investigators have ruled out nothing so far — including mechanical or electrical failure, hijacking, sabotage, terrorism or issues related to the mental health of the pilots or someone else on board. The search for the wreckage and the plane's flight data and cockpit voice recorders will be a major challenge. It took two years to find the black box from an Air France jet that went down in the Atlantic Ocean on a flight from Rio de Janeiro to Paris in 2009, and searchers knew within days where the crash site was. There is a race against the clock to find Flight 370's black boxes, whose battery-powered "pinger" could stop sending signals within two weeks. The batteries are designed to last at least a month. On Wednesday, the Australian Maritime Safety Authority, which is coordinating the southern search operation on Malaysia's behalf, said a U.S. Towed Pinger Locator arrived in Perth along with Bluefin-21 underwater drone. The equipment will be fitted to the Australian naval ship, the Ocean Shield, but AMSA could not say when they would be deployed. Various pieces of floating objects have been spotted by planes and satellite, but none have been retrieved or identified. Today's search focused on an 80,000 square kilometer (31,000 square miles) swath of ocean about 2,000 kilometers (1,240 miles) southwest of Perth. David Ferreira, an oceanographer at the University of Reading in Britain, said little is known about the detailed topography of the seabed in the general area where the plane is believed to have crashed. "We know much more about the surface of the moon than we do about the ocean floor in that part of the Indian Ocean," Ferreira said. Kerry Sieh, the director of the Earth Observatory of Singapore, said the seafloor in the search area is relative flat, with dips and crevices similar to that the part of the Atlantic Ocean where the Air France wreckage was found. He believes any large pieces of the plane would likely stay put once they have completely sunk. But recovering any part of the plane will be tough because of the sheer depth of the ocean — much of it between about 10,000-15,000 feet in the search area — and inhospitable conditions on the surface where intense winds and high swells are common. 
Australia's Bureau of Meteorology warned that weather was expected to deteriorate again Thursday, with a cold front passing through the search area that would bring rain, thunderstorms, low clouds and strong winds.
科技
2016-40/3983/en_head.json.gz/5993
Changing Views About A Changing Climate. August 3, 2012, 1:00 PM ET. What is the role of humans in climate change? "Call me a converted skeptic," physicist Richard Muller wrote in an Op-Ed in the New York Times this week, describing his analysis of data from the Berkeley Earth Surface Temperature project. Though Muller was once a notable skeptic regarding studies connecting human activity to climate change, he has now concluded that "humans are almost entirely the cause" of global warming. IRA FLATOW, HOST: This is SCIENCE FRIDAY. I'm Ira Flatow. This week, physicist Richard Muller published an op-ed piece in the New York Times in which he said that humans are almost entirely the cause of global climate change. This is following his own analysis of Earth's surface temperature data. What makes the op-ed notable is Dr. Muller was once openly skeptical of studies linking global warming to human activity, and now, well, call me a converted skeptic, he wrote. He joins us now to talk about it. He is professor of physics at the University of California at Berkeley and a faculty senior scientist at Lawrence Berkeley National Laboratory, author of the new book "Energy For Future Presidents: The Science Behind the Headlines." Welcome back, Dr. Muller. RICHARD MULLER: Good to be here, Ira. FLATOW: So tell us about your change of mind and heart about this issue. MULLER: Well, if you had asked me a year ago, I might have said I didn't know whether there was global warming at all. But we had begun a major study, scientific reinvestigation. We were addressing what I consider to be legitimate criticisms of many of the skeptics. But about nine months ago, we reached a conclusion that global warming was indeed taking place, that all of the effects that the skeptics raised could be addressed, and to my surprise, actually, the global warming was approximately what people had previously said. It came as a bigger surprise over the last three to six months when our young scientist Robert Rohde was able to adopt really excellent statistical methods and push the record back to 1753. With such a long record, we could then separate out the signatures of solar variability, of volcanic eruptions, of El Nino and so on. And actually, to my surprise, the clear signature that really matched the rise in the data was human carbon dioxide and other greenhouse gases. It just matched so much better than anything else. I was just stunned. FLATOW: You know, you wrote in your book, even, page 75: The evidence shows that global warming is real, and the recent analysis of our team indicates that most of it is due to humans. MULLER: (Technical difficulties)... And then we had these new results. I was much more cautious in the version of the book that was sent around for pre-review, and then we managed to get the new results in the new book. FLATOW: You know, you've been criticized on both sides of the aisle, as they say now. I think some scientists are saying: Why did you publish - why didn't you publish it in legitimate, peer-reviewed journals first? MULLER: Oh, we're following the - I mean, we're following the tradition of science, which is that you distribute this widely to your peers before you publish it. Jim Hansen does the same thing. He puts his papers online. This is a tradition in the field. Peer review means you present results in a public forum, you distribute pre-prints. Most of my important papers were widely distributed to other scientists far before they appeared. It's the best kind of peer review. 
FLATOW: And you challenge anyone to come up with a better explanation. MULLER: I do, and we have this record going back to 1753. That's pre-Revolutionary War. Some of the early measurements in the United States were taken by Benjamin Franklin and Thomas Jefferson. So we have this excellent record now. We use essentially all the data, nobody had previously used more than about 20 percent of the temperature stations. So we have this excellent record. And given that excellent record, now you say, where does it come from? A real surprise to me was that when we compare to this to the sun-spot record, which shows the solar variability, there's no match whatsoever. The variability of the sun did not contribute. I think that was the primary alternative explanation, and now with this really long record, thanks again to Robert Rohde, we really can eliminate that. And there's not much left. We see the volcanoes very clearly. We see the volcanic eruptions, but their effect is always short-lived, about three or four years. FLATOW: And yet you say that even though you can accept the CO2-temperature connection and that humans are behind it, you say that some people are still too ready to connect various weather events to climate change. MULLER: Well, that's true. I believe that many people who are deeply concerned about global warming feel that the public needs something more dramatic. And so take example the - NOAA recently announced that the last 12 months were the warmest on record in the United States. When I heard that, I looked it up, and sure enough they're right. We see that in our own record. But remarkably, the world had cooled somewhat in that period. It was just the United States, which is two percent of the globe. I feel that one has to be a little bit more candid with the public, and to ascribe the warming of the United States not as a heat wave but as global warming, when the globe is cooling, is not being completely up front. FLATOW: Well, if the globe is... MULLER: The public is smart. They don't like to be fooled. FLATOW: If the globe is cooling, then what is global warming, then? MULLER: Oh, I just mean the globe has been cooling for the last three or four years. FLATOW: I see. MULLER: As we said, a heat wave in the U.S., but it wasn't a world event that set that heat wave. FLATOW: How - let me just change gears because the title of your book is "Energy for Future Presidents: The Science Behind the Headlines." The last time you were on, you were talking about physics for future presidents. We're in a presidential debate year. Do you think this should be an issue, the talk about global climate change and global warming? MULLER: Oh I - energy, there's nothing more important in our world than the future of energy. We start wars over energy. Events like Fukushima and the Gulf oil spill have too much influence in our policy. We have to sit back and be thoughtful and think: What can we do? For the case of global warming, I do believe we should take action, but most of the action that people are suggesting will not address the problem, and so we have to get the energy policy right. It has to be based in science and engineering and technology. FLATOW: And what should that policy be? What should we do? MULLER: The most important thing, there are two things that are really important. One is there's an enormous amount that can be done with energy efficiency and conservation: better automobiles, better insulation in homes. 
The second thing that we need to do, and this is equally important, is to recognize that natural gas emits one-third the carbon dioxide of coal. And the future emissions, unfortunately, are not within the U.S. control. By the end of this year, China will be emitting twice the carbon dioxide as the U.S., and they're growing rapidly whereas our carbon dioxide emissions have been going down over the last few years. So unless we can devise an approach where China can reduce its emissions, it won't do any good. Fortunately, there is that approach because China is building essentially one new coal gigawatt every week. That's a huge growth, and it's responsible for their increase in emissions, but they have good natural gas resources. We have to develop and devise methods for clean fracking. Clean fracking is the key. They have enormous reserves over there. If they can switch from coal to natural gas, that'll have as big an effect as worldwide energy conservation and energy efficiency. FLATOW: Do you think that your change of position might effect any other changes of positions among people possibly in Congress, some people who believe that global warming is the greatest hoax perpetuated on the human race? MULLER: Well, I have great sympathy for such people because many of them, and I've talked to them in Congress, they knew there are legitimate concerns. They knew the data, not all the data had been used, it had been selected. They knew there had been adjustments to the data. They knew that a large number of the stations were poor quality. I've testified in Congress about this. I think they had legitimate concerns. We need to respect the people who have been skeptical. What - I don't think changing my opinion will have a big impact. I think the work that we did - we have posted our papers online for scrutiny. They have actually undergone a lot of peer review. They have been submitted to journals. We've gotten peer review back, and we've responded to it. We hope they'll be published soon. But in the meanwhile, they are available online. We've put the data available online, we've put all of our programs online. We have an utter transparency that I hope is setting a new standard for how transparent you can be. In the end, I hope it is the work we did that will convince people, not the fact that I've changed my mind. FLATOW: And where do you go from here? Is there anything next on your agenda about global warming? MULLER: Well, I think the issue of policy is really important. And we have been looking very hard about what can be done. In my new book, we talk about how much money can be made by using energy efficiency, energy conservation, I called it energy productivity. The fact is just for the ordinary citizen, as well as the Chinese, a little bit of money placed in insulation in your home can yield a return on your investment that exceeds that of Bernie Madoff. And not only that, it's legitimate, and it's even tax-free because you don't pay taxes on money you save. So that, and unfortunately too many people in the community, too many of my friends who are worried about global warming, have already taken a position on fracking. The fact is that natural gas can be made clean. It's not hard. It's much easier to do clean fracking than it is, for example, to make cheap solar. So I'm hoping that the environmentalists who have started to oppose fracking, I think prematurely, can be won over and recognize that this has to be part of a worldwide energy policy. 
Natural gas also helps the Chinese because their citizens are being choked by the soot and other emissions of their coal plants. So expediting a shift, this should be U.S. policy, that we will help the Chinese make the shift from coal to natural gas. China, India, the developing world, this is absolutely essential. FLATOW: And what about a shift to renewables? MULLER: Well, that's wonderful, and that will ultimately take the place of natural gas, but take China for example. Last year, they installed what everybody says was a gigawatt of solar. In fact, it wasn't a gigawatt because that's the peak power. Average in night, and it's half a gigawatt. Average mornings and late afternoons, it's a quarter of a gigawatt. Meanwhile, they put in 40 gigawatts of coal. So renewables are great, but for the developing world, they're still too expensive. And so China and India are going the way they can afford. We can't criticize them for that, and we can't afford to subsidize them. So the switch to natural gas I think is absolutely essential for the next several decades. FLATOW: All right, thank you very much, it's been enlightening to talk to you, and hope that you'll be back when you've got new data to share with us. MULLER: We'd be happy to come back any time, Ira. FLATOW: Richard Muller, thank you. He's author of "Energy for Future Presidents: The Science Behind the Headlines." He's also senior scientist at Lawrence Berkeley National Laboratory. And we're going to take a break, and when we come back, we're going to talk about planetary science of another kind, a trip to Mars. This weekend, there are little Mars parties going on all over the country, waiting for the landing of a new Mars Rover Sunday night into early morning Eastern Time. So when we come back, John Grunsfeld from NASA is here to talk about it, 1-800-989-8255 is our number. You can tweet us @scifri, @-S-C-I-F-R-I or go to our website at sciencefriday.com. Stay with us, we'll be right back after this break. FLATOW: I'm Ira Flatow. This is SCIENCE FRIDAY from NPR.
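Muller's point about peak versus average solar output is a capacity-factor calculation, and his gas-versus-coal argument rests on the roughly one-third CO2 figure he cites earlier. The sketch below just spells out that arithmetic; the day/night weighting mirrors his rough on-air description and is not real grid data.

```python
# Rough arithmetic behind the solar vs. coal comparison in the interview.
# The derating factors mirror Muller's description (no output at night,
# reduced output outside midday); they are illustrative, not measured data.

peak_gw = 1.0             # "a gigawatt of solar" quoted as peak capacity
night_factor = 0.5        # roughly half the hours produce nothing
sun_angle_factor = 0.5    # mornings and late afternoons well below peak
average_gw = peak_gw * night_factor * sun_angle_factor
print(f"average output ≈ {average_gw:.2f} GW from {peak_gw} GW of peak solar")

coal_added_gw = 40.0      # coal capacity he says was added the same year
print(f"coal added: {coal_added_gw} GW, about "
      f"{coal_added_gw / average_gw:.0f}x the effective solar addition")

# Emissions ratio cited earlier in the interview: gas at ~1/3 the CO2 of coal,
# so a coal-to-gas switch cuts that plant's CO2 by roughly two-thirds.
print(f"CO2 from gas relative to coal: ~{1/3:.2f}")
```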
科技
2016-40/3983/en_head.json.gz/6069
EU urged to act on mobile satellite service (PC Advisor, by Jennifer Baker). The European Commission on Thursday told European Union countries that they need to take urgent action to allow the development of mobile satellite services; twenty-one countries have failed to implement the necessary legislation. Some 21 of the EU's 27 countries have so far failed to implement legislation leading to the pan-EU deployment of mobile satellite services (MSS) that could be used for high-speed internet, mobile television, mobile radio or emergency communications. The Commission, the European Parliament and the EU's Council of Ministers agreed to create a single selection and authorisation process for MSS, and more than 20 months ago selected Inmarsat Ventures and Solaris Mobile as the two operators to provide such services. They were due to start offering mobile satellite services from this May, but many countries have yet to remove legal uncertainties, such as license fees. The two operators concerned were also told to step up their efforts by Digital Agenda Commissioner Neelie Kroes.
科技
2016-40/3983/en_head.json.gz/6075
Adobe debuts next-gen publishing. Jeff Partyka (PC World) on 03 March, 1999 21:49. Adobe opened the Seybold Publishing Conference in Boston, US with a keynote committing itself to producing the best set of publishing tools for the new millennium. The company formally announced Adobe InDesign, an extensible layout tool geared toward graphics professionals and based mostly on a series of plug-ins. It was formally introduced during the keynote by Charles Geschke, president and chairman of the board at Adobe, and Chief Executive Officer John Warnock. Already dubbed "the Quark Killer" in reference to its likely market competitor QuarkXPress from Quark Inc., InDesign brings together the functionality of many of Adobe's other popular publishing tools -- such as Photoshop, Illustrator and Acrobat -- in one environment, the executives said. It also allows users to open QuarkXPress and Adobe PageMaker 6.5 files directly. Warnock said InDesign is the culmination of a four-and-a-half-year effort within Adobe to design a publishing architecture that integrates the functions of all the company's applications. "There were differences between our products that made interoperability in some ways cumbersome," Warnock said. "We wanted to make them more and more similar." Many of InDesign's features were demonstrated on both Macintosh and Windows platforms, although a memory error derailed one part of the Windows demo, eliciting applause and laughter from the crowd. The demo included importing native Photoshop files and copy-and-pasting Illustrator drawings, which were then tweaked within InDesign. Exporting to PDF from within the application was also demonstrated, with immediate notification of font problems in the document. A multiline composer feature was also shown, allowing users to check and adjust the kerning not only of an individual line, but also of the lines before and after it for better-looking paragraphs. The executives commented several times that InDesign, together with Adobe's other products, is the basis for a whole new open architecture for publishing. Warnock called it an "ecosystem that works together." "We're trying to provide an infrastructure so that all this stuff makes sense together," Warnock said. "It's not just a feature here and a feature there." InDesign will be available for Macintosh OS 8.5, Windows 98 and Windows NT 4.0 for $US699. It's scheduled to ship in the third quarter. GoLive 4.0, a cross-platform Web design and publishing tool, was also demonstrated during the keynote. It allows professional management of large Web sites and integrates all Adobe products, Warnock said. "The Web has become an integral part of all of our jobs," Warnock said. "Our overriding goal is to make our Web and our print production tools interchangeable." Warnock discussed the proposed scalable vector graphics (SVG) standard for the Web, which he said Adobe is supporting as a way "to seriously upgrade graphics on the Web." Geschke likened many current Web efforts to "publishing on a dot-matrix printer -- it's not good enough." "We'll have plug-ins for each browser, and we'll target it with all of our applications," Warnock said. "You'll be able to design Web pages with no compromise. The Web is going to change and Adobe is going to play a central role." "The Web is at the core of our market strategy and, more importantly, of our vision for the future," Geschke said. The GoLive demo showed SVG's ability to zoom in on a Web graphic without the need to re-contact the server.
It also demonstrated the ability to fix links in PDF files without launching Acrobat. The other new products announced at the keynote included PressReady, a colour-management tool for ink-jet printers, and PageMaker 6.5 Plus, the latest version of Adobe's page-layout software. Adobe: 1800 065 628, http://www.adobe.com. Jeff Partyka
科技
2016-40/3983/en_head.json.gz/6076
Google Play Music finally live in Australia: Google has finally launched its Play Music service in Australia, almost a year and a half since it made its debut in the US.
Best sites for downloading music: In a day and age where music sharing is a bigger deal than Justin Bieber's haircut, it can be confusing to wade through Web sites looking for the best ones for downloading some new tunes.
Embattled LimeWire to launch subscription music service: After a thorough pummeling by the music industry, peer-to-peer (P2P) file-sharing software vendor LimeWire Inc. will launch a subscription-based music service for consumers.
Music service Rdio opens up to new round of users: Rdio is opening up its social music service to a new round of users, the company said in a blog post on Wednesday.
Apple controls 70% of U.S. music download biz: Apple's iTunes music store controls a dominant share of the U.S. digital music market, a research analyst said today.
Music companies want Pirate Bay founders to pay fine: The Stockholm District Court should decide that two of The Pirate Bay's founders have to pay a fine since the file-sharing site is still open and they are still involved, according to a recent filing from the music industry.
Facebook group crowns UK's no.1 Christmas song: This might have been an upset of major proportions for the music industry but at the end of the day, it just underscored the power of Facebook.
The Beatles and iTunes: A complicated history: It's something that happens every twelve months, but this year's rumors about the Beatles catalog landing in the iTunes store is taking on a life of its own. The rumors are fueled in part by the fact that today is not only Apple event day, but also B...
Spotify iPhone app hits Twitter rumour mill: Despite Twitter going into rumour overdrive, Spotify is still waiting to see if an iPhone application submission will make it past the Apple's App Store approval process.
Tenenbaum hit with $675,000 fine for music piracy: In another big victory for the Recording Industry Association of America (RIAA) a federal jury has fined Boston University student Joel Tenenbaum $675,000 for illegally downloading and distributing 30 copyrighted songs.
Reports: Record industry wins $US675k in damages from file swapper: A Boston student has been ordered to pay $US675,000 to the recording industry for illegal file-sharing, according to reports Friday.
Second RIAA piracy trial starts: The Recording Industry Association of America may have decided not to pursue further file-sharing trials as a policy, but one last case is set to get underway today and promises to bring a dash of the theatrical into the courtroom.
Apple's digital album plan sounds familiar: Apple is working on a new plan to save the album, according to a report taking the Web by storm this week.
Minnesota woman appeals $1.9M music piracy fine: The woman ordered to pay $1.92 million in fines for illegally distributing 24 copyrighted songs said she will appeal, and called the June 18 jury verdict "excessive, shocking and monstrous."
Virgin plans unlimited music downloads: Virgin Media has unveiled an unlimited music download service that will allow users to stream and download as many tracks as they want a month.
科技
2016-40/3983/en_head.json.gz/6101
Do You See What Eye See?
It's been hard to miss the publicity for LASIK, the laser surgery that reshapes the cornea to improve the eye's ability to focus. Actually, both the cornea and the lens focus light, as shown in the diagram. But the lens, itself mostly water, is bathed in watery fluid on both sides, so upon entering and leaving the lens, light bends, or refracts, relatively little. It refracts far more when passing from air into the cornea.
Image caption: The structure of the human eye. Note that the lens is bathed in watery fluid on both sides, so its refracting power is much less than that of the cornea. (image courtesy of HyperPhysics, by Rod Nave, Georgia State University)
Image caption: To demonstrate the precision of ablation of human tissue, IBM scientists cut these slots in a human hair with the excimer laser (image courtesy of IBM research)
Reshaping the cornea can make a big difference. Six months after LASIK surgery, about 95% of patients have uncorrected vision of at least 20/40, the minimum for driving, and about 50% have uncorrected vision of at least 20/20, considered to be ideal normal vision. On the downside, about 5% of patients experience side-effects and 1% suffer serious, vision-threatening problems.
LASIK is performed with an ultraviolet (UV) excimer laser. Excimer stands for excited dimer, an excited, unstable molecule of an "inert" gas and a halogen—argon and fluorine. This short-lived molecule dissociates promptly with the emission of a UV photon of a particular frequency. In an alternate process, if a photon of this frequency hits an as-yet-undissociated dimer, the dimer emits a second photon, in step with the first—a process called "stimulated emission," the basis of laser action. With the argon and fluorine confined in a tube capped with mirrors, one of which allows some light to escape (see diagram), the result is an intense UV laser beam.
Excimer lasers, unlike the familiar ones in bar-code readers, are pulsed—they pack their output into short bursts about 10 nanoseconds long (10⁻⁸ s). This pulsing makes it ideal for eye surgery, because the intense pulses vaporize tissues without heating the rest of the eye. The UV light is absorbed in a very thin layer of tissue, decomposing that tissue into a vapor of small molecules, which fly away from the surface in a tiny plume. This happens so fast that nearly all of the deposited heat energy is carried away in the plume, leaving too little energy behind to damage the adjacent tissue. The process is called ablation, and its application to surgery was an invention of IBM physical scientists. To see its precision, look at the image of the slots cut by an excimer laser in a human hair. Subsequently, the IBM scientists collaborated with ophthalmologists, giving birth to laser refractive surgery.
Ablation of the outermost layers of the cornea and its covering can produce a number of visual problems after the surgery. To avoid this, the ophthalmologist first shaves a thin slice of the outer corneal tissue, folds it back (see photo), and then with the laser ablates the underlying cornea to produce the required shape. When the flap is folded back in place—no sutures are necessary—the two corneal surfaces grow together and the eye usually heals within a week or less. The drawback of this procedure is that cutting the flap is responsible for most of the side-effects.
To reduce these problems, an interdisciplinary team at the University of Michigan is working with the femtosecond laser, whose pulses last only 10⁻¹³ seconds.
Image caption: Schematic diagram of laser action. Each wiggly line represents a photon. Note how the number of photons increases through stimulated emission along the path of the original photon. (image courtesy of HyperPhysics, by Rod Nave, Georgia State University)
In the original LASIK, the corneal flap is cut with a microkeratome, a device similar to a carpenter's plane. Any metallic shards or irregularities in the blade can easily damage the delicate cornea. To avoid this problem, a team of physicists, engineers, and ophthalmologists at the University of Michigan developed a procedure to make this cut with the intense pulses of the femtosecond laser.
Image caption: To avoid the side-effects associated with a cut made by a microkeratome blade, the ophthalmologist cuts the flap with the laser itself (image courtesy of Intralase)
Image caption: Flap (right) and disk of corneal material (left) removed from a pig's eye in a procedure performed entirely with a femtosecond laser (image courtesy of Center for Ultrafast Optical Science, University of Michigan. http://www.eecs.umich.edu/USL/)
The pulses of the femtosecond laser last only 10⁻¹³ seconds, as opposed to 10⁻⁸ seconds for the excimer, and these ultrashort pulses are made much more intense by a technique called chirped pulse amplification. In general, a laser pulse is amplified by passing it through additional matched lasers, where stimulated emission can vastly increase the number of photons. At very high power, the amplified pulse can have so much energy that it could destroy these lasers. To sidestep this effect, a group at Michigan's Center for Ultrafast Optical Science developed a way to spread out the different frequencies in the pulse with diffraction gratings to produce a much longer and less intense pulse that can be amplified, as shown in the drawing. Then, with amplification complete, the femtosecond pulse is reconstituted.
Getting back to LASIK, to avoid the damage caused by the microkeratome, surgeons cut out the underside of the flap with femtosecond laser pulses. These pulses are focused inside the cornea and vaporize tissue at the focal point. The result is a short-lived bubble of gas, which dissolves into the water in the cornea. A rapid sweep of the focus creates a surface of bubbles that define the underside of the flap, and a second cut, which is cylindrical, enables the surgeon to fold back the flap. At this point, the excimer laser performs the LASIK surgery, essentially as before.
Beyond cutting the flap, the Michigan team is developing a way to perform LASIK entirely with the femtosecond laser. In this surgery, the laser ablates two curved surfaces, which define the material to be removed. The upper surface is also the underside of the flap. The surgeon then folds back the flap and removes the "lenticle" of corneal tissue underneath. The image shows the flap, and the disk-shaped tissue that was removed, when this experimental procedure was performed on a pig's eye.
A further possible procedure avoids the flap altogether. Here the laser would ablate a lenticle-shaped tissue within the cornea, the bubbles would be absorbed in the watery tissue, and the upper and lower surfaces of the ablated lenticle would grow together, providing a reshaped cornea.
This kind of research, not to mention the development of LASIK itself, is made possible by collaboration among physicists, engineers, and MDs.
The story is the same for numerous other advances in high-tech medicine, such as the PET scan, the pacemaker/defibrillator, and the endoscope, to name but a few. The corresponding fields of physics range from elementary particles to electricity and magnetism to optics, and more.
Image caption: A strand of hair etched with pulses from a laser. Those with access to the device preferred it 3 to 1 to the old method of splitting hairs manually. (Image courtesy of IBM Research)
Links: U. of Michigan, Center for Ultrafast Optical Science; Corneal Refractive Surgery; HyperPhysics, Georgia State University (Light, Laser); Bell Labs, The Invention of the Laser
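A rough back-of-the-envelope check of the physics above may be useful. The article does not quote the laser wavelength or pulse energies, so the wavelength below is an assumption (193 nm, the argon-fluoride line commonly cited for refractive surgery); the pulse durations are the ones stated in the article.

E_{\mathrm{photon}} = \frac{hc}{\lambda} \approx \frac{(6.63\times10^{-34}\,\mathrm{J\,s})(3.0\times10^{8}\,\mathrm{m/s})}{193\times10^{-9}\,\mathrm{m}} \approx 1.0\times10^{-18}\,\mathrm{J} \approx 6.4\,\mathrm{eV}

\frac{P_{\mathrm{peak}}^{\,\mathrm{femtosecond}}}{P_{\mathrm{peak}}^{\,\mathrm{excimer}}} = \frac{E/\tau_{\mathrm{fs}}}{E/\tau_{\mathrm{ns}}} = \frac{10^{-8}\,\mathrm{s}}{10^{-13}\,\mathrm{s}} = 10^{5} \qquad \text{(equal pulse energy } E\text{)}

The first line is why a single UV photon carries enough energy to break typical organic bonds (a few eV each), so tissue is decomposed photochemically rather than simply heated; the second is why a femtosecond pulse of the same energy reaches roughly a hundred thousand times the peak power of a 10-nanosecond excimer pulse, which is what makes chirped pulse amplification necessary to keep the amplifying stages from being destroyed.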
科技
2016-40/3983/en_head.json.gz/6183
The World's Biggest Bomb: Revealed
This programme charts the Cold War race for nuclear supremacy throughout the 1950s and 1960s, when the US and the Soviet Union locked horns in a battle to build the most powerful atomic bomb. First-hand testimony and modern forensic investigation explore what happened when America's Castle Bravo test went out of control, vaporising three small islands in the Pacific testing ground of Bikini Atoll and spreading severe radioactive fallout to nearby inhabited islands, and reveal the story of the Soviets' Tsar bomb, which was six times more powerful than Bravo.
Andy Webb
Dan Chambers
科技
2016-40/3983/en_head.json.gz/6187
An Emerging View on Early Land Use
April 15th, 2011 by group
Guest article by William Ruddiman
More than 20 years ago, analyses of greenhouse gas concentrations in ice cores showed that downward trends in CO2 and CH4 that had begun near 10,000 years ago subsequently reversed direction and rose steadily during the last several thousand years. Competing explanations for these increases have invoked either natural changes or anthropogenic emissions. Reasonably convincing evidence for and against both causes has been put forward, and the debate has continued for almost a decade. Figure 1 summarizes these different views.
An August 2011 special issue of the journal The Holocene will help to move this discussion forward. All scientists who have been part of this debate during the last decade were invited to contribute to the volume. The list of those invited was well balanced between the two views, both of which are well represented in the issue. The papers have recently begun to come online, but unfortunately behind a paywall.
Arguably, the most significant new insight emerging from this issue comes from several papers that converge on a view of pre-industrial land use that is very different from the one that has prevailed until recently. Most previous modeling simulations relied on the simplifying assumption that per-capita clearance and cultivation remained small and nearly constant during the late Holocene, but historical and archeological data now reveal much larger earlier per-capita land use than used in these models. The emergence of this view was reported in several presentations at a March 2011 Chapman Conference, and it has attracted recent attention both in Nature and Science News. The following article summarizes this new evidence.
Historical data on land use extending back some 2000 years exists for two regions — Europe and China. In a 2009 paper, Jed Kaplan and colleagues reported evidence showing nearly complete deforestation in Europe at mid-range population densities, but very little additional clearance at higher densities. Embedded in this historical relationship was a trend from much greater per-capita clearance 2000 years ago to much smaller values in recent centuries. Similarly, a Holocene special-issue paper by Ruddiman and colleagues pointed to a pioneering study of early agriculture in China published in 1937 by J. L. Buck. Paired with reasonably well-constrained population estimates that extend back to the Han dynasty 2000 years ago, these data show a 4-fold decrease in per-capita land area cultivated in China from that time until the 1800's.
These two re-evaluations of per capita land use have important implications for global pre-industrial carbon emissions. A special issue paper by Kaplan and colleagues used the historical relationships from Europe to estimate worldwide clearance, with smaller per-capita land needs in tropical regions due to the longer growing season that allows multiple crops per year. Their model simulated major forest clearance thousands of years ago not just in Europe and China, but also in India, the Fertile Crescent, Sahelian Africa, Mexico and Peru. The pattern of clearance is nicely shown in a time-lapse sequence available in the Science News article cited above. Kaplan and colleagues estimated cumulative carbon emissions of ~340 GtC (1 Gt = billion metric tons) before the industrial-era CO2 rise began in 1850.
This estimate is 5 to 7 times larger than those based on the assumption that early farmers cleared forests and cultivated land in the small per-capita amounts typical of recent centuries.
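The arithmetic behind the contrast is easy to sketch: cleared area is population multiplied by per-capita land use, and emissions scale with the cleared area. The Python fragment below is illustrative only; the population, per-capita rates and carbon density are assumed round numbers, not Buck's data or Kaplan's model inputs.

# Illustration only: same population, 4x the per-capita land use -> 4x the
# cleared area, and proportionally more carbon already released in antiquity.
def cleared_area_km2(population, per_capita_ha):
    return population * per_capita_ha / 100.0         # 100 ha per km2

han_population = 60e6                                 # order-of-magnitude Han-era population (assumed)
recent_style = cleared_area_km2(han_population, 0.1)  # ~0.1 ha per person, a recent-era rate (assumed)
early_style = cleared_area_km2(han_population, 0.4)   # ~4x more land per person, as the historical data suggest

CARBON_T_PER_KM2 = 10_000                             # assumed tonnes of carbon released per km2 cleared
extra_gtc = (early_style - recent_style) * CARBON_T_PER_KM2 / 1e9
print(f"{extra_gtc:.1f} GtC more from Han-era China alone under the higher per-capita rate")

Summed over all of the early agricultural regions and over several millennia, this is the kind of multiplication that turns a cumulative total of a few tens of gigatonnes into the several hundred estimated by Kaplan and colleagues.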
科技
2016-40/3983/en_head.json.gz/6200
New Technique Lets Scientists Peer Within Nanoparticles, See Atomic Structure In 3-D
UCLA researchers are now able to peer deep within the world's tiniest structures to create three-dimensional images of individual atoms and their positions. Their research, published March 22 in the journal Nature, presents a new method for directly measuring the atomic structure of nanomaterials.
"This is the first experiment where we can directly see local structures in three dimensions at atomic-scale resolution – that's never been done before," said Jianwei (John) Miao, a professor of physics and astronomy and a researcher with the California NanoSystems Institute (CNSI) at UCLA.
Miao and his colleagues used a scanning transmission electron microscope to sweep a narrow beam of high-energy electrons over a tiny gold particle only 10 nanometers in diameter (almost 1,000 times smaller than a red blood cell). The nanoparticle contained tens of thousands of individual gold atoms, each about a million times smaller than the width of a human hair. These atoms interact with the electrons passing through the sample, casting shadows that hold information about the nanoparticle's interior structure onto a detector below the microscope.
Miao's team discovered that by taking measurements at 69 different angles, they could combine the data gleaned from each individual shadow into a 3-D reconstruction of the interior of the nanoparticle. Using this method, which is known as electron tomography, Miao's team was able to directly see individual atoms and how they were positioned inside the specific gold nanoparticle.
Presently, X-ray crystallography is the primary method for visualizing 3-D molecular structures at atomic resolutions. However, this method involves measuring many nearly identical samples and averaging the results. X-ray crystallography typically takes an average across trillions of molecules, which causes some information to get lost in the process, Miao said. "It is like averaging together everyone on Earth to get an idea of what a human being looks like – you completely miss the unique characteristics of each individual," he said.
X-ray crystallography is a powerful technique for revealing the structure of perfect crystals, which are materials with an unbroken honeycomb of perfectly spaced atoms lined up as neatly as books on a shelf. Yet most structures existing in nature are non-crystalline, with structures far less ordered than their crystalline counterparts – picture a rock concert mosh pit rather than soldiers on parade. "Our current technology is mainly based on crystal structures because we have ways to analyze them," Miao said. "But for non-crystalline structures, no direct experiments have seen atomic structures in three dimensions before."
Probing non-crystalline materials is important because even small variations in structure can greatly alter the electronic properties of a material, Miao noted. The ability to closely examine the inside of a semiconductor, for example, might reveal hidden internal flaws that could affect its performance. "The three-dimensional atomic resolution of non-crystalline structures remains a major unresolved problem in the physical sciences," he said.
Miao and his colleagues haven't quite cracked the non-crystalline conundrum, but they have shown they can image a structure that isn't perfectly crystalline at a resolution of 2.4 angstroms (the average size of a gold atom is 2.8 angstroms).
The gold nanoparticle they measured for their paper turned out to be composed of several different crystal grains, each forming a puzzle piece with atoms aligned in subtly different patterns. A nanostructure with hidden crystalline segments and boundaries inside will behave differently from one made of a single continuous crystal – but other techniques would have been unable to visualize them in three dimensions, Miao said. Miao's team also found that the small golden blob they studied was in fact shaped like a multi-faceted gem, though slightly squashed on one side from resting on a flat stage inside the gigantic microscope – another small detail that might have been averaged away when using more traditional methods. This project was inspired by Miao's earlier research, which involved finding ways to minimize the radiation dose administered to patients during CT scans. During a scan, patients must be X-rayed at a variety of angles, and those measurements are combined to give doctors a picture of what's inside the body. Miao found a mathematically more efficient way to obtain similar high-resolution images while taking scans at fewer angles. He later realized that this discovery could benefit scientists probing the insides of nanostructures, not just doctors on the lookout for tumors or fractures. Nanostructures, like patients, can be damaged if too many scans are administered. A constant bombardment of high-energy electrons can cause the atoms in nanoparticles to be rearranged and the particle itself to change shape. By bringing his medical discovery to his work in materials science and nanoscience, Miao was able to invent a new way to peer inside the field's tiniest structures. The discovery made by Miao's team may lead to improvements in resolution and image quality for tomography research across many fields, including the study of biological samples.
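The reconstruction idea described above (many projections, or "shadows", recorded at known tilt angles and combined into a single model) can be sketched in a few lines of Python. This is a 2-D toy using plain unfiltered back-projection; it is not the UCLA group's actual algorithm, which relied on a more sophisticated iterative reconstruction, and everything here except the 69 tilt angles mentioned in the article is invented for illustration.

import numpy as np
from scipy.ndimage import rotate

def project(image, angle_deg):
    # A "shadow" of the object: rotate it, then sum along one axis.
    return rotate(image, -angle_deg, reshape=False, order=1).sum(axis=0)

def backproject(projections, angles_deg, size):
    # Smear each 1-D projection back across the plane at its angle and sum.
    # Unfiltered back-projection gives a blurred but recognizable reconstruction;
    # sign conventions are kept deliberately simple here.
    recon = np.zeros((size, size))
    for proj, angle in zip(projections, angles_deg):
        smear = np.tile(proj, (size, 1))
        recon += rotate(smear, angle, reshape=False, order=1)
    return recon / len(angles_deg)

size = 64
phantom = np.zeros((size, size))       # toy "nanoparticle" with a denser grain inside
phantom[20:44, 20:44] = 1.0
phantom[28:36, 28:36] = 2.0

angles = np.linspace(0.0, 180.0, 69, endpoint=False)   # 69 tilt angles, as in the experiment
projections = [project(phantom, a) for a in angles]
reconstruction = backproject(projections, angles, size)

In the real experiment the projections are electron images rather than simple sums, the reconstruction is three-dimensional, and filtered or iterative methods replace the naive averaging above, but the geometry is the same: many views at known angles, combined into one model.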
科技
2016-40/3983/en_head.json.gz/6202
MSG-3 Satellite Ready To Continue Weather-monitoring Service
International partners are looking ahead to the newest member in a series of weather satellites that deliver images to European forecasters: MSG-3 is set for launch this summer. The Meteosat Second Generation (MSG) satellites are designed to improve weather prediction. The first in the series, MSG-1 — also known as Meteosat-8 — was launched in 2002. MSG-2 followed three years later. Both have been successful in continuing the legacy of the operational meteorological satellites that started with Meteosat-1 in 1977.
The MSGs offer more spectral channels and are sensing Earth at a higher resolution than the previous Meteosat satellites. The series returns highly detailed imagery of Europe, the North Atlantic and Africa every 15 minutes for use by meteorologists and national weather forecasters. To guarantee the continuity of service, the third in a planned series of four satellites is on track for launch in June. Shortly after liftoff from Kourou, French Guiana, it will be injected into geostationary orbit at an altitude of 36 000 km over the equator.
ESA has developed the satellites in close cooperation with Eumetsat — the European Organisation for the Exploitation of Meteorological Satellites — and is responsible for the complex launch and early orbit phase. About ten days after launch, Eumetsat will take over routine operations.
Along with keeping track of cloud development and temperature to improve weather forecasting accuracy, MSG-3 has two secondary missions in the areas of radiation and rescue. The Global Earth Radiation Budget payload measures energy radiated by Earth. This radiation balance across day and night gives insight into atmosphere circulation and energy distribution. Meanwhile, the Search and Rescue transponder receives and relays distress signals from beacons within its field of view.
On Wednesday, MSG-3 builder Thales Alenia Space hosted a media event in Cannes, France. Director of ESA's Earth Observation Programs, Volker Liebig outlined the innovations that the mission has made in weather monitoring. "The latest MSG satellite will continue to deliver precise data in a wide range of spectral bands to ensure high accuracy in weather forecasting in the years to come, and keep close track on cloud development," said Prof. Liebig. The Eumetsat Director-General, Alain Ratier, spoke about the success of the first two MSGs and the importance of continuing Meteosat's vital services, while Eumetsat Meteorological Scientist Philip Watts detailed how MSG-3 observes Earth's atmosphere and surface. The Director of Optical Observation & Science for Thales Alenia Space, Jean-Jacques Juillet, also presented on how the MSG satellites are built and how extensive tests ensure reliable operations in the years to come. The presentations were followed by a visit to Thales Alenia Space's cleanroom, where the satellite is currently being held.
Last month, MSG-3 underwent an intensive, two-day test to validate the full set of flight procedures such as receiving commands and delivering data. During a simulation campaign this month, engineers will train the mission control teams, particularly on how to solve contingency situations if anything goes wrong.
Image Caption: The third Meteosat Second Generation satellite, MSG-3, in the cleanroom at Thales Alenia Space in Cannes, France. The solar panels have been removed for testing, so the inside of the satellite is visible in this image. Credits: ESA
科技
2016-40/3983/en_head.json.gz/6214
IN PRAISE OF SOLAR
The Four Pillars of Sustainability
by Chris Dunham
Cover: Sunrise. Photograph: P. G. Adam, Publiphoto Diffusion/Science Photo Library
Image caption: Houses featuring turf roofs and passive solar design. Photograph: Martin Bond/Still Pictures
Roofing materials that do nothing but keep the rain out are a shameful waste of space.
OVER THE LAST forty years the world of energy has abounded with predictions, from dire forecasts of the imminent exhaustion of world oil supplies, to the rather optimistic "Nuclear power will be too cheap to meter." But only the truly visionary could have foreseen Terminator actor Arnold Schwarzenegger standing before the people of California in 2005, as their governor, launching the world's largest solar roof programme!
Solar heat and power technologies have been making impressive progress in the US over the last decade, mainly through government support programmes like Schwarzenegger's Million Solar Roofs Initiative. A precursor to the Californian programme came in 1997 when President Clinton launched a national Solar Roof Initiative. Without any formal budget it still achieved 229,000 installations by the end of 2003. Earlier still, in 1994, Japan launched a 70,000-roof programme and reached 144,000 residential systems in 2002. Germany upgraded its 1,000-roof programme to 100,000 roofs in late 1998. The programme was such a success that it met its targets early in 2003.
Meanwhile, in the UK the outlook doesn't seem quite so bright. After managing just 600 installations in the first two years the Department of Trade and Industry is to end the Major Photovoltaics Demonstration Programme in March 2006. The programme was due to run until 2012, by which time 70,000 small-scale systems were due to be installed. According to Friends of the Earth UK, on twelve separate occasions since 1999 the German solar roof programme has delivered the equivalent of the UK's initial three-year target in just one month. The UK programme, along with its solar thermal equivalent, Clear Skies, will be replaced with a non-technology-specific programme and it is as yet unclear how solar will fare against the other technologies that are closer to being commercially competitive.
'Sustained nurture' has been the nascent renewable industry's wish for government policy for some time now, rather than stop-start grant programmes which make investment in industrial capacity so unattractive. In Germany sustained nurture has been enshrined in the Renewable Energy Feed Law, whereby support for photovoltaic systems is fixed per solar kWh produced into the foreseeable future and backed by low-interest loans. The price is set to reduce each year by five per cent, to mirror the reduction in capital costs brought on by the scaling-up of production that the programme facilitates.
In 1999 Greenpeace commissioned business advisers KPMG to examine the economics of photovoltaics (PV). The report Solar Energy: From Perennial Promise to Competitive Alternative concluded that solar PV could be competitive with conventional fossil-fuel power generation if production were scaled up to 500 MW peak per year (around 250,000 household-sized systems per year). They estimated that to build a factory of this kind would cost US$660 million. This sounds like a lot of money, but the authors pointed out that the investment equates to just one half of one per cent of current expenditure on oil and gas exploration annually.
Six years later costs have fallen significantly but we are still a long way from the "competitive alternative" scenario that KPMG envisaged. Without grant support, payback periods for solar at current prices are around 100 years for PV in the UK's climate. Clearly more nurturing is required to change the fact that sadly, most people with the money to spare choose a new kitchen rather than a solar roof.
THE SITUATION WITH solar thermal is somewhat different. This is already a mature technology and the potential to reduce cost through scaling up production is not as great. There are estimated to be around 45,000 systems already installed in the UK. Its financial viability is still not exactly a heart-stopping money-earner though, either: the simple payback is around forty years. But with gas prices rising substantially after years of decline, and a range of local support programmes designed to make the technology accessible to the public, prospects for solar water heating are now looking a little brighter. The possibility of a Renewable Heat Obligation, modelled on its electrical cousin, looks like a real possibility. This would put a requirement on suppliers of heating fuel to source a proportion of their heat from renewable sources - with solar thermal being one of the three permitted technologies. An EU Renewable Heat and Cooling Directive is also being proposed.
Solar enthusiasts in the UK can take comfort from the fact that the uptake of solar in Europe bears almost no relation to climate. The largest markets for solar water heating in Europe are Germany, then Austria and Greece, which together enjoy more than eighty per cent of Europe's installed capacity. Greece clearly has a climatic advantage, but why has Austria achieved almost two million square metres of solar collectors by 2004 - more than twice as much as sunnier Spain, Portugal and Italy combined? Denmark has managed 45 m² per 1,000 inhabitants while the UK has managed just 5 m². Clearly there are other factors at work. Research has shown that public awareness of environmental issues, government intervention through regulation and financial support, and the quality of the products and services offered by the industry are as important as climate in the uptake of solar.
HOWEVER, THE UK hasn't been completely backward in the area of solar power. Few people realise that there is a renewable revolution beginning in our town halls. Frustrated by central government inaction, local authorities have taken it upon themselves to introduce tough new planning requirements for new developments. The London Borough of Merton was the first, and it managed to overturn developers' challenges and steer past a nervous government. Merton's planning system now requires any new large-scale commercial development to source ten per cent of its energy needs from on-site renewable energy. Five London Boroughs and the Greater London Authority have since followed suit, introducing their own ten per cent requirements, with some extending this to residential as well as commercial developments. With solar technologies as the obvious choice to integrate into a new building, and huge swathes of new housing planned for London, this could be the kick-start that the UK solar industry needs. The prospect of a technology costing in the region of £500 a square metre replacing the humble concrete tile as the roofing material of choice in ten years may seem remote today.
But as the price continues to tumble, and with the evidence that the point of irreversible accelerated climate change is getting closer, how long will it be before we view using roofing materials that do nothing except keep the rain out as a shameful waste of space?
Chris Dunham is Director of SEA/RENUE, a not-for-profit organisation promoting sustainable energy use in London.
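A quick sanity check of the payback figures quoted above (around 100 years for PV, around forty for solar thermal) is a one-line division. The numbers below are invented round figures for mid-2000s UK conditions, not data from the article, and the calculation ignores discounting and maintenance.

# Simple (undiscounted) payback = capital cost / annual saving.
# All inputs are illustrative assumptions, not figures from the article.
def simple_payback_years(capital_cost_gbp, annual_output_kwh, price_per_kwh_gbp):
    return capital_cost_gbp / (annual_output_kwh * price_per_kwh_gbp)

# 1 m2 of PV in the UK: ~900 kWh/m2 of sunshine a year at ~12% module efficiency,
# displacing electricity at roughly 6p/kWh (assumed).
pv_years = simple_payback_years(capital_cost_gbp=600,
                                annual_output_kwh=0.12 * 900,
                                price_per_kwh_gbp=0.06)

# A small solar thermal system: ~2,500 GBP installed, ~1,500 kWh of useful heat a year,
# displacing gas at roughly 4p/kWh (assumed).
thermal_years = simple_payback_years(2500, 1500, 0.04)

print(round(pv_years), round(thermal_years))   # roughly 93 and 42 years

The point is only that, with mid-2000s prices, the order of magnitude of the figures in the article falls straight out of the division; a proper appraisal would discount future savings and add maintenance costs, which lengthens the payback further.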
科技
2016-40/3983/en_head.json.gz/6320
Dec 5 2012, 8:43 am | Posted by Jasmin Sasin
Geologists Request Government Assistance in Protecting Fossil Community
Chinese geologists are asking for government-sanctioned research on, and protection of, a plant fossil community found in Dapeng New Area that is said to be more than 200 million years old. The fossil community dates to the early Jurassic period, and around 100 fossils are visible. The plant fossils were found in a quarry and are considered significant for research on biological diversity and on geographic and climate variations in southern China during the Jurassic period, as well as for studies of the coal-bearing layers in the Guangdong Province area. Several of the fossils have already been destroyed by quarrying and reclamation because of the lack of protection, while others have weathered from exposure to the elements. Geologists said that the area where the fossils were found should be placed under protection and that a geological museum should be built to facilitate further research, but the plan cannot be realized without the support of the government. The Dapeng Peninsula has a unique diversity of land plants from 200 million years ago, the geologists say; the government should at least build a natural museum, and currently none of the plant fossils are on display in the Shenzhen Museum.
科技
2016-40/3983/en_head.json.gz/6349
How Two Women Ended the Deadly Feather Trade
Birds like the snowy egret were on the brink of extinction, all because of their sought-after plumage
From the Smithsonian National Museum of Natural History (Cade Martin)
John James Audubon, the pre-eminent 19th-century painter of birds, considered the snowy egret to be one of America's surpassingly beautiful species. The egret, he noted, was also abundant. "I have visited some of their breeding grounds," Audubon wrote, "where several hundred pairs were to be seen, and several nests were placed on the branches of the same bush, so low at times that I could easily see into them."
Audubon insisted that birds were so plentiful in North America that no depredation—whether hunting, the encroachment of cities and farmlands, or any other act of man—could extinguish a species. Yet little more than half a century after Audubon's death in 1851, the last passenger pigeon—a species once numbering in the billions—was living out its days in the Cincinnati Zoo, to be replaced shortly thereafter by a final handful of Carolina parakeets, also soon to die in captivity.
The snowy egret—and its slightly larger cousin, the great egret—were similarly imperiled by the late 1800s, when fashionable women began wearing hats adorned with feathers, wings and even entire taxidermied birds. The egrets' brilliant white plumage, especially the gossamer wisps of feather that became more prominent during mating season, was in high demand among milliners. (A snowy egret specimen from the Smithsonian National Museum of Natural History's ornithology collections, above, documents the bird's showy splendor.)
The plume trade was a sordid business. Hunters killed and skinned the mature birds, leaving orphaned hatchlings to starve or be eaten by crows. "It was a common thing for a rookery of several hundred birds to be attacked by the plume hunters, and in two or three days utterly destroyed," wrote William Hornaday, director of the New York Zoological Society and formerly chief taxidermist at the Smithsonian.
The main drivers of the plume trade were millinery centers in New York and London. Hornaday, who described London as "the Mecca of the feather killers of the world," calculated that in a single nine-month period the London market had consumed feathers from nearly 130,000 egrets. And egrets were not the only species under threat. In 1886, it was estimated, 50 North American species were being slaughtered for their feathers.
Egrets and other wading birds were being decimated until two crusading Boston socialites, Harriet Hemenway and her cousin, Minna Hall, set off a revolt. Their boycott of the trade would culminate in formation of the National Audubon Society and passage of the Weeks-McLean Law, also known as the Migratory Bird Act, by Congress on March 4, 1913. The law, a landmark in American conservation history, outlawed market hunting and forbade interstate transport of birds.
Harriet Lawrence Hemenway and her husband Augustus, a philanthropist who was heir to a shipping fortune, lived in a tony section of Back Bay. Hemenway, a Boston Brahmin but also something of an iconoclast (she once invited Booker T. Washington as a houseguest when Boston hotels refused him), would live to 102.
A passionate amateur naturalist, she was known for setting out on birding expeditions wearing unthinkably unfashionable white sneakers. In 1896, after Hemenway read an article describing the plume trade, she enlisted the help of Hall. The cousins consulted the Blue Book, Boston's social register, and launched a series of tea parties at which they urged their friends to stop wearing feathered hats. "We sent out circulars," Hall later recalled, "asking the women to join a society for the protection of birds, especially the egret. Some women joined and some who preferred to wear feathers would not join."
Buoyed by their success—some 900 women joined this upper-crust boycott—Hemenway and Hall that same year organized the Massachusetts Audubon Society. Audubon societies formed in more than a dozen states; their federation would eventually be called the National Audubon Society. In 1900, Congress passed the Lacey Act, which prohibited transport across state lines of birds taken in violation of state laws. But the law, poorly enforced, did little to slow the commerce in feathers.
Getting in the way of the plume trade could be dangerous. In 1905, in an incident that generated national outrage, a warden in south Florida, Guy M. Bradley, was shot and killed while attempting to arrest a plume hunter—who was subsequently acquitted by a sympathetic jury.
The watershed moment arrived in 1913, when the Weeks-McLean Law, sponsored by Massachusetts Representative John Weeks and Connecticut Senator George McLean, effectively ended the plume trade. In 1920, after a series of inconclusive court challenges to Weeks-McLean, the Supreme Court upheld a subsequent piece of legislation, the Migratory Bird Treaty Act of 1918. Justice Oliver Wendell Holmes, writing for the majority, declared that the protection of birds was in the "national interest." Without such measures, he declared, one could foresee a day when no birds would survive for any power—state or federal—to regulate.
科技
2016-40/3983/en_head.json.gz/6366
NASA Selects SpaceX to Begin Negotiations for Use of Historic Launch Pad
Published by Klaus Schmidt on Fri Dec 13, 2013 11:15 pm via: NASA
NASA has selected Space Exploration Technologies Corporation (SpaceX) of Hawthorne, Calif., to begin negotiations on a lease to use and operate historic Launch Complex (LC) 39A at the agency's Kennedy Space Center in Florida. Permitting use and operation of this valuable national asset by a private-sector, commercial space partner will ensure its continued viability and allow for its continued use in support of U.S. space activities.
Image caption: Launch Pad 39A served as the starting point for many NASA missions, including the space shuttle Endeavour, ready on the launch pad Feb. 6, 2010, just days before its launch from Kennedy Space Center in Cape Canaveral, Florida. Image Credit: NASA/Bill Ingalls
The reuse of LC-39A is part of NASA's work to transform the Kennedy Space Center into a 21st century launch complex capable of supporting both government and commercial users. Kennedy is having success attracting significant private sector interest in its unique facilities. The center is hard at work assembling NASA's Orion spacecraft and preparing its infrastructure for the Space Launch System rocket, which will launch from LC-39B and take American astronauts into deep space, including to an asteroid and Mars.
NASA made the selection decision Thursday after the U.S. Government Accountability Office (GAO) denied a protest filed against the Agency by Blue Origin LLC on Sept. 13. In its protest, Blue Origin raised concerns about the competitive process NASA was using to try to secure a potential commercial partner or partners to lease and use LC-39A. Blue Origin had argued the language in the Announcement for Proposals (AFP) favored one proposed use of LC-39A over others. The GAO disagreed.
While the GAO protest was underway, NASA was prohibited from selecting a commercial partner for LC-39A from among the proposals submitted in response to the agency's AFP that had been issued on May 23. However, while the GAO considered the protest, NASA continued evaluating the proposals in order to be prepared to make a selection when permitted to do so. After the GAO rendered its decision Thursday in NASA's favor, the agency completed its evaluation and selection process.
NASA notified all proposers on Friday of its selection decision concerning LC-39A. Further details about NASA's decision will be provided to each proposer when NASA furnishes the source selection statement to the proposers. In addition, NASA will offer each the opportunity to meet to discuss NASA's findings related to the proposer's individual proposal. NASA will release the source selection statement to the public once each proposer has been consulted to ensure that any proprietary information has been appropriately redacted.
NASA will begin working with SpaceX to negotiate the terms of its lease for LC-39A. During those ongoing negotiations, NASA will not be able to discuss details of the pending lease agreement.
Since the late 1960s, Kennedy's launch pads 39 A and B have served as the starting point for America's most significant human spaceflight endeavors — Apollo, Skylab, Apollo-Soyuz and all 135 space shuttle missions. LC-39A is the pad from which Apollo 11 lifted off for the first manned moon landing in 1969, and from which the first space shuttle mission launched in 1981 and the last in 2011.
科技
2016-40/3983/en_head.json.gz/6402
October 10, 2012 Newsletter
Steve's Digicams Newsletter: October 10th 2012
Canon EOS Rebel T4i Review: Over the last few years, Canon's EOS Rebel series of entry level DSLRs has become a fertile market for those wishing to make the jump to DSLR while remaining budget conscious. For 2012, Canon has announced the T4i, described as their flagship entry level model. Previous models, the T3i and T3, will remain in production as more affordable options. The T2i is slated to be discontinued (so look out for steep discounts). On the surface, the T4i boasts similar specs to the T3i -- 18 megapixels, vari-angle display, 5 frames per second burst shooting, Full 1080p HD video recording (in 24 or 30 frames per second), and compatibility with all Canon EF and EF-S lenses. But there's some new tech under this new Rebel's hood... Continue Reading
Olympus TG-1 iHS Review; Canon EF-S 18-135mm f/3.5-5.6 IS STM Review; Panasonic LUMIX DMC-LX7 Review
September 29th Winner: Patterns by Teri Moyer (Canon 5D). "I love the repeating pattern with this pagoda. It was taken on a mountain overlooking Reading, PA which is a surprise to most folks." Click Here to See Today's Photo of the Day Winner!
Are you a Steve's Fan on Facebook? Steve's Digicams now has over 5,575 awesome fans around the world and we'd love for you to join our social network. It's a fun place to see high resolution Photo of the Day albums, interact with other shutterbugs, post your own photos, and read our latest articles or reviews!
Steve's Digicams Most Popular News Stories
Sakar Launches Ultra-Thin 1080p Polaroid Camcorders: Sakar International has announced the launch of a few new Polaroid branded 1080p camcorders. Of particular note is the iD820, which the company calls "one of the world's thinnest," measuring in at just a half-inch wide and weighing a mere four ounces. It also allows you to record both 1080p and standard definition video at the same time for quicker uploads. You can find it for $179. The new Polaroid iD879 is similar in capabilities, but the 5x optical zoom means it's going to be a little bit bulkier. It also offers two different memory slots - microSD and SD. The iD879 will also run you $179... Continue Reading
科技
2016-40/3983/en_head.json.gz/6473
If Megaupload users want their data, they're going to have to pay
The U.S. government says it doesn't have the data and isn't opposed to users retrieving it.
U.S. federal prosecutors are fine with Megaupload users recovering their data -- as long as they pay for it. The government's position was explained in a court filing on Friday concerning one of the many interesting side issues that has emerged from the shutdown of Megaupload, formerly one of the most highly trafficked file-sharing sites. Prosecutors were responding to a motion filed by the Electronic Frontier Foundation in late March on behalf of Kyle Goodwin, an Ohio-based sports reporter who used Megaupload legitimately for storing videos. Goodwin's hard drive crashed, and he lost access to the data he backed up on Megaupload when the site was shut down on Jan. 19 on criminal copyright infringement charges.
U.S. law allows for third parties who have an interest in forfeited property to make a claim. But the government argues that it only copied part of the Megaupload data and the physical servers were never seized. Megaupload's 1,103 servers -- which hold upwards of 28 petabytes of data -- are still held by Carpathia Hosting, the government said. "Access is not the issue -- if it was, Mr. Goodwin could simply hire a forensic expert to retrieve what he claims is his property and reimburse Carpathia for its associated costs," the response said. "The issue is that the process of identifying, copying, and returning Mr. Goodwin's data will be inordinately expensive, and Mr. Goodwin wants the government, or Megaupload, or Carpathia, or anyone other than himself, to bear the cost." The government also suggested that if Megaupload or Carpathia violated a term of service or contract, Goodwin could "sue Megaupload or Carpathia or recover his losses."
The issue of what to do with Megaupload's data has been hanging around for a while. Carpathia contends it costs US$9,000 a day to maintain. Megaupload's assets are frozen, so it has asked a court to make the DOJ pay for preserving the data, which may be needed for its defense. So far, the issue remains unresolved.
Meanwhile, Megaupload founder Kim Dotcom is free on bail, living in his rented home near Auckland and awaiting extradition proceedings to begin in August. Dotcom along with Finn Batato, Julius Bencko, Sven Echternach, Mathias Ortmann, Andrus Nomm and Bram Van Der Kolk are charged with criminal copyright infringement and money laundering. The men -- along with two companies -- are accused of collecting advertising and subscription fees from users for faster download speeds of material stored on Megaupload. Prosecutors allege the website and its operators collected US$175 million in criminal proceeds and caused more than US$500 million in damages to copyright holders.
Send news tips and comments to jeremy_kirk@idg.com
科技
2016-40/3983/en_head.json.gz/6561
Image caption: Space Nazis are coming… at least in an upcoming sci-fi film. (credit: IronSky.net)
"There's a war going on upstairs"
by Dwayne Day
Monday, February 15, 2010
"Ahem. OK, here's what we've got: the Rand Corporation, in conjunction with the saucer people… under the supervision of the reverse vampires… are forcing our parents to go to bed early in a fiendish plot to eliminate the meal of dinner. [whispers] We're through the looking glass, here, people…" – Milhouse van Houten, The Simpsons
When the new Obama space policy was unveiled a few weeks ago, even those familiar with American space policy could be forgiven for being a little confused. In Washington, budget is policy, and the new NASA budget had reflected a pretty complex new policy, including both budget increases and program cancellations and the creation of several new categories of research and development efforts. But the central feature was the cancellation of the Constellation program. The United States has made the decision to not even try venturing beyond low Earth orbit for the foreseeable future.
Hoagland has come up with a startling revelation… that Obama canceled the lunar program because (drum roll please): he was warned by Space Nazis.
Fortunately, it turns out that there is a good explanation for why Obama canceled the Constellation program. That explanation has been provided by Richard C. Hoagland. Hoagland, you may remember, is the person who discovered the lost city on Mars, and a bunch of giant invisible structures on the Moon that he asserts are the remains of alien civilizations. They're there, he says, but because they are invisible we have to trust him. I'm not making this up. Honest, I am not making this up.
Hoagland recently explained this all on the Coast to Coast AM radio program. It's also on his website. And this past weekend, for a fee, he explained it to a bunch of people at the "Conscious Life Expo" conference in Los Angeles.
It's a great story. According to Hoagland, Obama had been prepared to finally give the Constellation program the funding it required to return Americans to the Moon. But then, in December, a remarkable thing happened in the skies over Norway. Right before Obama visited Norway to receive his Nobel Peace Prize, the Russians launched a ballistic missile on a test. The missile sailed into the northern sky and then was stopped in mid-air, grabbed while it was going thousands of miles an hour. It was stopped by some kind of massively powerful mysterious device. And when it was stopped in midair, hundreds of people across Norway saw it, and some photographed it, seeing a weird spiral in the sky that was quickly labeled the "Norway Spiral."
This was a "double-whammy message" to both Obama and Russian leader Vladimir Putin, "that somebody has the power to stop us," Hoagland told Coast to Coast host George Noory. Who? Noory asked. "Them, out there!" Hoagland replied, "…the secret space program." The message was apparently that humanity needed to be "imprisoned" on Earth. Once Obama got the message, he immediately canceled the American lunar program. The secret space program is based on the Moon, and we're not supposed to go there.
Noory asked if this might have actually been extraterrestrial technology that stopped the Russian missile. Hoagland doesn't believe that's the case. He says that what is actually going on dates back to the last days of World War II.
As the Allies were closing in, some Nazi scientists took their best technology and fled the Earth, apparently leaving for the Moon, forming "a secret off-world civilization." There they set up shop and continued to develop their capabilities to the point where their technology is so advanced that they are practically god-like to us in their abilities. Halting the Russian missile is simply the most visible recent example of this, Hoagland said. "The physics are there and it's all about who is controlling it and what they intend for us." (My guess is that it's not going to be nice.)
But of course there's more going on than us mere mortals stuck on Earth can comprehend. Hoagland says that "there's a war going on upstairs." But it's not clear if the Space Nazis are battling each other—a Space Nazi Civil War—or if they're battling our government and we civilians don't know about it. Hoagland believes that terrorism, and events like the 9/11 attack on the World Trade Center, is the terrestrial manifestation of this war. Apparently there are members of our own government who are involved in battling these Space Nazis, but they have kept Obama in the dark. Hoagland referred to a comment made by Bill Clinton years ago that if the government had information on UFOs, Clinton wished somebody would tell him about it. "If you're going to freeze out the good ole boy from the South," Hoagland said, "what about the black guy from Chicago?"
Now one problem with Hoagland's theory is that it's contradicted by no less an authoritative source than the Space Frontier Foundation. According to SFF co-founder Bob Werb, people who supported the Ares I rocket are "national socialists," aka "Nazis." This is somewhat confusing. If the Space Nazis were running the Ares program, why did the Space Nazis on the Moon blackmail Obama into stopping their program? Now one interpretation is that Werb was simply using the timeworn tactic of labeling anybody he disagrees with as evil and calling them names (where's Godwin's Law when you need it?). But frankly, it seems like too much of a coincidence that Werb brought up Nazis at the exact same time that Hoagland says the Space Nazis are making their presence known. Clearly, both men are very concerned about the Space Nazi threat. We're through the looking glass here, people…
But maybe things will all become much clearer soon. Hoagland is a big proponent of the idea that Hollywood actually reveals secrets about the powers that control us as well as our future. According to Hoagland, George Lucas—again, I'm not making this up—modeled C3PO of his Star Wars movies on an alien sculpture on the Moon. The 1984 movie 2010 was billed as "the year we make contact" because Arthur C. Clarke (or maybe it was the director Peter Hyams) knew that we would ultimately make contact with space aliens in 2010 (keep your fingers crossed!). And soon we will be treated to a movie about Nazis on the Moon. Called Iron Sky, the film has been in the works for awhile now. The movie is being filmed in Finland. Finland! If you look on a map, you'll see that Finland actually shares a border with Norway—where the Norway Spiral was spotted back in December! Coincidence? I think not!
So maybe when the Space Nazis movie premieres, it will have some clues to the war that is waging upstairs. Maybe Hoagland and Werb will be in the front row, taking notes and looking for clues about American space policy.
Dwayne Day is a Free Mason and a shameless hack. He can be reached at zirconic1@cox.net.
科技
2016-40/3983/en_head.json.gz/6591
Ramco to take on SAP and Oracle in cloud business (Times of India)

MUMBAI: Virender Aggarwal, an ex-Satyam and HCL executive, is shaking things up at the Chennai-based Ramco Systems, where he joined as CEO about nine months ago. Ramco is one of India's few product companies and the only one with enterprise resource planning software. Aggarwal is attempting to position the technology company in the global market to challenge established giants like SAP and Oracle with its cloud product. This might seem far-fetched for a company that has a fraction of the clients and is racking up losses, but there are enough indicators to show there is a makeover in progress at Ramco.

"There is nothing to guarantee it will work out. But there is a tectonic shift in technology and we are at the right place at the right time," said Aggarwal, who quit his last assignment in search of more excitement. "It's a feeling of being there, done that. Outsourcing today is as interesting as producing cement. It's more headcount based." And with so much happening on the cloud, Aggarwal said he didn't want to miss the opportunity and hence joined Ramco Systems. Aggarwal is the first non-promoter CEO of the Chennai-based group that is also into textiles and cement.

Ramco has had a cloud-based ERP product for about four years, but the company was more engineering-focussed than user-focussed. Since his joining, Aggarwal has reoriented the company and the product from engineering-driven to usability-driven. Engineering-heavy presentations were replaced with demonstrations of how the product could solve user problems. "We poured a lot of money into our product and, ideally speaking, we should've been the SAP of India. We are not. Now, we need to get our rightful place in the global market," said Aggarwal, who also accelerated some of the R&D efforts while cutting down on research for the sake of it.

Ramco is among the few companies globally that offer ERP on the cloud. It integrates Google Maps, Google Location and Directions to represent data spatially instead of through spreadsheets. What this means is a manager can view dealer sales by region and right away see which regions are lagging or leading. "Earlier, we were suffering from a syndrome that if we haven't developed it, we will not offer it in our product. Now, we are saying, even if it is not developed here, if it is good and we can integrate it, we will do it," said Aggarwal. All its 3,000 screens run on iPad, and many modules have a Facebook-like interface. The company says it is getting as many as 20 enquiries per day from countries like Australia, New Zealand and the Scandinavian countries for its cloud-based model. Customers are even willing to be guided through the installation on Skype.

A lot of its marketing is digital, through Google Adwords and banner advertisements on websites, especially for the overseas audience, although traditional advertising on billboards and in trade magazines is also happening. If the company was spending $100 a day on Google Adwords, it is now spending $200,000 a day, he says. Overall, the company expects sales and marketing spends -- currently about 2-3% of revenue -- to increase to 10% of revenue in a year's time. The global benchmark for sales and marketing spends for product companies is 20-30%.

But how is this going to be funded? Ramco Systems had losses of Rs 8 crore on revenues of Rs 223 crore in fiscal 2012.
In the latest quarter, losses were Rs 12 crore, although down from the previous quarter. "We are focussed on revenue growth because in cloud business, revenue growth matters more than anything else, and we have a very short window before the big boys come in, so we have to occupy a space and build a brand name in that." The top 70 people in the company have taken pay cuts and after years of flying business class, Aggarwal now flies low-cost airlines. For his current trip, for instance, he flew Tiger Airways from Singapore, where he is based. The day the company becomes profitable, employees have been promised free food, as in Google. The company has moved from a direct-selling model to also selling through partners. Dell, Ingram Micro and NIIT Technologies are some of its partners. In the US, from where it aims to get at least half of its business, it has 20 partners. The US, Middle East and the Scandinavian countries will be key markets for Ramco, going ahead, said Aggarwal. "He (Aggarwal) is turning things around. Since he took over the role as CEO, his first priority was getting the right sales strategy and team in place. They are building mobility solutions for the clients to increase the touch and access points for them," added Gogia. (With inputs from Akanksha Prasad) (c) 2013 Bennett, Coleman & Company Limited
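The region-level sales view described above is, at its core, a simple roll-up of dealer transactions by geography. The sketch below is purely illustrative — it is not Ramco's code, and the data layout and threshold are made-up assumptions — but it shows the kind of aggregation that sits behind a "lagging vs. leading regions" dashboard:

from collections import defaultdict

# Hypothetical dealer transactions: (dealer, region, sales amount).
transactions = [
    ("D001", "North", 120_000), ("D002", "North", 95_000),
    ("D003", "South", 40_000),  ("D004", "East", 150_000),
    ("D005", "East", 60_000),   ("D006", "West", 30_000),
]

# Roll up sales by region.
totals = defaultdict(float)
for _dealer, region, amount in transactions:
    totals[region] += amount

# Flag regions below the average regional total as "lagging".
average = sum(totals.values()) / len(totals)
for region, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    status = "leading" if total >= average else "lagging"
    print(f"{region:<6} {total:>10,.0f}  {status}")

In a product like the one described, the same roll-up would be plotted on a map rather than printed, but the underlying grouping logic is the same.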
科技
2016-40/3983/en_head.json.gz/6742
Reaping Profits From Soundtracks
There's a Growing Business in Royalty-Free Music for Videos
By Hannah Karp (hannah.karp@wsj.com)

Video may have killed the radio star, but video—whether on TV or the Web—still needs a soundtrack. With video content proliferating, new models for supplying background music are taking root, with many trying to bring down the cost and avoid the complicated royalty-payment rules. Licensing music is typically an arduous, labor-intensive process that can involve sifting through libraries of songs, manually filling out forms and keeping lawyers on hand in case a rights holder feels wronged.

Enter companies like Epidemic Sound, a startup that removes royalties from the equation. The Stockholm-based company offers TV and video producers subscriptions to its library of 25,000 original musical tracks and sound effects for a monthly subscription fee—all royalty free. Think of it as a musical version of clip art. It is also starting to allow video creators on a budget to use its music at the rate of $1.35 per second, instead of requiring them to license an entire track for a small snippet. Epidemic Sound pays composers up front for songs in exchange for complete ownership, meaning its users don't need to make royalty payments. Composers get $100 to $1,000 a song, and up to hundreds of thousands for TV-show theme songs, Epidemic says. Co-founded in 2009 by Peer Astrom, who produces the music on the TV show "Glee," Epidemic Sound says it now provides 70% of the music broadcast on TV in Sweden and about half of the TV music across Scandinavia. And it now is expanding into the U.S.

Others venturing into this new territory include SourceAudio LLC of Los Angeles, which offers businesses subscriptions to existing music libraries for fixed monthly rates, and ScoreAScore LLC, also based in Los Angeles, which allows video producers to post projects for a desired price and suggests composers for the job. Sony Corp.'s Sony/ATV Music Publishing and film composer Hans Zimmer opened Bleeding Fingers Custom Music Shop earlier this year, offering scores for lower-budget productions like reality-TV shows.

Alicen Schneider, vice president of Music Creative Services at NBCUniversal's television unit, said that because production has ramped up but budgets have stagnated or declined, "We've had to get more creative in where we get music because we can't afford to get it from the major labels anymore." Currently NBCUniversal is producing 50 shows, she said, about five times more than five years ago. To cope with the time and money crunch, she said she uses name-your-price services like ScoreAScore, which streamline the licensing process and deliver a selection of custom or existing music in a matter of hours that she can buy for one lump-sum payment with full permission from rights holders. "We tell them what we have to spend and they pre-clear everything," said Ms. Schneider. "Within 12 hours you have 15 to 20 things you can listen to."

Peter Gannon, an executive music producer at ad agency McCann Worldgroup, said that with record sales down, artists are eager to find new revenue streams. Advertisers still commission artists to create about 30% of the music they use, Mr. Gannon said, but commercials increasingly use music that already exists. In the U.S., most networks and studios have licenses that allow performing-rights societies to collect publishing royalties on behalf of their songwriters and composers when their work is aired.
But many artists, particularly younger ones, would rather get money upfront than collect tiny payments over years and years. Epidemic Sound doesn't hire composers registered with performing-rights societies like Ascap and BMI, but many other services do. Most of the new music businesses aren't completely royalty-free on the publishing side, only on the recorded-music side.

Maker Studios Inc., a Los Angeles-based producer of YouTube videos, recently signed a deal for unlimited access to Epidemic Sound's royalty-free catalog. "We plan to use as much content from Epidemic Sound as possible," said Maker Studios Chief Operating Officer Courtney Holt, adding that the company's demand for music has increased sharply since its launch four years ago. Maker Studios produces a wide range of videos spanning fashion, news, videogames, food, comedy and music, including many videos of amateurs performing covers of existing songs. Epidemic and Maker Studios declined to disclose terms of the deal.

Royalties have presented a problem for Maker Studios in the past. The National Music Publishers Association alleged that the studio had used songs for several years without paying or getting permission. The two sides are in settlement talks. Maker will also continue to license music from Vivendi SA's Universal Music Group, after inking a deal this year, but with Epidemic, royalty payments won't be a concern.

The recording industry last year made $337 million world-wide from licensing music to TV, video and movie productions—known in the industry as "synchronization licenses"—up from $310 million in 2010, according to the International Federation of the Phonographic Industry. Recording artists and advertisers these days lean heavily on such "sync" use, evidenced by hits such as Justin Timberlake's "Suit and Tie" and Lady Gaga's "Applause" showing up in commercials when they are still on the Billboard charts. For music publishers, which control the rights to compositions including melodies and lyrics, sync-use revenue has doubled as a percentage of total revenue over the past decade to about one-third, but mostly because record sales have declined. David Israelite, president of the National Music Publishers Association, says he expects such revenue to start rising in absolute terms as social-media outlets like Facebook and Twitter add more video advertising that includes music. "We're very bullish on sync licensing—there's only going to be more media in the future and much of it will require music," said Geoff Grotz, CEO of SourceAudio.

Epidemic Sound's CEO Oscar Höglund said the company received tens of thousands of applications from composers around the world and has employed about 200 of them, mostly from Sweden and the U.S. Clients, who subscribe for up to $100,000 a month depending on how much music they expect to use, can request playlists of suggested tunes for their particular needs, or search the library by mood, style or other key terms. Gavin Luke, a 36-year-old musician in Minneapolis, started writing music for Epidemic Sound in 2010, enticed by the prospect of getting paid upfront instead of waiting for royalty checks. He said he typically earns $290 per piece and can write about three a day, for projects that have ranged from Swedish cooking programs to the Swedish version of the "Survivor" reality-TV show.
Write to Hannah Karp at hannah.karp@wsj.com

Corrections & Amplifications: Epidemic Sound said it has employed about 200 of the composers who applied and that composers get up to hundreds of thousands of dollars for a TV-show theme song. An earlier version of this article incorrectly said that Epidemic Sound has employed 3,000 composers and pays up to millions for a TV-show theme song.
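For a sense of what the per-second pricing described above means in practice, here is a minimal illustrative calculation. Only the $1.35-per-second rate comes from the article; the flat full-track fee used for comparison is a hypothetical placeholder, since the article does not quote one:

# Illustrative comparison of per-second licensing vs. a flat full-track fee.
# Only the $1.35/second rate comes from the article; the flat fee is assumed.

PER_SECOND_RATE = 1.35        # dollars per second (Epidemic Sound, per the article)
FLAT_TRACK_FEE = 500.00       # hypothetical flat fee for licensing a whole track

for clip_seconds in (10, 30, 60, 180):
    per_second_cost = PER_SECOND_RATE * clip_seconds
    cheaper = "per-second" if per_second_cost < FLAT_TRACK_FEE else "flat fee"
    print(f"{clip_seconds:>3} s clip: ${per_second_cost:7.2f} per-second "
          f"vs ${FLAT_TRACK_FEE:.2f} flat -> cheaper option: {cheaper}")

# Clip length at which the two pricing models cost the same.
break_even_seconds = FLAT_TRACK_FEE / PER_SECOND_RATE
print(f"Break-even clip length: ~{break_even_seconds:.0f} seconds")

Under these assumed numbers, per-second pricing wins for short snippets and the flat fee only starts to look better for clips longer than roughly six minutes.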
科技
2016-40/3983/en_head.json.gz/6778
Statement on Reductions of Programs at NCAR
August 8, 2008

Photo: Richard A. Anthes, UCAR President. (©UCAR, photo by Carlye Calvin.)

BOULDER—The National Center for Atmospheric Research, like many universities and other research institutions in the United States, continues to face extraordinary budget pressures, due to the decreases in real terms in federal funding for science. Over the past five years NCAR has had to lay off approximately 55 people and has lost another 77 positions due to attrition, totaling roughly 16% of NCAR positions, because of sub-inflationary NSF funding and decreases in other agency support. NSF and all of the government agencies that support science have faced similar budgetary stringencies. Over the past five years we have had to make painful cuts in all areas of our scientific and facilities programs—including climate, weather, atmospheric chemistry, solar physics and certain computational and observational facilities and services. In addition, this year we have postponed all UCAR and NCAR raises from FY08 to FY09.

Despite these negative impacts, NCAR, together with the UCAR Board of Trustees and the National Science Foundation, has worked to support important priorities in computing, observing facilities, modeling, and other areas that are essential for a national center. Unfortunately, this year we are projecting a shortfall of $8 million (about 10%) in our NSF base budget and must plan for a worst-case shortfall next year of roughly $10 million.

Photo: Eric Barron, NCAR Director. (©UCAR, photo by Carlye Calvin.)

NCAR is continuing to take actions to address this shortfall, including reducing the NCAR director's reserve to zero for the coming year, reducing the number of administrative positions, and making additional program reductions. On August 4 we made the difficult decision to close the Center for Capacity Building. This action was not taken lightly, but the budget shortfalls are so severe that every additional budget cut impacts high-quality work in some area of NCAR's contributions to society. We are working to increase the resources available to the community, including NCAR, and we remain committed to a scientific program at NCAR that integrates societal needs with research in the atmospheric and related sciences. UCAR, along with other partners including the American Meteorological Society, is developing transition materials for the next administration and Congress on what our nation needs to become more resilient to severe weather and climate change impacts, and these will be available soon. We urge the community to support our efforts to reverse the prolonged decline in science budgets in the United States.

—Rick Anthes, UCAR President
—Eric Barron, NCAR Director

The University Corporation for Atmospheric Research manages the National Center for Atmospheric Research under sponsorship by the National Science Foundation. Any opinions, findings and conclusions, or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Contacts for This Release: David Hosansky, Head of Media Relations, 303-497-8611; Jeff Smith, Media Relations, 303-497-2679
National Center for Atmospheric Research | University Corporation for Atmospheric Research
http://www2.ucar.edu/atmosnews/news/927/statement-reductions-programs-ncar
科技
2016-40/3983/en_head.json.gz/6805
Small Business Innovation Research on the Road SBIR, SBIR Bus Tour, Small Business Innovation Research 0 Comments By Paul Shapiro EPA’s Paul Shapiro with SBIR representatives from other federal agencies at the University of South Dakota’s SBIR Center. Working on EPA’s Small Business Innovation Research (SBIR) program, I feel like I am part of something that is quintessentially American. Americans are known for using their ingenuity to solve important problems and for developing innovative technologies to do so. Whether in agriculture, industry, defense, space, or the environment, Americans have created small businesses to put such technologies into practice. But starting a successful technology company is challenging. The SBIR program is a great resource for innovators. It gives them funds—with no strings attached—to move their technologies through the early stages of development and into commercialization. I recently participated in an SBIR bus tour with representatives from the ten other federal agencies that have SBIR programs. We went across the Mid-West to get the word out about SBIR. I felt like I was doing something else that is part of American lore—prospecting. Yes, mining for golden nuggets of inventiveness and diamond gems of business acumen. Last year’s Road Tour was a success—I connected with people who ended up proposing projects to our program. So this year I again went to spread the word about SBIR to states from which we have not had many SBIR proposals. In Grand Forks, ND, Sioux Falls, SD, Ames, IA, St. Louis, MO, and Indianapolis, IN, I got a chance to present EPA’s SBIR program and talk one-on-one with people about their technologies. I told them we want high-risk, high-reward projects that can “disrupt” the marketplace by providing better performing, safer, and less costly technologies than those that are currently being used. To make the risk manageable, however, we evaluate proposed projects equally on commercial and technical soundness. I told them that their success will be our success, since they will help us achieve our mission of protecting human health and the environment. Since start-ups need all the support they can get, another thing I find inspiring on these trips is meeting the dedicated people in the state agencies, universities, entrepreneurial centers, and consulting firms who host these events and provide supportive services all year long to the innovators in their area. It was great hearing entrepreneurs describe their ideas and so rewarding to help them see new opportunities—like the person I approached on one of the tour stops. He got up in the larger group to describe a material with remarkable properties he developed for defense purposes. I showed him that we have an environmental topic in our 2016 solicitation that would be a great fit for his technology. One more nugget for our program! The timing of this tour was brilliant, because our annual Phase I solicitation just opened and will remain open until Oct. 20, 2016. To propose a project, go to the EPA SBIR page. About the Author: Paul Shapiro is a Senior Environmental Engineer with the EPA Small Business Innovation Research (SBIR) Program. He has worked for many years on the development and commercialization of innovative environmental technologies. He has helped solve a wide range of environmental problems, usually in collaboration with public and private sector stakeholders. 
This Week in EPA Science Ecovative, Lucid Technologies, research recap, SBIR, Society of Toxicology 0 Comments By Kacey Fitzpatrick It’s March! Spring is right around the corner, though you wouldn’t know it from the snow on the ground here in Washington DC. Here’s something to read while you wait for it to get a bit warmer out. Funding Small Businesses to Develop Environmental Technologies This week EPA announced eight contracts to small businesses to develop innovative technologies to protect the environment through EPA’s Small Business Innovation Research (SBIR) Program. The Agency is one of eleven federal agencies that participate in the program, established by the Small Business Innovation Development Act of 1982. The Advance of Lucid and the Building Dashboard One of our SBIR recipients is Lucid. The company got its start as a student team competing in EPA’s People, Prosperity, and the Planet (P3) grant competition. In 2005, the team won a P3 grant for their prototype, the Building Dashboard, which tracks how much energy and water is being used in a building and provides visual insights that can influence occupants to change their habits. Read their success story in the blog From Oberlin to Oakland: The Advance of Lucid and BuildingOS. Furniture Giant Considers Switch to Green Packaging Another SBIR company made big news when the furniture giant Ikea announced that it is considering replacing polystyrene packing with a biodegradable, fungus-based, green alternative produced by Ecovative, a growing small business that in large part got its start from an EPA SBIR contract. Ecovative’s innovative process of utilizing mycelium, the vegetative growth stage of fungi, to commercialize custom molded protective packaging is proving that the development of sustainable products can spark economic growth. Read more in the article Ikea considers deal with Green Island-based Ecovative Design. Society of Toxicology Annual Meeting Attending the Society of Toxicology annual meeting this year? So are we! Advances in EPA’s toxicology research will be featured at sessions, symposia, workshops, platform discussions, informational sessions, poster sessions, and at EPA’s booth in the ToxExpo exhibit hall. Find more information on EPA’s Society of Toxicology page. About the Author: Kacey Fitzpatrick is a student contractor and writer working with the science communication team in EPA’s Office of Research and Development. research recap, SBIR, Shark Tank, Water Security 1 Comment ‘Twas the day before Christmas, and all through the Agency, Our researchers were working, so much discovery! Is there one place, where all this can be found? One science review, no looking around? Here’s my present to you, no need to unwrap. Right here on this blog, your Research Recap! Swimming with the Sharks Through Small Business Innovation Research contracts, EPA helps many great, environmentally-minded business ventures with potential, get the funding they need to get started. Read about some of our success stories—one of which was recently on the show Shark Tank—in the blog Swimming with the Sharks. EPA Researchers Share Chemical Knowledge after Contamination Scare In September, people living and working near an Australian air force base were warned that elevated levels of the chemicals Perfluorooctane Sulfonate and Perfluorooctanoic Acid had been detected in the surrounding area. 
EPA researchers Chris Lau and John Rogers were recently interviewed by the Australian Broadcasting Corporation about their expertise in these chemicals. Read about their insights in the article US scientists reveal further detail about chemicals at heart of Williamtown RAAF contamination. EPA is responsible for working with water utilities to protect water systems from contamination and to clean up systems that become contaminated. These systems can be contaminated by, for example, natural disasters such as Superstorm Sandy or by individuals hoping to cause harm. To help address these science gaps, EPA researchers have developed the first-of-its-scale Water Security Test Bed. Watch the video EPA and Idaho National Laboratory create first-of-its-scale Water Security Test Bed and learn more about our Homeland Security Research. If you have any comments or questions about what I share or about the week’s events, please submit them below in the comments section! On the Road from Cajun Country to the Heartland to Seed Small Business Innovation Research America's Largest Seed Fund, innovation, SBIR, Small Business Innovation Research 0 Comments By Greg Lank On our “Seeding America’s Future Innovations” tour In April, I had the pleasure of representing EPA on a bus tour during the second leg of “Seeding America’s Future Innovations,” a national effort to spread the word about the Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) programs. The two programs are coordinated by the Small Business Administration and administered by EPA and 10 other federal agencies. Together—“America’s Largest Seed Fund”—they provide $2.5 billion of contracts and other awards to small, advanced technology firms to spur discoveries and facilitate the commercialization of innovations. We traveled from the Cajun country of Long Beach, Mississippi and Ruston, Louisiana through Texas and into the heartland, including Oklahoma City, Oklahoma, Wichita, Kansas and finally Columbia, Missouri. At every stop, each representative shared an overview of their agency’s SBIR program, including existing opportunities and exciting success stories of now thriving businesses have come out of the program. Following the presentations, companies had the rest of the morning to sit down with representatives from the SBIR program of their choice for one-on-one meetings and to get answers to their questions. The primary question that every company asked me was if their technology would fit into one of EPA’s SBIR topic areas. And I learned that there is broad interest in water resources and energy recovery—exciting topics where innovation can lead to the recovery and reuse of resources that are presently lost in the waste stream. Everyone was humbled and honored to pay their respects at The Oklahoma City National Memorial In between locations the Road Tour stopped at the Oklahoma City National Memorial and the National Institute of Aviation Research (NIAR). Everyone was humbled and honored to pay their respects at The Oklahoma City National Memorial, which honors the victims, survivors, rescuers, and others affected by the Oklahoma City bombing on April 19, 1995. At NIAR, I was fascinated to see the testing that goes into making air travel safe globally. Each packed-house tour stop proved to be a phenomenal platform to collaborate, educate and learn. Collaboration occurred between federal agencies, academia and innovators. Finally, all who attended functioned as educators and students. 
Not only were we able to educate the attendees about our programs, but meeting them provided us with the opportunity to learn about the exciting innovations coming down the pike from our Nation’s best and brightest. The next tour will be the north central tour from July 13-18. That will be followed the final tour, August 17-21 through the Pacific Northwest. To learn more about EPA’s SBIR program, visit www.epa.gov/ncer/sbir. About the Author: Greg Lank is a mechanical engineer in EPA’s Office of Research and Development. He manages grants and contracts for the SBIR and People, Prosperity and the Planet (P3) programs, which facilitate the research, development and deployment of sustainability innovations. On a Mission: Finding Life Cycle Environmental Solutions Cambrian Innovation, Ecovative, EPA SBIR, innovation, SBIR 0 Comments A blog post by April Richards and Mary Wigginton highlighting EPA’s Small Business Innovation Research program–“the small program with the big mission”–was recently posted by the U.S. Small Business Administration. A portion is reposted below. Read about EPA-supported innovative companies and their products, such as environmentally-friendly packaging (pictured), in the SBA blog post. We often describe the U.S. Environmental Protection Agency’s (EPA) Small Business Innovation Research (SBIR) program as the small program with the big mission, to protect human health and the environment. The mission is big and the areas of focus are broad: air, water, climate change, waste and manufacturing. We strive to promote “greening” it all. The President’s budget calls to equip the EPA with the best scientific information and research to underpin its regulatory actions and helps the agency find the most sustainable solutions for the wide range of environmental challenges facing the United States today. It supports high-priority research in such areas as air quality, sustainable approaches to environmental protection, and safe drinking water. Through the years, the EPA SBIR program has supported advances in green technologies such as state-of-the-art monitoring devices and pollution clean-up systems and processes. Recently though, we have expanded to support companies whose ideas are launched from a foundation of life cycle assessment (LCA). This proactive approach means solving an environmental problem in a way that takes into account resources, feedstock, emissions, toxicity and waste. While clean-up, containment systems, and other “end-of-pipe technologies” are still important for managing pollution and potential contaminants after they have been produced, we want to foster game-changers that reduce or eliminate their production in the first place. Read the rest of the blog. Editor's Note: The opinions expressed here are those of the author. They do not reflect EPA policy, endorsement, or action. Please share this post. However, please don't change the title or the content. If you do make changes, don't attribute the edited title or content to EPA or the author. Local Water Woes, No More? Advancing Safe Drinking Water Technology arsenic, Arsenic in drinking water, DC Water, District of Columbia Water and Sewer Authority, EPA People Prosperity and the Planet (P3) award, EPA’s Small Business Innovation Research Program, SBIR, SimpleWater 2 Comments By Ryann A. Williams The SimpleWater company got their start as an EPA P3 team. As a child growing up in Washington, D.C. I remember hearing adults talk about their concerns about the local tap water. 
Overheard conversations about lead content and murkiness in the water certainly got my attention. As an adult who now works at the Environmental Protection Agency, I know things have greatly improved. Today, DC tap water is among the least of my concerns. I drink it every day. Frequent testing to confirm its safety and public awareness campaigns by DC Water (the District of Columbia Water and Sewer Authority) have put my own worries to rest. But in other parts of the world and even in some areas of the U.S., people still have a reason to worry about their drinking water: arsenic. Globally, millions of people are exposed to arsenic via drinking water and can suffer serious adverse health effects from prolonged exposure. This is especially true in Bangladesh where it is considered a public health emergency. Other countries where drinking water can contain unsafe levels of arsenic include Argentina, Chile, Mexico, China, Hungary, Cambodia, Vietnam, and West Bengal (India). In addition, parts of the U.S. served by private wells or small drinking water systems also face risks due to arsenic in their drinking water. Remedies are expensive and both energy- and chemical-intensive. In 2007, a student team from the University of California, Berkeley won an EPA People, Prosperity and the Planet (P3) award for their research project aiming to help change that. Explaining the arsenic removal project. The students set out to test a cost-effective, self-cleaning, and sustainable arsenic-removal technology that employs a simple electric current. The current charges iron particles that attract and hold on to arsenic, and are then removed by filter or settle out of the water. By the end of their P3 funding in 2010, promising results had allowed the team to extend their field testing to Cambodia and India, and move forward with the licensing and marketing of their product to interested companies in Bangladesh and India. Today, the same group of former Berkeley students who formed the P3 team now own a company called SimpleWater. SimpleWater is among 21 companies that recently received a Phase One contract from EPA’s Small Business Innovation Research Program. SimpleWater aims to commercialize their product and bring their track record of success in Bangladesh and India to help Americans who may be at risk from arsenic exposure in their drinking water. In particular they’re focusing on those who live in arsenic-prone areas and whose drinking water is served by private wells or small community water systems that test positive for elevated arsenic levels. (Learn more about Arsenic in Drinking Water and what to do if you think testing is needed for your water.) Thanks to EPA support, SimpleWater is working to reduce the threat of arsenic in small drinking water systems and private wells. With their help, millions of people may soon feel safer about their drinking water, and like me, have one less big thing to worry about. About the Author: Ryann Williams is a student services contractor with the communications team at EPA’s National Center for Environmental Research. When she’s not working with the team, she enjoys other team activities like soccer and football. Saving Energy and Money: Go Team Go! Building Dashboard, DC Green Schools Challenge, Environmental Fuel Research, EPA SBIR, EPA’s Small Business Innovative Research, Lucid Technologies, SBIR, SimpleWater, Sprint to Savings 1 Comment By Lek Kadeli Spirited competition between local schools is a time honored tradition. 
From the football and soccer teams to the debate club, nothing beats taking on your arch rival to spark school spirit, get the neighbors talking, and build community pride. That spirit of competition has helped schools here in the District of Columbia save more than 76,000 kilowatt-hours of electricity, thanks to Lucid—an EPA-supported small business started by previous winners of the agency’s People, Prosperity and the Planet (P3) award. The schools vied to see which could most dramatically reduce their energy consumption as part of the three-week “Sprint to Savings” competition. The DC Green Schools Challenge set up the competition to help schools conserve energy and save money while “engaging students in real-world learning opportunities.” It is managed by the the District of Columbia, Department of General Service (www.dgs.dc.gov). To monitor their progress and take action, students used Lucid’s “Building Dashboard,” a software program that monitors a building’s energy and water consumption in real time and presents that information in easy-to-understand graphic displays on computer screens or other devices. Students were able to use Building Dashboard installed at their schools to gauge their progress in 15-minute intervals and help the school take corrective action, such as switching lights off when not needed, shutting down unused computers and monitors, and turning the heat down after hours. A District-wide leader board helped them keep an eye on the competition. Interactive Building Dashboard The idea for a data monitoring display system begin when the now principal partners of Lucid Technology were students at Oberlin College. In 2005, their prototype won an EPA P3 Award. The P3 program is an annual student design competition that supports undergraduate and graduate student teams to research and design innovative, sustainable methods and products that solve complex environmental problems. Since then, there’s been no looking back! Today, we are thrilled to announce that Lucid is among 20 other small businesses—including two other former P3 winners—selected to receive funding as part of the EPA’s Small Business Innovative Research (SBIR) program. The program was designed to support small businesses in the commercialization as well as the research and development of technologies that encourage sustainability, protect human health and the environment, and foster a healthy future. Environmental Fuel Research, LLC, and SimpleWater, LLC are the other two former P3 winning teams. Thanks to Lucid, Environmental Fuel Research, LLC, SimpleWater, LLC and the other innovative small businesses we are supporting today, winning ideas are bringing products to the marketplace that protect our environment while sparking economic growth. I’ll bet that even arch rivals can agree that’s a win for everyone. About the Author: Lek Kadeli is the Acting Assistant Administrator in the Agency’s Office of Research and Development. Rethinking Wastewater Around the Water Cooler, Cambrian Innovation, Department of Defense, DOD, EcoVolt, NASA, National Aeronautics and Space Administration, National Science Foundation, NSF, SBIR, Small Business Innovation Research, US Department of Agriculture, USDA, Wastewater 0 Comments By Marguerite Huber The next time you enjoy a beer you might be helping the environment. The next time you enjoy a cold, refreshing beer or glass of wine, you might also be helping the environment. 
Over 40 billion gallons of wastewater are produced every day in the United States, and wineries, breweries, and other food and beverage producers are significant contributors. For example, the brewing industry averages five or six barrels of water to produce just one barrel of beer. But where most see only waste, others see potential resources. What we label “wastewater” can contain a wealth of compounds and microbes, some of which can be harvested. One innovative company that has recognized this, Cambrian Innovation, is harnessing wastewater’s potential through the world’s first bioelectrically-enhanced, wastewater-to-energy systems, EcoVolt. (We first blogged about them in 2012.) Cambrian Innovation is working with Bear Republic Brewing Company, one of the largest craft breweries in the United States. Located in California, which is suffering from severe drought, Bear Republic first began testing Cambrian’s technology to save water and reduce energy costs. Fifty percent of the brewery’s electricity and more than twenty percent of its heat needs could be generated with EcoVolt. Compared to industry averages, Bear Republic uses only three and a half barrels of water to produce one barrel of beer. The EcoVolt bioelectric wastewater treatment system leverages a process called “electromethanogenesis,” in which electrically-active organisms convert carbon dioxide and electricity into methane, a gas used to power generators. The methane is renewable and can provide an energy source to the facility. Rather than being energy intensive and expensive, like traditional wastewater treatment, Cambrian’s technology generates electricity as well as cost savings. Furthermore, the EcoVolt technology is capable of automated, remote operation, which can further decrease operating costs. EPA first awarded Cambrian Innovation a Phase I (“proof of concept”) Small Business Innovation Research contract in 2010. Based on that work, the company then earned a Phase II contract in 2012 to develop wastewater-to-energy technology. Cambrian Innovation has also developed innovative solutions with funding from other partners, including the National Science Foundation, National Aeronautics and Space Administration, Department of Defense, and U.S. Department of Agriculture. With access to water sources becoming more of a challenge in many areas of the country, Cambrian’s technology can help change how we look at wastewater. It doesn’t have to be waste! Wastewater can instead be an asset, but only as long as we keep pushing its potential. That can make enjoying a cold glass of your favorite beverage even easier to enjoy! About the Author: Marguerite Huber is a Student Contractor with EPA’s Science Communications Team. Waste to Value: EPA’s Role in Advancing Science and Business Around the Water Cooler, Bactobot, innovation, Pilus Energy, SBIR, Small Business Innovation Research, Tauriga, Wastewater 4 Comments Electrogenic bioreactor containing “Bactobots” and wastewater. In case you missed it in the news, a New-York-based micro-robotics firm, Tauriga, acquired Cincinnati-based Pilus Energy last month. In the business world, acquisitions and mergers happen all the time, but I bet you are wondering what makes this one significant to the EPA? Tauriga CEO, Seth M. Shaw describes Pilus Energy’s technology as “extraordinary.” What makes it so is that Pilus Energy operates with the goal of turning waste into value, turning sewage into electricity to power approximately 275 million homes a year! 
Their innovative technology claims to transform dirty wastewater into electricity, as well as clean water and other valuable biogases and chemicals. The secret to this venture is the help of genetically enhanced bacteria, given the more affectionate name of "Bactobots." "Essentially we are mining wastewater for valuable resources similarly to gold mining companies mining ore for gold," Shaw confides.

Now this is where the EPA comes in. Dr. Vasudevan Namboodiri, an EPA scientist with 20 years of research and development experience, explains that EPA and Pilus are investigating the potential for Pilus Energy technology in the water industry. With EPA's technical oversight, Pilus Energy's goal is to eventually build an industrial pilot-scale prototype. This type of technology is still in its infancy and will be many years away from large-scale production, Dr. Namboodiri explained. Large-scale usage of the technology could possibly be revolutionary and provide great benefits in the future. Tauriga CEO Shaw notes that "There is an enormous global need to maximize all resources available, due to population growth and energy costs." If applied to whole communities in both developing and developed countries, there could be major benefits such as:

Reduced wastewater treatment costs
Creation of a renewable energy source
Valuable chemical byproducts that could be used towards renewable products
Higher quality water for both drinking and recreation
Healthier food due to fewer contaminants in soil
Improved ecosystem benefits or services and biodiversity if applied in an entire watershed

Even though the large-scale benefits will likely not be seen until years from now, the partnership between Pilus Energy and the EPA helps support EPA's mission of protecting human health and the environment.

About the Author: Marguerite Huber is a Student Services Contractor with EPA's Science Communications Team.

Editor's Note: The opinions expressed here are those of the author. They do not reflect EPA policy, endorsement, or action. Please share this post. However, please don't change the title or the content. If you do make changes, don't attribute the edited title or content to EPA or the author.

Sister Blog: Small Business Innovation is Mushrooming compost, Ecovative, EPA Connect, Great Pacific Garbage Patch, innovation, SBIR, Small Business Innovative Research, styrene, sustainable packaging, waste disposal 0 Comments

EPA Connect, the official blog of EPA's leadership, recently shared a post featuring Ecovative, one of our favorite success stories!

Small Business Innovation is Mushrooming
By Judith Enck

Sometimes I worry that one of the enduring manmade wonders of our time will be the Great Pacific Garbage Patch. You know the Garbage Patch – the huge concentration of marine debris (mostly plastics) floating in the Pacific Ocean. It may still be there centuries from now. I wonder if a thousand years from now, tourists will visit the Garbage Patch the way we do the Roman Coliseum or the Pyramids. They'll take pictures and stand there with their mouths agape wondering "how could they let this happen?" Personally, I'm hopeful we can reduce the "greatness" of the garbage patch – and solve many of our other waste disposal problems – by reducing packaging or at least making it more sustainable.
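Referring back to the water figures quoted in the "Rethinking Wastewater" post above (five to six barrels of water per barrel of beer for the industry, versus about three and a half at Bear Republic), here is a minimal sketch of the water-savings arithmetic. The annual production volume is a made-up placeholder for illustration, not a figure from EPA or the brewery:

# Water-savings arithmetic for the brewing figures quoted above.
# Industry average: 5-6 barrels of water per barrel of beer (midpoint used).
# Bear Republic, per the post: ~3.5 barrels of water per barrel of beer.

INDUSTRY_RATIO = 5.5
BEAR_REPUBLIC_RATIO = 3.5
GALLONS_PER_BARREL = 31.0     # US beer barrel

# Hypothetical annual output, for illustration only.
annual_beer_barrels = 75_000

water_saved_per_barrel = INDUSTRY_RATIO - BEAR_REPUBLIC_RATIO
annual_water_saved = water_saved_per_barrel * annual_beer_barrels

print(f"Water saved per barrel of beer: {water_saved_per_barrel:.1f} barrels")
print(f"Hypothetical annual savings: {annual_water_saved:,.0f} barrels "
      f"(~{annual_water_saved * GALLONS_PER_BARREL:,.0f} gallons)")

The point is simply that shaving a couple of barrels of water off every barrel of beer adds up to millions of gallons a year for a brewery of any real size, which is why the drought angle matters as much as the energy one.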
科技
2016-40/3983/en_head.json.gz/6872
Mining for Heat Underground mining is a sweaty job, and not just because of the hard work it takes to haul ore: Mining tunnels fill with heat naturally emitted from the surrounding rock. A group of researchers from McGill University in Canada has taken a systematic look at how such heat might be put to use once mines are closed. They calculate that each kilometer of a typical deep underground mine could produce 150 kW of heat, enough to warm five to 10 Canadian households during off-peak times. A number of communities in Canada and Europe already use geothermal energy from abandoned mines. Noting these successful, site-specific applications, the McGill research team strove to develop a general model that could be used by engineers to predict the geothermal energy potential of other underground mines. In a paper accepted for publication in the American Institute of Physics' Journal of Renewable and Sustainable Energy, the researchers analyze the heat flow through mine tunnels flooded with water. In such situations, hot water from within the mine can be pumped to the surface, the heat extracted, and the cool water returned to the ground. For the system to be sustainable, heat must not be removed more quickly than it can be replenished by the surrounding rock. The team's model can be used to analyze the thermal behavior of a mine under different heat extraction scenarios. "Abandoned mines demand costly perpetual monitoring and remediating. Geothermal use of the mine will offset these costs and help the mining industry to become more sustainable," says Seyed Ali Ghoreishi Madiseh, lead author on the paper. The team estimates that up to one million Canadians could benefit from mine geothermal energy, with an even greater potential benefit for more densely populated countries such as Great Britain.
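To make the scale of those numbers concrete, here is a minimal back-of-the-envelope sketch — not from the McGill paper — of how the heat recoverable from a flooded mine can be estimated from the water flow rate and the temperature drop across a heat exchanger. The flow rate, temperature drop, and per-household heat demand below are illustrative assumptions; only the 150 kW-per-kilometer and five-to-ten-household figures come from the article above:

# Back-of-the-envelope estimate of heat recoverable from flooded-mine water.
# Assumptions (not from the McGill study): flow rate, temperature drop, and
# per-household heat demand are illustrative placeholder values.

WATER_DENSITY = 1000.0        # kg/m^3
SPECIFIC_HEAT = 4186.0        # J/(kg*K), liquid water

def thermal_power_kw(flow_l_per_s: float, delta_t_c: float) -> float:
    """Heat extracted (kW) when mine water is cooled by delta_t_c degrees C."""
    mass_flow = flow_l_per_s / 1000.0 * WATER_DENSITY       # kg/s
    return mass_flow * SPECIFIC_HEAT * delta_t_c / 1000.0   # W -> kW

# Example: pump 5 L/s of mine water and extract a 7 C temperature drop.
power_kw = thermal_power_kw(flow_l_per_s=5.0, delta_t_c=7.0)

# The article quotes ~150 kW per kilometer of tunnel serving 5-10 households
# off-peak, i.e. roughly 15-30 kW of heat per household.
households_low, households_high = 150.0 / 30.0, 150.0 / 15.0

print(f"Extracted heat: {power_kw:.0f} kW")          # ~147 kW with these inputs
print(f"Households served per 150 kW: {households_low:.0f}-{households_high:.0f}")

The sketch shows that the 150 kW figure corresponds to a fairly modest flow of warm mine water; the sustainability condition described in the article amounts to not cooling that water faster than the surrounding rock can reheat it.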
科技
2016-40/3983/en_head.json.gz/6876
EPA suspends IBM from new governmentwide work IBM was apparently blindsided by the Environmental Protection Agency’s move to suspend the company from pursuing new government work. A company spokesman said IBM learned of the suspension Friday. It then obtained a letter from EPA broadly outlining the allegations and spent most of Monday trying to gather more information, said Fred McNeese, an IBM spokesman. “Prior to Friday, there was no hint of any dispute or reason for this step,” McNeese said. The U.S. Attorney for the Eastern District of Virginia also served grand jury subpoenas on IBM and on certain employees Monday asking for testimony and documents regarding interactions between EPA employees and IBM employees, the company said in a statement. The company is cooperating with the investigation. The suspension apparently involves an $84 million EPA contract to modernize the agency’s financial management system that the CGI Group won in February 2007. IBM filed a protest of the award, which stopped work on the project, according to market research firm Input Inc. The Government Accountability Office still has the protest under review, according to Input. McNeese would not comment on the contract, but another source said the alleged improprieties involved both IBM and EPA employees. No one has been fired, the source said, because the investigation is still under way. In its statement, IBM said the temporary suspension was issued by EPA because of an investigation of possible violations of the “Procurement Integrity provisions of the Office of Federal Procurement Policy Act.” The company has 30 days to challenge the suspension and either have it lifted entirely or narrowed, McNeese said. The suspension only applies to new work and contract modifications and not to ongoing work, he said. A governmentwide suspension of a company, particularly a large one, is unusual, said Alan Chvotkin, senior vice president and counsel, for the Professional Services Council, an industry group. A suspension, which can last for up to 12 months, is usually confined to a specific contract or business unit of a company, he said. Chvotkin cited the example of Boeing Co., which received proprietary documents belonging to Lockheed Martin Corp. when both were competing for a satellite contract. Only the portion of Boeing that does that type of work was suspended, he said. A suspension also is usually a last resort and usually comes after there have been negotiations between the government and the contractor, he said. “They can suspend payment. They can terminate a contract,” Chvotkin said. “A suspension is one of the last remedies.” IBM ranks No. 18 on Washington Technology’s 2007 Top 100 list of the largest federal government prime contractors.Nick Wakeman writes for Washington Technology an 1105 Government Information Group publication. About the Author Nick Wakeman is the editor-in-chief of Washington Technology. Follow him on Twitter: @nick_wakeman.
科技
2016-40/3983/en_head.json.gz/6877
DOD Tech Cleared for takeoff: How pilot projects fast-track mobile tech By Amber CorrinApr 11, 2013 Ever seen a high-ranking military official or even an enlisted officer strutting through the Pentagon proudly brandishing a shiny new iPad? Until recently, that was an uncommon — and unofficial — sight. But times are changing as federal agencies break an enduring tradition that has put the government woefully behind technology's swiftly moving curve. Government buzzwords do not get much hotter than mobility. Inside and outside the Beltway, agencies are launching an array of pilot projects designed to test, prove and issue the smart phones and tablet PCs that have come to define life outside the office. Backed by emerging policies from as high as the White House and Office of Management and Budget, mobile pilot projects are taking off at unprecedented rates. Why do a mobile pilot? To get a feel for what works for your specific organization. A test run of mobile technology can help answer a variety of questions. For example, do employees need tablet PCs for taking notes or managing data in the field, or are smart phones enough for them to stay digitally connected at all times? Who truly needs a mobile device? Which operating system is best for specific networks? To keep up with new technology. Because they are limited deployments, pilot projects give agencies more leeway in terms of governance, funding and how they write the eventual contract. Furthermore, pilot projects can support spiral development and give agencies a chance to resolve certification and accreditation issues — one of the biggest roadblocks to rapid procurement. Once a mobile pilot project moves into production, those technology purchases are subject to acquisition regulations. To streamline activities enterprisewide. The Defense Department is using pilot projects to coordinate security requirements and interoperability, which is particularly important as the Pentagon seeks to consolidate IT systems. In the process, DOD is improving efficiency and saving resources, according to the Defense Information Systems Agency. To drive productivity and innovation. Given that the youngest members of the federal workforce have essentially been raised with technology, it is second nature for them to use mobile tools anytime, anywhere — including, for better or worse, for work purposes. Furthermore, pilot projects lend themselves to collaboration and input from team members — a process that fosters creative thinking and problem-solving. — Amber Corrin The test runs allow organizations to evaluate new mobile technologies unencumbered by the red tape and budget line items that IT acquisition typically comprises. They also help agency leaders get a better idea of requirements and what works for their respective offices. "Lessons learned from pilots reduce programmatic risk prior to committing to the execution of full production rollout, increase our chances for success on the first attempt, allow better allocation of increasingly constricted resources and enable reinvestment into the development of more productivity-enhancing solutions," said Lt. Col. Damien Pickart, a Defense Department spokesman. DOD might not be the first agency to launch mobile pilot projects, but it is arguably the most aggressive. Furthermore, officials used a mobility strategy released in June 2012 as a springboard for its latest milestone — an implementation plan with a framework and guidelines for expanding mobile device use at DOD, issued in February. 
According to CIO Teri Takai, pilot projects were crucial to the plan's development. "The 50 pilot programs we have out there today are really the basis for the strategy and for the implementation plan," she said. "Those pilots were a great way to get a view of what the needs were. Then the question is: How do you take care of those needs? How do you use those as an example in the future?" Takai's commitment to mobile pilot projects illustrates their importance in identifying requirements for introducing new technology, saving scarce money and improving productivity. "A lot of these pilot programs are being driven by the senior-most executives at agencies," said Jeff Ait, director of public-sector business at Good Technology, a company that has been involved in DOD's pilot projects. "General officers get iPads for Christmas, bring them in and want to carry this cool device instead of an 8-pound laptop." He added that officials often launch pilot projects for email and quickly find that users are hungry for a wide range of applications that can turn mobile devices into full-fledged productivity tools. Staying ahead of adversaries It is not always clear skies for pilot projects. Authorities sometimes get muddled, and the glacially slow processes for security accreditation and other formalities render some tools obsolete before they can be adopted. Indeed, program managers hope the pilot process will shine a light on the antiquated policies that inhibit progress. "When we first started this, we had policies that predated smart devices by decades and prevented us from using the technology, but it didn't prevent our adversaries from using it," said Michael McCarthy, director of operations at the Army Brigade Modernization Command. "It's a gradual process though. Any time you're changing the status quo, it makes people nervous." McCarthy has helped push through some of DOD's most ambitious mobile pilot projects. In fact, a number of the devices the Army is testing have been put to use in military classrooms or sent abroad to troops in the field. What started with 200 devices could expand to as many as 25,000 tablet PCs and smart phones across the Army by the end of the fiscal year, McCarthy said. "As doctrine changes, new tactics, techniques and procedures come out, new administrative forms, new training content. You have to be able to keep up and make that available," he said. "Our focus has been to get the processes into place to do the governance and certification to [ensure] we're current. It's not showy, but it's setting conditions for the not-so-distant future when the smart device becomes as common throughout the Army as the laptop computer or BlackBerry. We're seeing an evolution of technology." But that technology does not come cheap, which is a top concern for agency officials trying to determine how to integrate mobility into their operations. Most insiders agree that mobile devices improve productivity and, therefore, justify the investment, but the unclear path for securely adopting the technology is compounded by budget pressures. "Most people don't realize we never have had a budget for these projects," McCarthy said. "We've had to beg, borrow and steal — and that was by design." He added that his team largely avoided the expenses associated with big-ticket projects by using an agile development process. "We were told to see what we could do with what we had," he said. "I never sat down and totaled how much we've spent on mobile pilots, but it's probably less than $6 million. 
Something like this in the traditional model never would have made it out of the gate, but it's proven very effective, and it's caused us to be very careful about where we spend our resources." Resolving the non-tech issues Nevertheless, mobile pilot projects are not just about the technology. According to some experts, they have more to do with policies and governance — elements that can, fortunately, require less upfront investment to change. Teri Takai, DOD CIO At the Nuclear Regulatory Commission, for example, a test run of a bring-your-own-device policy has already attracted 350 volunteers and a tablet PC pilot project will soon move into production, but technology was far from the only consideration. "We spent the vast majority of our resources not on technical or technological issues, but on non-tech things like rules of behavior, policy, security, privacy, records and working with local union chapters," NRC CIO Darren Ash said. "As we did that we benefited because solutions out there continue to mature, but since we've focused on the long-run issues, the technology itself is less of an issue." In what turned out to be a valuable lesson, Ash and his team paused to assess how the tablet PC project was going, obtain feedback and make tweaks before heading into production. That experience reflects another key tenet of mobile pilot projects: the importance of communicating with other organizations and sharing the lessons learned along the way. "We talk to a wide variety of agencies, and in each instance, we learn from them and they learn from us," Ash said. "All of us have an interest in doing this, all of us have our own approaches, but there are a lot of mistakes we don't all have to repeat." That concept will be crucial in the coming months as DOD prepares to award a contract for its mobile device management infrastructure — a step that will spur adoption and help push pilot projects into production, said Tom Suder, president of Mobilegov. "Pilot programs are best done involving all elements, not in stealth mode," Suder said. "You want to involve all the stakeholders." At DOD, he added, "a lot of pressure will be on them. It's all up to them to execute now." "There's a lot of work within the entire DOD, not just the Army, to leverage the insight we've had in these smart phone efforts from the past couple years," McCarthy said. "And I think you're going to see some things that are going to be universal throughout DOD and probably throughout other departments as we continue to press forward." Pilots that have panned out The Army put 200 mobile test devices in the hands of senior leaders in October 2012 and used soft certifications instead of Common Access Cards for identity management. The program is set to expand to 2,500 devices at the Army Training and Doctrine Command, McCarthy said. And if the budget is right, another 2,500 devices might be fielded on a pilot basis, he added. The Air Force started planning its electronic flight bag program at the end of 2011. It swapped large, heavy flight bags full of maps, manuals, navigation materials and flight plans for tablet PCs that contain up-to-date, more quickly accessible data. The devices improve efficiency, cut down on paperwork and can also be used for functions typically performed manually, such as takeoff calculations, according to Pickart. Outside DOD, the Bureau of Alcohol, Tobacco, Firearms and Explosives began testing expanded use of mobile devices as far back as 2010. 
At ATF, iPads have been used to explore email as a service, video content management, surveillance and other uses for the agency’s inherently off-site work. The Nuclear Regulatory Commission is close to adopting tablet PCs after a pilot project demonstrated the benefits of allowing field inspectors to quickly access and share data. NRC is also testing a bring-your-own-device program with the help of 350 volunteers. Ash said pilot projects played a crucial role in enabling NRC’s workforce to go mobile. The Air Force RDT&E TAG offers AF groups a fast, simple, short route to obtain an Interim Authority to Test (IATT) from one of the AF DAAs. The request is typically a page of text and about a week of processing. CAC in at https://eis.af.mil/cs/rdte/pilot/default.aspx
Press Release 12-178
NSF Invests $50 Million in Research to Secure Our Nation's Cyberspace
Multiple awards include two large "Frontier" collaborative projects totaling $15 million
Secure and trustworthy cyberspace requires more than ethernet cables and gigabit switches. The National Science Foundation (NSF) today awarded $50 million for research projects to build a cybersecure society and protect the United States' vast information infrastructure. The investments were made through the NSF's Secure and Trustworthy Cyberspace (SaTC) program, which builds on the agency's long-term support for a wide range of cutting edge interdisciplinary research and education activities to secure critical infrastructure that is vulnerable to a wide range of threats that challenge its security.
"Securing cyberspace is key to America's global economic competitiveness and prosperity," said NSF Director Subra Suresh. "NSF's investment in the fundamental research of cybersecurity is core to national security and economic vitality that embraces efficiency while also maintaining privacy."
In response to the SaTC call for proposals, more than 70 new research projects were funded, with award amounts ranging from about $100,000 to $10 million. This SaTC funding portfolio invests in state-of-the-art research in incentives that reduce the likelihood of cyber attacks and mitigate the negative effects arising from them. Together, these SaTC awards aim to improve the resilience of operating systems, software, hardware and critical infrastructure while preserving privacy, promoting usability and ensuring trustworthiness through foundational research and prototype deployments.
Two of the SaTC funded projects are Frontier awards, which are large, multi-institution projects that aim to provide high-level visibility to grand challenge research areas. The SaTC program supports research from a number of disciplinary perspectives with investments from NSF's Directorates for Computer and Information Science and Engineering (CISE); Mathematical and Physical Sciences; and Social, Behavioral and Economic Sciences, as well as the Office of Cyberinfrastructure.
"We are excited that the SaTC award portfolio contains many interdisciplinary projects, including these two Frontier projects at the scale and complexity of research centers," said Farnam Jahanian, assistant director of NSF's CISE directorate. "The challenges they address--the technical and economic elements of Internet security and the issues associated with sharing of data in cyberspace while protecting individual privacy--are fundamental; addressing them will help establish a scientific basis for developing and operating computing and communications infrastructure that can resist attacks and be tailored to meet a wide range of technical and policy requirements."
What follows are descriptions of the two Frontier awards.
Beyond Technical Security: Developing an Empirical Basis for Socio-Economic Perspectives
University of California-San Diego - Stefan Savage
International Computer Science Institute - Vern Paxson
George Mason University - Damon McCoy
This project will receive a five-year grant totaling $10 million to tackle the technical and economic elements of Internet security: how the motivations and interactions of attackers, defenders and users shape the threats we face, how they evolve over time and how they can best be addressed.
While security is mediated by the technical workings of computers and networks, a commensurate level of scrutiny is driven by conflict between economic and social issues. Today's online attackers are commonly profit-seeking, and the implicit social networks that link them together play a critical role in fostering underlying cybercrime markets. By using a socio-economic lens, this project seeks to gain insights for understanding attackers, as well as victims, in order to help consumers, corporations and governments make large investments in security technology with greater understanding of their ultimate return-on-investment.
Security research has tended to focus only on the technologies that enable and defend against attacks. This project also emphasizes the economic incentives that motivate the majority of Internet attacks, the elaborate marketplaces that support them, and the relationships among cyber criminals who rely upon each other for services and expertise.
Grappling with both the economic and technical dimensions of cybersecurity is of fundamental importance for achieving a secure future information infrastructure, and developing a sound understanding requires research grounded in observation and experiment. Accordingly, the research will focus on four key components to:
- Pursue in-depth empirical analyses of a range of online criminal activities.
- Map out the evolving attacker ecosystem that preys on online social networks, and the extent to which unsafe online behavior is itself adopted and transmitted.
- Study how relationships among these criminals are established, maintained and evolve over time.
- Measure the efficacy of today's security interventions, both at large and at the level of individual users.
Consequently, this research has the potential to dramatically benefit society by undermining entire cybercrime ecosystems by, for example, disrupting underground activities, infrastructure and social networks.
Privacy Tools for Sharing Research Data
Harvard University - Salil Vadhan
A multi-disciplinary team of researchers at Harvard University will receive a four-year grant totaling nearly $5 million to develop tools and policies to aid the collection, analysis and sharing of data in cyberspace, while protecting individual privacy.
Today, information technology, advances in statistical computing and the deluge of data available through the Internet are transforming all areas of science and engineering. However, maintaining the privacy of human subjects is a major challenge. Given the complexities involved in ensuring privacy for shared research data, Vadhan will be joined by a team of professors with expertise in areas such as mathematics and statistics, government, technology and law. Together they will engage in a multi-disciplinary approach to refine and develop definitions and measures for privacy and data utility. They will also design an array of technological, legal and policy tools that can be used when dealing with sensitive data. These tools will be tested and deployed at the Harvard Institute for Quantitative Social Science's Dataverse Network, an open-source digital repository that offers the largest catalogue of science datasets in the world. The ideas and tools developed in this project will have a significant broad impact on society since the issues addressed in the work arise in many other important domains, including public health and electronic commerce.
-NSF-
Lisa-Joy Zgorski, NSF, (703) 292-8311, lisajoy@nsf.gov
M. Rutter, Harvard University, mrutter@seas.harvard.edu
Chris Switzer, University of California Berkeley, (510) 666-2927, cswitzer@icsi.berkeley.edu
Ioana Patringenaru, UC San Diego Jacobs School of Engineering, (858) 822-0899, ipatrin@eng.ucsd.edu
Nina Amla, NSF, (703) 292-8910, namla@nsf.gov
Ralph Wachter, NSF, (703) 292-8950, rwachter@nsf.gov
Kevin Thompson, NSF, (703) 292-4220, kthompso@nsf.gov
Peter Muhlberger, NSF, (703) 292-7848, pmuhlber@nsf.gov
Related Websites
UCSD Jacobs School of Engineering Announces $10 Million NSF Grant to Help Computer Scientists Understand the World of Cybercrime: http://www.jacobsschool.ucsd.edu/news/news_releases/release.sfe?id=1266
International Computer Science Institute Announces NSF Awards $10 Million Grant to ICSI and Collaborators to Study Human Element of Cybercrime: https://www.icsi.berkeley.edu/icsi/news/2012/09/frontier-cybercrime-economy
Harvard University announces new tools will make sharing research data safer in cyberspace: http://www.seas.harvard.edu/news-events/press-releases/new-tools-will-make-sharing-research-data-safer-in-cyberspace
NASA's Orbiting Carbon Observatory Set For Launch Tomorrow from the door-to-door-to-get-the-gas dept. bughunter writes "The Orbiting Carbon Observatory (OCO) is slated for launch tomorrow, February 24, 2009. OCO is the first earth science observatory that will create a detailed map of atmospheric carbon dioxide sources and sinks around the globe. And not a moment too soon. Popular Mechanics has a concise article on the science that this mission will perform, and how it fits in with the existing 'A-train' of polar-orbiting earth observatories. JPL's page goes into more detail. And NASA's OCO Launch Blog will have continuous updates as liftoff approaches and the spacecraft reports in and checks out from 700km up." Re:What Are They Gonna Say? by MightyYar ( 622222 ) writes: on Monday February 23, 2009 @05:41PM (#26962297) I presume by "they", you mean atmospheric scientists? Presumably, they'd follow the scientific method and adjust their theories to fit the new data. If by "they" you mean career warming deniers, then they will use it as "evidence" when they go on talk shows and sell their newest book to the ignorant on the internet. If you fall into the latter camp, I wouldn't get your hopes up. War of the Deniers by Bemopolis ( 698691 ) writes: on Monday February 23, 2009 @05:56PM (#26962499) Who will win the battle: the pro-troleum anti-AGW crowd, the creationists who believe that man cannot corrupt the Earth since it was created by a loving God, or the Flat-Earthers who think all satellites are a conspiracy from Big Spheroid? Whoever wins, we lose. by Locklin ( 1074657 ) writes: on Monday February 23, 2009 @06:25PM (#26962833) Homepage Because the models can be made better?? When the models can predict sea level rise to the nearest mm in each region of the globe, the exact quantity of ice during the winter of 2094, or the new ocean currents after a 3 degree rise in average temperature, there will still be improvements that can be made. by Qrlx ( 258924 ) writes: on Monday February 23, 2009 @06:36PM (#26962981) Journal Seeing everything as a dichotomy = your problem. A lot of others suffer from the same disease. Re:I know.... by riverat1 ( 1048260 ) writes: on Monday February 23, 2009 @06:38PM (#26962997) Most of the life on earth today is evolved for the current conditions, not the conditions that existed when that carbon was sequestered from the environment. At a minimum going back to those levels of CO2 would be uncomfortable. Studies have shown that when the CO2 level in a room is 1000 ppm then over 20% of people feel discomfort from it. With business as usual we could reach that level around 2100. Re:War of the Deniers Journal I don't exactly know what obligation I have to do anything for the earth if there is no God and I'm a product of evolution. Well then you should give that one some thought, since at least the latter half of your statement is undeniably true. by Rei ( 128717 ) writes: on Monday February 23, 2009 @06:52PM (#26963143) Homepage I know where it went; it is called the carbon cycle. All that CO2 is either in the oceans, in plants/animals and in the air as CO2. I just saved you $273 million dollars, and I take a 10% cut. Check please. The point is to know precisely where it's going, to know how much its future capacity to soak carbon will be.
For example, here's a known case: the oceans. Since we know that a lot of it is going to the oceans, and how much, we can determine what its carbon soaking capacity will be in the future as it gets more and more saturated. But, to pick some random possibility... carbonates formed from exposed surface rock. If we don't know how much CO2 is going into forming additional carbonates naturally, we have no ability to model how much that ability will fade off in the future. The current models, which generally assume that unknown carbon sinks will remain equally able to keep sinking an unlimited amount of carbon into the future, are likely very overly optimistic on this front. We've probably made the world a better place for our friends who breathe the stuff. Most of the world's oceans (2/3rds of the world's available area for photosynthesis) are not CO2-limited, but nutrient-limited. In particular, iron. Can someone please answer this: If we are burning fossil fuels, presumably all this carbon we are burning was part of the carbon cycle 100s of millions of years ago. Not necessarily. Oil, and even natural gas and coal deposits are just a fraction of all entombed carbon. There's also shales and all sorts of carbonate minerals. Carbon levels are constantly in flux. Back during the Cambrian, they hit as high as 7,000 ppm. By the mid-Carboniferous, they were down to around 350 ppm. But that took place over the course of 250 million years, an average change of 1 ppm every 40,000 years. Some periods were steeper than others, of course; the mid Devonian dropped a ppm every few thousand years, and there are probably more dramatic spikes that we just don't have the resolution to see (more like what we see in the Holocene record). But nothing in the historical record even approaches a relentless 1 1/2 ppm/year. It's not *that* the Earth is changing. The Earth always changes. The problem is how fast it's changing. I don't know about yours, but my species certainly can't rapidly evolve over the course of a few dozen generations. And much of our infrastructure is fixed in place, unable to adapt at all; you can't just pack people up from areas that are drying out and move them to new Canadian/Siberian farmland without huge expense and hardship. by EZLeeAmused ( 869996 ) writes: on Monday February 23, 2009 @06:55PM (#26963171) A huge plume of CO2 located off the eastern coast of Florida. OCO being the rough shape of CO2 :) by slashdotlurker ( 1113853 ) writes: on Monday February 23, 2009 @07:02PM (#26963237) Particularly apt name. Re:It's All Magic Anyhow! by AstrumPreliator ( 708436 ) writes: on Monday February 23, 2009 @07:09PM (#26963307) Of course there will always be people like the ones you describe. People on Slashdot throwing around the word "denialist" is starting to annoy me now though. What, was heretic too strong of a word for you? I mean seriously, how do you deal with someone who believes the Earth is flat? Personally if they believe the Earth is flat then there's no reason for me to talk to them, their mind is made up. Scientific reasoning will never reach them. Lately Slashdot commenters, for whatever reason, have moved away from scientific reasoning onto name calling and petty bickering though. Apparently global climate change is serious enough to warrant discussion, but not well thought out discussion, just ad hominem attacks.
Not to mention half the people who are called "denialists" are just people arguing about the extent of anthropogenic climate change, but agree the average temperature of the Earth is rising faster than current models predict it should. I'm usually too disgusted by these threads on Slashdot to post anymore. This time I'm posting rather early in hopes that at least a few people will read this. Re:CO2 not a killer gas Homepage It seems impossible to have any reasoned discussion about carbon dioxide. Carbon dioxide concentration in the atmosphere has increased from 290 ppm in pre-industrial times to 365 ppm today and that increase is NOT having a significant effect on climate. Oh really? [duke.edu] In the 'global warming' scenario, short wavelength radiation from the sun passes through the atmosphere and warms the earth. The warmed earth then re-radiates long-wavelength infra-red radiation back into space, or at least tries to but is allegedly stopped by carbon dioxide. So...what's wrong with this? CO2 absorbs infra-red radiation in only a narrow wavelength band and it will not absorb any infra-red radiation with a wavelength outside of its absorption band. There is already far more CO2 in the atmosphere than is needed to effectively absorb ALL infra-red radiation in the CO2 absorption band. (A much bigger absorber of infra-red radiation in the atmosphere is...water vapor...but that's another movie.) Sorry, but you should really start reading peer-reviewed research and stop listening to viscounts. First off, for something to be a greenhouse gas, it *needs* to be selective on what it blocks. An optimal greenhouse gas is *transparent* to light in the visible and near-IR spectrum, and *opaque* to far-IR. You need to let the sun's energy in (mostly visible and near-IR) while making it harder for what the Earth radiates (mostly far-IR) out. A gas that blocks everything evenly is not a greenhouse gas.Secondly, your argument is akin to saying that if a reflective blanket keeps 95% of your heat in, putting another reflective blanket around you won't help much. Earth is not a simple physics problem with a surface, a single one-pass medium, and an energy input. Light is constantly absorbed and re-radiated all throughout the atmosphere. The upper layers are colder than the upper layers. The higher the absorption of far-IR, the slower energy can transfer from the lower layers to the upper layers; the lower the absorption of near-IR and visible, the faster energy can transfer from the upper layers to the surface (or even straight to the surface). In short, until a 10-meter or so column of atmosphere can absorb 95%, increasing CO2 levels is a *major* impactor on surface temperature.Lastly, water vapor is 100% feedback, not forcing. Water vapor has a tiny residency in the atmosphere (days), while CO2 has a long residency (hundreds of years). Any disequilibrium in water vapor is rapidly remedied. Now, on *geological time scales*, CO2 is feedback, mostly to Milankovitch cycles. But that's on the scale of tens of thousands of years. The effect of increasing CO2 concentration is therefore only to cause absorption to occur at a slightly lower altitude in the atmosphere and after carbon dioxide absorbs infra-red radiation, it quickly collides with nearby, and far more abundant, oxygen and nitrogen molecules, transferring heat to them. These then re-radiate heat out into space. Wow, was the person you read that from a comedian or just an idiot? CO2 is perfectly capable of radiating IR. 
*All* objects in the universe are. It doesn't matter whether it's CO2, O2, N2, or what. There are different spectral lines (rather than a perfect blackbody), but it's not a practical distinction. The energy can be radiated in any direction -- up or down. It's almost invariably reabsorbed unless it's in the outermost fringes of the atmosphere. As mentioned before, the more transparent the atmosphere is to "incoming" radiation types, the faster solar energy can migrate to the surface. The less transparent it is to "outgoing" types, the slower far-IR energy can migrate away from the surface. I can make you a drawing or a rudimentary python script to illustrate this concept if you're still having trouble with it. So...if carbon dioxide is not changing our climate, what is? Look to the Sun. Read the rest of this comment... by wizardforce ( 1005805 ) writes: on Monday February 23, 2009 @10:05PM (#26964857) Journal those who don't find the need to protect themselves, their descendants or their environment are going to kill themselves off. I actually don't see any real obligation, if I were an atheistic evolutionist, to do anything about the earth. Or, for that matter, to do anything for humanity. Unless I see a distinct benefit in it for me AND I have a desire to reap said benefit. From my point of view as an atheist and a scientist [I am an evolutionist but also a gravityist, relativityist etc...] the answer as to why someone such as myself would bother helping anyone other than myself is that I feel good doing so. Just as any other normal, rational human being would. Part of the reason why this is the case is because of all of that natural selection combined with genetic change that has been going on for billions of years... those species that had a tendency to cooperate of their own free will no doubt had an advantage over those who exercised their primitive ignorant self interest instead. This is likely a point you would agree with, yes? That voluntary cooperation is better than pure ignorant selfishness? The point is this: cooperative behavior is not dependent on the belief of your subservience to a deity of some sort. It is a rather useful set of adaptive behaviors that assist our species to exist and function normally in society. It is normal for human beings to cooperate because they know that doing so makes them feel good about their actions. Re:Everyone follows the scientific method. Or, they would have just been wrong. Of course they are "wrong", in the same sense that Newton or Darwin were "wrong". Science is no fun at all if we have all of the answers. In fact, it would be completely obsolete. Science is just a method used to try to understand nature. Climate science is young, and it's only been in the last decade that any real consensus has arisen regarding global warming. 2008 came in cooler, and we'll see how 2009 does. That's not considered to be a long-term trend. Look at the data from the past 100 years... lots of down years, yet the general trend is upward.
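One of the commenters above offers to write "a rudimentary python script" to illustrate the layered-atmosphere argument. The sketch below is one way such a script might look; it is an illustration added here, not part of the original discussion, and the layer count, albedo value and the assumption that each layer is fully opaque to outgoing infrared are simplifying assumptions chosen only to show the qualitative effect.

# Toy grey-atmosphere model: each of N layers is transparent to sunlight
# but fully opaque to outgoing infrared. In radiative equilibrium the
# surface temperature works out to T_eff * (N + 1) ** 0.25.
# All numbers are illustrative, not a climate prediction.

SOLAR_CONSTANT = 1361.0   # W/m^2, average sunlight at Earth's distance
ALBEDO = 0.3              # fraction of sunlight reflected back to space
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/m^2/K^4

def effective_temperature():
    """Temperature of an airless Earth that balances absorbed sunlight."""
    absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4  # averaged over the sphere
    return (absorbed / SIGMA) ** 0.25

def surface_temperature(n_layers):
    """Equilibrium surface temperature under n fully IR-opaque layers."""
    return effective_temperature() * (n_layers + 1) ** 0.25

if __name__ == "__main__":
    for n in range(4):
        print(f"{n} IR-opaque layer(s): surface ~ {surface_temperature(n):.0f} K")

Real radiative transfer involves partial absorption bands, convection and water-vapor feedback, but the toy model captures the point being argued in the thread: making the atmosphere less transparent to outgoing far-IR, without changing its transparency to incoming sunlight, raises the surface temperature required to restore energy balance.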
Stalin's Human-Ape Hybrids
Did Josef Stalin order the creation of an army of half-ape, half-human hybrids, and did these experiments take place?
by Brian Dunning
Filed under Conspiracies, Cryptozoology, Urban Legends
Skeptoid Podcast #219, August 17, 2010
It was the Soviet dictator's dream: Soldiers with no fear, with superhuman strength and endurance, who would follow any order, eat anything, and ignore pain or injury. Workers who could do the labor of ten men without complaint, with no thought of personal time off, and no desire for pay. A force to carry the Soviet Union through its Five-Year Plan for economic development, and to make the nation invincible in war. Stalin's goal, according to modern mythology, was no less than a slave race of scientifically bred beings that were half human and half ape; a race he hoped would combine tremendous physical strength, dumb loyalty, and a human's ability to follow direction and perform complex tasks. But how much of this is true, and how much of it is the invention of modern writers and filmmakers looking for the sensational story? It's no secret that a renowned Russian biologist, Il'ya Ivanovich Ivanov, spent much of his career working on just this. Around 1900 he gained great fame and national acclaim with his work on artificially inseminating horses, increasing the number of horses that could be bred by a factor of about twenty. For a preindustrialized nation, this was a tremendous economic accomplishment. Primarily funded by the Veterinary Department of the Russian Interior Ministry, Ivanov carried this technology to its next logical step, the creation of specialized hybrid animals for agricultural and industrial purposes, as well as for the sake of advancing the science. His artificial insemination experiments successfully crossed many closely related species: donkeys and zebras, mice and rats and other rodents, birds, and various species of cattle. As early as 1910, Ivanov lectured on the possibility of crossing humans and apes, citing artificial insemination as the method of choice due to prevailing ethical objections to, well, interspecies partying, for lack of a better term. However, before he could make any progress, Ivanov's work came to an abrupt halt in 1917 with the Russian Revolution, which effectively dissolved most existing government programs and eliminated all of his funding. The new Soviet government was committed to technical innovation and science, but it took seven long years for Ivanov to rebuild his network of support. Ivanov's entire career could be fairly characterized as a constant fundraising effort, desperately seeking resources for his hybridization dream and other projects, and failing nine times out of ten. He should have been so lucky as to have the government come to him with an offer, much less an order. Interestingly, many modern articles about Ivanov portray his work as a religiously motivated crusade. It's often said that the Russian and Soviet governments funded Ivanov not for any practical purpose, but merely out of atheist activism to prove evolutionary biology and to show that creationism has no place.
Amid the developing nation's immense problems with famine and agricultural development, this would seem to be a bizarre reason to explore the capabilities of animal insemination. Nevertheless, there's an element of truth to it. In voicing support for Ivanov's 1924 grant proposal, the representative of the Commissariat of Agriculture said: "...The topic proposed by Professor Ivanov...should become a decisive blow to the religious teachings, and may be aptly used in our propaganda and in our struggle for the liberation of working people from the power of the Church." It's not clear whether this was the Commissariat's actual position or whether it was simply a sales tactic; either is plausible. Ivanov himself is not known to have ever expressed interest in this interpretation of his work; after all, he'd been studying reproduction as a scientist for almost 30 years, since long before the Soviet state existed. It took another year for this particular proposal to be funded. Apes were prohibitively expensive and rare in Russia, so Ivanov set off for Africa to set up a new lab. After some false starts, he finally launched his own facility in Guinea with chimpanzees netted for him by local hunters. Using sperm from an unidentified man, Ivanov made three artifical insemination attempts on his female chimps. Because Ivanov observed that the local Africans viewed chimps as inferior humans, and viewed humans who had had contact with chimps as tainted, he performed these inseminations in secret with only his son present as an assistant. Ivanov knew that a mere three attempts was inadequate to hope for any success, but the difficulties and expenses of maintaining and inseminating the chimps was too great. So he conceived a more sustainable experimental technique: Collecting the sperm of only two or three male apes, and then using that to artificially inseminate human women. He found no support for his plan in Africa — in large part because he had proposed to inseminate women in hospitals without their knowledge or consent — so he returned to the Soviet Union with his remaining chimps and founded a primate station in Sukhum (today called Sukhumi) on the Black Sea. Only one mature male survived, an orangutan named Tarzan. By 1929, the plan was to have five women be artificially inseminated, and then live at Ivanov's institute with a gynecologist for one full year. But just as the first woman volunteer was secured, known only to history as "G", Tarzan died. Ivanov ordered five male chimps, but just as they were delivered, his life suddenly turned in a new direction, driven by the constant turmoil of philosophies and favoritisms in the Soviet Union. Ivanov was accused of sabotaging the Soviet agricultural system and various political crimes, leading to his arrest a few months later. G never visited the Sukhum station, and no sperm was ever harvested from the new chimps. Ivanov died after two years of exile. Ivanov's primate station survived, however, and became his only real legacy. By the 1960's it had over two thousand apes and monkeys, and was employed by the Soviet and American space programs. But nobody ever followed his ape-human hybrid research there, though conventional artificial insemination was often employed among its primate population. So, does this history support or contradict the claim that Stalin wanted an ape-man hybrid race of slave super warriors? Well it certainly doesn't confirm it. 
Contrary to the modern version of the story, Stalin personally had no connection with Ivanov or his work, and probably didn't even know about it. No evidence has ever surfaced that Stalin or the Soviet government ever went out looking for someone to create an ape-man super soldier, though it's certainly possible that someone evaluating Ivanov's proposal may have made such an extrapolation. Yet, in 2005, the Scottish newspaper The Scotsman reported the following: The Soviet dictator Josef Stalin ordered the creation of Planet of the Apes-style warriors by crossing humans with apes, according to recently uncovered secret documents. Moscow archives show that in the mid-1920s Russia's top animal breeding scientist, Ilya Ivanov, was ordered to turn his skills from horse and animal work to the quest for a super-warrior. The latter claim, that Ivanov was "ordered" to shift his work, we've found to be demonstrably untrue. The former claim, that "secret documents" have been uncovered in Moscow, is a little hard to swallow. The article gives no information whatsoever about these alleged documents, and no source is even mentioned. A search of Russian language newspapers reveals no news stories about this at all, prior to The Scotsman's article. Certainly there are documents somewhere pertaining to the grants Ivanov received from both the Russian and the Soviet governments, but if these are what The Scotsman referred to, they are wrong when they describe them as secret, as recently uncovered, and that they showed Ivanov was ordered to create a super-warrior. From what I can see, The Scotsman's story was merely another in a long line of cases where a journalist fills a slow news day with a sensationalized and/or fictionalized version of very old news, just as the National Enquirer did with the Roswell UFO story in 1978. In that case, the TV show Unsolved Mysteries picked it up and broadcast an imaginative reconstruction based on the article, and launched a famous legend. In this instance, the show MonsterQuest picked up The Scotsman article and broadcast a 2008 episode called Stalin's Ape Man. The Internet has been full of articles about Stalin's supposed experiments ever since. Interestingly, a very thorough and well researched episode of Unsolved History on the Discovery Channel called Humanzee, which was all about human-ape hybrid experiments, did not mention Stalin or the Ivanov experiments at all. Why not? Because it was made in 1998, seven years before The Scotsman published its unsourced article, and introduced a new fiction into pop culture. Humanzee focused on a particular chimp named Oliver, still living as of today, who has a bald head, prefers to walk upright, and has a number of other eerily humanlike tendencies. Although Oliver has been long promoted as a hybrid, genetic testing found that he is simply a normal chimp. This result was disappointing to cryptozoologists and conspiracy theorists, but it did not surprise primatologists who knew that each of Oliver's unusual features is within the range of normal chimps. In fact, this was established 20 years ago by testing done in Japan, and again in 1996; it's just that nobody reported it since it was not the sensational version of the story. Oliver is not a hybrid; Ivanov produced no hybrids; and other scientists have at least looked into it and never created any. There are the usual unsourced stories out of China of hybrids being created in labs, and even one from Florida in the 1920's. 
Desmond Morris, author of The Naked Ape, reported rumors of unidentified researchers in Africa growing hybrids, but even he dismissed it as "no more than the last quasi-scientific twitchings of the dying mythology." None of these tall tales are supported by any meaningful evidence. But is it possible? Biologists who have studied the question are split, but the majority appear to think it is not, at least not from simple artificial insemination. But one conclusion can be drawn as a certainty, at least to my satisfaction: The urban legend that Stalin ordered Ivanov (or anyone else) to create an ape-man super soldier is patently false. It has all the hallmarks and appearance of imaginative writers creating their own news, and it was done at the expense of Il'ya Ivanov, whose proper place as a giant in the field of biology has been unfairly overshadowed by a made-up fiction. Treat this one as you would any urban myth: Be skeptical.
By Brian Dunning
Cite this article: Dunning, B. "Stalin's Human-Ape Hybrids." Skeptoid Podcast. Skeptoid Media, 17 Aug 2010. Web. 1 Oct 2016. <http://skeptoid.com/episodes/4219>
References & Further Reading
Davis, W. "Hybridization of Man and Ape to Be Attempted in Africa." Daily Science News Bulletin. 1 Jan. 1925, Number 248: 1-2.
Hall, L. "The Story of Oliver." Primarily Primates Videos. Primarily Primates, 21 Jan. 2008. Web. 15 Aug. 2010. <http://www.primarilyprimates.org/videos/ppvid_Oliver.htm>
MacCormack, J. "Genetic testing show he's a chimp, not a human hybrid." San Antonio Express-News. 26 Jan. 1997, Newspaper.
Morris, Desmond and Ramona. Men and Apes. London: Hutchinson, 1966. 82.
Rossiianov, K. "Beyond species: Il'ya Ivanov and his experiments on cross-breeding humans and anthropoid apes." Science in Context. 1 Jun. 2002, Volume 15, Number 2: 277-316.
Schultz, A. "The Rise of Primatology in the Twentieth Century." Proceedings of the Third International Congress of Primatology, Zurich. 1 Jan. 1970, Volume 2, Number 15.
Stephen, C., Hall, A. "Stalin's half-man, half-ape super-warriors." The Scotsman. 20 Dec. 2005, Newspaper.
14 tech firms form cybersecurity alliance for government Lockheed Martin, top suppliers launch initiative for government market By Wyatt KashNov 13, 2009 Thirteen leading technology providers, together with Lockheed Martin, today announced the formation of a new cybersecurity technology alliance. The announcement coincided with the opening of a new NexGen Cyber Innovation and Technology Center in Gaithersburg, Md., designed to test and develop new information and cybersecurity solutions for government and commercial customers. The alliance represents a significant commitment on the part of competing technology companies to work collaboratively on new ways to detect and protect against cyber threats and develop methods that could automatically repair network systems quickly after being attacked. The companies participating in the Cyber Security Alliance include APC by Schneider Electric, CA, Cisco, Dell, EMC Corp. and its RSA security division, HP, Intel, Juniper Networks, McAfee, Microsoft, NetApp, Symantec and VMware. Art Coviello, EMC executive vice president and president of RSA, speaking on behalf of the new alliance at the center’s dedication ceremony, highlighted the importance of combining the strengths of the companies at the NexGen center. “Our adversaries operate in sophisticated criminal ecosystems that enable and enhance their attacks,” he said. To defend against such attacks, “we need to build effective security ecosystems based on collaboration, knowledge sharing and industry best practices.” “One of the challenges in moving from being reactive to being predictive,” said Lockheed Martin chairman, president and chief executive officer, Robert Stevens, “is the need to model real-world attacks and develop resilient cyber defenses to keep networks operating while they’re under attack.” That and the ability to test solutions from end-to-end across a variety of hardware and software technologies are among the primary goals of the new cyber innovation and technology center. Nearly $10 million worth of software and equipment was contributed to the NexGen center by members of the Cyber Security Alliance, according to Charles Croom, vice president of Cyber Security Solutions for Lockheed Martin Information Systems & Global Services. The 25,000-square-foot design and collaboration center is co-located with Lockheed Martin’s new global cyber innovation range and the corporation’s network defense center. The network defense center routinely handles 4 million e-mail messages and about 10T of data per day en route to and from Lockheed Martin’s 140,000 employees. Analysts there look continually for malicious activity and data patterns, such as executable software code embedded in a PDF attachment. The new NexGen facility will be able to tap into the defense center’s data feeds, or simulate government agency computing environments, and test various approaches to mitigate cyberattacks, according to Richard Johnson, chief technology officer for Lockheed Martin Information Systems & Global Services. It can also be used to test ways of improving operating efficiencies, he said. The center includes seven collaboration areas as well as high definition video teleconferencing capabilities. The new center also features dedicated distributed cloud computing and virtualization capabilities. Those capabilities would permit an agency to simulate a network under attack and test various responses. 
For instance, analysts could replicate an operating network and freeze it on a second virtual location, in order to study the nature of the attack, while still supporting the primary network. “We face significant known and unknown threats to our critical infrastructure,” Croom said. “We not only need solid defenses but also the right technologies to predict and prevent future threats.” Croom said the new Cyber Security Alliance, and in particular the ability for experts from participating companies to work jointly on some of the harder problems agencies face, is one of elements that distinguishes the NexGen from other testing facilities. Wyatt Kash served as chief editor of GCN (October 2004 to August 2010) and also of Defense Systems (January 2009 to August 2010). He currently serves as Content Director and Editor at Large of 1105 Media. Reader Comments Could not agree more with Observer; these are the "usual Inside-the-Beltway suspects" who manage the IT Sector Council, the IAC, the IT ISAC Board and most of the other private sector bodies which engage with the USGovt--and leave out the 10,000 companies which represent 1,000,000 IT employees nationwide, and $ 1/2 trillion in economic activity--but who abdicate their voices with government to these self-appointed "insiders".... Observer, Jr. You say "the kind of public-private collaboration" that is needed? hmmm. The alliance sounds like all private to me. These firms are a pack in search of government business, with a capital B and a capital P for profit. Alliance may create an aggregation of interesting capabilities, but are they unique? And while a swarm of lawyers probably vetted this, do you think it has some implications for the competitive dimension of the market? Will all clients like this aggregation? Will they covet each other's opportunities? But until problems shoot to the surface, congratulations all around for daring to try a new business concept in a highly regulated and scrutinized arena. As some -- but not all-- of the alliance players are some vital players, lets hope they do not get taken out of needed action for government customers or prickly business arrangements that impede government access to their individual capabilities. Jacqui Porth Washington, D.C. This is exactly the kind of public-private collaboration that is needed for cyber security. You might also be interested in some of the related elements in the feature packaged post on the Web site America.gov following an examination of the subject over a period of several months. Please see: http://www.america.gov/cybersecurity.html
CloudBees snags $11.2 mln Series C funds
CloudBees said Wednesday that it has raised $11.2 million in Series C financing. Verizon Ventures led the round with participation from Matrix Partners, LightSpeed Venture Partners and Blue Cloud Ventures. Headquartered in Woburn, Mass., CloudBees accelerates the delivery of mobile and online applications. WOBURN, Mass.–(BUSINESS WIRE)–CloudBees, Inc., the Enterprise Platform as a Service (PaaS) innovation leader, today announced it has closed an $11.2 million Series C financing round. The round was led by Verizon Ventures, the investment arm of Verizon Communications Inc. The round also included existing investors Matrix Partners and LightSpeed Venture Partners, as well as new investor Blue Cloud Ventures. Founded in 2010, CloudBees is a recognized leader in Platform as a Service technology and in its enterprise support and features for Jenkins continuous integration and delivery. The Series C funding brings the total investment in CloudBees to $25.7 million. The new funds will be used to drive continued revenue growth by rolling out additional product capabilities, fund sales expansion and extend the reach of the CloudBees brand. “Enterprise cloud application development and delivery is an innovative space that’s growing quickly,” said Dan Keoppel, executive director of Verizon Ventures and Verizon’s observer to the CloudBees board. “As enterprises rush to adopt cloud services, PaaS speeds that adoption.” “PaaS and continuous delivery are transforming the way enterprises create business applications and deliver value to the business by accelerating the way applications are built and deployed,” said Sacha Labourey, founder and chief executive officer of CloudBees. “CloudBees is at the center of this evolution and we are excited to have a group with the stature of Verizon Ventures lead our Series C investment round. We will invest in initiatives that continue to improve our platform and strengthen our go to market capabilities.” The investment comes on the heels of CloudBees being positioned in the “Visionaries” quadrant of the newly published Magic Quadrant for Enterprise Application Platforms as a Service (aPaaS) by research and advisory firm Gartner Inc. CloudBees’ standing in the January 7, 2014, report is based on the company’s completeness of vision and ability to execute.
About CloudBees
CloudBees (www.cloudbees.com) provides a Continuous Delivery Platform as a Service that accelerates the development, integration and deployment of web and mobile applications. The CloudBees Platform provides a set of services that allow developers to rapidly build and run new business applications and integrate them with other services – all with zero IT administrative overhead. With Continuous Cloud Delivery, development teams can make frequent updates and easily deploy those changes immediately to production. By eliminating the friction caused by provisioning, maintaining and administering complex hardware and software infrastructure, CloudBees allows developers to do what they do best: develop innovative applications — fast. CloudBees serves the needs of a wide range of businesses from small startups that need to quickly create new on-line businesses, to large IT organizations that need to rapidly respond to dynamic market opportunities. Follow CloudBees on Twitter (@CloudBees) and on Facebook. You can also try CloudBees for free.
Backed by Matrix Partners and Lightspeed Venture Partners, CloudBees was founded in 2010 by former JBoss CTO Sacha Labourey and an elite team of middleware and open source technology professionals. Verizon Ventures seeks and invests in promising entrepreneurial companies to drive innovation in Verizon Communications Inc. Verizon Ventures’ portfolio focuses on new products, technologies, applications and services that complement Verizon networks, service platforms and distribution channels. Deal size ranges from seed capital to $5 million depending on the needs and opportunities. Verizon Ventures often co-invests with other venture firms and strategic partners. It undertakes an in-depth due diligence process and typically requires board observer rights.
Project fruit fly: What accounts for insect taste? Johns Hopkins Medical Institutions Scientists have identified a protein in sensory cells on the "tongues" of fruit flies that allows them to detect a noxious chemical and, ultimately, influences their decision about what to eat and what to avoid. A Johns Hopkins team has identified a protein in sensory cells on the "tongues" of fruit flies that allows them to detect a noxious chemical and, ultimately, influences their decision about what to eat and what to avoid. A report on the work, appearing April 19 in the online Early Edition of the Proceedings of the National Academy of Sciences (PNAS), raises the possibility that the protein -- TRPA1 -- is a new molecular target for controlling insect pests. "We're interested in how TRPA1 and a whole family of so-called TRP channels affect not just the senses, like taste, but also behavior," says Craig Montell, Ph.D., a professor of biological chemistry and member of the Center for Sensory Biology in Johns Hopkins' Institute for Basic Biomedical Sciences. Montell notes that when his team knocked out the TRPA1 sensor, the behavior change -- an alteration in food preference -- was stark. "This is the first TRP channel in insects that responds to a naturally occurring plant chemical known as an antifeedant, so now we have a target for finding more effective chemicals to protect plants from destruction by insect pests." Montell discovered TRP (pronounced "trip") channels in 1989 in flies and, a handful of years later, in humans, noting their abundance on sensory cells that communicate with the outside world. The job of these pore-like proteins -- activated by a bright light, a chilly breeze or a hot chili pepper -- is to excite cells to signal each other and ultimately alert the brain by controlling the flux of atoms of calcium and sodium that carry electrical charges. Montell's lab and others have tallied 28 TRP channels in mammals and 13 in flies, improving understanding about how animals detect a broad range of sensory stimuli, including the most subtle changes in temperature. "We already knew that TRP channels have these broad sensory roles, having previously discovered that the insect TRPA1 had a role in helping flies to detect small differences in sub-optimal temperatures within their comfort range," Montell says. "We wondered if it had any other sensory roles, so we went looking." First, the team genetically altered a normal TRPA1 gene. This experiment let them show that the protein was made in the fly's major taste organ (called the labellum) and trace its manufacture to a subset of sensory cells that respond to noxious chemicals. Separate taste cells in mammals are also known to respond to either noxious or appealing chemicals in foods. The researchers then conducted a series of behavioral tests comparing the feeding of wild type flies to those of mutants in which the TRPA1 gene was knocked out -- unable to manufacture the protein. The team placed 50 to 100 flies that had been purposely starved for a day in a covered plate with 72 wells full of two concentrations of sugar water. The wells containing the high concentration of sugar water were laced with different bitter compounds, including quinine, caffeine, strychnine and aristolochic acid. This bitter/sugar water was distinguished with blue food coloring as opposed to the pure sugar water, colored red. A wild type fly normally would consume the more sugary water because, like humans, it has a "sweet tooth." 
However, if the more sugary water is laced with an aversive flavor, they choose the less sugary water. After allowing the hungry wild type and mutant flies to feed from the wells, the team froze and then counted the insects, separating them based on belly color: red, blue or purple. Surprisingly, most of the mutants avoided all but one of the bitter compounds -- aristolochic acid, a naturally occurring chemical produced by plants to prevent themselves from being eaten by insects. The majority of the wild type were red, the appropriate color for having chosen the less sugary water; and the mutants mostly were blue, the color associated with the high concentration of sugar laced with aristolochic acid, because they couldn't taste the noxious chemical. "To our surprise, it was looking at first like TRPA1 didn't have a role in responding to anything," Montell said. "The aristolochic acid was literally the last compound we tried. I certainly wasn't expecting that the TRPA1 would be so specific in its response." The team followed up with electrophysiology tests on both wild type flies and those lacking the TRPA1 gene. By attaching electrodes to the tiny taste hairs on the labellum, the scientists were able to measure the taste-induced spikes of electrical activity resulting from neurons responding to the noxious chemicals. TRPA1 was required for aristolochic acid-induced activity by neurons, meaning it's essential for aristolochic acid avoidance. TRP channels also play important roles in taste in mammals, but the requirement is very different, Montell says. While one mammalian TRP channel is required for tasting all sugars and bitter chemicals, no single insect TRP has such a broad role. "It's important to make this discovery in insects, not only because it's interesting to trace the similarities and differences through millions of years of evolution, but also because of the possible practical applications" Montell says. "By targeting this TRP channel, we might be able to prevent insects from causing crop damage." Authors of the paper, in addition to Montell, are Sang Hoon Kim, Youngseok Lee, Bradley Akitake, Owen M. Woodward, and William B. Guggino, all of Johns Hopkins. This research was supported by a grant from the National Institute on Deafness and Other Communication Disorders. Materials provided by Johns Hopkins Medical Institutions. Note: Content may be edited for style and length. S. H. Kim, Y. Lee, B. Akitake, O. M. Woodward, W. B. Guggino, C. Montell. Drosophila TRPA1 channel mediates chemical avoidance in gustatory receptor neurons. Proceedings of the National Academy of Sciences, 2010; DOI: 10.1073/pnas.1001425107 Johns Hopkins Medical Institutions. "Project fruit fly: What accounts for insect taste?." ScienceDaily. ScienceDaily, 28 April 2010. <www.sciencedaily.com/releases/2010/04/100423113824.htm>. Johns Hopkins Medical Institutions. (2010, April 28). Project fruit fly: What accounts for insect taste?. ScienceDaily. Retrieved October 1, 2016 from www.sciencedaily.com/releases/2010/04/100423113824.htm Johns Hopkins Medical Institutions. "Project fruit fly: What accounts for insect taste?." ScienceDaily. www.sciencedaily.com/releases/2010/04/100423113824.htm (accessed October 1, 2016). Insects (including Butterflies) Drought Research Sensory neuron Trophic level Taste Sensors in Fly Legs Control Feeding Feb. 22, 2016 — Feeding is essential for survival. 
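To make the two-choice assay concrete, here is a small sketch of how results like these are often quantified with a preference index. The formula and the fly counts below are hypothetical illustrations added for this write-up, not numbers or methods taken from the PNAS paper.

# Hypothetical two-choice feeding analysis: flies are scored by abdomen color
# after feeding on red (plain sugar) or blue (sugar + bitter compound) wells.
# The counts below are invented for illustration.

def preference_index(n_red, n_blue, n_purple):
    """Fraction choosing plain sugar minus fraction choosing the laced food.
    Purple flies (ate from both) count half toward each choice.
    +1 = complete avoidance of the bitter food, 0 = no preference."""
    total = n_red + n_blue + n_purple
    if total == 0:
        raise ValueError("no flies scored")
    return ((n_red + 0.5 * n_purple) - (n_blue + 0.5 * n_purple)) / total

wild_type = preference_index(n_red=70, n_blue=10, n_purple=20)
trpa1_mutant = preference_index(n_red=15, n_blue=65, n_purple=20)
print(f"wild type PI:    {wild_type:+.2f}")    # strong avoidance of the laced food
print(f"TRPA1 mutant PI: {trpa1_mutant:+.2f}")  # avoidance largely lost

A positive index close to +1 would correspond to the wild-type behavior described above, while an index near zero or below would correspond to the mutants that could no longer taste the noxious chemical.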
Senses such as smell or sight can help guide us to good food sources, but the final decision to eat or reject a potential food is controlled by taste. Scientists ... read more Fruit Fly Genetics Reveal Pesticide Resistance, Insight Into Cancer June 5, 2015 — The miniscule and the massive have been bridged in an effort to better understand the mechanisms behind several unique features of fruit fly genes. Some of these genes also shed light on the ... read more Who Knew? Fruit Flies Get Kidney Stones Too Mar. 23, 2012 — Research on kidney stones in fruit flies may hold the key to developing a treatment that could someday stop the formation of kidney stones in humans, scientists have ... read more The Buzz Around Beer: Why Do Flies Like Beer? Nov. 17, 2011 — Ever wondered why flies are attracted to beer? Entomologists have, and offer an explanation. They report that flies sense glycerol that yeasts make during fermentation. Specifically, they found that ... read more Strange & Offbeat
Rapid-scanning microscope with no loss of quality
Scientists have developed a rapid-scanning microscope with no loss of quality. Researchers at the University of Leicester have developed a new form of digital microscope which can create an image 100 times faster than regular equipment -- without losing image quality. The team of scientists have developed a new type of confocal microscope that produces high-resolution images at very fast speeds. The findings are due to be published in the online journal PLoS ONE on August 24. The device, which takes a cue from consumer electronics such as televisions, can be bolted on to a regular microscope and projects light through a system of mirrors on to the microscopic sample. The device projects patterns of illumination onto the specimen, and only light that is precisely in the plane of focus returns along the same path and is reflected by the mirror onto a camera to form an image. The ability to program the mirror device allows the illumination pattern to be adjusted easily for different types of specimens and conditions, giving ease of use and flexibility. Unwanted light that comes from regions of the specimen which are out of focus is rejected, improving the image quality. The resulting images can be scanned on a computer at around 100 frames per second, showing biological processes such as cell activity at much higher speeds than regular microscopes -- which tend to be capped at around 1 frame per second. The Leicester team's microscope has no moving parts, making it robust, and the use of a programmable, digital micro-mirror allows the user to alter the size and spacing of mirrors in order to choose the quality of the image and adapt to different imaging conditions. Consequently, it has much greater flexibility than other microscopes capable of similar speeds. The researchers believe this technology will be a big help to those working in many scientific fields, including biomedical research and neuroscience. The research was led by Professor Nick Hartell, of the University's Department of Cell Physiology and Pharmacology, who plans to use the new device for his own work studying the cell mechanisms involved in the brain's storage of memories. The project lasted three years and was funded by the Biotechnology and Biological Sciences Research Council (BBSRC), which has also provided funding for the team to develop the device as a commercial product. Professor Hartell said: "We built the device as there is a 'need for speed'. I found out about this technology from its use in projectors and realized that it could be used to develop a microscope. "Modern biological research, and modern neuroscience, depends upon the development of new technologies that allow the optical detection of biological events as they occur. Many biological events take place in the millisecond time scale and so there is a great need for new methods of detecting events at high speed and at high resolution. "We are very excited because we have been able to go from a concept, to a working prototype that is useful for my research into neuroscience. There is a good chance that we will be able to make a product and see that being used in labs in the UK and worldwide." Neil Radford from the University's Enterprise and Business Development Office adds: "This capability provides a breakthrough from traditional Nipkow disk technologies and Professor Hartell is now working closely with us to commercialize the technology with a leading scientific instrument manufacturer."
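The article describes the principle — project a programmable illumination pattern and keep only the light that returns along the illuminated path — without giving implementation details. The sketch below is a rough illustration of how such a pattern-scanning acquisition loop might be organized; it is not the Leicester group's software, and the function names, the grid-pattern scheme and the NumPy arrays standing in for the micromirror device and camera are all assumptions made for the example.

# Illustrative sketch of pattern-scanning confocal acquisition.
# A real system would drive a digital micromirror device (DMD) and a camera;
# here both are simulated with arrays so the logic can run anywhere.
import numpy as np

def grid_pattern(shape, spacing, offset):
    """Binary illumination pattern: one 'on' micromirror every `spacing` pixels."""
    pattern = np.zeros(shape)
    pattern[offset[0]::spacing, offset[1]::spacing] = 1.0
    return pattern

def acquire_frame(pattern, specimen, blur):
    """Pretend camera exposure: in-focus light follows the pattern,
    out-of-focus light is spread evenly (the `blur` fraction)."""
    in_focus = pattern * specimen
    out_of_focus = blur * specimen.mean()
    return in_focus + out_of_focus

def confocal_image(specimen, spacing=4, blur=0.5):
    """Step the pattern over every offset and keep only light that returns
    along the illuminated path, rejecting the out-of-focus background."""
    image = np.zeros_like(specimen)
    for dy in range(spacing):
        for dx in range(spacing):
            pattern = grid_pattern(specimen.shape, spacing, (dy, dx))
            frame = acquire_frame(pattern, specimen, blur)
            image += pattern * (frame - frame.min())  # crude background rejection
    return image

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    specimen = rng.random((64, 64))
    print(confocal_image(specimen).shape)

In this sketch a denser pattern (smaller spacing) needs fewer frames per image and is therefore faster, at the cost of weaker rejection of out-of-focus light, which mirrors the speed-versus-quality trade-off the article attributes to the programmable mirror.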
Materials provided by University of Leicester. Note: Content may be edited for style and length. Franck P. Martial, Nicholas A. Hartell. Programmable Illumination and High-Speed, Multi-Wavelength, Confocal Microscopy Using a Digital Micromirror. PLoS ONE, 2012; 7 (8): e43942 DOI: 10.1371/journal.pone.0043942
科技
2016-40/3983/en_head.json.gz/7195
WHOI Ship Hunts for Revolutionary War Wreck. A research vessel joins the search for John Paul Jones's famous ship. By Amy Nevala :: Originally published online January 29, 2008 : In print Vol. 46, No. 2, Apr. 2008. Topics: Archaeology. One of the fiercest battles of the Revolutionary War raged off the coast of Flamborough Head, England, on Sept. 23, 1779, pitting the American ship Bonhomme Richard against the British HMS Serapis. After almost three and a half hours of combat, the American captain, John Paul Jones, uttered his famous phrase: “I have not yet begun to fight!” He emerged victorious, capturing Serapis in English waters. But the Bonhomme Richard, burned battling Serapis, sank in the North Sea. More than two centuries later, the wreck’s location remains a mystery, which crew members on the WHOI-operated research vessel Oceanus helped try to solve in July 2007. On an expedition funded by the Office of Naval Research, the 13-member Oceanus crew sailed off the coast of England for three days of exploration with members of the Groton, Conn.-based Ocean Technology Foundation, as well as three archeologists and a war historian. “We were all pretty jazzed about the opportunity. How often do you get to be a part of something so historic?” said Oceanus Capt. Diego Mello, who was on his first shipwreck-finding trip—and also his maiden voyage through the English Channel into the storm-tossed North Sea. Melissa Ryan, co-chief scientist with the Ocean Technology Foundation, called the Oceanus crew “true professionals who rose to the task despite some rather grim weather conditions.” A good man is hard to find To begin the search, researchers used five black-and-white sonar images taken during an initial survey by the Ocean Technology Foundation in 2006. Each image provided a hazy outline of seafloor wreckage that researchers determined might be the Bonhomme Richard. (Jones had named the vessel after Benjamin Franklin, the American Commissioner in Paris at the time, whose Poor Richard's Almanac had been published in France under the title Les Maximes du Bonhomme Richard.) To fine-tune their search, they also used eyewitness accounts of the battle, the ship’s log, court testimony at the time, damage assessments and information about the wind, weather, and tide, and computer models used by the Coast Guard to find lost ships. The compilation narrowed the search to five potential wreck sites, all situated within four nautical miles of each other. Over three days, researchers began ruling out candidates. Almost immediately, researchers deemed one site too close to shore, Ryan said. To search the other sites, the researchers used Seaeye Falcon, a remotely operated vehicle, or ROV, that roams the seafloor, sending real-time images to researchers via a cable to the ship. A second site turned out to be a sunken cargo of large stones. Two other sites were indeed shipwrecks, but Ryan said the vessels were too modern to be the Bonhomme Richard. A fifth site remains intriguing, Ryan said. “Whatever is underneath is buried in a large mound of sand that was impossible for the ROV to see beneath,” she said. Don't give up the ship Ryan said she hopes to hear in February about receiving a grant to return in summer 2008. If they find the wreck, it will fall under the jurisdiction of the Naval Historical Center, which supervises operations at any sunken warship. “There will be archeological mapping, inventorying of artifacts before we would even attempt to do any salvage work,” Ryan said.
Before the expedition, Capt. Mello spent hours reading about the 1779 battle, the sunken ship, and its design and operation. A few times during the trip, he said he joined researchers as they watched the ROV scour the seafloor, then relay images of the various objects it found. He expressed some disappointment in not finding the wreck. “We hoped a cannon would pop up—even just a cannonball, some artifact from the battle,” he said. While there are no immediate plans to continue the search using Oceanus, he remains hopeful that the wreck will be located. “John Paul Jones didn’t give up,” he said. “Neither should we.” Funding for the project came from the Office of Naval Research as well as from public and private sources.
科技
2016-40/3983/en_head.json.gz/7198
Uber Expands Its Insurance for a Future Where Private Cars Are Public Transit subscribe Author: Marcus Wohlsen. Marcus Wohlsen Business Date of Publication: 03.14.14. Uber Expands Its Insurance for a Future Where Private Cars Are Public Transit Photo: Uber The world appears to be moving towards a future where private vehicles double as public transportation. But the road along the way is far from smooth. The latest evidence: an announcement this morning from San Francisco ride-sharing startup Uber that it’s expanding insurance coverage for the drivers that make its service go. The company — a poster child of the “sharing economy,” though the company shuns the term — is expanding insurance plans to cover drivers who are running Uber’s app in their personal vehicles but aren’t carrying passengers at the time of an accident. The decision stems from the case of a little girl struck and killed by an Uber driver New Year’s Eve in San Francisco. Because the driver wasn’t carrying a passenger, the girl’s death wasn’t covered under Uber’s insurance policy. The new policy would provide some coverage as long as a driver for the company’s Uber X service has the app open. The issue is complicated because, as in the case of 6-year-old Sofia Liu’s death, drivers for the company’s Uber X service use their own cars. It might seem obvious that if you’re hit by a yellow taxi, both the driver and the taxi company are liable. But in the gray area of Uber X cars — which are owned by their drivers but are used in work for the company — insurers, regulators, and ride-sharing startups are far from a consensus on who should cover what. Uber says it’s trying to provide some clarity. And it’s not alone. In a sign of how pressing the “insurance gap” issue has become — and how competitive the race is between ride-sharing startups — the company’s pink-mustached rival Lyft has announced a similar expansion of its insurance coverage. The question is whether these new policies will mollify critics and regulators — and whether companies like Uber and Lyft can profitably run their operations as insurance costs continue to rise. Uber has always positioned itself as an app-based platform for ride-sharing, not a transportation company, and that has fueled some of the uncertainty around the insurance issue. The company vets drivers before allowing them on its system, and drivers carrying passengers are covered by a $1 million liability policy when they’re working for the company. But just having the Uber app open creates an ambiguous situation. Maybe drivers are actively looking for a fare — which looks a lot like working for Uber. But maybe they’re just checking in to see how busy it is that day. Or perhaps they just keep it open all the time in case they feel like accepting a fare every now and then, even if they’re really on the road just to go to the grocery store. An attorney for Sofia Liu’s family argued in a wrongful death lawsuit that because the driver in her death had the app open — his attorney says he was actively looking for a fare — he was on the clock for Uber. “Regardless of whether a driver actually has a user in their car, is on their way to a user who has engaged the driver through the app, or simply is logged on to the app as an available driver, Uber derives an economic benefit from having drivers registered on the service,” the suit says. With its new policy, Uber seems to be splitting the difference. 
According to the company, its new policy kicks in up to $100,000 in coverage for injuries if a driver’s personal insurance declines the claim. (Uber says that in the case of Sofia Liu’s death, the driver’s insurance company has offered to cover up to the maximum of his plan.) Uber’s “insurance gap” coverage is not as much as the million-dollar policy for cars carrying passengers, but it seems to be Uber’s way of ensuring some level of coverage without making Uber X — the company’s cheapest option — prohibitively expensive. “The bottom line is that the drivers who use our app and the riders and communities we serve should have the confidence that any potential ‘insurance gap’ is covered with a safety net as governments and insurance companies work out the details of ridesharing in their cities and state,” the company says. The new plan may help solve this particular controversy, or not. Even if it does, many other issues loom on the horizon. In many of those cities and states, policy makers are questioning whether Uber and similar services should be legal at all. Until that basic question is answered, the ostensibly simple act of sharing a ride is only going to get more complicated.
科技
2016-40/3983/en_head.json.gz/7223
Shark tagging technique criticized as being cruel November 17, 2009 11:53:37 AM PST Dan Noyes SAN FRANCISCO -- A new series debuts Monday night on the National Geographic Channel featuring a controversial tagging technique for great white sharks. They are supposed to be protected under federal law as an endangered species, but one scientist is being accused of animal cruelty by other professionals in the field. Scientist Michael Domeier and his film crew quietly produced the show off the coast of Mexico. But two weeks ago, he came for the sharks at the gulf of the Farallones off San Francisco and that has set off a firestorm of debate among shark researchers. Domeier gave ABC7 a tour of his research ship in San Diego on Friday. He showed ABC7 the massive hook he has engineered to catch great white sharks and the platform that lifts them out of the water. Dan Noyes: "That's huge." Domeier: "Well, you think it's huge, but it doesn't look huge on a 4,000 pound shark, believe me." But, there's nothing like seeing the system in action. Monday night's National Geographic program explains that Domeier and his crew have been doing this off the coast of Mexico for the past two years. They bait the hook and after the shark strikes, it struggles against several buoys attached to the hook. "They're going to go fight the shark, tire it out, and bring it back to us," Domeier said. The fight can drag on for an hour or more, at which point the crew guides the shark onto the platform and raises it out of the water. They spend up to 20 minutes attaching a satellite tag and taking blood and tissue samples, before releasing the shark. Dan Noyes: "What sort of stress is that on the animal?" Domeier: "Well, certainly, we have to stress the animal, I mean, we have to tire the animal out, otherwise it's going to hurt itself when we pull it out of the water, or perhaps hurt us." Domeier has caught and tagged 15 great whites off Mexico's Guadalupe Island, but now he is coming under fire after tagging two sharks in the Farallones Marine Sanctuary off San Francisco's coast two weeks ago. He had to leave half of the hook in one of the sharks after it became lodged deep in its throat. "I mean, it's like a double punch, it's like one, a right, and then a left," University of California, Davis researcher Peter Klimley said. Klimley is one of the world's foremost white shark experts. He says the long struggle after the shark gets hooked and the time out of the water amount to animal cruelty. "I'm a behaviorist and I don't torture animals and I think this would be something I wouldn't, I wouldn't do this to such a big animal," Klimley said. Domeier: "People go out and catch and release fish all the time." Dan Noyes: "Right, but not the great white." Domeier: "No, but this is the same thing though, it's the same exact stress that you put on a striped bass when you bring it to the boat and you let it go. It is the same." Klimley actually uses the same process on small hammerhead sharks, which weigh about 200 pounds, but he says he would never try the procedure on a 4,000-5,000 pound great white. "You take them out of the water and now they're not supported by the water and they flatten out, and that can squeeze their internal organs, if it's a female and she has young, young can be forced out," Klimley said. 
At the Farallones, federal law prohibits anyone from getting within 50 meters of a great white or from attracting sharks with food, bait, chum, dyes or decoys, but the sanctuary superintendant Maria Brown gave Domeier a permit. Dan Noyes: "These great whites are protected from harassment, protected from even approaching them. How is this OK under those rules?" Brown: "This research helps us protect white sharks." Brown downplays the stress to the shark. She was on Domeier's ship, as the crew hooked the second shark. "I equated it to, it felt like what it's like when I go to the dentist; when you go in, you get a cavity filled, it's something that maybe you don't want to go do, but you do it, it's quick, it's over, it's done," she said. Researchers such as Sean Van Sommeran have had success tagging great whites quickly, with a pole. "Are we actually protecting them by doing this, and you know, we're not breaking eggs and making omelets, we're working with wildlife you know, protected, potentially endangered wildlife," he said. Domeier says he has to catch the sharks to install longer-lasting tags, but does Domeier's technique actually change the behavior he wants to study? Tracking data on the two sharks tagged at the Farallones shows they traveled quickly out of the area; the second swam away in a straight line for 500 miles over the past two weeks. Dan Noyes: "You're confident that the tagging of those two sharks didn't alter their behavior?" Domeier: "No, I well for a for a few hours they might be tired, might be sore the next day, like if you went out and ran 10 miles and you don't run every day, you're going to be a little stiff the next day." "There are benefits and costs to doing this sort of thing, and here, I think, maybe the costs outweigh the benefits, that's my opinion," Klimley said. Right now, Domeier and his crew are heading back out to Guadalupe Island for more tagging and more filming. Expedition Great White airs Monday at 9 p.m. as part of Expedition Week on the National Geographic Channel. The Farallones sharks will be featured in the 2011 season. Load Comments
科技
2016-40/3983/en_head.json.gz/7226
December 23rd, 2008 08:08 AM ET Obama’s Science Team: Reshaping Our Long-Term Future David Gergen | BIO AC360° Contributor CNN Senior Political Analyst In coming months, public attention will heavily focus on the performance of Barack Obama’s economic and national security teams, but over the long haul, his new team in science and technology could do even more to shape the country’s future. They will arrive not a moment too soon. Over the past seven-plus years, many leaders in the science and technology community feel they have been in a virtual war with the Bush administration. They despaired, as one told me this weekend, that “no one was ever home” and that the Bush team was so dismissive of key scientific research that it threatened our future. In a brief capsule, here are some of their key complaints: The President and the men around him have been so ideologically opposed to the idea of man-made global warming that they first put their heads in the sand, refusing to accept evidence and editing reports from scientists inside the government such as the EPA, sending morale down the tubes. More recently, President Bush has acknowledged that man has contributed to warming, but the U.S. continues to drag its feet in international negotiations and Bush has resisted mandatory emission standards. Top scientific leaders in the administration have sometimes been silenced, including a top NASA climate scientist, James Hansen, and former Surgeon General Richard Carmona. A number of government scientists have resigned. The President twice vetoed bills for stem cell research over the objections of many in the scientific community as well as Bill Frist, the cardio-surgeon who was a GOP leader in the Senate. The President allowed funding for the National Science Foundation to go essentially flat and, after sizable increases, also allowed a flattening of the budget for the National Institutes of Health. The President did sign onto the competitiveness agenda proposed by a special commission of the national academies of science and engineering – and he helped to secure Congressional passage of legislation endorsing the agenda. But, stunningly, the Congress refused to fund it – and the President put up very little fight. This November, the president of the American Association for the Advancement of Science publicly lambasted the administration for putting unqualified political appointees into permanent civil service jobs that make scientific policy decisions. A case in point: Todd Harding, a 30-year-old with a bachelor’s degree from Kentucky’s Centre College, was named to a permanent post at the National Oceanic and Atmospheric Administration working on space-based science for geostationary and meteorological data. Even as some positions were filled with non-entities, the White House left vacant the post of Executive Director for the President’s Council of Advisors on Science and Technology. Against this backdrop, it is not surprising that the scientific community began rallying to Barack Obama months ago. Periodically, Dr. Harold Varmus, now chief of Memorial Sloan Kettering, convened informal conference calls among leading scientists to provide counsel to the Obama campaign, and they also met with Obama for a morning of conversation in Pennsylvania. This past Saturday, Obama began filling out his appointments to his science and technology team, and it is a star-studded cast, promising a sharp break with the Bush administration.
Among those who will be surrounding him are a physicist who has won a Nobel Prize (Steven Chu), a physicist and top expert on global warming who will be his top science adviser in the White House (John Holdren), a chemical engineer who has won acclaim for as an environmental leader in New Jersey (Lisa Jackson), a marine biologist is a leading expert on the impact of global warming on the oceans (Jane Lubchenco),. a polymath who heads up one of the most important genome projects in the country (Eric Lander), and a biologist who won a Nobel prize in medicine (Varmus). It doesn’t get any better than that! For at least half a century, America has been the world’s premier nation for scientific and technological research. Remaining at the cutting edge is not only important for the advancement of knowledge, but it is also critical – absolutely critical - for the creation of high-powered jobs and meeting the challenges of global warming. In his Internet address on Saturday, Obama said, “It’s time we once again put science at the top of our agenda and worked to restore America’s place as the world leader in science and technology.” He’s right – it is none too soon to call off the war and build a strong, new alliance between government and science. . Filed under: Barack Obama • David Gergen • Raw Politics • T1 Finally. The Dark Ages are once again coming to an end. December 23, 2008 at 12:46 pm | John Conservatives have their use, they ride the brakes while progressive people respond to the simple truth that things change, and one has to change with them or become extinct. Oh yes, things change, whether one likes it or not. The planet ages, new discoveries are made, the population expands in proportion to and because of human successes. I have been horrified the last 8 years. It will be nice to see anything resembling sanity guiding our political power. December 23, 2008 at 12:45 pm | Lisa Perhaps this unparalleled team of scientists can illuminate for all the nay-sayers that have had Bush and his "head-in-the-sand" gang supporting their ignorance and denial that global warming can indeed cause the extremes of weather we have been witnessing around the world for the last decade. Global warming does not necessarily mean that the temperature in your little corner of the world will increase noticeably. It means the average of the globe's temperature will increase – which is already creating stronger hurricanes, desertification and changes in the jet stream and ocean currents. I would like to suggest that Obama's team of scientists convene a once-weekly television program that will tackle a different aspect of science each week. Since Americans insist on believing what they see on TV, perhaps this vehicle can be used to set the record straight and to enlist the support of the populace to heal the wounds we have inflicted upon our Earth. December 23, 2008 at 12:45 pm | DQF The human race reserves perhaps its biggest scorn and disgust for animals who fowl their nests. Look in the mirror humans! It is going to be expensive to clean up the mess that we are putting ourselves in but it will be more expensive the longer we wait. Indeed, if we wait too long we may hit a tipping point where no amount of effort or money can fix it. Hopefully this new team will inspire the nation to do the right things fand clean up our only nest. December 23, 2008 at 12:44 pm | Shaun G At least one of the points Mr. Gergen mentions is not a squabble over science but over bioethics. 
The Bush administration's opposition to embryonic stem cell research is a matter of ethics. You can agree or disagree that it is unethical to kill embryos for the sake of research, but you can't say that it's fundamentally a scientific disagreement. December 23, 2008 at 12:44 pm | Vishal Science and technological innovation made america the great country it is. However we have now been dominated by emerging countries who value science and math much more than we do now, which is a contributing factor to america's decline. schools need to stress math and science much more than they do now and it starts with an executive administration that values science and its search for the truth. December 23, 2008 at 12:43 pm | Ben Stein Uh, science caused the holocaust. Scientists made the gas chambers. Science is bad. I ain't come from no monkey! God made me and everything there is, not only that, but I know HOW he did it cuz the Bible says so in real plain English, and if English was good enough for Jesus, it's good enough for me! Science is the devil and Obama is the anti-christ, long live Bush! December 23, 2008 at 12:43 pm | Marty So, now we can bow to the god of science. How exciting... I can hardly wait, NOT!! Micro evolution may be a fact of life, as in different varieties of dogs, but macro evolution, dogs becoming horses, that is just too ridiculous to believe. And where are the facts for that one? Any, repeat, ANY fossil evidence of half dog half horse out there? No? Just what I thought. And any living evidence of one specie becoming another completely different specie? No? Just what I thought. Enjoy your god of science and let me enjoy the one, true God of the universe, the Creator God. December 23, 2008 at 12:43 pm | Marc M Man-Made global warming is a myth and a hoax that solely exists for groups to get their hands on grant money and to bolster their own agendas. The earth has been icefree before, when man didn't exist. When man wasn't an industrial society. How can you blame man for what has happened before when they weren't here? The earth has cooled every single year since 1999. Man is only responsible for under half of a percent of all the greenhouse gas emissions on the planet. Oceans are CO2 scrubbers that remove CO2 from the atmosphere. Whether the earth warms or not, it is not because of man's activities. Any sensible, logical look at the facts dictates that simple fact. Sadly so many are brainwashed that this hysteria is going to snowball and cause economic depressions where there doesn't have to be. December 23, 2008 at 12:43 pm | Jeff - Massachusetts January 20, 2009 can't come soon enough. Thank you, David Gergen, for putting this in perspective. December 23, 2008 at 12:42 pm | Robert -Philadelphia Thank you David Gergen! For years this administration, has been quietly destroying inroads of our scientific community going with political appointees who just did not do anything. Science is why our counrty went to the moon, invented unlimited tools for mankind, saved lives in operating rooms and discovered drugs to help mankind. Under Bush and his zealots, we have gone intentionally backwards. Why are these people not being prosecuted for negligent performance of their work? Why are they in positions without any qualifications? We can only hope PE Obama weeds these useless entities out of their position and puts the right people back in so we can once again lead the world in science and technology! December 23, 2008 at 12:42 pm | Tyler Creationism vs. 
Evolution is not a scientific debate. Macroevolution and religion are faith. See Darwin's writings on the subject. Not that I do not respect the human pursuit of truth, but come on, we are no closer to knowing what's going on than 150 years ago. Answers only reveal more mysteries. Also, while I'm glad we will be stepping out of the dark ages of the Bush administration, all of these "mankind is the problem" global warming appointments are disconcerting. Hopefully that fear will be unrealized and we won't be swimming in carbon taxes. December 23, 2008 at 12:41 pm | Vincent Petrosino David Gergen is always clear, concise and factual. His commentaries on CNN are like beacons of light on any topic. Finally, science and technology will assume its rightful place in American government and try to undo the damage created by Bush, the GOP, and most of all right-wing religious conservatives. Favorite David Gergen moment: During the last election, David Gergen was asked how McCain could make up the deficit he had suffered during the campaign especially after his faux-pas of trying to cancel the first debate. Gergen answered blithely "Hell if I know!" You go, David! December 23, 2008 at 12:41 pm | Donna Ramirez THANK GOD!! December 23, 2008 at 12:41 pm | Ed Tallahassee FL Science! Change we can believe in! December 23, 2008 at 12:40 pm | Jim B. Can we have it both ways? Can we have a massive economic rescue plan, and have plenty of money for the advancement of science (not only in the laboratory, but in the classroom!) ? Can we do it? Can we provide medical care for every American, ensure a quality education for our children, recreate the millions of jobs (and then some!) lost these past few years, while ensuring the safety of America from those who want to destroy us? In truth, I don't think "Can we do it?" is even a question. Instead, we must in one loud voice boldly proclaim, "We must do it, and we must not fail!!!" December 23, 2008 at 12:40 pm | Luis, Seattle Hopefully the next step is to remove the myth of "intelligent design" that was dispursed in the educational system of the past 8 years. December 23, 2008 at 12:40 pm | Jason "I would love to see Obama declare the truth once and for all: Evolution is a fact of all life." Yes. Because Obama declaring this would bring an end to all discussion. PLEASE. December 23, 2008 at 12:38 pm | Michael in San Diego This kinds of articles frighten me. I'm glad the Obama administration will make strides to improve the science-environment in this country but he won't be in office forever and I don't think that one or even two terms will be able to reverse the damage done by the current administration. I just hope Obama is the "start" of something new for this country or we're in for a very tough future. I give him the benefit of the doubt but I hope people understand that Obama can lay down the foundation for a brighter future but there are always the "Sarah Palin's" of the world lurking around the corner who don't believe in global warming and are looking to exploit any mistakes Obama might make to take us back into the scientific dark ages where making quick bucks trumps science. December 23, 2008 at 12:37 pm | Charles L. Adams How long can our industrial-based prosperity survive now that the global warming liars and idiots are in charge? James Hansen is a proven member of both groups (liar and idiot). As the globe freezes down these fools will continue to propagate the global warming myth. 
They will, if allowed, destroy our energy industries and consequently our economy that supports a growing mass of social parasites. How will the parasites have their houses air conditioned, their bellys filled and their health care provided after our industrial base collapses? Wind freaking mills? December 23, 2008 at 12:36 pm | nebi great observation Davide, you always tell the truth keep up the good work. December 23, 2008 at 12:36 pm | Citizen It is about time we get back on the right track and let science help lead this nation to a brighter future. President Bush has been horrible for the U.S. in so many ways. December 23, 2008 at 12:36 pm | Dave Once again,David Gergen's comentary is spot on! Obama is keeping his promise to appoint brightest,the best,the most educated people without political ideologies.That certainly wasn't the case in the last eight years. December 23, 2008 at 12:36 pm | Don Mattox It will be nice to have scientist around President Obama who know the difference between Speculation – Hypothesis – Theory – Proven Fact. As for global warming I suggest your readers go to: U.S. Senate Minority Report: "More than 650 International Scientist Dissent over Man-made Global Warming Claims" by Senator InHofe. December 23, 2008 at 12:36 pm | Njeri Great work Obama!! December 23, 2008 at 12:36 pm | Ursula What is science? LOL just kidding! December 23, 2008 at 12:36 pm | Jim Boulter Finally ... an informed and cogent voice from the wilderness. Go science!!!!! December 23, 2008 at 12:35 pm | Jim W Anyone who mentions "science" and "man-made global warming" in the same sentence doesn't have any credibility, Mr. Gergen. December 23, 2008 at 12:34 pm | abrams-seattle ...and you can have a beer with a scientist, too! December 23, 2008 at 12:34 pm | Scientist Top scientists haven't been silenced by the administration, it has been silenced by the media promoting global warming for their own agendas. The junk science that they have been promoting will do more damage in the long run to our economy and our lives. December 23, 2008 at 12:34 pm | Colin Really we need to just label Bush in the way he already labeled himself, an obsessive evangelical christian. There is no arguing with these people because most of them don't even understand concepts such as 'logic' or 'reason', things that the rest of us take for granted. They see the world through a starry-eyed "God will save everything yeaayyy!!!" mentality. I predict that if we continue on the same trend we are currently on, America's downfall will not linked simply to Bush and his religiosity, but to the entire evangelical christian movement and the moderate christians that allow for the hardcores to have the power that they do. December 23, 2008 at 12:34 pm | cameragirl I love David Gergen and that he is not a typical talking head but a real scholar. Also, I am pleased to know that science will have a place in government again. Thank goodness someone believes that intelligence is NOT a disease....for once. December 23, 2008 at 12:33 pm | Sherrye As an NIH researcher, I can tell you medical research has not suffered like this in over 15 years. Bush cut the NIH budget so horribly that we lost a generation of bright and ambitious scientists. (Under Clinton, roughly the top %16 of grant applications were funded, while under Bush, it has been reduced to roughly 7%.... a level too competitive to sustain even the best and most productive laboratories). I hope Obama's administration will invest in the remaining hardy few who survived Bush.... 
including me. Without a drastic and sudden increase in the NIH budget, our nation will certainly lose our position as medical research leaders. December 23, 2008 at 12:33 pm | farid shakur So let it be known, so let it be written; This planet is filled with people who are misguided about the real and rational understanding of this creation and how it works. Science "not RELIGION" is the salvation to our existence, if we as humans don't grow away from the misguided untruths we allowed RELIGION to inundate us with we wll be doomed! Get smart go science!! December 23, 2008 at 12:33 pm | Armando This is absurd. The socialist-environmental lobby has turned global warming into a fanatical religion devoid of science. If the world gets hotter it's global warming. If it gets colder it's global warming. If it stays the same it's global warming. Faith of the existence of man-made global warming has replaced science and fact. It is now a political and pseudo-religious movement that ignores all science or evidence. Global warming is not man-made. Climate change is a natural cyclical pattern driven by solar output and has existed for billions of years. On stem-cell research, Bush only prohibited government subsidies and support for embryonic stem cell research. PRIVATE research is still legal and readily available. December 23, 2008 at 12:33 pm | Steve Simply more reasons to separate church and state. I don't want someone's religious views, even if I agree with them, to stand in the way of cures and treatments that could extend life and/or alleviate human suffering. December 23, 2008 at 12:33 pm | « Previous
科技
2016-40/3983/en_head.json.gz/7231
Browsing Posts published in February, 2014 Of Friends and Moles by Adam M. Roberts — Our thanks to Born Free USA for permission to republish this post, which originally appeared on the Born Free USA Blog on February 18, 2014. Roberts is Chief Executive Officer of Born Free USA. It is a special privilege to know someone who has authored a book, and even more exciting when it’s one of your best friends. I have known Dr. Rob Atkinson for more than a decade, and can honestly say that he’s one of the people in my life I admire most. Rob and I have been together on safari in Kenya, searched for wildlife in the jungles of Vietnam, eaten lunch from stalls on the streets of Bangkok, and discussed wildlife trade policy for hours in Geneva coffee shops. And while Rob is a true friend, he is also a learned one who applies his vast knowledge to animal protection and wildlife conservation. His latest endeavor, Moles, enables us all to have access to a significant resource about this enigmatic animal. All news to me: Moles are typically black or dark grey, but they can be cream, apricot, rust, piebald, grey, silver-grey, yellow and grey, or albino. Moles can lift twenty times their own body weight. continue reading… Books We Like, Partner Blogs, Posts Action Alert from the National Anti-Vivisection Society Each week the National Anti-Vivisection Society (NAVS) sends out an e-mail alert called Take Action Thursday, which tells subscribers about current actions they can take to help animals. NAVS is a national, not-for-profit educational organization incorporated in the State of Illinois. NAVS promotes greater compassion, respect, and justice for animals through educational programs based on respected ethical and scientific theory and supported by extensive documentation of the cruelty and waste of vivisection. You can register to receive these action alerts and more at the NAVS Web site. This week’s Take Action Thursday takes a look at current efforts to try to silence animal advocates through the passage of ag-gag legislation. continue reading… Legal Issues, Posts Ag-gag bills, Animal cruelty, Arizona, Central Valley Meat Company, Compassion Over Killing, Factory farming, Food safety, Idaho, Indiana, Mercy for Animals, Nebraska, US Department of Agriculture My Own Private Idaho: Pursuing Ag-Gag Secrecy by Kathleen Stachowski — Our thanks to Animal Blawg, where this post originally appeared on February 22, 2014. Kathleen Stachowski’s web site is Other Nations. “My Own Private Idaho.” You might know it as a ’90s era movie, but its new identity is being forged in the Idaho legislature right now. “My Own Private Idaho” could soon be how factory farm owners refer to their holdings–places where anything goes and no one knows–if ag-gag legislation is signed into law. But according to some, it goes far beyond undercover filming in animal agriculture settings. Bumps and bruises: The “inadvertent cruelty” of factory farming. Mercy for Animals Idaho dairy photo; click image. Ag-gag got a thorough spanking in state legislatures last year. The bills died well-deserved, good deaths–guess you could say they were euthanized–in 11 states. But all bets are off where Idaho is concerned; the Senate voted 23-10 in favor of SB 1337 (find the bill text here) and sent it on to the House. The bill’s sponsor, GOP Senator Jim Patrick, is an American Legislative Exchange Council (ALEC) minion, according to SourceWatch. I’ll wait while you grab the smelling salts. 
continue reading… Food and Farm Animals, Legal Issues, Partner Blogs, Posts Ag-gag, ALEC, American Legislative Exchange Council, Chickens, Cows, Factory farming, factory farms, Mercy for Animals, Pigs, Undercover photos, Undercover videos See the answer Wolves do it, bulls do it, even educated gulls do it…. At the risk of indelicacy at the very start of this week’s edition, the “it” in question is, well, the elimination of solid waste from the body. In the case of wolves, dogs, and even cows, it would seem that this elimination is effected with an eye toward the cardinal points of the compass.Wood frog (Rana sylvatica)–John Triana, Regional Water Authority, Bugwood.org To be a touch more direct, when dogs poop, scientists hypothesize, they do so on a north–south alignment. Now, given that the words “science” and “scatology” share a deep, deep common root in the speech of the proto-Indo-European peoples, it stands to reason that researchers should want to do more than hypothesize about such matters. But more, zoologists at Germany’s University of Duisburg-Essen are seeking to bring citizen science to bear on the question by gathering data from volunteer observers everywhere. If you’d like to help point them in the right direction, please sign up. continue reading… Animals in the News, Posts Alaska, Dogs, Frogs, Germany, Russia, Salamanders, Snakes Page 1 of 5123...5Next Page
科技
2016-40/3983/en_head.json.gz/7238
A Look Inside One of Samsung’s New Stores at Best Buy April 3, 2013 at 9:01 pm PT From a distance, the Best Buy in Lewisville, Texas, looks like the big-box retailer’s typical outlets that dot the suburban U.S. landscape. The blue-and-yellow logo shines atop a large building in a sprawling strip mall, next to a Guitar Center and not far from a Costco, a Chipotle and a Steak ’n Shake. Inside, though, it is home to one of the first Samsung shops to be located inside a Best Buy. Samsung’s 450-square-foot shop occupies less than 1 percent of the Best Buy’s overall area, but its central spot means that anyone looking to buy an iPad or Mac has to walk by the giant display of Samsung laptops, phones and tablets. That makes life pretty good for Brian Hagglund, the consultant Samsung hired to oversee operations at the store. Hagglund, who has worked as a representative for other electronics companies in the past, said he is often able to sell would-be iPad buyers on the benefits of Samsung’s pen-equipped Galaxy Note tablet. “That’s one of our biggest opportunities,” he said, noting he has a pretty easy sell if he gets just five minutes to show off the Samsung tablet. The Lewisville location is one of six stores where Best Buy has been testing the store-within-a-store concept ahead of Thursday’s formal announcement. So far, Saturdays are the busiest times, for potential buyers as well as those coming in to fix a problem or ask a question. (How to take a screenshot is the most common query, Hagglund said.) Hagglund and his part-time helpers can do basic support tasks such as updating a phone’s software or diagnosing hardware issues. They also have a phone they can use to call if they need further help. That phone has only been needed three times, Hagglund said, oddly enough all on one day filled with particularly challenging issues. With the Samsung Experience Shops inside Best Buy, the Korean electronics giant is hoping to give its customers a place to see multiple Samsung products in action, get help and build some of the same brand loyalty that Apple, Microsoft and Sony get through their stores without having to make the same investment in real estate. Samsung has made smaller retail ventures in the past, including a New York City showroom that it closed about a year ago after running it for several years. Retail head Ketrina Dunagan said the Columbus Circle location was popular internally, but did little for customers. More recently, Samsung had 14 kiosks during the past holiday season where it pitched its Galaxy range of products. Best Buy officials said the retailer found it had to do surprisingly little shifting to clear the space for Samsung, noting that what appears to be prime real estate was previously home to CDs and DVDs, not exactly the hottest segment of its business. But it is making a big investment in Samsung, hoping that the fast-growing brand will help bring in more customers at a time when physical retail store sales are struggling. Best Buy will continue to display Samsung phones, tablets, cameras and PCs in their respective categories, in addition to handing over the space needed for the Samsung shops. (Image: The Samsung Experience Shop inside the Best Buy in Lewisville, Texas occupies 460 square feet of prime retail space toward the front of the store.)
科技
2016-40/3983/en_head.json.gz/7342
Scientist Gretchen Daily Awarded Volvo Environment Prize By Darci Palmquist | September 11, 2012 | September 14, 2012 A member of the Conservancy’s Science Council and a founder of the Natural Capital Project, Gretchen Daily has been named Laureate of the 2012 Volvo Environment Prize for her contributions to the field of ecosystem services. Daily co-founded the Natural Capital Project—a joint partnership between The Nature Conservancy, World Wildlife Fund, Stanford University and University of Minnesota—to identify the economic value of nature. People depend on ecosystems such as forests and coral reefs for clean water, fertile soils, food, fuel, storm protection, minerals and flood control. Healthy habitats even help reduce the spread of infectious disease. But putting a price tag on these benefits is no easy task. Since its inception in 2006, the project has developed practical tools and methods for measuring nature’s benefits. While some critics say the value of nature can’t or shouldn’t be measured in terms of dollars, Daily disagrees. “We’re talking about 21st-century environmental protection,” she says. Demonstration of nature’s economic benefits will spur investment in future protection of resources, she explains. One aim of the Natural Capital Project is to help governments, policy leaders and corporations better understand nature’s value and invest in its protection. The prize also recognizes Daily’s expertise in sustainable development—she is a sought-after advisor on projects around the world that seek to conserve natural values while also enabling sustainable economic growth. In addition to her work with the Natural Capital Project, Daily is Bing Professor of Environmental Science at Stanford University and a Senior Fellow at Stanford’s Woods Institute for the Environment. She is also a member of the National Academy of Sciences. The Volvo Environment Prize is awarded annually to individuals for “outstanding scientific discoveries within the area of the environment and sustainable development.” The award—a prize of SEK 1.5 million (roughly $226,000 in U.S. dollars)—will be presented to Daily in November at a ceremony in Stockholm. Learn more about the prize and its 2012 winner, Gretchen Daily. (Image: Professor Gretchen Daily, Stanford University in California, and one of the world’s foremost experts on the valuation of natural capital is awarded the 2012 Volvo Environment Prize. Image courtesy of Volvo Environment Prize.)
科技
2016-40/3983/en_head.json.gz/7369
Chromebook Pixel: A netbook to challenge the notebooks By John C Abell March 1, 2013 Tags: gadget review | Go Bag Google unleashed a snarkfest when it introduced the Chromebook Pixel. The reaction was swift and mostly merciless. “Sorry, but there’s no defense for the Chromebook Pixel” claimed BGR. “Bizarre, pointless,” said Bruce Berls. The Wirecutter declared: “The Chromebook Pixel is not for you.” In one of the most positive receptions, ZDNet’s Matt Baxter-Reynolds calls it “deliberately bad” — and then goes on to give three reasons why Google was smart to release something that was “entirely illogical and unsellable.” So, naturally, I had to see for myself. After using it for four days, I’m not convinced this product is ready for mass adoption. That isn’t because the Chromebook Pixel is a joke, or a toy; it’s as solid a performer as any full-featured computer I’ve used. But it’s going to take a few generations to make this netbook a true contender in a notebook world. At $1,300 or more, this Pixel is clearly an early adopter’s plaything with a price point to prove it. As I wrote when I reviewed the entry-level Samsung Chromebook, there are compromises one has to make when considering a netbook. Chromebooks have a nascent operating system designed to be an all-in cloud-computing platform. You can’t install anything except extensions to Google’s Chrome browser, which serves as the interface to everything. There aren’t a lot of them, and there isn’t necessarily one for anything you might want or need to do. All of these downsides are easier to swallow when you are paying $250 (the cost of the Samsung) and not $1,300 for Pixel (I reviewed the $1,450 model, which includes 4G LTE connectivity). That price tag will scare many away, but that doesn’t mean this is an Edsel. The first generation MacBook Air with a 64 GB flash drive cost $2,800 five years ago — nearly three times what you’d pay for a better MacBook Air today. Apple stuck with a revolutionary, overpriced design until subsequent models were met with a collective “Now I get it!” The MacBook Air is now the dominant ultrathin. A 13-inch MacBook Pro with retina display will set you back $1,500. But the MacBook Pro doesn’t have a touch screen, and the Pixel’s screen resolution is slightly better — 2560 x 1700 at 239 PPI vs. 2560-by-1600 at 227 PPI. A comparable model of the highly-touted touchscreen PC like the Lenovo Yoga 13 running Windows 8 would cost about $1,150. However, that machine doesn’t have a high-resolution display. The Pixel has a backlit keyboard, an anodized aluminum case, and a power cord designed to be neat and organized. It boasts an acceptable five hours of battery life and weighs 0.10 kg less than the equivalent MacBook Pro 13-inch and only 0.17 kg more than the equivalent MacBook Air. This is a fast, powerful machine with one of the highest resolution displays on the market and a multi-touch screen — an unusual combo to say the least. Both dramatically enhance the computing experience in ways I did not expect. And they are a big reason why the Pixel’s less-than-perfect score is based on pricing rather than performance. The touchscreen interface was immediately useful and made point-and-click seem antiquated. Yes, we’re familiar with multi-touch screens because of smartphones and tablets. But it’s even more compelling on a laptop; since the screen is independently supported and can be tilted to any angle at any time, it’s like having a third hand.
I found myself almost instinctively reaching for the screen, touchpad and keyboard in swift succession, speeding up tasks. The Pixel’s screen is almost too good for web-standard images and video; visuals that look just fine on a typical display often appear mushy on the Pixel. Even the more smoothly rendered fonts were easier on the eye. It was difficult to return to a lower-resolution screen. HD video is nothing short of breathtaking. While most of my caveats about the entry-level Samsung Chromebook remain, some videos are available now via Amazon Instant and streaming from Netflix is now possible. A Chromebook is still unlikely to be the only computer you can own, and without a more robust software ecosystem you can sometimes be plain out of luck. Now that there are more video options, the biggest shortcomings are the inability to connect to a networked printer that isn’t connected to the Google cloud, and the inability to install third-party VPN software required by many companies. For everyday use, I was won over by the Pixel. Over several days of constant use I preferred the Pixel to my 13” MacBook Air already in my go bag. I just couldn’t quit it and, frankly, I’m dreading returning my review unit. But would I plunk down my own money for one? $1,300 is too steep at the moment. But a machine like this at $1,000 or less would make me think twice about buying another replacement MacBook Air, or a low-end netbook. The Chromebook Pixel is powerful and innovative enough to be taken seriously, and suffers mainly from inflated initial pricing. It is far from the white elephant initial reaction would have had us believe. It’s a serious contender in a new class of what will initially be expensive notebooks sporting gesture and ultra-high-resolution displays. Pixel, and its offspring, is well positioned to ride that wave. One comment, Mar 5, 2013, 6:27 pm UTC: Thank you for mentioning how expensive the Mac Air was when first released ($2800). People are trashing the Pixel (based on price) as if it actually were $2800 itself! Add to it, the fact that you get the best-resolution screen on the market AND it is a touchscreen-netbook, well, the price actually doesn’t seem all that far off. Anyway, I have been using an Acer C7 Chromebook for a few months now (it does 99% of what I need, BTW) and I would *love* one of these. I don’t know why people have to trash the Chromebook concept. Sadly, Chromebooks will only “make sense” to people once Apple releases their own version of one, and then — all of sudden — Apple will get credit for having created the category. Posted by MyProfileName John C Abell is a Reuters Columnist and reviewer for Reuters Go Bag. Most recently he was Wired's Opinion Editor and New York Bureau Chief. He has worked for Reuters in various capacities, from glorified copy boy to chief architect of the news agency's web news service. In between he was also the founding editor of Reuters.com — a laughable shadow of the current incarnation to which he humbly contributes, irony fully noted. Any opinions expressed here are the author's own.
科技
2016-40/3983/en_head.json.gz/7444
Bytes reports strong performance despite industry margin pressure Bytes Technology Group was the strongest performer and the largest earnings contributor to the Altron group for the year ended 28 February 2013.
He said the market for printing devices dropped some 20% during the year and, although the Bytes Document Solutions (BDS) business showed some improvement on the previous year with a 6% increase in revenue, EBITDA decreased 28%.
“Even so, BDS still made a considerable profit for the year under review and remains the biggest contributor to our group. Going forward, the business will focus on increasing its managed print services business and a higher-margin, annuity-based offering which will ensure less dependence on hardware sales, where margins are under pressure,” said Abraham.
Commenting on the overall performance of the South African companies, Abraham said that most of the businesses within the Bytes group performed well, with the exception of the Xerox division of Bytes Document Solutions, which experienced difficult trading conditions. It did, however, see an improved performance in the second half of the year. Solid growth performances were recorded by Bytes Managed Solutions, Bytes Healthcare Solutions and Bytes People Solutions.
Abraham said the first two months after year end have proved to be very good months for the whole group, and on this basis he remains positive about Bytes’ prospects for the year ahead as it builds on the momentum created over the last few years. He nevertheless cautions that margin pressure will continue and that, in order to remain competitive in the market, Bytes SA’s business units are being realigned and a system of shared services is being introduced to further enhance the group’s customer service and market reach and to reduce administration costs.
科技
2016-40/3983/en_head.json.gz/7518
Published: 30 October 2013
Related theme(s) and subtheme(s): Energy : Renewable energy sources | Environment | SMEs | Success stories : Environment | Science & business
Countries involved in the project described in the article: Greece | Lithuania
Harnessing the power of the sea for commercial gain
Tidal power has great potential for electricity generation because tides are more predictable than wind and solar power. However, tidal energy conversion presents a complex engineering challenge: to produce affordable, competitive energy in one of the harshest natural environments, where access for maintenance is both expensive and high risk. An EU-funded project is meeting this challenge by testing long-range ultrasonic sensors for the automated detection of defects in tidal energy conversion devices such as turbine blades.
© Alexandr Mitiuc - Fotolia.com
TidalSense Demo, which began in February 2012, is a two-year project that builds on its predecessor, TidalSense. The project will use the results obtained in TidalSense with the aim of accelerating the pace of these technologies towards commercial maturity. To achieve this objective, TidalSense Demo will test the feasibility of long-range ultrasonic sensing on several tidal energy conversion devices made of composite materials such as fibre metal laminates and glass or carbon fibre reinforced plastics. The project will also undertake several sea trials of the system.
“There is no standard condition monitoring technique available that can provide details of the tidal blade integrity. The industry currently takes action when necessary - usually after a critical failure occurs, which can lead to serious and costly repairs,” says project coordinator Dr Nico P. Avdelidis from InnoTecUK.
“TidalSense Demo is cost-effective as it will continually monitor the tidal blade and can classify and evaluate defects. This will result in a statistical analysis that can feed back into the design process,” adds Avdelidis.
As a result, TidalSense Demo will benefit European SMEs and reduce long-term costs for utility companies. Savings have been estimated at €66 million and profits at €36 million, for a total of €102 million. The consortium has already undertaken a first market analysis and is currently looking into potential buyers such as original equipment manufacturers, utilities and project developers, maintenance contractors in the oil and gas sector, and finally, research and development bodies.
The end of 2013 will also see the entire system being filmed, with the end product disseminated to a wider public. “The film will present the project concept and activities developed during the demonstration, as well as the results obtained,” concludes Avdelidis.
With EU funding of €1.62 million, TidalSense Demo is led by technology company InnoTecUK and has partners in seven countries: Spain, Italy, Lithuania, Norway, Germany, Greece and the United Kingdom.
Project acronym: TidalSense
Participants: United Kingdom (Coordinator), Spain, Italy, Lithuania, Norway, Germany, Greece
Project: FP7 286989
Total costs: €2 949 380
EU contribution: €1 620 000
Duration: February 2012 - January 2014
Project information on CORDIS
科技
2016-40/3983/en_head.json.gz/7573
Wiring the brain
This story is some kind of awesome:
For those who don’t want to watch the whole thing, the observation in brief is that color perception is affected by color language. The investigators compare Westerners with our familiar language categories for color (red, blue, green, yellow, etc.) to the people of the Himba tribe in Africa who have very different categories: they use “zoozu”, for instance, for dark colors, which includes reds, greens, blues, and purples, “vapa” for white and some yellows, “borou” for specific shades of green and blue. Linguistically, they lump together some colors for which we have distinct names, and they also discriminate other colors that we lump together as one.
The cool thing about it all is that when they give adults a color discrimination test, there are differences in how readily we process and recognize different colors that correspond well to our language categories. Perception in the brain is colored (see what I did there?) by our experiences while growing up.
The study is still missing one part, though. It's presented as an example of plasticity in wiring the brain, where language modulates color perception…but we don't know whether people of the Himba tribe might also have subtle genetic differences that affect color processing. The next cool experiment would be to raise a European/American child in a Himba home, or a Himba child in a Western home (this latter experiment is more likely to occur than the former, admittedly) and see if the differences are due entirely to language, or whether there are some actual inherited differences. It would also be interesting to see if adults who learned to be bilingual late experience any shifts in color perception.
(Also on Sb)
DLC says 10 September 2011 at 8:12 am
But how can I describe the color “bleen”, when part of it exists in the Infrared ?
PZ Myers says 10 September 2011 at 8:26 am
That’s what color discrimination tests are for. If I show you a set of disks, all of which reflect in the red, but one of which also reflects at an infrared wavelength which is invisible to most people’s eyes, would you be able to pick out the bleen-colored disk every time?
peterh says 10 September 2011 at 8:28 am
I see lots of cultural conditioning in the examples given above; is there actual evidence of physio-neurological differences? The color discrimination test mentioned seems bound primarily by linguistic not neurological factors.
AussieMike says 10 September 2011 at 8:28 am
That is just freaking incredible. How amazing this world, its animals and its people are. How sad that some try to resolve it down to being gods ‘way’! Rational reasoned thinking is to the rainbow as religion is to black and white.
jan says 10 September 2011 at 8:36 am
concur with #3. The differences in perception from culture/language are well known in linguistics and anthropology since the early sixties. The new stuff (baby brain imaging of color perception) is not really related to the second part (ethnolinguistic fieldwork on said color perception).
Steve says 10 September 2011 at 8:44 am
Wouldn’t everybody’s experience of colour be different anyway, just because of random differences in development.
Betsy says 10 September 2011 at 8:45 am
I seem to recall a similar study with Russian speakers and the color blue.
It may have been done with bilingual speakers, but I will have to poke around for it… here: http://www.pnas.org/content/104/19/7780.full Seems unlikely Russian speakers and English speakers would have substantial genetic differences. ChasCPeterson says 10 September 2011 at 8:54 am The new stuff (baby brain imaging of color perception) is not really related to the second part (ethnolinguistic fieldwork on said color perception). Did you watch the video? Did you read the post? a) there is no brain imaging discussed b) of course they’re related. The infant studies are elucidating the process and mechanism by which language and culture influence perception. You seem to think that ethnolinguistic fieldwork is all that’s required to understand these differences in perception among populations. That’s a pretty myopic view. (Apologies if it’s inaccurately attributed; this is what I infer from your comment.) ChasCPeterson says 10 September 2011 at 8:57 am and btw, drawing a bright line between ‘linguistic’ and ‘neurological’ factors is just stupid. Neil Rickert says 10 September 2011 at 9:09 am This is related to the Sapir-Whorf hypothesis, which seems to be quite controversial. madtom1999 says 10 September 2011 at 9:11 am Do they suffer from that red/blue red/green margin wobble thingy too – and what is that called? machintelligence says 10 September 2011 at 9:26 am Are they certain that the members of this tribe actually have color receptors that are identical to those of typical Westerners? There is the possibility that the language follows the perception rather than drives it. In the case of RG colorblind people who also have synesthesia (perceiving numbers as having colors), they say that some numbers have “martian” colors — they have never seen them in nature. This would seem to indicate that the “wiring” to perceive the whole spectrum is functional, even though the “hardware” (photoreceptors) are defective. See the work of V.S. Ramachandran. Dubs says 10 September 2011 at 9:34 am I seem to recall reading something similar about how men & women see colors differently. It’s usually seen most in the context of stand-up comics, or epitomized in the line from Steel Magnolias… “My wedding colors are blush & bashful.” “Your wedding colors are pink and pink.” If you develop a vocabulary distinguishing a difference between ecru & eggshell, you can see the difference, but to someone who hasn’t developed the same vocabulary, they’re all just white. What I find most interesting about the Himbu tribe was the bleed over between categories. It’s easy to see how blues and greens can be in a single color group, but for reds & blues to co-exist as a single color-group is counter-intuitive, at least to me. Steve says 10 September 2011 at 9:35 am With these things I always wonder how you could isolate differences caused by language as opposed to all the differences of growing up in a different culture. The language differences might just be a reflection of this. ICMike says 10 September 2011 at 9:43 am Way back in 1986 Cecil Adams, author of the Straight Dope column in the Chicago Reader, addressed a similar question. Cecil relied on a book by Berlin and Kay for much of his answer (Basic Color Terms: Their Universality and Evolution, 1969). 
The interesting point with regard to the current discussion is that Berlin and Kay were also relating color and language; they looked at 98 languages, and were able to develop a set of rules relating the number of different colors described by a language and the particular colors the languages described. The entire column, which includes Berlin and Kay’s rules and some history of the subject, is worth a read. peterh says 10 September 2011 at 9:52 am V. S. Ramachandran was mentioned above; some might try his Phantoms in the Brain, Morrow & Co., 1998, ISBN 0-688-12547-3. It’s a fascinating tour of some of the brain’s neurological wonders. Katrina, radicales féministes athées says 10 September 2011 at 10:03 am Japanese language is traditionally not picky about whether something is green or blue. Green not being one of the “six basic colors“. Glen Davidson says 10 September 2011 at 10:11 am I don’t see how this differs from, say, how language shapes our perceptions of sound. It’s clear that children notice differences pre-language that they don’t notice after language, as the latter either lumps sounds together, or treats them as unimportant, or rather, you cease even to “hear” the difference that are “unimportant.” Abstractions play a part of reception/interpretation, where our knowledge is input into the neural “data stream.” That’s how we manage to see what’s literally “not there” to any camera yet likely is there in very fact. Such as, we can see the form of a simple leaf that is covered and not visible, so long as we see enough of the rest of the leaf. This is stuff that makes it so hard to get computers to “see” the world as we do. Knowledge/information is heavily involved in how we see the world, and we found that out to a significant degree because computers did not “see” the world much like we did at all. It is highly unlikely that anyone would bother checking to see if there is anything genetic behind it. Odds are very good that there isn’t, that it’s a matter of experience modifying perception. Someone mentioned culture, but then I think that’s more or less bound up with linguistic differences, so that even though clearly it has something to do with cultural attention, these cultural differences in attention can be considered to be covered by linguistic learning and usage for the purposes of these observations. kathyo says 10 September 2011 at 10:19 am I agree with Dubs (13). The color categories seem baffling. I could understand if all blues and greens were in one category, for example, but they seem to be spread in some incomprehensible (to me) manner through several Himbu color categories. Same for red. And it didn’t seem to be just about dark or light either. I’d love to see a full Himbu color chart. Glen Davidson says 10 September 2011 at 10:28 am Knowledge/information is heavily involved in how we see the world Plus, of course, experience, that is, practice in observing (what does a trained baseball player see in a pitch that one who never even saw the game does?). Likely we could learn to discriminate greens much more finely than we do, if we could get our brains to suppose that it was important to do so. Children would learn a good deal better how to perceive what is “important,” language being one of the more powerful factors in determining “importance.” Carlie says 10 September 2011 at 10:35 am seem to recall reading something similar about how men & women see colors differently. 
It’s usually seen most in the context of stand-up comics, or epitomized in the line from Steel Magnolias… “My wedding colors are blush & bashful.” “Your wedding colors are pink and pink.” Both of those quotes were spoken by female characters. Tim says 10 September 2011 at 10:46 am So, what are these eleven words for colors that English is supposed to have? I figure red, yellow, blue, orange, green, and purple are in there. Probably also white, black, and brown. So, what are other two? I suspect that only people who work professionally with color printing are likely to think of cyan and magenta as basic colors. There’s pink, tan, and grey, but those are just light versions of red, brown, and black (plus, they would bring us up to twelve words). And, the works of Mr. Roy G. Biv notwithstanding, I don’t think indigo counts. Dubs says 10 September 2011 at 10:55 am Carlie – I apologize; I didn’t mean to conflate the gender differences with the actual lines spoken, just that those lines are a great example of the distinctions between one person seeing shades where another doesn’t. Tim – tan’s your only outlier. A quick google gives me the below as the 11 categories: red, orange, yellow, green, blue, purple, pink, brown, grey, black and white. Dean Buchanan says 10 September 2011 at 11:05 am There is a difference between looking at color on a computer monitor where the light is projecting onto our eyes and looking at other objects where the wavelengths that are not absorbed by the material are reflected onto our eyes. I have ‘seen’ this phenomena regularly over the past 13 years working in the design industry. I am not sure whether this would affect the research, but it might. In addition, the perception of color is greatly affected by the surrounding colors and texture of the surface. In short, our perception of color is very, very contextual. It would also be interesting to see if adults who learned to be bilingual late experience any shifts in color perception While not exactly becoming bi-lingual, as a customer or professional learns the language that designers use in describing small differences in color, over time their perception of color grows tremendously and they can spot differences between various hues and shades that before appeared to them to be the same. stan says 10 September 2011 at 11:15 am I would be very surprised to hear that cones function differently in the Himba versus ‘Westerners,’ but I instead suspect that language has a profound effect on visual processing. That is, the eyes work the same, but the information provided to the brain is handled differently depending on the subtleties of language. Hue distinction is not a project undertaken by the eye itself, but by the brain’s processing of raw visual data. It seems quite clear that linguistic nuance might have a measurable effect on that process. There is, however, a eugenic or culture-centric worry here: if language can so affect one’s visual processing, then might it also affect one’s processing in other areas (i.e. problem-solving, spatial relations, application of logic), and might that not imply a hierarchy of language — insofar as the affected areas are themselves granted some ordered importance? 
Simply put, this might be seen to give a sense of legitimacy to claims such as, ‘Asians are better at math.’ I’m certainly not trying to support culture-centrism, and I’m very definitely interested in what the science says here, but nonetheless it seems clear that a possible implication of this sort of finding (given no meaningful genetic differences and the sort of follow-up experiment PZ recommends) is that if we want to be good at X, then we should learn language Y at as early an age as possible. Whether or not this is an actual worry is up to you. RFW says 10 September 2011 at 11:19 am Color vision is a fascinating field of research. I was lucky enough to hear Feynman’s lecture on the subject when I was an undergrad at Caltech in the 1960’s, complete with a light show projected on the screen. Two highlights: 1. Brown light. Feynman showed the effect by projecting a disk of dim pink light surrounded by an annulus of brighter white light. The sensation “brown” is dependent on brightness contrast. 2. Individual differences in color perception. By using three projectors with filters (RGB) and rheostats, any mix could be projected on the screen. Feynman would project a spot of color, then use the other three projectors to match it, asking for audience response to signal matching. He then asked if anyone saw the projected patches as not matching, and of course there were a few who felt a slightly different mix was necessary to match. They were invited to adjust the rheostats to obtain, in their view, a match. Oliver Sacks’ book, The Island of the Colorblind, is also well worth reading by anyone interested in the general topic. kijibaji says 10 September 2011 at 11:19 am The particular use of reaction time shown in the video seems like a poor methodological choice for this experiment. The Himba participants presumably have very little experience doing such tasks (as opposed to the university students that complete these studies in Western countries) and that is bound to effect the results. Yes, the researchers would probably do some normalisation of the results, but still. And if you want to make claims that colour perception is affected by language, then a better reaction time methodology would be a speeded discrimination task so that participants don’t have time to activate linguistic categories and you can thus avoid introducing a big fat confound. Dean Buchanan says 10 September 2011 at 11:22 am @Glen #20 language being one of the more powerful factors in determining “importance.” Exactly. In this case, what we talk about focuses our perceptions on certain features of the environment, colors, thus changing our brains. As a designer and client discuss whether things ‘go’ together, they are exploring a broad range of factors like texture, motif, durability, and etc. In this larger context, colors literally look different to us depending on what is ‘important’. I can imagine being able to perceive a lot of different greens would be very valuable to a desert dwelling group. amphiox says 10 September 2011 at 11:29 am There’s pink, tan, and grey, but those are just light versions of red, brown, and black (plus, they would bring us up to twelve words). Think instead of pink, tan, and grey being the half-way mix point between white and red, brown, and black, just as yellow is the half-way mix between red and green, and purple is the half-way mix between red and blue. 
And to expand on #23, pink and grey are very prominent concepts in the English language, and the definitive distinction between red and pink, pink and white, white and grey, and black and grey are commonly used in a variety of expressions and metaphors. That’s all consistent with their status as major color categories. Tan, though, is used much more rarely, and is more a specialist’s distinction. amphiox says 10 September 2011 at 11:35 am I can imagine being able to perceive a lot of different greens would be very valuable to a desert dwelling group. For me with my western-based language imprinting, it boggles the mind that it’s even possible for there to be any circumstance wherein the ability to distinguish reds from blues is so unimportant that there need only be one language category for both colors. But I think that’s part of the point of choosing to use the Himba as an example, to demonstrate just how distinct cultural differences can really be, how inconceivably “alien” it can appear to a naive native-english speaker. The red-blue distinction is probably one of the most important in English, almost as prominent as the white-black distinction. The Himba example hammers home that something we may automatically think is fundamental need not be so. blbt5 says 10 September 2011 at 11:42 am One of the more interesting posts, especially the ethnic difference between green and blue. The video could do with a few comments about color physics, however. For example, although green and blue seem quite different, green is blue with a tiny bit of yellow, and the amount of yellow needed to turn blue into green is far less than the amount of yellow needed to turn red to orange or the amount of blue to turn red to violet. The human eye can discriminate a nanometer of wavelength in the yellow to orange monochromatic range, but far less on either side and this differential sensitivity makes discrimination complex with colors that are reflected combinations of several colors, such as brown, for which coincidentally there are far fewer descriptive European words than say a primary color such as red, which lists in a thesaurus a great many words for various intensities and shades. It’s also odd that the video doesn’t note that water is clear and that most in the Western world are aware that water takes on a combination of the color of the sky and its dissolved contents, so water can be blue, green or brown (or clear) depending on its context. I wonder if the Himbi have an awareness of this. Jacques says 10 September 2011 at 11:43 am This kind of research is not exactly new, and frankly, I find the results to be completely underwhelming.Take the Russian-English case that was mentioned here earlier (http://www.pnas.org/content/104/19/7780.full): Both English and Russian had nearly identical color boundaries, the difference was that Russian had two different words (goluboy and siniy), whereas English speakers had… two different labels that shared a common word (light blue and dark blue). So it is clear that whatever difference language makes is not in how these colors are actually perceived, but in how the linguistic labels might affect performance under some testing conditions. And these effects are really tiny (ignoring all other possible objections, around 100 ms). How can people sell these findings as “language makes you *see* the world in an entirely different way!” is sincerely beyond me. 
Davric says 10 September 2011 at 11:47 am I noticed this phenomenon in Turkey once when I was teaching a group of Turkish students the names for hair colour, complexion, etc. In a room of people who, basically, all looked the same to me, they saw about four or five distinctly separate hair colours and complexions. I had about the same colouring as PZ at the time, which they saw as ‘fair’ or ‘blond’ (and they kept to this judgement when shown pictures of faces and hair colours I saw as fair and blond!). A year earlier I was in Angola (teaching marine biologists, as it happened) and one break they asked me what I did back in Sweden. “I teach them English.” “But don’t all Europeans already speak English?” At this point I started trying to explain the difference between blond blue-eyed Swedes, dark-eyed Italians, etc. They stopped me apologetically and one of them said, “We’re really sorry, but all you Europeans look the same to us.” NelC says 10 September 2011 at 11:49 am As a graphic designer, I’ve gotten used to defining colours by their HSL (Hue, Saturation and Luminosity) values, as this is most useful for matching colours, or finding complementary colours or any number of colour manipulations. Not a solid visualisation of the whole 100 levels of each that Photoshop allows you to define, you understand; just able to spot the difference between two colours and understand it in terms of whether one or the other needs to be “darker” or “grayer” or “warmer” reasonably reliably. I guess the HSL system counts as a vocabulary, though it’s a very technical one, and as I said I don’t have a solid visualisation of each of 1,000,000 colours or even a majority of them, but then, I came to this vocab when I was already an adult. I guess my conscious comprehension of colour is quite rich because of it, though I don’t know that my perceptions are sharpened. rusma says 10 September 2011 at 11:54 am There’s a nice book about this subject. “Through The Language Glass” from Guy Deutscher. He compares the definition of colors in different languages and cultures. For example the old greeks used a similar color definitionn like the Himba. Rebekka says 10 September 2011 at 12:12 pm I remember a similar experiment with an Australian language called Guugu Yimithirr. It uses cardinal terms of direction instead of egocentric ones, so they don’t say “left” “right” “in front of” etc., but instead use the equivalents of West, East, North and South. The speakers of this language had a very different perception of mirrored images than English speakers, for example. Better explained in this link here: http://www.nytimes.com/2010/08/29/magazine/29language-t.html?_r=2&pagewanted=4 Lycanthrope says 10 September 2011 at 12:25 pm kathyo @19: I’d love to see a full Himbu color chart. Ah, but the point is: could you? Susannah says 10 September 2011 at 1:07 pm I have a grandson who, when he was small, taught me to see reds differently. I was teaching him the colours, as one does with small children, and ran into difficulties; green was fine, blue ditto, but red, no. Each “red” (pure red, scarlet, cherry, burgundy, mamey, etc.) was a different colour to him, and he insisted on different words for each. Next time I see him, I’ll ask whether he still sees a whole spectrum of colours other people call “red”. Tim says 10 September 2011 at 1:09 pm I really should have done a Google search before I posted my earlier comment. I guess it just seemed like such a strange number, I must have assumed they made it up. 
It is the BBC and science, after all. But, I guess I can see why pink and grey are included, given their linguistic importance. I’ve occasionally wondered in the past why we have a special word for “light red”, when we don’t have ones for “light blue”, “light green”, etc. dyssebeia, spirit of impiety says 10 September 2011 at 1:10 pm This is awesome. Except for the part where I got really excited about the opportunity to make some sort of Nelson Goodman “grue” joke and it was already made in the first comment. SirBedevere says 10 September 2011 at 1:22 pm Fascinating stuff. And I now have a way to introduce the concept of RGB and CMYK color systems in my Photoshop class next week! Like PZ, I’d be interested to find out if there are any inherited physiological differences in these people. I also wonder what people with tetrachromatic vision would make of the Himba color system. amphiox says 10 September 2011 at 1:28 pm It would be pretty big news if there was actually a genetic difference in the cone opsins. As far as I know, excepting known color-blindness mutations, the cone opsins in humans are supposed to be fixed. Neural plasticity in color perception circuits in the brain probably develops concurrent with the acquisition of the language for naming colors, so the two occur together and reinforce one another by feedback. So it’s probably a cultural factor at the root of the difference in perception, and such cultural considerations could possibly predate the development of true modern language – ie ancient human groups may well have been perceiving colors differently before they fully acquired the ability to communicate such perceptions with language. There’s certainly the possibility that such differences of perception are adaptive, reflecting various selection pressures of different native environments in terms of what kinds of color distinction are most advantageous. But it’s also possible that some aspects are stochastic/random/accidental, an early trend that just happened to become culturally fixed. Steve says 10 September 2011 at 1:38 pm Perhaps the next experiment they could do would be one where the Himba have to learn tasks where the differentiation between blue and green is vital. It might be like learning a new language, as you are more exposed to it you start to find the breaks between words that you couldn’t hear before. I think the role language plays in this is only about focusing of attention and thereby handing down the differences to children who learn the language. I don’t think it can explain the origin of the differences, these could be due to genetics or the particular niches people grow up in or perhaps random accidents. If their particular niche changed to the extent that telling the difference between green and blue mattered I think their ability would change and their language would change with it. ChasCPeterson says 10 September 2011 at 1:42 pm I sincerely doubt that PZ’s reference to possible subtle genetic effects included anything as basic as cone opsins (btw, a difference there would be said to affect sensation, not perception). I think he’s talking about patterns of retinal/brain wiring, and of course we have little to no idea how it is that genes establish those (though there is no doubt that they do, or can, do so somehow). F says 10 September 2011 at 1:53 pm What color does sniny have? 
I haven’t found a reference yet, but I recall a similar study involving shape and object identification between cultures, which has implications for what Stan considers to be a worry. The one case involved some outdoorsy tribal sort of culture in a warm clime. The people had a harder time distinguishing between, or recalling if they had previously seen, things like scissors or pencils, but they could easily distinguish between two very similar-looking stones and laugh at the researchers for their complete inability to do so. As for Stan’s comment, it doesn’t mean they are “not as smart” or whatever, regardless as to how a handful of idiots would like to spin such a fact. It just means that their mental faculties are very well adapted to their environment and culture, and are not adapted to things which are not a part of their world. If, however, individuals of one society thought they would like to participate in some other society, it would obviously behoove them to learn the things that would be necessary to operate in the adopted culture. For those who would like to make derogatory points of how people of one culture have a “lower IQ” in terms of their own culture, I would point as an example to USAnians, many of whom will flat out refuse to understand anything in terms of another culture in which they demand to participate or visit (frequently in terms of economic exploitation), even in the simplest case of understanding history, let alone how perception or processing works in another society. cyberCMDR says 10 September 2011 at 1:55 pm I wonder how extensible this concept is. Instead of just colors, how does the language environment (even with the same language) affect how a person views the world? Can this be extended to whether a person “sees” a world with a maker, versus someone who analytically “sees” the world in terms of science? In other words, is the debate between science and religion as much a matter of wiring laid down during childhood as it is a debate about legitimate ideas? This could explain the ingrained resistance of some to abandoning their worldview, in spite of evidence to the contrary. bric says 10 September 2011 at 1:56 pm NB this is a small extract from an hour-long Horizon programme [http://youtu.be/5nSDJHAInpo], the point of which was to show that colours as ordinarily understood are mental constructs; it is hardly surprising that different human traditions construct them on different lines. Languages are pragmatic but flexible and deal with concepts their users need; Guy Deutscher (mentioned above) makes similar points about number words, he has an example from a tribe that has no complex numbers: a hunter has 50 arrows in his quiver, his language doesn’t provide a way of expressing ’50’, but he knows if any are missing because each arrow is an individual to him. If the tribe ever needs to count that high the language will undoubtedly provide a way. Teshi says 10 September 2011 at 2:37 pm This is a fascinating clip. I will have to watch the full program! I first heard about this on QI when they were talking about the colour linguistics of the Ancient Greeks who used to be thought to have different perception of colours but are more recently thought, like these Himba people, to simply have different concepts of colours. 
Sadly, I could only fine writing about this with the former argument rather than the latter (followed by tons of posts of people refuting it): http://serendip.brynmawr.edu/exchange/node/61 Thinking about how much perception changes how we categorise colours in something like looking at a painting or a photograph, I don’t think this is much of a leap. Imagine you are painting from a photograph. It is advisable to isolate the colour of the sea and sky by putting a sheet of paper over the rest of the painting before you decide what colour paint you will use to most accurately represent it. This is partly due to comparison (comparing, say, one type of green that makes another look brown) as well as expected colours– we expect clear sky to be blue, grass to be green etc. I have often been surprised at how green the sky can appear, or how purple water can be, or how orange or red something green can appear. This counter-intuitive red/green issue makes me think of adjusting colours in film. When adjusting the colours of Hobbiton in LOTR: FOTR, the designers wanted Hobbiton to look really green and luscious. However, I remember hearing them describe how eventually they went for adding more red/brown to the green in order to make it look very, very green. Counter-intuitive, yes. But I have a pair of goldeny sunglasses that make everything look like Hobbiton in the movie– totally green (and yet I know how much more orange everything thing is than if I take them off). I can see how certain greens and reds could be associated, just as certain blues and greens are. When it comes to including tones as unifying factors rather than shades, all bets are off. That’s how the sea according to Ancient Greeks can be wine-dark and the sky bronze. These are levels of zingyness, rather than tones. bric says 10 September 2011 at 3:43 pm Teshi – yes the Horizon programme neatly demonstrates how our own perceptions of colour can be manipulated, even when we know what they ‘should’ be. I have trouble distinguishing green from blue especially when remembering an object, so I wasn’t surprised to learn that Classical Chinese used the same term for both (although modern Mandarin enables one to distinguish them the old combined word is still in use). The divisions of the spectrum are just a way of arbitrarily dividing up a continuous set of frequencies; perhaps it’s a bit like the way different cultures perceive musical scales. Wikipedia has a useful article on blue-green terms http://en.wikipedia.org/wiki/Distinguishing_blue_from_green_in_language Samantha Vimes, Chalkboard Monitor says 10 September 2011 at 4:15 pm Does that tribe live in one of the arid areas, where light colors help reflect sunlight away from you during the day, but dark colors absorb the heat so you have something warm to lay on at night? And certain shades of greens and blues are seen only during rainy season or if one has traveled far enough to see big waters? Because if so, I can totally see the logic behind the color groupings. The Dancing Monk says 10 September 2011 at 4:37 pm While we’re on the subject of the brain’s perception of reality, this weeks Horizon deals with how our genetic & environmental conditions govern our response to good & evil. 
Apparently some of us are born with a predisposition towards anti social behaviour Ellen Bulger says 10 September 2011 at 4:41 pm My suspicion is that what is really happening here is a result of the difference between incoming data and perception, as flavored by expectations and training, cultural and otherwise. As anyone who has gone to art school knows (even those of us who did so back in the days before computers), most people don’t really pay much attention to color. We get trained as tots, in school. We are issued our crayons and given sheets from workbooks to color. THE APPLE IS RED. THE TREE IS GREEN. THE SKY IS BLUE. We color between the lines. We trace the letters: A-P-P-L-E, R-E-D. There is a “right” way and a “wrong” way. Never you mind that the apple on the teacher’s desk is a half a dozen different shades of red and brown and maroon. Never you mind how it reflects the piss yellow of the light from the ceiling fixtures and here, on the side, the blue of the sky from the window. Freshmen art students are often given an exercise in which they have to draw or paint something white, something like an egg or a sheet of crumpled paper. You stare at it until your eyes pop. Then you begin to SEE. Have you tweaked, somehow, the rods in your eyes so that they work better? It is unlikely. Seems to me what happens is you are finally paying careful attention. And after you do a couple of these kinds of exercises, you start noticing the delicacy of colors EVERYWHERE. I’ve been thinking about this a lot. Several years ago I was diagnosed with a rare form of retinitis pigmentosa, sector r.p., and patches of my retina have died. I don’t see so well in low light. I’ve become obsessed with photography since my diagnosis. I so treasure the light. (I have about 25 thousand photos up on flickr: http://www.flickr.com/photos/50728681@N06/sets/72157626254350181/ ) My photography has really improved over the last two years. I used to take pictures of things. Then I took pictures of things in good light. Now, mostly, if I can, I take pictures of good light and if there are THINGS in the shot, well swell. I SEE so much more than I ever did before. I see more colors. But I don’t think it is really because of the loss of retina has substantially changed my vision. It kicks up the contrast is all. I think I see more nuances of light and color because I am constantly paying attention to light and color. These eyes are what I got, and so help me I’m going to make the most of them. I strongly suspect that the differences in the colors that are seen by people in different cultures have everything to do with culture, with early childhood training and a lifetime of expecting to see a certain way. I could be wrong. In any event, it is all terribly interesting. Ben says 10 September 2011 at 7:07 pm @49 — my partner and I are both graphic designers but even we have different ideas of what is classed as blue and what is classed as green. Our car is a particular hue that I call green, but my partner insists that it is blue. I have had clients who have asked for “green” and when I’ve shown them a design they said “no, i asked for green, not blue”. My partner tells me he’s had the opposite happen. MadScientist says 10 September 2011 at 7:17 pm I agree the conclusions are lacking. If people are simply describing a different region of the CIE color chart using a different name, then so what? That would simply demonstrate that different groups decided to group their colors differently. 
Tsu Dho Nimh says 10 September 2011 at 8:07 pm I’m in favor of the cross-adoption part, but can you get it past the IRB? Dr. I. Needtob Athe says 10 September 2011 at 10:09 pm This color illusion that was posted on Reddit this morning is pretty amazing, and makes you seriously think about how your eyes perceive colors. pseudonymoniae says 10 September 2011 at 10:43 pm Cool study. I would guess there must be some top-down circuit which allows linguistic cues to categorize inputs to the cells which make up our color representations in visual cortex. On the other hand, a more parsimonious explanation is just that the Himba haven’t been exposed to the same range of colors as us, which means that their brain circuitry never developed the ability to differentiate between two similar shades of blue, for example. Either way, it is interesting to consider that we all could have slight variations in the colors that we see, as small differences in how our visual systems develop might cause the cells in one person’s brain which correspond to a specific color like “red” to receive inputs from cells corresponding to a slightly different range of wavelengths than those of the next person. If this were the case, then presumably these differences would be so small as to be hardly noticable, a few um difference here or there probably would just come out as noise. On this other hand, I think this is a good point here made by Davric @33: In a room of people who, basically, all looked the same to me, they saw about four or five distinctly separate hair colours and complexions. … They stopped me apologetically and one of them said, “We’re really sorry, but all you Europeans look the same to us.” If color perception can be so easily modified by early life experience, then isn’t it possible that higher level representations of things like facial structure might be affected as well? Why shouldn’t people of different ethnic groups, at least those who were largely raised apart, have less ability to discriminate between facial features that are peculiar to another ethnic group? I don’t know whether or not this has been studied, but it would be a great social science topic. Too bad I’m in the wrong field :P Something else I noticed which was kind of weird. When they showed the green squares, the BBC version looked like it was all the same color. But in the version that the Himba man was looking at it was immediately apparent to me which square was a lighter shade of green. Did anyone else notice this? I suspect the BBC people just put a bunch of squares up that were all the same colour, whereas the researchers used one square which was off by 20 or 30 um. pseudonymoniae says 10 September 2011 at 10:51 pm Make that nm. J Dubb says 10 September 2011 at 11:18 pm pseudonymoniae, it’s called the cross-race face deficit. There is a fair bit of research on it. As for color perception & language, there is well-known research from the 1970s done by Eleanor Rosch. http://en.wikipedia.org/wiki/Eleanor_Rosch I think the results of this newer research would surprise her. Crissa says 11 September 2011 at 2:40 am My spouse and I often argue about what is green and blue in the turquoise region… I’ll say it’s green and she says it’s blue. And then in the darker regions we’re reversed. I’m pretty sure it’s not just language, but personal experience as people learn how to label and interact with the world. 
bric says 11 September 2011 at 3:10 am @53 – hmmm yes, my partner is Chinese (and formerly a fashion designer) and never tires of pointing out that I get colours ‘wrong’. The mention of Eleanor Irwin’s work above reminded me that I had noticed that for him, when working with fabrics there was some connection between colour and texture that always eluded me. More on Greek colour perception http://serendip.brynmawr.edu/exchange/node/61 the comments are more perceptive than the article imho Teshi says 11 September 2011 at 3:12 am [quote]Something else I noticed which was kind of weird. When they showed the green squares, the BBC version looked like it was all the same color. But in the version that the Himba man was looking at it was immediately apparent to me which square was a lighter shade of green. Did anyone else notice this? I suspect the BBC people just put a bunch of squares up that were all the same colour, whereas the researchers used one square which was off by 20 or 30 nm.[/quote] I noticed this too, but I think it was to do with the screen the people were looking at. I found it easy to tell where the different one was in the shot that included both the subject (which I think was the woman at that point) as the screen because of the angle I was looking at the screen. I think straight on it would have been more difficult. However, my skepticism about this was a little bit mollified when they showed the blue/green test. It was extremely obvious to me which was blue. Even if I had been able to pick out the different colour facing straight on the screen it would have undoubtedly taken me longer, even if a second longer, than the blue/green test. Svlad Cjelli says 11 September 2011 at 9:14 am The obvious example with japanese blues and greens is averted with younger generations, who do mind the difference. Northern Europe also had a lot of this before, especially with blues and greys I think. The rainbow’s colours have also been grouped differently in the past. Even Newton noted five colours. On a barely related note, old people in Sweden may say fireyellow instead of orange. Gregory says 11 September 2011 at 1:05 pm I took some linguistics classes in college almost 30 years ago, and this was old news then. Inflection says 11 September 2011 at 2:05 pm Now I would like to be able to see the world like the Himba do. There was a guy who wrote an app that would adjust color perception for the colorblind, and I think it could do the reverse so that a colorsighted person could get a sense of how an image would look to someone with various particular forms of colorblindness. I wonder if that app could adjust hues in a camera image — maybe just by measuring experimental reaction times for Westerners and Himba — to show us the world as the Himba see it. Tsu Dho Nimh says 11 September 2011 at 10:36 pm @33 … When you have to use colors to break down a large group that is important to you, you make distinctions. Getting my driver’s license in Mexico was similar to your Turkish experience. There were several hair shades of what the USA would call “brunette”, ranging from crow-black, which is shiny blue-black, through dark brown, selections for reddish brown shades Me? I was “guero/guera” which is all shades of blond, reddish blond, strawberry blond, diswater, ash, etc, etc. And the eyes? They became “azul” which covers all shades of blue and grey. There are enough green eyed Mexicans that they got “zarco”. Skin color was much the same. Seven shades of brown and then “library paste white”. 
Anthony Bradley says 12 September 2011 at 3:21 am @Neil Rickert: you beat me to it: The SWH implies that language itself affects thought (or at least cognition). No wonder it’s controversial. tonyfleming says 12 September 2011 at 7:26 am If the Himba see blues and greens as the same color, would a green page with blue text appear blank to them? Could you write “secret” messages to another non-Himba by using blue ink on green paper? How about subliminal messages, a la “They Live”? Svlad Cjelli says 12 September 2011 at 8:52 am @68: Not any more than you yourself can’t tell the difference between two shades of blue. Svlad Cjelli says 12 September 2011 at 9:09 am Ever been to a BBS where the only spoiler function is to change the font colour to match the background? It will usually end up a similar shade because nobody cares enough to get it just right. It can be tough to read without highlighting the text, but in most cases you’ll instantly spot that there’s text. Now, if you wanted to examine something interesting, that should be tried with a group of humans who haven’t based their whole lives around spotting texts. Arazin says 12 September 2011 at 11:07 am Wouldn’t this be expected? Growing up in the african savannah I would expect survival would depend on being able to distinguish between shades of the same colour than the stark contrast of different colours. It would be more beneficial to be able tell the difference between the yellow hair of lions and the yellow grass of the savannah. The mechanics of how the brain achieves this is very interesting though………. Neilp says 12 September 2011 at 11:30 am This also works with things like direction. We use left/right, but many languages don’t. Some use NSEW, for example. Take an Aboriginal man from Australia, put him in the middle of central park, and he will instantly know which way is which. The Sailor says 12 September 2011 at 5:01 pm “The divisions of the spectrum are just a way of “arbitrarily dividing up a continuous set of frequencies;” Not exactly, we have overlapping but not contiguous sensitivity in our cones. They do have peak wavelengths, the blue and green have the least overlap, the red and green the most. ++++++++++++++++ The RP inflicted person above: I’m so happy you’ve made the most of your disease. ‘Training’ ones eyes and ears (the brain, of course) to adapt is really cool. I was a sound engineer for many years and I have a bit of hearing loss, but I can still hear details in sounds that most people can’t. I was also wondering about the differences in UV exposure from Europeans v. equatorial peoples. The cornea and the lens are both affected. The cornea is primarily affected by UV-B and the lens by UV-A. rrhain says 12 September 2011 at 10:44 pm They already did do the experiment and found that people who speak different languages see color in the same way despite the differences in color terms, at least to an extent. First, what is a “color term”? It’s a word that describes a color that isn’t related to another object. For example, “blue” is a pure color term in English but “turquoise” is not as it is a reference to the stone. Various languages have different numbers of color terms. Some have only two: What we would call “black” and “white.” This doesn’t mean people who speak such a language don’t see other colors. Instead, they use terms that refer to objects of the color they mean, not a pure color term. 
They might say something is the color of unripe bananas as there is no pure color term, “green.” Interestingly, there is a pattern to the acquisition of color terms. When a language has three, the third term is always “red.” Next comes either “yellow” or either what we would call “blue”or “green.” Next, if you had “yellow,” you then get “blue/green,” then “blue” and “green” split. Now, how do I know that we would call it “black” as opposed to “dark”? Because they tested speakers of such languages to find out what they meant. They gave the speaker a color-chip board with a rainbow of colors and asked them to pick the most iconic example of the color term and in all cases of languages with only two color terms, they chose the chips that speakers of English choose to represent “black” and “white.” Similarly, three-color speakers always choose a chip that we would call “red,” even though that term would be used to describe other colors we wouldn’t call “red.” So these five-term speakers have terms that cover colors we separate but if you were to show them the color-chip board, they would choose, black, white, red, yellow, and either blue or green. This doesn’t mean that what they’re finding isn’t real. It’s just that the Sapir-Whorf Hypothesis isn’t strongly confirmed by this. Color perception might have some influence from language, but bilingual speakers don’t see the sky as different colors based on what language they speak. Daniel says 19 September 2011 at 5:16 am The evidence seems shaky at best to me: The first test took no account for genes also wiring for language acquisition that kicks in and that will shape perception as well. Nor did they show brain activity with fMRI for instance. It would be a lot more convincing if they’d look at how people learnig their first and different language would look like in the wiring of the brain, so look at when a baby in germany starts to learn german and look at his/her brain and do the same for different languages and see if there are any interesting developments in differences. The second I immediately thought that they were colourblind– that seemed to explain the categories, and the ability to spot contrast differences (remember that during the second world war they hired colourblind people to spot camouflage from planes). And we’re talking about colour! You could sit for eternity slicing up the spectrum to their own categories and it’s completely arbitrary, a human construct, built in through the millenias. Putting colours to categories like darker ones etc seems like a perfectly sensible thing to do. But did they control for genes? That is, were they colourblind? (We’re talking about a small tribe with probably not much genes coming from the outside) Linguistic Determinism/relativism suffers from deepities: The analogue to them and people abusing the observer effect and other concepts from quantum physics is amusing. renren876 says 21 September 2011 at 6:52 am I definitely enjoyed reading this web page. I will certainly put you in my favorites area. John Kingston says 22 September 2011 at 4:36 am There is a vast literature on this issue, which shows that there are both universal and language-specific aspects of color perception. The quickest way to find it is to go to Paul Kay’s website: http://www.icsi.berkeley.edu/~kay/ He’s done or participated in most of the reliable work. John Kingston says 22 September 2011 at 4:40 am P.S. Also check out Lera Boroditsky’s paper: Winawer, J., Witthoft, N., Frank, M., Wu, L., Wade, A., and Boroditsky, L. 
(2007). Russian blues reveal effects of language on color discrimination. Proceedings of the National Academy of Sciences doi:10.1073/pnas.0701644104 for evidence of quite subtle language-specific effects on color discrimination. coloring printable sheets says 13 October 2011 at 6:28 am This put up (Wiring the brain | Pharyngula) was a very good read so I posted it on my Fb to hopefully provide you with more readers. Perhaps you may review The Lion King in 3D on one of your subsequent posts. I hear they have worked over 12 months on the 3D aspect of it. Simply an thought for you Charlottesville Apartments says 6 November 2011 at 7:12 am Good post, really gives me something to consider working with our social media.
科技
2016-40/3983/en_head.json.gz/7575
BlackBerrys warn of doom
21 March 2011
Early warning system launched in the Philippines

A Philippine charity has launched an early warning system for disaster-prone areas using BlackBerry devices and laptops. The gear is linked to an SMS service that alerts communities to typhoons, storm surges, tsunamis, landslides and earthquakes.

The Philippine Business for Social Progress (PBSP) launched the 200,000-dollar project, which is backed by the World Bank and one of the country's leading mobile phone operators. It will cover all of Southern Leyte province in the central Visayas region, which lies along a fault line and is often battered by powerful typhoons.

The project's web-based information system enables officials from the towns to use BlackBerries and laptops to access and quickly spread alerts or store surveillance data. The country sits on the Pacific's earthquake and volcano belt and is battered by an average of 20 typhoons a year.
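The article describes the system only in broad strokes. As a rough sketch of how such an alert fan-out could work (this is not PBSP's actual implementation; the gateway endpoint, contact registry and message format below are all invented for illustration):

```python
# Illustrative sketch only -- not PBSP's actual system.
# Assumes a hypothetical HTTP SMS gateway; the URL and payload format are invented.
import requests

SMS_GATEWAY = "https://example-gateway.invalid/send"   # hypothetical endpoint

# Hypothetical registry of town disaster officers (numbers invented)
CONTACTS = {
    "Maasin City": ["+63900000001", "+63900000002"],
    "Sogod":       ["+63900000003"],
}

def broadcast_alert(hazard: str, detail: str) -> None:
    """Send one SMS per registered contact for every town in the registry."""
    message = f"ALERT ({hazard}): {detail} -- follow local evacuation guidance."
    for town, numbers in CONTACTS.items():
        for number in numbers:
            requests.post(SMS_GATEWAY,
                          data={"to": number, "text": message},
                          timeout=10)

if __name__ == "__main__":
    broadcast_alert("TYPHOON", "Signal No. 3 expected over Southern Leyte within 12 hours")
```

A real deployment of the kind described would also need delivery confirmation and hazard-specific templates, but the basic pattern is a registry of recipients plus one gateway call per number.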
科技
2016-40/3983/en_head.json.gz/7589
Overview of the "KAGUYA (SELENE)"

KAGUYA Mission Profile [1:17]: Movie introducing KAGUYA's trip to the Moon, with voice and subtitled explanation.

Schedule of the KAGUYA
JAXA is pleased to announce that the operation mode of the lunar explorer, KAGUYA (SELENE), was shifted to regular operations from its initial verification on December 21, 2007 (Japan Standard Time), as we were able to acquire satisfactory verification results for all fifteen observation missions. From now on, we will perform observation of the Moon's surface for about ten months to acquire data on "Moon science" and other studies.
>>For more details (Press Release)
>>For more details (Project Site)
* The current schedule is based on a report to the Space Activities Commission on October 24, 2007.

Road to the Moon
Once the KAGUYA is launched, it will go around the Earth twice and then head toward the Moon, before entering into a lunar orbit. The Main Orbiter of the KAGUYA will then separate the Relay Satellite and VRAD Satellite and observe the Moon's surface over a one-year period while it goes around a circular orbit 100 kilometers above the Moon, passing over both poles. Each of the small satellites will circulate on different elliptic orbits to observe the Moon. The KAGUYA will help us move closer to solving the mysteries of the Moon. What will it discover? The KAGUYA will carry out more precise research on the Moon than any previous exploration mission.

1. Science of the Moon
There is always volcanic activity on Earth, and mantle convection takes place under the ground, so the Earth is constantly changing. Thus we cannot know the Earth's original form. If we understand the Moon in detail through observations by the KAGUYA (SELENE), we can resolve the mystery of when and how the Moon was formed.
→Through research on the origin of the Moon, we can find clues to understand the formation of the Earth and the solar system at an early stage.

2. Science on the Moon
There is an atmosphere around the Earth, but not around the Moon. Thus sunlight directly hits the surface of the Moon. The KAGUYA will circulate around the Moon for about a year and conduct research on what influence the Sun has on the Moon.
→The observation results will be important for humans to perform activities on the Moon in the future, such as making a base camp on the lunar surface.

3. Science from the Moon
The KAGUYA is also equipped with devices that observe things other than the Moon. The space environment is suitable for observing electromagnetic waves in space, because no artificial electromagnetic waves, such as those from TV and cell phones, exist there. In addition, the KAGUYA will be able to study the impact of the Sun on the Earth by observing the auroras of both the North and South Poles at the same time from the Moon.
→By observing space and the Earth from the Moon, we can obtain observation results that are difficult to obtain from Earth.
科技
2016-40/3983/en_head.json.gz/7591
Is this polar bear really being choked by a research collar?
Nicole Mortillaro, National Online Journalist, Global News
A photo of the polar bear that many believe is suffering due to its collar being too tight. Twitter/Susan Adie

Polar bears have been in the limelight recently, often the "poster-bear" of Earth's changing climate. But now one bear is calling attention to how science is keeping an eye on polar bear populations in the north.

The image of the polar bear — which has mysteriously been nicknamed "Andy" on social media — was shared on Twitter in October. The photo shows a collar around its neck with what appears to be blood. Many believe that the collar is digging into the bear's neck, causing it to suffer.

@AEDerocher USFWS claims bear collared by Uni Alberta research 2007-11, can u update us about recovery plans? https://t.co/0JmQ7DkwkU — Susan Adie (@SusanAdie1) November 11, 2015

The photo has even elicited an online petition calling for the removal of the collar. But the reality behind the collar isn't as simple as one might expect.

"I'm not trying to downplay it. But we don't actually know if there's any injury," said Andrew Derocher, polar bear researcher and professor in the department of biological sciences at the University of Alberta. He believes that the polar bear in question is likely his. Derocher has been studying polar bears, how they interact with sea ice and what that could mean for the future of the population.

Not surprisingly, Derocher — a scientist — needs more evidence than a single photo. He points out that, while the collar looks too tight and there is what is perceived to be blood around it, that doesn't mean that it's the polar bear's blood, or that it's too tight.

At this time of year, he said, the bears feed off whale carcasses, which involves a lot of blood; as well, it's difficult for them to clean that area of their body. And while the collar looks like it's digging into its neck, the fur of a polar bear can measure from four to five centimetres long, giving the appearance of something digging into it.

Can it be removed?
Derocher has been singled out on social media for not doing enough to remove the collar. But the fact is that this has been a unique situation: the collar, which was likely fitted more than a year ago, has malfunctioned. These collars can be released remotely; however, this collar has failed to release and is no longer tracking.

In a written statement provided to Global News on the issue, Paul Crowly, vice-president of the Arctic Program at WWF Canada, said, "WWF is concerned about reports of the polar bear with the too tight collar on the southern coast of the Beaufort Sea…. Fortunately, it is very rare for individual animals to be harmed by collaring. The knowledge that researchers gather from collaring and tracking is directly used to develop measures of conservation for entire populations."

A polar bear on an ice floe in the Arctic. Kathy Crane, NOAA Arctic Research Program.

Derocher agrees: "This is an extremely rare event; so rare that we've never seen anything like this before." And the research being done on the movement of these polar bears is valuable in moving forward with climate change concerns in an area that is warming at twice the rate of the rest of the planet.

And while people are calling on Derocher and the University of Alberta to find the bear and remove the collar, sea ice has begun to form, making it difficult to locate it. "The issue here is the bear could be anywhere from Banks Island to eastern Russia," he said.

"This is never anything I'd ever wish to do to any animal," Derocher said. But the fact is, with any animal research, there is an inherent risk to the animal, he said. The ultimate goal, however, is to help them.

Derocher is also dismayed that it took so long for the image to reach social media. Had it been released to the proper authorities — in this case Alaska, as that's where it was spotted — officials could have gone out to find the bear and manually released the collar. (The U.S. Geological Survey told Global News that they were certain the polar bear in question was not one of theirs.)

But it took almost two weeks for it to get out, and by that time, the bear hadn't been seen. Now, the Arctic is bathed in 24 hours of darkness, making the task almost impossible.

But Derocher believes that if the bear really wants the collar off, the massive animal will be able to remove it himself. "It will fall off. And maybe it already has and that's why the bear hasn't been spotted again," Derocher said. "I'm confident that the bear will not suffer from this in the long term. I'm most concerned about the animal. But there are limits to what we can do."
科技
2016-40/3983/en_head.json.gz/7676
More Bad News for Mogul

As I have mentioned before, Robert Hastings has given us UFOs and Nukes and it has provided some very interesting information about the state of UFO research and what the government might know about it. I don't think he realized that by doing so he has also dealt another blow to the Project Mogul explanation for the Roswell recovered debris.

Mogul, for those of you who might have been living in a vacuum this last decade or so, is the preferred explanation of the skeptics, the debunkers, the Air Force, much of science and more than a few people who would rather let someone else do their thinking. Mogul was an attempt to put an array of weather balloons, radar reflectors and some microphones into the atmosphere at a constant level so that we could spy on Soviet attempts to detonate an atomic bomb.

So, what does Hastings tell us that affects this? According to him, "In September 1947, Army Chief of Staff General Dwight D. Eisenhower directed the Army Air Corps [actually the Army Air Forces] to undertake the Constant Phoenix program, an ongoing series of long-distant flights designed to detect atomic explosions 'anywhere in the world.' This high-priority activity was continued by the newly-created U.S. Air Force and, on September 3, 1949, radiation sensors aboard a USAF B-29 flying between Alaska and Japan confirmed the detonation of the first Soviet atomic bomb - some five years earlier than expected."

What this tells us is that within weeks of the Roswell events, the Army Air Forces were directed to use aircraft in their surveillance of the Soviet Union's atomic progress and that balloons did not figure into it. Mogul was of no real interest to the military at that point, which might explain why it was compromised by the military in July 1947. Any spying on the Soviet Union would be accomplished by aircraft that could maintain their flight levels for hours on end, which weren't directed by the wind, and which could carry human observers who could make additional observations. And, as we learned, they were not required to penetrate Soviet airspace, so there would be no debris lying around for the Soviets to exploit.

Those who struggle to convince us that Mogul was so secret, so important, that finding an array by Mack Brazel had to be covered up to protect the project fail to explain why pictures of Mogul arrays were printed in July 10 issues of various newspapers. They fail to convince us the project name was unknown to the members of the Mogul team, as the Air Force's own investigation proved. And now we learn that plans had been in the works to use aircraft for surveillance before the Mogul launches in New Mexico, and that Army Air Force missions were implemented within weeks of the Roswell discovery.

We can argue about what really fell at Roswell. We can argue about the efforts to recover it and to hide it, but we can now lay to rest the idea that Project Mogul was responsible. Clearly the effort made to keep the secret would not have been made had it been Mogul balloons. We know this because other arrays, that fell in other parts of New Mexico, were left to rot in the sun if recovery was deemed too difficult.

Mogul fails on so many counts. It wasn't the secret that we have been led to believe it was. There was nothing mysterious about its make-up, and the balloons and radar targets were off-the-shelf items. The officers, pilots, and soldiers at Roswell wouldn't have been fooled by the debris and, in fact, had been warned by the Mogul team that the flights would be made.
And once the events at Roswell began to unfold, Mogul ended up on the front pages of many newspapers complete with pictures of the balloons and some of those who launched them. This last bit of information, courtesy of Robert Hastings, will, I hope, put an end to the idea that Mogul accounts for the Roswell debris. Let's move on to something else, something that makes sense. Let's put Mogul back in the bag.

For those interested in more of what Robert Hastings has reported, you can order UFOs and Nukes: Extraordinary Encounters at Nuclear Weapons Sites only at: ufohastings.com

RRRGroup
Kevin: While Mogul doesn't begin to explain the Roswell incident, it does seem to factor into it. The Mogul debris has too many facets that correspond to some of the witness statements to be dismissed out of hand. But as you continue to note, there's more to the story -- much more. RR

Any correspondence must be coincidence, since none of that stuff was on the Foster ranch.

Kevin: Your latest writings (from Robert Hastings) do not further either the anti-Mogul explanation or the pro-ET solution. Why should the fact that the AF had a manned aircraft program to detect Soviet nuclear tests rule out a similar program by unmanned balloons? This is like saying there were no secret Skyhook photo recce balloons sent over Russia because we know Gary Powers, and others, flew U-2 spyplanes over Russia. One method of spying does not exclude another, as you must know full well. You want to discredit the Mogul answer because in doing so it makes your own ET answer slightly more likely. Very very slightly, I would say. The reason you favor ET is because a small number of first-hand witnesses (and a much larger number of 2nd-hand or 3rd-hand ones) have told you decades afterwards that they saw an ET craft and bodies. None of these, not one, had or has any idea of what an ET craft looks like, have they? (Apart from their knowledge of SF.) Mogul can never explain everything, nor does it have to. People can always poke holes in it. They can poke far far bigger holes in the ET answer. And no, again, the personnel at Roswell were NOT fooled by what they saw and recovered. They merely forwarded the stuff to 'higher HQ' as requested by those at a higher level, even after being perhaps 80-90% certain of its identity. As for secrecy surrounding Mogul, I thought the tests carried out in NM were not the real Mogul tests, but preliminary flights which did not possess the same level of secrecy (and possibly none at all). How secret can anything like this really be anyway? Once a thing such as a balloon, plane or rocket is launched, the risk of it going off course and crashing is always there. Nothing that flies around can be totally safe from public discovery, can it?

Jerry Clark
I appreciate your continuing documentation of the futility and silliness of the Mogul cult, but its members are impervious to rational argument by now, sorry to say. Being an agnostic myself, I have no dog in the race, but I do enjoy the howling of those losing the Mogul contest. You'd think that by now they'd give up and try to do something productive, such as search for a less intelligence-insulting -- dare I also suggest credible? -- counter-explanation. I for one would like to hear one. I'm sure it would be fascinating and revelatory. It might even alleviate Christopher Allan's intellect-paralyzing fear of alien bodies. For a few moments, it looked as if Nick Redfern had stumbled upon an interesting approach, but that proved just one more dead end. In any event, Mogul, as the kids say, is so over. One would hope that the debate would go beyond it, but I harbor little optimism in that regard, I fear. Too many people have too much invested in Mogul at this stage.

Bob Koford
By now it isn't just that there are these witnesses to Roswell; there is a mountain of data from oodles of other similar cases that end up lending their collective vital information. I know that the so-called Blue Book documents contain numerous off-hand references to Mogul. It, in fact, seemed that everyone knew of it. I was surprised to see Moondust mentioned as many times as it was in the same documents. All of these different personnel were aware of a lot of things by the time the Roswell event, and others, occurred. As mentioned in your last article, there had to be a type of semi-centralized group before 1947, as it is obvious there was more going on than once thought. There is good historical evidence for the February 1942 incident as well.

Mogul is not a cult. It was a real project of the late 1940s. It is the ET idea that is a cult, Jerry. No such thing as an ET is known to science (just in case you had forgotten this). My argument was that Kevin's latest findings did not in any way diminish the Mogul explanation. I simply said that one method of spying on an enemy (i.e. by manned aircraft) does not exclude using another method (by unmanned balloons). And I stick to that statement. "Intelligence insulting"? Why don't you realise for once that the most intelligence-insulting answer of all is the totally dotty notion that the US authorities have been sitting on the real physical evidence for ETs and keeping it under wraps now for six decades? It is those who propagate this idea who are invariably the biggest opponents of the Mogul answer. Yes, Nick Redfern had another try and failed, for the same reason. If he were correct, his solution would have been public knowledge long, long ago. So who are the real 'cultists'? Those who say Mogul is very likely the answer or those who propagate the 60-year-old conspiracy myth? By the way, I have just tumbled upon Tim Good's latest book. Oh dear, oh dear.....

CDA - None of this matters now because, the truth is, there was no Flight No. 4... Dr. Crary's diary and notes show there was no Flight No. 4. It was cancelled. All other flights have been accounted for. Crary is clear on the point. What do you say to that?

Indeed the intended flights on June 3 & 4 were cancelled. But the diary shows a flight of some kind did take off early on June 4 and was, apparently, not recovered. I am not going to get into all the nitty-gritty of which flight it was: 3, 3A, 4, 4A, etc. Neither you nor I know whether the Mogul logs are complete or whether flights took place that were not logged, what times of day they were or what exactly was launched in each and every flight. All we have is some 50-year-old diary notes and one man's (Charles Moore's) memories of those far-off days. Neither Moore nor the USAF claimed that Mogul Flight 4 debris was definitely what was found on the ranch. They merely said they thought it quite likely. Nobody can be certain after all this time, least of all myself. The junk described at the time does closely resemble Mogul debris. Some of the stuff described 30-40 years later does, some does not. Agreed, Mogul does not entirely explain the case, but it is a good match. Jerry calls it a 'cult'. How does the Mogul 'cult' rank with the 'conspiracy cult', in your opinion? I have no qualms about ET visits - they may have occurred several, maybe many, times over mankind's existence. But if they have occurred in recent times (say the last 100 years) I am very positive of one thing - the hard evidence would NOT still be stagnating in official cabinets in Washington or anywhere else as 'above top secret' known only to the chosen few.

CDA - Now we're going to toss out the documentation? The diary notes tell us there was no Flight No. 4. More importantly, the NYU records list Flight No. 5 as the first successful flight, and the wreckage was recovered. It really doesn't matter here how old the records are. They are the records that were created at the time, and while you might argue that they seem ambiguous, the truth is, Flight No. 4, if there was anything to it, was only the balloons and nothing else. No radar reflectors. No sonobuoys. Just neoprene balloons that degrade rapidly in the sun and would have been little more than powdered black debris by the beginning of July 1947... Which begs the question... Just where did Ramey get that balloon he displayed, because it certainly didn't come from New Mexico, but I digress. So, I don't understand why you reject this documentation. It is confirmed through other records. The winds-aloft data used to put a balloon array near the Brazel (Foster) ranch can be interpreted in various ways, but the real point is, even if you use the best interpretation, you're still 17 miles short... and you really don't have an array in the right time frame anyway. What this means is that Mogul doesn't answer the question, just as an errant rocket from White Sands didn't answer it, and a crash of a test or experimental aircraft, of an N9M or XB-35, didn't answer the question. Mogul is the last of a long line of failed explanations. And remember, Moore said, more than once, if there was a gouge in the terrain, then the balloons were not the culprits.

Erica307
Interesting topic. I am trying to do my best to read as many articles as I can, since I recently started to seriously research UFOs. I would like to add your URL to my favorites, if you approve. http://midwestufo.blogspot.com/ Thanks, Erica

The last best hope for the sceptics is that time will silence those that believe that the object was of ET origins. If they wait long enough we will all die.

Sure, we'll die, and be replaced by a new generation even more convinced. The passing of Marcel and others hasn't affected belief. Their testimony will always be there.

I'm writing a book, but in Spanish, and I can say that ufologists do not leave anything good standing in that story. It was all a fraud, a build made by four people, Shandera, Staton, Berlitz and Moore.

By 'Staton' do you mean StaNton Friedman? Actually Shandera only entered in 1987 with MJ-12. Prior to that he was a 'nobody', and still is.

I hope your credentials are green!
科技
2016-40/3983/en_head.json.gz/7756
Black Eyed Peas Experience
Strong Lyrics
The Games on Demand version supports English, French, Italian, German, Spanish.

Get the party started in your living room with the world's hottest group in the world's hottest new dance game. The Black Eyed Peas Experience features chart topping tracks from the Black Eyed Peas - including the biggest hits that transformed them into a global phenomenon! Perform iconic dance moves alongside apl.de.ap, Fergie, Taboo and will.i.am with professional choreography designed exclusively for the group. Take the party worldwide and jump on tour to experience all the energy and unforgettable venues inspired by their shows and music videos. The Black Eyed Peas Experience is the ultimate dance game and the ultimate way to keep the party rocking.

Download the manual for this game by locating the game on http://marketplace.xbox.com and selecting "See Game Manual". This game requires a Kinect™ Sensor.
Genre: Music, Kinect
科技
2016-40/3983/en_head.json.gz/7822
Environmental regulations cut health costs, MIT team finds
Nancy Stauffer, Laboratory for Energy and the Environment

MIT researchers are using a novel technique to calculate an underappreciated benefit of environmental regulation: the economic gains that come from having a healthier population with less pollution-induced sickness and death. Initial analyses show significant health-related economic gains stemming from U.S. air-pollution regulation from 1975 to 2000 -- but also economic losses caused by the air pollution that remained. Other analyses predict health-related economic gains from air-pollution and climate-change policies now being considered by China.

"Even these first estimates can provide valuable information for Chinese policymakers as they try to make important policy decisions that will have impacts around the world for many years to come," said Kira Matus (S.M. 2005), a member of the research team.

Epidemiological studies have shown that specific pollutants cause specific health problems ranging from cough to congestive heart failure and even premature death. "Such adverse health outcomes are not just quality-of-life issues," said Matus, who received her degree through MIT's Engineering Systems Division. "They incur a real cost to the economy, both in the provision of health services and in the labor and leisure time that's lost every time an individual becomes ill."

Thus, while regulation that cuts pollution can be costly, it also can bring economic gains by improving people's health as well as labor productivity -- gains that must be recognized in cost-benefit analyses. "In fact, the biggest economic benefits of an environmental policy are often those associated with improved human health," said John Reilly of the MIT Joint Program on the Science and Policy of Global Change and MIT's Laboratory for Energy and the Environment.

To calculate those economic benefits, Matus, Reilly, Trent Yang (S.M. 2004) and Sergey Paltsev at the Joint Program turned to the MIT emissions prediction and policy analysis (EPPA) model. This type of model is widely used to estimate the cost of reducing emissions, but has not been widely used to estimate the impacts on the economy of damage to human health. The research team therefore developed a new method for incorporating health effects into the model to show those economic impacts.

After generating an estimate of emissions, the model uses published health data to calculate the resulting occurrences of specific diseases. Each time a disease occurs, the effects on the population -- due to lost work, lost nonwork time and/or increased medicine and hospital costs -- are reflected in the appropriate economic sector within the model. And the model keeps track of pollutant exposures, worker status and the impacts on various age groups over time. The researchers believe that their new analytical method yields a clearer picture of the economic gains to be achieved by improving health through pollution control.
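The article does not give the model's equations, but the bookkeeping it describes, converting an exposure change into disease cases and then into lost labor time and medical spending, can be illustrated with a toy calculation. Every number below is a placeholder chosen for illustration; none are EPPA values.

```python
# Toy illustration of the health-cost bookkeeping described above.
# All parameter values are invented placeholders, not EPPA inputs.

def pollution_health_cost(population, delta_exposure, beta, baseline_incidence,
                          workdays_lost, daily_wage, medical_cost):
    """Return (extra_cases, total_cost) for one disease endpoint."""
    # Extra cases from a linear concentration-response relationship:
    # cases = population * baseline incidence * (beta * change in exposure)
    extra_cases = population * baseline_incidence * beta * delta_exposure
    # Each case costs lost workdays (valued at the wage) plus medical spending.
    cost_per_case = workdays_lost * daily_wage + medical_cost
    return extra_cases, extra_cases * cost_per_case

cases, cost = pollution_health_cost(
    population=1_000_000,        # exposed population (illustrative)
    delta_exposure=10.0,         # change in pollutant concentration (illustrative units)
    beta=0.0008,                 # response per unit of exposure (illustrative)
    baseline_incidence=0.05,     # baseline cases per person per year (illustrative)
    workdays_lost=3,             # per case (illustrative)
    daily_wage=150.0,            # dollars (illustrative)
    medical_cost=400.0,          # dollars per case (illustrative)
)
print(f"{cases:,.0f} extra cases, roughly ${cost:,.0f} per year")
```

A general-equilibrium model such as EPPA then feeds the lost labor and the extra demand for health services back into the affected economic sectors, but the chain from exposure to cases to dollars is the same.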
They are now working to better represent the cost of controlling pollution in their analysis. Ultimately, they hope to be able to assess both the costs and the benefits of pollution control consistently within a single model.

This research was supported by the Environmental Protection Agency, the Department of Energy and a group of corporate sponsors through the Joint Program on the Science and Policy of Global Change.
科技
2016-40/3983/en_head.json.gz/7824
Suvrath Mahadevan
Top News

Data on 60,000 stars illuminate knowledge of galaxy's origin (8/2/13): A team including Penn State astronomers has captured infrared light from 60,000 stars that are helping to reveal how our Milky Way galaxy formed.

Split the Difference (8/22/12): A team of Penn State astronomers using the Hobby-Eberly Telescope (HET) in Texas has obtained precise measurements of these stars, which will aid astronomers in understanding how stars and planetary systems form.

New astronomy tool peers through the heart of the Milky Way (1/10/12): A powerful new tool for probing the structure of our galaxy has been developed by astronomers associated with the Sloan Digital Sky Survey, including two Penn State astronomers. The new tool is an infrared spectrograph for the Apache Point Observatory Galactic Evolution Experiment (APOGEE). Over the next three years, APOGEE's initial census of the chemical constitution and motion of more than 100,000 stars across the Milky Way will bring together data on stars with ages spanning nearly the full age of the universe.

Penn State Joins Major Astronomical Survey (12/15/10): Penn State University has become a participant in the Sloan Digital Sky Survey-III (SDSS-III), a six-year project that will expand our knowledge in fields ranging from the planets outside our solar system to the large-scale structure and evolution of the universe. "The SDSS-III is investigating some of the currently most compelling scientific questions," said Lawrence Ramsey, head of Penn State's Department of Astronomy and Astrophysics. "This is a great opportunity for Penn State faculty and students."

Atmospheres of distant worlds probed with new technique (9/1/10): Astronomers on two research teams, including an astronomer at Penn State, have demonstrated the power of a new technique to determine the chemical composition of the atmospheres of planets far outside our solar system. Using the technique -- called narrow-band transit spectrophotometry -- the teams discovered the element potassium in the atmospheres of giant planets similar in size to Jupiter.
科技
2016-40/3983/en_head.json.gz/7933
Can the People's House become a social platform for the people?
The combined potential of social media and legislative data took the stage at the first congressional hackathon.
by Alex Howard | @digiphile | December 12, 2011

InSourceCode developers work on "Madison" with volunteers.

There wasn't a great deal of hacking, at least in the traditional sense, at the "first congressional hackathon." Given the general shiver that the word still evokes in many a Washingtonian in 2011, that might be for the best. The attendees gathered together in the halls of the United States House of Representatives didn't create a more interactive visualization of how laws are made or a mobile health app. As open government advocate Carl Malamud observed, the "hack" felt like something even rarer in the "Age of the App for That:"

Impressed @MattLira pulled off a truly bipartisan tech event on the hill. *that* is a true hack. #inhackwetrust — Carl Malamud (@carlmalamud) December 7, 2011

In a time when partisanship and legislative gridlock have defined Congress for many citizens, seeing the leadership of the United States House of Representatives agree on the importance of using the power of data and social networking to open government was an early Christmas present.

"Increased access, increased connection with our constituents, transparency, openness is not a partisan issue," said House Majority Leader Eric Cantor.

"The Republican leader and I may debate vigorously on many issues, but one area where we strongly agree is on making Congress more transparent and accessible," said House Democratic Whip Steny Hoyer in his remarks. "First, Congress took steps to open up the Capitol building so citizens can meet with their representatives and see the home of their legislature. In the same way, Congress is now taking steps to update how it connects with the American people online."

An open House

While the event was branded as a "Congressional Facebook Developer Hackathon," what emerged more closely resembled a loosely organized conference or camp. Facebook executives and developers shared the stage with members of Congress to give keynotes to the 200 or so attendees before everyone broke into discussion groups to talk about constituent communications, press relations and legislative data. The event might be more aptly described as a "wonk-a-thon," as the Sunlight Foundation's Daniel Schuman put it last week.

This "hackathon" was organized to have some of the feel of an unconference, in the view of Matt Lira, digital director for the House Majority Leader. Lira sat down for a follow-up interview last Thursday. "There's a real model to CityCamp," he said. "We had 'curators' for the breakout. Next time, depending on how we structure it, we might break out events that are designed specifically for programming, with others clustered around topics. We want to keep it experimental."

Why? "When Aneesh Chopra and I did that session at SXSW, that personally for me was what tripped my thinking here," said Lira. "We came down from the stage and formed a circle. I was thinking the whole time that it would have been a waste of intellectual talent to have Tim O'Reilly and Clay Shirky in the audience instead of engaging in the conversation. I was thinking I never want to do a panel again.
I want it to be like this.” Part of the challenge, so to speak, of Congress hosting a hackathon in the traditional sense, with judging and prizes, lies in procurement rules, said Lira.”There are legal issues around challenges or prizes for Congress,” he explained. “They’re allowed in the executive branch, under DARPA, and now every agency under the COMPETES Act. We can’t choose winners or losers, or give out prizes under procurement rules.” Whatever you call it, at the end of the event, discussion leaders from the groups came back and presented on the ideas and concepts that had been hashed out. You can watch a short video that EngageDC produced for the House Majority Leader’s office below: What came out of this unprecedented event, in other words, won’t necessarily be measured in lines of code. It’s that Congress got geekier. It’s that the House is opening its doors to transparency through technology. Given the focus on Facebook, it’s not surprising that social media took center stage in many of the discussions. The idea for it came from a trip to Silicon Valley, where Representative Cantor said he met with Facebook founder Mark Zuckerberg and COO Sheryl Sandberg, and discussed how to make the House more social. After that conversation, Lira and Steve Dwyer, director of online communications and technology for the House Democratic Whip, organized the event. For a sense of the ideas shared by the working groups, read the story of the first congressional “hackathon” on Storify. “For government, I don’t think we could have done anything more purposeful than this as a first meeting,” said Lira in our interview. “Next, we’ll focus on building this group of people, strengthening the trust, which will prove instrumental when we get into the pure coding space. I have 100% confidence that we could do a programming-only event now and would have attendance.” A Likeocracy in alpha As the Sunlight Foundation’s John Wonderlich observed earlier this year, access to legislative data brings citizens closer to their representatives. “When developers and programmers have better access to the data of Congress, they can better build the databases and tools that let the rest of us connect with the legislature,” he wrote. If more open legislative data goes online, when we talk about what’s trending in Congress, those conversations will be based upon insight into how the nation is reacting to them on social networks, including Facebook, Twitter, and Google+. Facebook developers Roddy Lindsay, Tyler Brock, Eric Chaves, Porter Bayne, and Blaise DiPersia coded up a simple proof of concept of what making legislative data might look like. “LikeOcracy” pulls legislation from a House XML feed and makes it more social. The first version added Facebook’s ubiquitous “Like” buttons to bill elements. A second version of the app adds more opportunities for reaction by integrating ReadrBoard, which enables users to rate sections or individual lines as “Unnecessary, Problematic, Great Idea or Confusing.” You can try it out on three sample bills, including the Stop Online Piracy Act. Would “social legislation” in a Facebook app catch on? The growth of civic startups like PopVox, OpenCongress and Votizen suggests that the idea has legs. [Disclosure: Tim O’Reilly was an early angel investor in PopVox.] Likeocracy doesn’t tap into Facebook’s Open Graph, but it does hint at what integration might look like in the future. 
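Likeocracy's source was not released at the event, but the pattern the developers describe, pulling bill data from a House XML feed and attaching social reactions to each element, is straightforward to sketch. The feed URL and XML element names below are assumptions made for illustration; they are not the real feed schema or the app's actual code.

```python
# Illustrative sketch, not Likeocracy's actual implementation.
# FEED_URL and the <bill> element names are hypothetical placeholders.
import urllib.request
import xml.etree.ElementTree as ET
from collections import Counter

FEED_URL = "https://example.invalid/house/bills.xml"  # hypothetical feed

def fetch_bills(url=FEED_URL):
    """Parse assumed <bill> entries from the feed into simple dicts."""
    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())
    return [{"number": b.findtext("number"), "title": b.findtext("title")}
            for b in root.iter("bill")]

# In-memory stand-in for per-section reactions ("Like", "Confusing", ...)
reactions = Counter()

def react(bill_number, section, label):
    """Record one reader reaction against a bill section."""
    reactions[(bill_number, section, label)] += 1

if __name__ == "__main__":
    for bill in fetch_bills():
        print(bill["number"], bill["title"])
    react("H.R. 3261", "Sec. 102", "Problematic")
    print(reactions)
```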
Justin Osofsky, Facebook’s director of platform partnerships, described how the interests of constituents could be integrated with congressional data under Facebook’s new Timeline. Citizens might potentially be able to simply “subscribe” to a bill, much like they can now for any web page, if Facebook’s “Subscribe” plug-in was applied to the legislative process. Opening bill markup online The other app presented at the hackathon came not from the attendees but from the efforts of InSourceCode, a software development firm that’s also coded for Congressman Mike Pence and the Republican National Committee. Rep. Darrell Issa, chairman of the House Committee on Oversight and Government Reform, introduced the beta version of MADISON on Wednesday, a new online tool to crowdsource legislative markup. The vision is that MADISON will work as a real-time markup engine to let the public comment on bills as they move through the legislative process. “The assumption is that legislation should be open in Congress,” said Issa. “It should be posted, interoperable and commented upon.” As Nick Judd reported at techPresident, the first use of MADISON is to host Issa and Sen. Ron Wyden’s “OPEN bill,” which debuted on the app. Last week, the congressmen released the Online Protection and Enforcement of Digital Trade Act (OPEN) at Keepthewebopen.com. The OPEN legislation removes one of the most controversial aspects of SOPA, using the domain name system for enforcement, and instead places authority with the International Trade Commission to address enforcement of IP rights on websites that are primarily infringing upon copyright. Issa said that his team had looked at the use of wikis by Rep. John Culberson, who put the healthcare reform bill online in a wiki. “There are some problems with editors who are not transparent to all of us,” said Issa. “That’s one of the challenges. We want to make sure that if you’re an editor, you’re a known editor.” MADISON includes two levels of authentication: email for simple commenting and a more thorough vetting process for organizations or advocacy groups that wish to comment. “Like most things that are a 1.0 or beta, our assumption is that we’ll learn from this,” said Issa. “Some members may choose to have an active dialog. Others may choose to have it be part of pre-markup record.” Issa fielded a number of questions on Wednesday, including one from web developer Brett Stubbs: “Will there be open access or an API? What we really want is just data.” Issa indicated that future versions might include that. Jayson Manship, the “chief nerd” at InSourceCode, said that MADISON was built in four days. According to Manship, the idea came from conversations with Issa and Seamus Kraft, director of digital strategy for the House Committee on Oversight and Government Reform. MADISON is built with PHP and MySQL, and hosted in RackSpace’s cloud so it can scale with demand, said Manship. “It’s important to be entrepreneurial,” said Lira in our interview. “There are partners throughout institutions that would be willing to do projects of different sizes and scopes. MADISON is something that Issa and Seamus wanted to do. They took it upon themselves to get the ball rolling. That’s the attitude we need.” “We’re working to hold the executive accountable to taxpayers,” said Kraft last week. “Opening up what we do here in these two halls of Congress is equally important. MADISON is our first shot at it. 
We’re going to need a lot of help to make it better.” Kraft invited the remaining developers present to come to the Rayburn Office Building, where Manship and his team had brought in half a dozen machines, to help get MADISON ready for launch. While I was there, there were conversations about decisions, plug-ins and ideas about improving the interface or functionality, representing a bona fide collaboration to make the app better. There’s a larger philosophical issue relating to open government that Nick Judd touched upon over at techPresident in a follow-up post on MADISON: The terms for the site warn the user that anything they write on it will become public domain — but the code itself is proprietary. Meanwhile, OpenCongress’ David Moore points out that the code that powers his organization’s website, which also allows users to comment on individual provisions of bill text, is open source and has been available for some time. In theory, this means the Oversight staff could have started from that code and built on it instead of beginning from scratch. The code being proprietary means that while people like Moore might be able to make suggestions, they can’t just download it, make their own changes and submit them for community review — which they’d happily do at little or no cost for a project released under an open-source license. As Moore put it, “Get that code on GitHub, we’ll do OpenID, fix the design.” When asked about whether the team had considered making MADISON code open source, Manship said that “he didn’t know, although they weren’t opposed to it.” While Moore welcomed MADISON, he also observed that Open Congress has had open-source code for bill text commenting for years. @seamuskraft @mattlira glad to chat, will email. We see first step as liberating full #opengovdata (API & bulk) for MADISON & OC & open Web. — David Moore (@ppolitics) December 9, 2011 The decision by Issa’s office to fund the creation of an app that was already available as open-source software is one that’s worth noting, so I asked Kraft why they didn’t fork OpenCongress’ code, as Judd suggests. “While there was no specific budget expense for MADISON, it was developed by the Oversight Committee,” said Kraft. “While we like and support OpenCongress’ code, it didn’t fit the needs for MADISON,” Kraft wrote in an emailed statement. What’s next is, so to speak, an “OPEN” question, both in terms of the proposed SOPA alternative and the planned markup of SOPA itself on December 15. The designers of OPEN are actively looking for feedback from the civic software development community, both in terms of what functionality exists now and what could be built in future iterations. THOMAS.gov as a platform What Moore and long-time open-government advocates like Carl Malamud want to see from Congress is more structural change: Re: #hackwetrust, while we do seek leg. version control, public bill markup isn’t ultimate goal. Exhaustive #opengovdata & open API is (2/2) @MattLira @DarrellIssa @SeamusKraft MADISON is much-welcomed, but PPF’s #opengov ultimate goal is open API for @THOMASdotgov -cc @digiphile. They’re not alone. Dan Schuman listed many other ways the House has yet to catch up with 21st century technology: We have yet to see bulk access to THOMAS or public access to CRS reports, important legislative and ethics documents are still unavailable in digital format, many committee hearings still are not online, and so on. 
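To make the "bulk access" complaint concrete: what developers were asking for is a documented, machine-readable endpoint for bill data rather than scraping THOMAS's HTML pages. The endpoint and response fields in this sketch are hypothetical; no such API existed at the time, and this is only an illustration of the kind of access being requested.

```python
# Hypothetical sketch of the sort of bulk/API access developers wanted.
# The endpoint and JSON fields are invented; THOMAS offered no such API.
import json
import urllib.request

API = "https://example.invalid/api/v1/bills"  # placeholder endpoint

def bill_status(congress: int, bill: str) -> dict:
    """Fetch one bill's status as JSON, e.g. bill_status(112, 'hr3261')."""
    url = f"{API}/{congress}/{bill}.json"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

if __name__ == "__main__":
    status = bill_status(112, "hr3261")
    print(status.get("latest_action"), status.get("stage"))
```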
As Schuman highlighted, the Sunlight Foundation has been focused on opening up Congress through technology since the organization was founded. To whit: “There have been several previous collaborative efforts by members of the transparency community to outline how the House of Representatives can be more open and accountable, of which an enduring touchstone is the Open House Project Report, issued in May 2007,” wrote Schuman. The notion of making THOMAS.gov into a platform received high-level endorsement from a congressional leader when House Minority Whip Steny Hoyer remarked on how technology is affecting Congress, his caucus and open government in the executive branch: For Congress, there is still a lot of work to be done, and we have a duty to make the legislative process as open and accessible as possible. One thing we could do is make THOMAS.gov — where people go to research legislation from current and previous Congresses — easier to use, and accessible by social media. Imagine if a bill in Congress could tweet its own status. The data available on THOMAS.gov should be expanded and made easily accessible by third-party systems. Once this happens, developers, like many of you here today, could use legislative data in innovative ways. This will usher in new public-private partnerships that will empower new entrepreneurs who will, in turn, yield benefits to the public sector. One successful example is how cities have made public transit data accessible so developers can use it in apps and websites. The end result has been commuters saving time every day and seeing more punctual trains and buses as a result of the transparency. Legislative data is far more complex, but the same principles apply. If we make the information available, I am confident that smart people like you will use it in inventive ways. Hoyer’s specific citation of the growth of open data in cities and an ecosystem of civic applications based upon it is further confirmation that the Gov 2.0 meme is moving into the mainstream. Making THOMAS.gov into a platform for bulk data would change what’s possible for all civic developers. What I really want is “data on everything,” Stubbs told me last week. “THOMAS is just a visual viewer of the internal stuff. If we could have all of this, we could do something with it. What I would like is a data broker. I’d like a RESTful API with all of the data that I could just query. That’s what the government could learn from Facebook. From my point of view, I just want to pull information and compile it.” If Hoyer and the House leadership would like to see THOMAS.gov act as a platform, several attendees at the hackathon suggested to me that Congress could take a specific action: collaborate with the Senate and send the Library of Congress a letter instructing it to provide bulk legislative data access to THOMAS.gov in structured formats so that developers, designers and citizens around the nation can co-create a better civic experience for everyone. “The House administration is working on standards called for by the rule and the letter sent earlier this year,” said Lira. “We think they will be satisfactory to people. The institutions of the House have been following through since the day they were issued. The first step was issuing an XML feed daily. Next year, there will be a steady series of incremental process improvements. When the House Administrative Committee issues standards, the House Clerk will work on them. 
“ Despite the abysmal public perception of Congress, genuine institutional changes in the House of Representatives driven by the GOP embracing innovation and transparency are incrementally happening. As Tim O’Reilly observed earlier this year, the current leadership of the House on transparency is doing a better job than their predecessors. In April, Speaker Boehner and Majority Leader Cantor sent a letter to the House Clerk regarding legislative data release. Then, in September, a live XML feed for the House floor went online. Yes, there’s a long way to go on open legislative data quality in Congress. That said, there’s support for open-government data from both the White House and the House. “My personal view is that what’s important right now is that the House create the right precedents,” said Lira. “If we create or adopt a data standard, it’s important that it be the right standard.” Even if open government is in beta, there needs to be more tolerance for experiments and risks, said Lira. “I made a mistake in attacking We the People as insufficient. I still believe it is, but it’s important to realize that the precedent is as important as the product in government. In technology in general, you’ll never reach an end. We The People is a really good precedent, and I look forward to seeing what they do. They’ve shown a real commitment, and it’s steadily improving.” A social Congress While Sean Parker may predict that social media will determine the outcome of the 2012 election, governance is another story entirely. Meaningful use of social media by Congress remains challenged by a number of factors, not least an online identity ecosystem that has not provided Congress with ideal means to identify constituents online. The reality remains that when it comes to which channels influence Congress, in-person visits and individual emails or phone calls are far more influential with congressional staffers. As with any set of tools, success shouldn’t be measured solely by media reports or press releases but by the outcomes from their use. The hard work of bipartisan compromise between the White House and Congress, to the extent it occurs, might seem unlikely to be publicly visible in 140 characters or less. “People think it’s always an argument in Washington,” said Lira in our interview. “Social media can change that. We’re seeing a decentralization of audiences that is built around their interests rather than the interests of editors. Imagine when you start streaming every hearing and making information more digestible. All of a sudden, you get these niche audiences. They’re not enough to sustain a network, but you’ll get enough of an audience to sustain the topic. I believe we will have a more engaged citizenry as a result.” Lira is optimistic. “Technology enables our republic to function better. In ancient Greece, you could only sustain a democracy in the size of city. Transportation technology limited that scope. In the U.S., new technologies enabled global democracy. As we entered the age of mass communication, we lost mass participation. Now with the Internet, we can have people more engaged again.” There may be a 30-year cycle at play here. Lira suggested looking back to radio in the 1920s, television in the 1950s, and cable in the 1980s. “It hasn’t changed much since; we’re essentially using the same rulebook since the ’80s. 
The changes made in those periods of modernization were unique." Thirty years on from the introduction of cable news, will the Internet help reinvigorate the founders' vision of a nation of, by and with the people? "I do think that this is a transformational moment," said Lira. "It will be for the next couple of years. When you talk to people — both Republicans and Democrats — you sense we're on the cusp of some kind of change, where it's not just communicating about projects but making projects better. Hearings, legislative government and executive government will all be much more participatory a decade from now."

In that sweep of history, the "People's House" may prove to be a fulcrum of change. "If any place in government is going to do it, it's the House," said Lira. "It's our job to be close to the public in a way that no other part of government is. In the Federalist Papers, that's the role of the House. We have an obligation to lead the way in terms of incorporating technology into real processes. We're not replacing our system of representative government. We're augmenting it with what's now possible, like when the telegraph let people know what the votes were faster."

UPDATE: On December 16th, the Committee on House Administration adopted new standards that require all House legislative documents to be published in an open, searchable electronic format. "With the adoption of these standards, for the first time, all House bills, resolutions and legislative documents will be available in XML in one centralized location," said Representative Dan Lungren (R-CA), chairman of the Committee on House Administration, in a prepared statement. "Providing easy access to legislative information increases constituent feedback and ultimately improves the legislative process."

With these standards, "the House of Representatives took a tremendous step into the 21st century," wrote Daniel Schuman, the Sunlight Foundation's policy counsel and director of the Advisory Committee on Transparency. "Three cheers to Chairman Dan Lungren, Ranking Member Bob Brady, members of the committee, and its staff for moving this important issue forward," wrote Schuman. "As was discussed at the recent #hackthehouse conference, as well as in our longstanding Open House Project Report (pdf), there's a lot more to do, but this is a major stride towards implementing Speaker Boehner and Majority Leader Cantor's pledge to 'publicly releasing the House's legislative data in machine-readable formats.' The Senate could do well by following this example, as could legislative support agencies like the Library of Congress and GPO."
科技
2016-40/3983/en_head.json.gz/7968
Leaks, Rats and Radioactivity: Fukushima's Nuclear Cleanup Is Faltering
The Fukushima meltdown hasn't been the public-health disaster that many critics feared, but TEPCO has struggled with the cleanup. The latest news that groundwater is becoming contaminated by the stricken reactors is one more reason why the company should not be in charge of the cleanup.
By Bryan Walsh (@bryanrwalsh) | May 01, 2013

Juan Carlos Lentijo, leader of the IAEA Division of Nuclear Fuel Cycle and Waste Technology, inspects the unit-four reactor building of the Fukushima Daiichi nuclear power plant in Okuma, Japan, on April 17, 2013. IAEA / AFP / Getty Images

Honestly, if the consequences weren't potentially so dire, the ongoing struggles to clean up the Fukushima Daiichi nuclear plant in northern Japan would be the stuff of comedy. In March, an extended blackout disabled power to a vital cooling system for days. The cause: a rat that had apparently been chewing on cables in a switchboard. As if that's not enough, another dead rat was found in the plant's electrical works just a few weeks ago, which led to another blackout, albeit of a less important system. The dead rats were just the latest in a series of screwups by Tokyo Electric Power Co. (TEPCO), the owner of the Fukushima plant, that goes back to March 11, 2011, when an earthquake and the resulting tsunami touched off a nuclear disaster that isn't actually finished yet. I'm not sure things could be much worse if Wile E. Coyote were TEPCO's CEO.

But it's not funny, not really, because the consequences of the meltdown and TEPCO's mismanagement are very real. The latest threat comes from nearby groundwater that is pouring into the damaged reactor buildings. Once the water reaches the reactor it becomes highly contaminated by radioactivity. TEPCO workers have to pump the water out of the reactor to avoid submerging the important cooling system — the plant's melted reactor cores, while less dangerous than they were in the immediate aftermath of the meltdown, still need to be cooled further. TEPCO can't simply dump the irradiated groundwater into the nearby sea — the public outcry would be too great — so the company has been forced to jury-rig yet another temporary solution, building hundreds of tanks, together able to hold some 112 Olympic-size pools' worth of liquid, to hold the groundwater. So TEPCO finds itself in a race: Can its workers build enough tanks and clear enough nearby space to store the irradiated water — water that keeps pouring into the reactor at the rate of some 75 gal. a minute?

TEPCO spokesperson — there's an unfortunate job — Masayuki Ono put it this way to the New York Times, which has reported closely on Fukushima's troubles over the past month:

The water keeps increasing every minute, no matter whether we eat, sleep or work. It feels like we are constantly being chased, but we are doing our best to stay a step in front.

Indeed, the job has been taking its toll on the workers and on TEPCO itself, which recently announced that it lost some $7 billion in the fiscal year to March. Cash-strapped, TEPCO is struggling to make ends meet — and more to the point, the company knows that every yen it spends trying to clean up Fukushima is a yen it will never get back, as the plant will never produce energy again.
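To put the inflow figure quoted above in perspective, here is a rough back-of-the-envelope estimate. The per-tank capacity is an assumption (the article does not give one), so treat the result as an order of magnitude only.

```python
# Back-of-the-envelope estimate; the tank size is an assumed figure,
# not one reported in the article.
GALLONS_PER_MINUTE = 75            # inflow rate quoted above
LITERS_PER_GALLON = 3.785
ASSUMED_TANK_M3 = 1000             # assumed capacity of one storage tank, cubic meters

liters_per_day = GALLONS_PER_MINUTE * LITERS_PER_GALLON * 60 * 24
m3_per_day = liters_per_day / 1000
days_to_fill_one_tank = ASSUMED_TANK_M3 / m3_per_day

print(f"~{m3_per_day:,.0f} cubic meters of contaminated water per day")
print(f"one {ASSUMED_TANK_M3} m3 tank filled roughly every {days_to_fill_one_tank:.1f} days")
```

At roughly 400 cubic meters a day, a tank of the assumed size fills every two to three days, which is the race described above.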
TEPCO's struggles are hardly unique among Japan's hard-hit utility sector — the country's regional electricity monopolies posted a combined loss of $16 billion — but the fact that the company is still running the Fukushima cleanup seems like a worse idea with each passing day. TEPCO argues that its workers know Fukushima best, but the performance of the company's management hardly inspires confidence. As the groundwater debacle demonstrates, TEPCO has been making things up as it goes along since the beginning — and the Japanese government has let it. It shouldn't have been a surprise that radioactive groundwater would be a threat — the movements of water underground are not exactly unpredictable. The Times reported that TEPCO decided not to build an underground concrete wall that could have prevented the groundwater from reaching the reactor, apparently assuming it would be able to construct a filtering system before the water became a problem. TEPCO was wrong, as it has repeatedly been. Meanwhile, Japan's Nuclear Regulation Authority has just a handful of inspectors to oversee the more than 3,000 workers at Fukushima. (MORE: Independent Commission Releases Report on Fukushima Meltdown, Blames Japanese Culture) This is a typically Japanese problem. Collusion between industry and the government helped propel the country to economic greatness after World War II, but since the crash in the early 1990s, those tight relationships have held Japan back — especially when it comes to dealing with unexpected crises like Fukushima. If ever there were a moment for letting outsiders have some say, it would be in the Fukushima cleanup — but in Japan, there are no outsiders, only the marginalized. If the price of safeguarding consensus and the cozy relationship between industry and government is a little radioactive water leaking into the Pacific Ocean, so be it. And while Japan is unique, collusion between the tightly closed nuclear industry and the government elsewhere isn't. (PHOTOS: Japan One Year Later: Photographs by James Nachtwey) In the end, the damage from Fukushima — especially to human health — is still unlikely to be anywhere near as large as nuclear critics feared when the plant first melted down. Indeed, the greater threat to the health of those who lived around the plant may be psychological, as they struggle with both the upheaval of evacuation and the social taint of living near a meltdown. But that assumes that the Fukushima area isn't hit by another earthquake or tsunami before the cleanup is finally completed, likely years from now. "The Fukushima Daiichi plant remains in an unstable condition, and there is concern that we cannot prevent another accident," Shunichi Tanaka, chairman of the Nuclear Regulation Authority, said in a news conference in early April. Not so funny after all. MORE: Nuked: A Year After Fukushima, Nuclear Power Is Down — and Carbon Is Up
科技
2016-40/3983/en_head.json.gz/8031
COP17: The politics of compromise. January 4, 2012. In a last-minute deal reached on December 11, 2011 at the 17th session of the Conference of the Parties (COP 17) to the United Nations Framework Convention on Climate Change (UNFCCC) meeting in Durban, South Africa, governments decided to adopt a universal legal agreement on climate change as soon as possible, but not later than 2015. Work will begin on this immediately under a new group called the Ad Hoc Working Group on the Durban Platform for Enhanced Action. The value of an international consensus on any topic should never be underestimated. International law rests largely on agreements in the form of treaties and protocols. Consensus proceeds at the pace of the slowest and to the level of the most reluctant. The existing treaty, the Kyoto Protocol, had not secured the participation of developing countries and some of the leading developed countries, not least the United States. With the existing Kyoto targets due to expire next year, only the European Union among major developed countries had committed to a continuation. A comprehensive global agreement: Against this background, the agreement in Durban to write a comprehensive global agreement that reduces greenhouse gas emissions, covers developed and developing countries, and comes into force in 2020 is groundbreaking. But it comes at a price. Scientists at the Climate Action Tracker, an independent science-based climate change assessment, warned that the world continues on a pathway of warming of more than 3°C, with likely impacts rated as extremely severe. The agreement will not immediately affect the emissions outlook for 2020, and decisions on further emission reductions have been postponed. Catching up on this postponed action will be increasingly costly. According to Bill Hare, Director of Climate Analytics, "What remains to be done is to take more ambitious actions to reduce emissions, and until this is done we are still headed to over 3°C warming. There are still no new pledges on the table and the process agreed in Durban towards raising the ambition and increasing emission reductions is uncertain at its outcome." The next major UNFCCC conference, COP 18 / CMP 8, is to take place November 26 to December 7, 2012 in Qatar, in close cooperation with the Republic of Korea. Key Decisions from COP17 in Durban: Green Climate Fund: Countries have already started pledging to contribute to start-up costs of the fund, meaning it can be made ready in 2012. This will also help developing countries get ready to access the fund, boosting their efforts to establish their own clean energy futures and adapt to existing climate change. A Standing Committee is to keep an overview of climate finance in the context of the UNFCCC and assist the Conference of the Parties. It will comprise twenty members, represented equally between the developed and developing world. A focused work program on long-term finance was agreed, which will contribute to the scaling up of climate change finance going forward and will analyze options for the mobilization of resources from a variety of sources.
The Adaptation Committee, composed of 16 members, will report to the COP on its efforts to improve the coordination of adaptation actions at a global scale. The adaptive capacities above all of the poorest and most vulnerable countries are to be strengthened. National Adaptation Plans will allow developing countries to assess and reduce their vulnerability to climate change. The most vulnerable are to receive better protection against loss and damage caused by extreme weather events related to climate change. The Technology Mechanism will become fully operational in 2012. The full terms of reference for the operational arm of the Mechanism – the Climate Technology Centre and Network – are agreed, along with a clear procedure to select the host. The UNFCCC secretariat will issue a call for proposals for hosts on 16 January 2012. Support of Developing Country Action: Governments agreed to establish a registry to record developing country mitigation actions that seek financial support and to match these with support. The registry will be a flexible, dynamic, web-based platform. Other Key Decisions: A forum and work program on unintended consequences of climate change actions and policies were established. Under the Kyoto Protocol's Clean Development Mechanism, governments adopted procedures to allow carbon-capture and storage projects. These guidelines will be reviewed every five years to ensure environmental integrity. Governments agreed to develop a new market-based mechanism to assist developed countries in meeting part of their targets or commitments under the Convention. Details of this will be taken forward in 2012. Paul Manning is principal at Manning Environmental Law and an Environmental Law Specialist, Certified by the Law Society of Upper Canada. He has practiced environmental law for more than twenty years in the UK and in Canada. During that time he has dealt with most areas of environmental law for a diverse range of clients. Paul's practice focuses on environmental, energy, aboriginal and planning law. He appears regularly as counsel in Tribunals and the Courts. He has a special interest in renewable energy and climate change regulation and holds a Certificate in Carbon Finance from the University of Toronto. Story source: EHS Journal. Photo source: © Zoonar.
科技
2016-40/3983/en_head.json.gz/8035
The Name Game – The pitfalls of naming an Internet startup — in Social Media There are lots of challenges to contend with when launching an Internet startup, but one of the more fundamental is its name. It needs to stand out without falling foul of one of the many pitfalls out there. Here are some common stumbling blocks encountered when searching for the perfect name… No no, that’s not what I meant at all… With online businesses, there’s always the risk that the name you think is perfect could be misinterpreted by someone else in another part of the world. Pongr is a US startup turning brand loyalty into a game. We were quite impressed when we took a look recently, although it may struggle to gain a large following in the UK, where “Pong” is slang for a bad smell. How your startup name looks when all the words are run together in a domain name can be a problem too. Just look at celebrity agent search service Who Represents, which resides at whorepresents.com or Therapist Finder, which is at www.therapistfinder.com. It is possible to exploit unfortunate double-meanings though. Up-and-coming fashion startup Fashism has on the face of it a ridiculous name that could be taken as offensive by some (what’s next? “Hitlr”? “Hollow Coursed”?). However, fascist associations aren’t necessarily a bad thing in this case – it’s eye-catching and a little bit of controversy is a good thing in the fashion world. Still, it might make it difficult for the startup to go truly mainstream in future. Twitter on my Facebook and I’ll sue you If your product is designed to work with another well-known service, it might seem sensible to choose a name that alludes to that service. Unfortunately the current kings of the social media world, Twitter and Facebook fiercely protect usage of their names. Even names that sound similar can be in trouble. Twitter has strict guidelines over use of its trademarks. You even have to be careful about the way you use the word “Tweet” in your service name. Facebook is even stricter; Placebook, Teachbook and Faceporn have all suffered at the hands of the Facebook legal department due to their names being close to the social network’s. The domain’s taken, get creative With good .com domain names in high demand, the most common problem of all is choosing a startup name with an available domain name to match. At one time, the most common approach was to drop a vowel – that’s how Flickr got its name. As Flickr co-founder Caterina Fake explained in a recent comment on a TechCrunch post, “We tried to buy the domain from the prior owner… He wasn’t interested in selling…. We liked the name “Flicker” so much we dropped the E. It wasn’t very popular on the team, I had to do a lot of persuasion. Then the dropped “E” thing became something of a cliché…” It’s worth remembering that Twitter started life as Twittr, too. Another option is to go for a domain name that’s different from your startup’s name. This could be risky, but seems to work fine for some – see TV check-in service Miso (at gomiso.com) and real-time web stats startup Clicky (at getclicky.com). Foursquare used to be found at playfoursquare.com until it managed to acquire the more obvious foursquare.com. If your domain name is taken, perhaps the best option is to look at avoiding .com entirely and choose a little-used Top Level Domain. Libya’s .ly has proved a popular choice (see Bit.ly and Eat.ly for example), but use your imagination and you could find something that really stands out. 
Look at Instagram's use of the Armenian domain Instagr.am or Curated.by's Belarusian domain name, which it even adopted as its actual name. Meanwhile, if you're a music streaming startup, you might find that the Federated States of Micronesia's .fm TLD is your friend.
科技
2016-40/3983/en_head.json.gz/8045
When Science Points To God 11/24/2008 12:00:51 AM - Dinesh D'Souza Contemporary atheism marches behind the banner of science. It is perhaps no surprise that several leading atheists—from biologist Richard Dawkins to cognitive psychologist Steven Pinker to physicist Victor Stenger—are also leading scientists. The central argument of these scientific atheists is that modern science has refuted traditional religious conceptions of a divine creator. But of late atheism seems to be losing its scientific confidence. One sign of this is the public advertisements that are appearing in billboards from London to Washington DC. Dawkins helped pay for a London campaign to put signs on city buses saying, “There’s probably no God. Now stop worrying and enjoy your life.” Humanist groups in America have launched a similar campaign in the nation’s capital. “Why believe in a god? Just be good for goodness sake.” And in Colorado atheists are sporting billboards apparently inspired by John Lennon: “Imagine…no religion.” What is striking about these slogans is the philosophy behind them. There is no claim here that God fails to satisfy some criterion of scientific validation. We hear nothing about how evolution has undermined the traditional “argument from design.” There’s not even a whisper about how science is based on reason while Christianity is based on faith. Instead, we are given the simple assertion that there is probably no God, followed by the counsel to go ahead and enjoy life. In other words, let’s not let God and his commandments spoil all the fun. “Be good for goodness sake” is true as far as it goes, but it doesn’t go very far. The question remains: what is the source of these standards of goodness that seem to be shared by religious and non-religious people alike? Finally John Lennon knew how to compose a tune but he could hardly be considered a reliable authority on fundamental questions. His “imagine there’s no heaven” sounds visionary but is, from an intellectual point of view, a complete nullity. If you want to know why atheists seem to have given up the scientific card, the current issue of Discover magazine provides part of the answer. The magazine has an interesting story by Tim Folger which is titled “Science’s Alternative to an Intelligent Creator.” The article begins by noting “an extraordinary fact about the universe: its basic properties are uncannily suited for life.” As physicist Andrei Linde puts it, “We have a lot of really, really strange coincidences, and all of these coincidences are such that they make life possible.” Too many “coincidences,” however, imply a plot. Folger’s article shows that if the numerical values of the universe, from the speed of light to the strength of gravity, were even slightly different, there would be no universe and no life. Recently scientists have discovered that most of the matter and energy in the universe is made up of so-called “dark” matter and “dark” energy. It turns out that the quantity of dark energy seems precisely calibrated to make possible not only our universe but observers like us who can comprehend that universe. 
Even Steven Weinberg, the Nobel laureate in physics and an outspoken atheist, remarks that “this is fine-tuning that seems to be extreme, far beyond what you could imagine just having to accept as a mere accident.” And physicist Freeman Dyson draws the appropriate conclusion from the scientific evidence to date: “The universe in some sense knew we were coming.” Folger then admits that this line of reasoning makes a number of scientists very uncomfortable. “Physicists don’t like coincidences.” “They like even less the notion that life is somehow central to the universe, and yet recent discoveries are forcing them to confront that very idea.” There are two hurdles here, one historical and the other methodological. The historical hurdle is that science has for three centuries been showing that man does not occupy a privileged position in the cosmos, and now it seems like he does. The methodological hurdle is what physicist Stephen Hawking once called “the problem of Genesis.” Science is the search for natural explanations for natural phenomena, and what could be more embarrassing than the finding that a supernatural intelligence transcending all natural laws is behind it all? Consequently many physicists are exploring an alternative possibility: multiple universes. This is summed up as follows: “Our universe may be but one of perhaps infinitely many universes in an inconceivably vast multiverse.” Folger says that “short of invoking a benevolent creator” this is the best that modern science can do. For contemporary physicists, he writes, this “may well be the only viable nonreligious explanation” for our fine-tuned universe. The appeal of multiple universes—perhaps even an infinity of universes—is that when there are billions and billions of possibilities, then even very unlikely outcomes are going to be realized somewhere. Consequently if there was an infinite number of universes, something like our universe is certain to appear at some point. What at first glance seems like incredible coincidence can be explained as the result of a mathematical inevitability. The only difficulty, as Folger makes clear, is that there is no empirical evidence for the existence of any universes other than our own. Moreover, there may never be such evidence. That’s because if there are other universes, they will operate according to different laws of physics than the ones in our universe, and consequently they are permanently and inescapably inaccessible to us. The article in Discover concludes on a somber note. While some physicists are hoping the multiverse will produce empirical predictions that can be tested, “for many physicists, however, the multiverse remains a desperate measure ruled out by the impossibility of confirmation.” No wonder atheists are sporting billboards asking us to “imagine…no religion.” When science, far from disproving God, seems to be pointing with ever-greater precision toward transcendence, imagination and wishful thinking seem all that is left for the atheists to count on.
科技
2016-40/3983/en_head.json.gz/8082
Subscriptions put Apple in antitrust spotlight again Anthony Ha February 17, 2011 7:38 PM Tags: FTC, iPad, music, subscriptions Federal regulators are looking once more into Apple’s control over the applications available on the iPhone and iPad, according to a report in the Wall Street Journal. This time it’s Apple’s subscription feature for apps (which the company unveiled yesterday) that’s attracting antitrust scrutiny. The problem isn’t the subscription plan per se, in which Apple takes the same 30 percent cut that it does on App Store purchases, but rather the restrictions that Apple put around it. The company said that any app offering a subscription plan elsewhere has to offer it within Apple’s iOS app too, and at the same price. In addition, publishers cannot include links inside their app to purchase content or subscriptions elsewhere. The Justice Department and the Federal Trade Commission are both in the preliminary stage of their investigations, according to the Journal’s sources (who are “people familiar with the matter”), so they may not take any action against Apple or even launch a formal investigation. Eric Goldman, director of Santa Clara University’s High Tech Law Institute, told the Journal that Apple’s prohibition of links sounds like “a pretty aggressive position.” And the restriction on offering a better price elsewhere could be considered anti-competitive too if it distorts pricing. It’s widely believed that the FTC was investigating Apple last year for its ban on tools that converted non-native apps into iPhone apps, and that the investigation pressured Apple into backing off. So if this investigation gets real momentum, we may see another about-face. It’s also interesting to see that much of the opposition to Apple’s plan seems to be coming from music startups. Rhapsody said yesterday that its subscription model won’t work if Apple takes 30 percent, and today Last.fm’s co-founder said Apple “fucked over music subs for the iPhone.” The Journal article also includes complaints from music startups, including Axel Dauchez, president of French startup Deezer, who says giving Apple 30 percent of a subscription is “so obviously anticompetitive that it will never survive in Europe.” It’s not surprising that Apple is facing some of its loudest opposition from these companies, since the royalty costs for music make it notoriously difficult for startups in the music field to make money. Not even popular Internet radio app Pandora expects to make a profit this year.
科技
2016-40/3983/en_head.json.gz/8083
Mobile app monetization: Think business model, not ads. Oliver Roup, June 2, 2013 2:30 PM. Tags: affiliate marketing, app monetization, apps, freemium, in-app purchasing, Mobile Experience 2013, Oliver Roup, referral marketing, viglink, Wanelo. Oliver Roup is the CEO and founder of VigLink. According to statistics released by the Interactive Advertising Bureau, mobile advertising spend rose to $3.4 billion in 2012, up 111 percent from the prior year's record levels, and mobile advertising now accounts for 9 percent of all digital revenue. Spending on mobile devices continues to accelerate at an aggressive pace; naturally, advertisers are following the consumers. Despite this growth, the traditional display ad model of advertising that dominates the Internet just does not translate well to the several inches of screen available on the typical smartphone. On a 21-inch screen, a small banner ad simply becomes part of the scenery, and doesn't take up so much real estate that it hurts the user experience. While mobile-specific ads are designed to be less obtrusive, they still occupy a significant number of pixels – space which could have been used to improve the app experience. These ads are simply taking the desktop ad model that has been around for years and crudely molding it to the smartphone. Considering the limitations of mobile banner ads, what are the alternatives to traditional display or interstitial ads? Popular with many games and some service-oriented applications, the "freemium" model offers a stripped-down experience for free and a fully featured app for those who pay. In a game setting it could mean payment unlocks certain levels, while the basic levels remain free. Freemium can be a way to drive initial traffic, but it does have its downsides. A core drawback is that the app developer is intentionally providing a large segment of its users (the "free" ones) with an inferior experience or service. Affiliate and Referral Marketing: Affiliate marketing through links is one track mobile developers are taking to monetize applications without devoting valuable screen space to a third party. New services are automatically transforming existing mobile links into revenue-generating links by directing the user to one of thousands of retailers. Instead of an ad, viewers are unobtrusively presented links within relevant content. The popular mobile and social shopping app Wanelo (also available as a web app) showcases products available for sale, each of which is shared by a member of the Wanelo community. The app lets users follow certain stores or individuals and discover products they love. These discoveries turn into purchases, and these purchases provide Wanelo with referral revenue. Another form of affiliate marketing for apps is to promote other mobile apps with the goal of earning commissions when the app is purchased and downloaded. These types of ads typically have a more integrated and natural feel within the app, especially if the content or style of the two applications are natural complements (for example, a skiing game referring users to a mountain weather application). In-App Purchasing: Beyond display ads and affiliate marketing, a major and growing mobile monetization strategy is in-app purchases. Buying these virtual goods typically provides deeper levels of engagement with the app content. Games might offer virtual currency or the ability to buy a personalized avatar. When designed well, in-app purchasing can turn a "free" app into a highly lucrative income stream.
Indeed, some of the highest grossing apps are free. Which monetization strategy to choose depends on several factors including the purchasing habits of the target audience and the type of application. For example, many shopping-related apps lend themselves to affiliate style marketing, while games are a natural fit for in-app purchasing. For a service, a developer could consider charging a small fee for the app to make it more exclusive, after a period of free trials. Finding the right mobile monetization technology partner is another important step for developers. It’s important to select a partner with flexible APIs, with the developer having the freedom to only make the API calls they want, when they want. Monetization can also be tailored to the capabilities of today’s devices. For example, cameras can enable bar code scanners, GPS enables local targeting, and of course, it’s a phone and many businesses are willing to pay per call. Monetization should not be an afterthought for developers. Business models should be considered first, and then developers should create or integrate technology that will enable the business model to be successful. When app development is approached in this manner, it is unlikely many business models will include relegating 10% of available screen space to static ads. Oliver Roup is founder and CEO of VigLink, an automated affiliate marketing company that works with more than 30,000 merchants to maximize content publisher revenue via links. He brings over 15 years of software experience to VigLink. Previously, he was a Director at Microsoft in charge of product for various media properties including XBOX Live Video Marketplace, Zune Marketplace and MSN Entertainment. iPhone 5 photo by Devindra Hardawar/VentureBeat
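To make the affiliate-link mechanism described above concrete, here is a minimal sketch of how a service might rewrite outbound product links into revenue-generating links. It is illustrative only: the merchant list, the "aff_id" parameter name, and the monetize_url and rewrite_links helpers are hypothetical and are not taken from any real network's API.

import re
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Hypothetical merchants with which the network has revenue-sharing agreements.
AFFILIATED_MERCHANTS = {"example-store.com", "shop.example.org"}

def monetize_url(url, affiliate_id):
    # Append a tracking parameter to links that point at affiliated merchants;
    # all other links are returned unchanged, so the reading experience is identical.
    parts = urlparse(url)
    host = parts.netloc.lower()
    if host.startswith("www."):
        host = host[4:]
    if host not in AFFILIATED_MERCHANTS:
        return url
    query = dict(parse_qsl(parts.query))
    query["aff_id"] = affiliate_id  # hypothetical parameter name
    return urlunparse(parts._replace(query=urlencode(query)))

def rewrite_links(html, affiliate_id):
    # Rewrite every href in a blob of HTML; a production service would use a real parser.
    return re.sub(r'href="([^"]+)"',
                  lambda m: 'href="%s"' % monetize_url(m.group(1), affiliate_id),
                  html)

if __name__ == "__main__":
    sample = '<a href="https://www.example-store.com/item?sku=42">buy it</a>'
    print(rewrite_links(sample, "pub-1234"))

When a reader follows such a rewritten link and buys something, the merchant reports the referral and the commission is split between the publisher and the network, which is why this approach costs no screen real estate in the app itself.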
科技
2016-40/3983/en_head.json.gz/8098
Drought Puts The Squeeze On Already Struggling Fish Farms. By Kristofor Husted. Photo captions: Catfish swim in a tub outside the Osage Catfisheries office (Kristofor Husted / KBIA News); Osage Catfisheries drained this catfish pond to make room for another, potentially more lucrative species, such as paddlefish or bass. Originally published on January 3, 2013 6:10 pm. This year's drought delivered a pricey punch to US aquaculture, the business of raising fish like bass and catfish for food. Worldwide, aquaculture has grown into a $119 billion industry, but the lack of water and high temperatures in 2012 hurt many U.S. fish farmers who were already struggling to compete on a global scale. At Osage Catfisheries, about one mile off the highway in rural, central Missouri, there are dozens of rectangular ponds with rounded corners. Some of them are empty, some have water, but not one is completely full. Co-owner Steve Kahrs dons a pair of shorts on an unusually warm December day and surveys his ponds. Today, the water is fairly still with a few ripples from the warm breeze. He stands in front of one pond filled with catfish about eight to 12 inches long and points to the dirty rings circling up a white PVC pipe for about a foot before it becomes white again. "They're out of the water a ways," he says. "Our average depth is still about five feet. But we're a good 10 or 12 inches down of where we'd keep it." Down the highway a few miles, Kahrs' office can be found in a small house next to two tiny ponds where his father first started raising fish about 60 years ago. Scattered about the property are sinks and large tubs filled with catfish, bluegill and paddlefish. Kahrs says this year, the drought proved to be tough on the family business — one that sorely depends on water. "We did fall short on our production numbers that we wanted," he says. The dry conditions and high temperatures forced many fish farmers here to dig deep to keep their fish healthy and fed. For Kahrs, that meant paying for more energy to pump clean, cool water out of his wells and into the ponds around the clock. "We probably chewed through about 30 percent more of our power than we did the year before, and the year before was not that good," he says. It wasn't just the water levels. The soaring temperatures in the summer turned up the burner on the ponds. When that happens, oxygen levels in the water drop and a fish's metabolism slows down. To counteract this, Kahrs says he was pumping water nearly every day from April through September. Farmers also saw the price of fish feed shoot up because it contains soybean and corn, which underperformed this year because of the drought. John Hargreaves, a former aquaculture professor at Mississippi State University who now consults for global aquaculture development projects, says acreage of catfish ponds has dropped considerably since the early 2000s. The rising production costs of fish farming, erratic weather and a less expensive type of catfish from Asia have all hurt the catfish industry in the US. "Production is down, and one of the big drivers for that was the increase in imports of pangasius catfish from Vietnam, China and so forth," he says. "Those imports have substituted for domestic catfish." Between 2010 and 2011, 20 percent of domestic catfish farms shut down. And as we reported before, seafood imports are also hurting the domestic shrimp market. So what can fish farmers do to survive the stiff competition and spells of inhospitable weather?
Some researchers have been looking into modifying the pond system to make it more energy efficient. Others are experimenting with new feed recipes requiring less expensive ingredients. But, Hargreaves says, even that might not be enough to save this domestic industry. "There's no silver bullet or game changer. That's for certain," he says. In the meantime, Kahrs plans to repurpose at least 20 acres of catfish ponds to raise other species, like paddlefish and bass. He hopes they'll be more lucrative. But most of all he's hoping for a solid snowpack this winter and lots of rain in the spring. Copyright 2013 KBIA-FM. To see more, visit http://www.kbia.org.
科技
2016-40/3983/en_head.json.gz/8180
Industry. New biological registered by the EPA. By AgBiTech, March 07, 2014 | 5:02 pm EST. AgBiTech achieved U.S. Environmental Protection Agency approval for a new biological insect control to help U.S. row-crop farmers "redefine caterpillar management" in major crops. Heligen is the new biological-based solution from the Australian-based AgBiTech Pty Ltd. for control of Helicoverpa, a genus of moths. Heligen is the first product released by the company in the U.S., and it closely follows the recent opening of the company's research laboratory at the University of Missouri Life Science Business Incubator in Columbia, Mo. AgBiTech CEO Anthony Hawes says there is a large potential market in the U.S. for solutions that selectively target agricultural pests to help make farming more profitable and sustainable. He suggested that the key to AgBiTech's success outside the U.S. and now in the U.S. is in making biological-based products like Heligen accessible to farmers, at the right price. The company anticipates a strong demand for Heligen, particularly among farmers growing soybeans, sweet corn, sorghum, tomatoes and other crops. Heligen incorporates a natural baculovirus-based technology, Helicoverpa NPV, which kills only Helicoverpa and Heliothis pests such as corn earworm and tobacco budworm. Depending on the situation, growers can use the product early in the season to suppress the pest, later as a stand-alone application or in combination with other pest control methods to minimize the risk of further sprays. AgBiTech released similar products in Australia in 2002 and in Brazil in 2013. The company contends that while the product is new to the U.S., AgBiTech is recognized as a leader in Helicoverpa NPV, which plays a vital role in Helicoverpa management in Australia and against the insect problem that is rapidly expanding in South America. It is therefore important that AgBiTech be in a position to respond rapidly to grower demand. This could be an issue if, for example, the devastating pest Helicoverpa armigera finds its way into the southern U.S. This pest had a multi-billion dollar impact on production in Brazil when it appeared a couple of seasons ago, but AgBiTech's product is emerging as a vital tool in helping growers manage the insect, the company claims. In addition to releasing Heligen, AgBiTech also expects to begin the U.S. EPA registration process in 2014 for a second product targeting fall armyworm (Spodoptera frugiperda). Through its U.S. subsidiary (AgBiTech LLC) the company is funding a cooperative research and development project with the U.S. Department of Agriculture (USDA) to develop the new product. Based on its proprietary baculovirus-manufacturing platform, this product is one of several new solutions under development by AgBiTech, whose pipeline includes viruses against soybean looper (Pseudoplusia incudens) and diamond-back moth (Plutella xylostella).
科技
2016-40/3983/en_head.json.gz/8183
Congressman Markey: SRS MOX project a 'disaster'. The mixed oxide fuel fabrication facility, currently under construction at the Savannah River Site, is under renewed criticism from Washington, with allegations that the project's cost and environmental impact are far greater than expected. The latest query comes from Congressman Edward Markey (D-Mass), a senior member of the U.S. House Energy & Commerce Committee, who describes the project as "a disaster." "The government's plutonium plan is a pluperfect disaster," said Rep. Markey. "It is over budget, riddled with delays and problems and is producing a product that no one wants. And all to produce $2 billion worth of reactor fuel at a cost of tens of billions of taxpayer dollars and damage to our global non-proliferation efforts." The facility, originally estimated to cost $4.8 billion, is designed to dispose of 34 metric tons of weapons-grade plutonium. The facility will blend the plutonium with uranium to make reactor fuel for power plants. As part of a non-proliferation agreement with the Russian Federation, the process will leave the plutonium in a form unsuitable for weapons. Markey, a longtime opponent of the facility, spoke on Monday after he wrote a letter to Energy Secretary Steven Chu criticizing the program, its cost and the fact that the end product has no buyer. Markey also asked for 26 questions relating to the project to be answered by Feb. 15. The project "may be both wasting taxpayer dollars and ultimately failing to reduce our stores of surplus weapons-grade plutonium," Markey wrote. In his letter, he cited newspaper and industry publication reports that estimate construction costs may be $2 billion over the initial $4.8 billion cost. He also faulted the project for not having any customers lined up for the facility, which is projected to start running in 2016. "With respect to Representative Markey's concerns regarding finding a buyer for the fuel, utility companies do not announce future business decisions publicly years in advance," said Caroline Delleney, communication director for Congressman Joe Wilson (R-S.C.). "Great progress has been made in locating potential customers, and (Wilson) is supremely confident that there will be multiple buyers for MOX fuel." Wilson, a supporter of the project, took issue with his colleague's assessment of MOX fuel and the MFFF. "Congressman Wilson believes the MOX project at SRS is an essential national security program which is over 50 percent completed," said Delleney. "This program serves a vital purpose, as it will allow the United States to honor its international nuclear nonproliferation obligations with Russia. Once operational, the facility will begin to disposition 34 metric tons of weapons grade plutonium, approximately 17,000 nuclear weapons, into fuel for commercial nuclear reactors."
科技
2016-40/3983/en_head.json.gz/8231
APC and Hivos launch 2012 GISWatch on "the internet and corruption," during this year's IGF. Nov 5 (APC). On 7 November 2012, the Association for Progressive Communications (APC) and the Humanist Institute for Cooperation with Developing Countries (Hivos) will launch the 2012 edition of the Global Information Society Watch in Room 9 at 12:30 local time during the second day of the Internet Governance Forum at the Baku Expo Centre, in Baku, Azerbaijan. Produced annually by APC and Hivos since 2007, the 2012 GISWatch edition addresses the theme "The internet and corruption – Transparency and accountability online." Through nine thematic reports and over 50 country reports, it explores how the internet is being used to ensure transparency and accountability, the challenges that civil society activists face in fighting corruption and accounts of when the internet fails as an enabler of a transparent and fair society. By focusing on individual cases and stories of corruption, the country reports take a practical look at the role of the internet in combating corruption at local, national, regional and global levels. The presentation will include Valeria Betancourt on behalf of APC and Monique Doppert and Loe Schout on behalf of Hivos. Short interventions by some of the reports' authors present at the event will take place, among them Shawna Finnegan (Canada), Ritu Srivastava (India), Giacomo Mazzone (Italy), Alice Munyua (Kenya), Shahzad Ahmad (Pakistan) and Valentina Pelizzer (Bosnia and Herzegovina). The International Institute for Sustainable Development (IISD) will join APC and Hivos in the presentation to launch the publication "ICTs, the Internet and Sustainability: Where Next?" written by David Souter and Don MacLean. The Internet & Society Co:llaboratory will also join APC, Hivos and IISD to present MIND 4. MIND stands for Multistakeholder Internet Dialogue. The 4th edition of MIND discusses Human Rights and Internet Governance. The key article was written by Shirin Ebadi, Nobel Peace Prize winner 2003 from Iran. Her article is discussed from various stakeholders' perspectives: governments, private sector, civil society and technical community. Among the commentators are Carl Bildt, Swedish Foreign Minister, Marjette Schaake, member of the European Parliament, Jeremy Brookes, President of GNI, Markus Kummer, Vice-President of ISOC, Joy Liddicoat, APC's Internet rights are human rights coordinator, and Jeremy Malcom, President of Consumer International. MIND is published by the Collaboratory Internet & Society in Berlin, edited by Wolfgang Kleinwächter and supported by Google. APC, Hivos, IISD and the Internet & Society Co:llaboratory invite all people present at the IGF to join the launch of these timely and promising publications. Download the 2012 GISWatch preview. Download the MIND 4 edition. The Association for Progressive Communications (APC) is an international network and non-profit organisation founded in 1990 that wants everyone to have access to a free and open internet to improve lives and create a more just world. www.apc.org (END/2012)
科技
2016-40/3983/en_head.json.gz/8323
Prof. Dr. Hans-Werner Schock, department head and spokesman for Solar Energy Research at Helmholtz-Zentrum Berlin (HZB), received the prestigious "Becquerel Prize" at the 25th "European Photovoltaic Solar Energy Conference and Exhibition" in Valencia. The EU Commission honoured the HZB scientist for his life's work in the field of photovoltaics. The award ceremony took place as a highlight of the European photovoltaics conference, which was held this year together with the 5th "World Conference on Photovoltaic Energy Conversion". Prof. Schock received the "Becquerel Prize" following his plenary lecture on "The Status and Advancement of CIS and Related Solar Cells". The chairman was Daniel Lincot, head of solar energy research at the Ecole Nationale Supérieure de Chimie de Paris. Prof. H.-W. Schock was distinguished by the committee for his outstanding performance in the field of solar energy technology and the development of thin-film solar cells. The first pioneer tests on chalcopyrite-based solar cells took place under his direction as early as 1980, and were to make solar energy more efficient and more competitive. Such solar cells are made of copper-indium-sulphide (CIS) or copper-indium-gallium-selenide (CIGSe), for example. At present, Hans-Werner Schock's group is researching new material combinations of abundant, environmentally friendly chemical elements and is continuing to refine solar cells based on these materials. The solar cells developed at HZB under Hans-Werner Schock's leadership hold several efficiency records: CIS cells in the high-voltage range (12.8%), flexible cells made from plastics (15.9%) and conventional CIGSe cells (19.4%). The aim is for "solar cells to be integrated into buildings, for example, no longer as an investment, but as a matter of course," says Schock. Scientific director for Research Field Energy at HZB, Prof. Dr. Wolfgang Eberhardt, is delighted about the award: "With its research on thin-film solar cells, HZB has made it its duty to develop the technology for our future energy supply. Mr. Schock's work is a major contribution to this. We are delighted about the worldwide recognition his work has found, and congratulate Mr. Schock on receiving this award." Hans-Werner Schock, born in 1946 in Tuttlingen, studied electrical engineering at the University of Stuttgart and earned his doctorate at the Institute of Physical Electronics, where he later became scientific project leader of the research group "Polycrystalline Thin-Film Solar Cells". Since 2004, he has worked at HZB as department head of the Institute for Technology. He is author and co-author of more than 300 publications and has submitted and been involved in more than ten patents in the field of solar energy technology. The "Becquerel Prize" was first awarded in 1989 on the occasion of the 150th anniversary of Becquerel's classic experiment on the description of the photovoltaic effect. With it, French physicist Alexandre Edmond Becquerel laid the foundation for the use of photovoltaics. Contact: Dr. Hans-Werner Schock, hans-werner.schock@helmholtz-berlin.de, Helmholtz Association of German Research Centres. Source: Eurekalert.
科技
2016-40/3983/en_head.json.gz/8421
Exemplary Science for Prevention of Work Injury, Illness Highlighted in NIOSH Papers June 15, 2004 NIOSH Update: Contact: Fred Blosser (202) 401-3749 Nine scientifically exemplary studies by researchers from the National Institute for Occupational Safety and Health (NIOSH) are nominated by NIOSH for a prestigious 2004 government science award. NIOSH also submitted nominations for an outstanding contribution to public health and a lifetime scientific achievement. NIOSH submitted the nominations for the 2004 Charles C. Shepard Science Awards, sponsored by the U.S. Centers for Disease Control and Prevention (CDC), of which NIOSH is a part. The awards recognize excellence in science at CDC during 2003. CDC will announce the winners on June 21, 2004. "These nominations illustrate NIOSH's ongoing leadership on the frontiers of health and safety science, and they show that our research is among the best in the world," said NIOSH Director John Howard, M.D. "The discoveries we make and the new scientific tools that we develop and use are essential for the increasingly complex task of preventing work-related injuries, illnesses, and deaths." The studies nominated by NIOSH for the Shepard Awards were published in peer-reviewed journals in 2003. The nine papers: Illustrate how NIOSH applies advanced laboratory techniques to identify potential health effects from workplace exposures, including effects at subtle levels in genes and cells that may help explain how occupational exposures can lead to cancer and other illnesses. The nominated papers include four such studies that produced new data for better understanding and assessing potential risks of work-related cancer, toxic effects from asphalt fumes, potential effects from arsenic exposures, and exposures to low-solubility particles. Highlight NIOSH's innovative use of death certificates, illness surveillance systems, and rigorous statistical methods to identify workplace exposures that may cause disease, and to identify worker populations that face serious risk of such illnesses. One of the nominated studies generated new information on statistical associations between crystalline silica exposure and risks for various illnesses. Another study identified a risk for acute pesticide-related illnesses in working youths. Display NIOSH's practical experience in devising and improving engineering controls and personal protective equipment. In this regard, three of the papers report findings from studies that, respectively, 1) used three-dimensional laser scanning technology to help design fall-protection harnesses for today's diverse workforce, 2) investigated respirator fit factors as an indicator of whether respirators perform as expected in actual workplace environments, and 3) evaluated the contributions of different engineering control measures for reducing levels of respirable dust generated in longwall mining operations. NIOSH nominated the editors and staff of the "NIOSH Pocket Guide to Chemical Hazards" for outstanding scientific contribution to public health. The pocket guide is a key resource for occupational health professionals, employers, employees, and others. With the support of the National Technical Information Service and the private sector, CDC/NIOSH has disseminated more than 2.5 million paper and CD-ROM copies of the Pocket Guide to customers around the world. NIOSH nominated Marilyn A.
Fingerhut, Ph.D., for the lifetime scientific achievement award to recognize her outstanding career of scholarship and leadership in preventing occupational disease, injury, and death among workers. During Dr. Fingerhut's 20-year career, she has conducted innovative and groundbreaking research on dioxin, established herself as a champion and expert for occupational women's health issues, and has advanced global occupational health risk assessment. She also was instrumental in the development of the National Occupational Research Agenda (NORA). The NIOSH nominations will appear on the NIOSH web page at www.cdc.gov/niosh/updates/shepard2004.html. For further information about NIOSH research and recommendations for preventing work-related injuries, illnesses, and deaths, call the NIOSH toll-free information number 1-800-35-NIOSH or visit the NIOSH Web site at www.cdc.gov/niosh.
科技
2016-40/3983/en_head.json.gz/8435
Chandra X-ray Observatory Center 60 Garden St. Cambridge, MA 02138 USA http://chandra.harvard.edu X-ray Flare in Sagittarius A*: A compact radio source in the center of our Milky Way Galaxy thought to be a supermassive black hole. (Credit: NASA/MIT/F.Baganoff et al.) Caption: During a Chandra observation of the X-ray source located at the galactic center, the source was observed to brighten dramatically in a few minutes. After about 3 hours, the source declined rapidly to its pre-flare level. This is the most compelling evidence yet that matter falling toward the supermassive black hole at the center of our galaxy is fueling energetic activity. The amount of material needed to produce the energy released in the flare would have had the mass of a comet. Scale: Image is 8 arcmin on a side. Chandra X-ray Observatory ACIS Image CXC operated for NASA by the Smithsonian Astrophysical Observatory
科技
2016-40/3983/en_head.json.gz/8546
CIO Awards 2013. CIO OF THE YEAR: MARK DANCZAK, Cohen & Co., Cleveland. In less than six years at Cohen & Co., Mark Danczak has earned a seat at the table in the firm's strategic planning process. "Mark has achieved the rare IT executive level of 'trusted business partner,' displaying his ability to think strategically, act tactically and motivate others to buy into ideas, concepts and values," the nomination said. "He believes that IT is the most valuable when it enhances the performance of the organization, so Mark has made it his mission to fully integrate IT with the business strategy of the firm to deliver maximum return on the investment." At the same time, Mr. Danczak recognizes that the adoption of fast-changing technologies needs to be paced so that its impact on the business is understood and its results are valuable. "Mark understands that too much of a good thing can be a negative. Technology advances at lightning speed and can be difficult to digest in any meaningful way for an organization," the nomination said. "Mark recognizes this and, instead of imposing technology on the firm for technology's sake, he purposefully dictates pace. Mark takes time to evaluate the potential business impact of new technologies to determine how value can best be realized, and how to deliver it in a meaningful way." In that context, it's no surprise that Mr. Danczak has created an initiative called the "balanced scorecard" at Cohen & Co. The scorecard assesses how IT is creating value in the organization as well as taking into account the staff perspective and business factors. Nevertheless, Cohen & Co. is vigorously pursuing IT initiatives. Last year, Mr. Danczak's team implemented a new customer relationship marketing program. This year it is preparing to roll out a new firm-wide scheduling solution for staff and the firm's engagements. The firm is also completing a move to a single tax program to eliminate redundant programs. "Mark 'sees the customer' in everything that he does. He has a unique, perfectly clear vision of the value proposition any given initiative can bring to our customers," the nomination said. "Adding value to our clients is Mark's guiding principle." For the next five years, Mr. Danczak is charged with leading the effort to provide office transparency so staff can work where and when they need to increase productivity and create greater work-life balance to enhance the firm's recruitment and retention efforts. Mr. Danczak also contributes to the community in diverse ways. He is a founder of the Ohio Microsoft CRM users group and Microsoft IT Advisory Council and is active with the Healthcare Information and Management Systems Society trade group at the national and local levels. He serves on the board of directors of Noteworthy Federal Credit Union, which targets the unique needs of artists, and has coached youth ice hockey for more than 15 years in Garfield Heights and at Trinity High School.
科技
2016-40/3983/en_head.json.gz/8549
Nvidia: Tegra 2 Will Usher In The 'Super Phone' Era. By Rob Wright on January 5, 2011, 7:06 pm EST. Nvidia introduced its Tegra 2 mobile processor at last year's CES and showed the product powering a number of new tablet devices. At this year's show, Nvidia demonstrated the product on a new device the company affectionately refers to as a "super phone." While the second generation of Tegra may be last year's news, Nvidia showed off the mobile platform powering LG Electronics' new Optimus 2X smartphone at Nvidia's CES press conference Wednesday. Nvidia President and CEO Jen-Hsun Huang called the Optimus 2X "the industry's first Super Phone," describing the new product category as the next generation of smartphones, which will be as powerful and feature-rich as PCs. As described by Nvidia, a super phone includes four-inch-plus screens, single-core, 1-GHz mobile processors, five-plus-megapixel cameras, and multiple microphones for video and gaming experiences. In essence, Huang said, a super phone will be "a computer first and a phone second," and touted the Tegra 2 chip as the platform of choice for the new product category. For example, Huang demonstrated the Optimus 2X's full 1080p HD video playback support (the new phone even has an HDMI connection). The new LG phone, which runs on Google's Android OS, also comes with enough performance to handle console-quality gaming. Huang demonstrated several games on an Optimus 2X, including the popular mobile game Angry Birds and Dungeon Defenders, an upcoming PC/console game that can run on Tegra 2 super phones as well. Huang also said the Tegra 2 platform offers fast overall Web performance. The dual-core chip, based on ARM's Cortex-A9, offers five times faster Flash performance than other mobile chips, thanks to Nvidia's close partnership with Adobe, Huang said. At one point during the press conference, Huang was joined on stage by Adobe President and CEO Shantanu Narayen, who voiced his support for Tegra 2 and said that Flash-based content continues to grow rapidly, doubling in the last two years despite the absence of support on other platforms like Apple's iOS. While Tegra 2 was promoted by Nvidia at last year's show as a major tablet platform, it's clear the graphics technology company, which has been moving more and more into the CPU market, has big plans to leverage the monster growth seen in the smartphone industry.
科技
2016-40/3983/en_head.json.gz/8556
Graduation and deepening: moving climate policy forward. By Axel Michaelowa. Climate policy is a titanic clash of interests. Emitters of greenhouse gases (GHG), both rich and poor, do not want to bear costs of reducing emissions. Consequently, implementation of climate policy measures on a national scale has often been stalled. Rich and poor alike want to protect themselves against the impact of climate change – but the latter do not have the means to do so. Thus international climate negotiations will become the world trade negotiations of the future. And like those, they will fail from time to time. But they will never be derailed completely because the alternative is a "fortress world" of the rich. Fortresses have never managed to hold out indefinitely… Currently, a bleak mood prevails. The US does not want to participate in the global effort to reduce emissions. Russia holds the Kyoto Protocol hostage to extort as many concessions as possible. Developing countries go berserk whenever someone utters the word "targets". But we will not stay in this abyss forever as several trends emerge that will strengthen climate policy in the medium term. First and foremost, natural science about climate change tends to become more alarmist. The sensitivity of the climate systems to GHG forcing seems to be larger than thought so far. The threshold to avoid dangerous climate change seems to be more around 1.5°C change from pre-industrial values rather than the hitherto suggested 2°C. We have already reached 0.8°C and since 1990 the threshold of 0.2°C decadal change has been exceeded, leading to increased meteorological extremes such as floods and heat waves. Events like this year's summer in Europe that exceeded the previous instrumental record by up to 3°C and led to over 10,000 heat-related deaths generate an increasing awareness in the public that impacts of climate change can be disruptive. Policy pressure for stronger mitigation efforts has already materialised in some countries, e.g. Germany and the UK. Both countries have announced strong long-term mitigation targets. Moreover, a tendency emerges to make companies liable for climate change impacts due to their emissions. Several NGOs are campaigning in this direction and the small island state of Tuvalu has announced the first legal action. In the 21st century, climate change litigation may become what asbestos and tobacco litigation were in the last years of the 20th century. The challenge is to bend the current emissions growth trend downwards quickly without stifling the growing energy needs of developing countries. Recent promising cost reductions of some forms of renewable energy increase the hope that eventually the gap between fossil fuels and renewables will be closed. The Kyoto Protocol structure is efficient due to its international market mechanisms and adaptable due to the concept of subsequent commitment periods with adjustment of rules. It contains innovative verification and compliance provisions. Beyond 2012, one needs to elaborate the Kyoto framework in a way that eliminates its deficits. The most important issue in this regard is to widen its geographical scope whilst increasing target stringency in an adequate way. Developing countries can only be asked to adopt emission targets ("Graduation") when reduction commitments of industrialised countries are considerably strengthened ("Deepening").
We have developed a "Graduation and deepening" scenario for 2013-2017. Its key elements are:
• an average current Annex B (industrialised countries) target of –23 per cent compared to 1990, i.e. –17 per cent from 2012. "Hot air" elimination prevents each group of graduating countries from having to receive hot air to entice them into the system. All types of sinks, terrestrial and marine, would be available to reduce costs and to get reluctant industrialised as well as current hot air countries into the regime. Countries would be liable for a reversal of sinks;
• concentric rings of graduation of current Non-Annex-B countries defined by thresholds of a "graduation index", calculated on the basis of per capita emissions and income. The thresholds would be: the average of the current Annex B, the lowest level of Annex II (i.e. countries financing the Global Environment Facility or GEF), and the lowest level of Annex B;
• targets for the graduating countries that become less stringent the lower the threshold. The base year is 2012, meaning that reductions start relatively slowly. Any country that does not accept graduation would lose the rights to host CDM projects and to receive any funding (GEF, adaptation fund) under the Kyoto Protocol and the Convention.
The large emitters such as China, India and Indonesia do not graduate under this scheme and can use a CDM extended to include policies and sectoral approaches. Axel Michaelowa is with the Hamburg Institute of International Economics, Neuer Jungfernstieg 21, 20347 Hamburg, a-michaelowa@hwwa.de Source URL: http://www.cseindia.org/content/graduation-and-deepening-moving-climate-policy-forward-0
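The proposal above does not spell out the formula behind the "graduation index"; it says only that the index combines per capita emissions and per capita income and is compared against three thresholds. The sketch below is purely illustrative: the weighting, scales, thresholds and country figures are invented placeholders rather than values from Michaelowa's scenario, but it shows how such an index could bucket countries into the concentric graduation rings.

```python
# Illustrative sketch only: the article does not give the exact formula for the
# "graduation index", so this simply averages normalised per-capita emissions and
# per-capita income. All thresholds and country figures below are placeholders,
# not data from the proposal.

def graduation_index(emissions_per_capita, income_per_capita,
                     emissions_scale=20.0, income_scale=40000.0):
    """Combine per-capita emissions (t CO2) and income (USD) into one score."""
    return 0.5 * (emissions_per_capita / emissions_scale) + \
           0.5 * (income_per_capita / income_scale)

# Hypothetical thresholds for the three "concentric rings": average of current
# Annex B, lowest Annex II member, lowest Annex B member.
THRESHOLDS = [
    ("inner ring (strictest targets)", 0.60),
    ("middle ring", 0.40),
    ("outer ring (mildest targets)", 0.25),
]

def assign_ring(index):
    for ring, threshold in THRESHOLDS:
        if index >= threshold:
            return ring
    return "no graduation (keeps CDM access)"

# Example with invented numbers for two unnamed countries.
for emissions, income in [(9.0, 15000.0), (1.5, 2000.0)]:
    idx = graduation_index(emissions, income)
    print(f"index={idx:.2f} -> {assign_ring(idx)}")
```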
科技
2016-40/3983/en_head.json.gz/8561
The Pacific isn't the only ocean collecting plastic trash A swirling 'soup' of tiny pieces of plastic has been found in the Atlantic Ocean, and something similar may be present in other ocean areas as well. By Kristen Chick, Contributor to The Christian Science Monitor Many cities are banning plastic shopping bags or passing laws forcing stores to charge for the bags. Plastic bags are a major contributor to the plastic marine debris situation in the oceans when the bags are washed to sea by rivers and runoff after rains. (Newscom) When Sylvia Earle began diving in 1952, the ocean was pristine. These days, things are different. “For the past 30 years I have never been on a dive anytime, anywhere, from the surface to 2-1/2 miles deep, without seeing a piece of trash,” says the renowned oceanographer and former chief scientist at the National Oceanic and Atmospheric Administration. “There’s life from the surface to the greatest depths – and there’s also trash from the surface to the greatest depths.” Dr. Earle’s experience illustrates the rising tide of plastic accumulating in the world’s oceans. And while the Pacific Ocean has garnered much attention for what some call the "Great Pacific Garbage Patch” – a vast expanse of floating plastic deposited in the middle of the ocean by circulating currents – the problem doesn’t stop there. New research shows that plastic has collected in a region of the Atlantic as well, held hostage by converging currents, called gyres, to form a swirling “plastic soup.” And those fragments of plastic could also be present at the other three large gyres in the world’s oceans, says Kara Lavender Law, a member of the oceanography faculty at the Sea Education Association (SEA) in Woods Hole, Mass., which conducted the study. Because the plastic has broken down into tiny pieces, it is virtually impossible to recover, meaning that it has essentially become a permanent part of the ecosystem. The full impact of its presence there – what happens if fish and other marine animals eat the plastic, which attracts toxins that could enter the food chain – is still unclear. “It's a serious environmental problem from a lot of standpoints,” Dr. Law says. “There are impacts on the ecosystem from seabirds, fish, and turtles, down to microscopic plankton.” The possible effect on humans is “a huge open question,” she adds. “If a marine organism were to ingest a contaminated plastic article, it could move up the food chain. But that is far from proven.” The data collected by SEA, from 22 years of sailing through the North Atlantic and Caribbean, show a high concentration of plastic fragments centered about 30 degrees north latitude (in the western North Atlantic), says Law. That aligns with the ocean’s circular current pattern. But don’t call this region the garbage patch of the Atlantic. Law, who has sailed through the plastic accumulation in the Pacific gyre as well, says the term “plastic soup” is more accurate for both areas. “There’s no large patch, no solid mass of material,” she says. Marcus Eriksen, director of education at Algalita Marine Research Foundation in Long Beach, Calif., agrees. The idea of a garbage “patch” or “island” twice the size of Texas, a favorite term in the media for the now-infamous spot in the Pacific, feeds misconceptions, he says. “It’s much worse. If it were an island, we could go get it. But we can’t,” because it’s a “thin soup of plastic fragments.” The plastic floating in the ocean comes mostly from land.
Dumping plastic at sea has been prohibited by an international convention since 1988, but about 80 percent of the plastic in the ocean flows from rivers, is washed out from storm drains or sewage overflows, or is blown out to sea from shore by the wind. According to the UN Environment Program, the world produces 225 million tons of plastic every year. Law says that analyses of the density of the plastics picked up in SEA’s research show that much of it potentially comes from consumer items made of polyethylene and polypropylene plastics, which includes plastic shopping bags, milk jugs, detergent bottles, and other items “common in our everyday lives.” Those post-consumer products eventually break down into small pieces – most of the fragments caught in SEA’s plankton nets are about the size of a pencil eraser. Fish, birds, and sea mammals can mistake those tiny pieces for food and eat them. Fish and birds caught in regions with high plastic concentrations have been found to have numerous bits of plastic in their stomachs. One of the puzzling aspects of SEA’s study is that it does not show an increase in concentration of plastics during the 22 years of sampling. “That’s one of the main questions we’re trying to answer with the data set,” says Law. “I believe the evidence shows there has to be more going into the ocean. The question is, why don’t we see an increase in this region where we collect." It’s possible that the plastics have broken down into such small pieces that they pass through the plankton nets, she says, or that bacteria or organisms growing on the pieces could cause them to sink. And some of the trash could escape to other areas of the ocean on wayward currents. When it comes to stemming the tide of plastic waste, there is no easy answer. Most experts agree that cleaning up the tiny pieces already swirling in ocean currents thousands of miles from land is impossible. Instead, the focus should be on prevention. Law says that education is key. It's important to raise awareness of what happens to the plastic that millions of people throw away every day. “There’s a perception that if you put it in a recycle bin, it will end up being recycled, but it’s not clear that’s always the case." Perhaps, experts speculate, the real reason that so much plastic ends up at sea is because so much of it is designed to be used once, then tossed. Dr. Eriksen says ending the throwaway design of plastics is essential to combating ocean pollution. “I'm not against plastic, I'm just against the way we abuse the material,” he says. “Knowing the environmental consequences of it, we have to rethink the responsible use of it.” Eriksen also advocates economic incentives for plastic recovery – such as giving plastic products a return value in recycling centers – and “extended producer responsibility,” in which manufacturers are responsible for the life cycle of their products. That would force producers to build the cost of recovery or recycling into the cost of the product.
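Law's density analysis works because common polymers have well-known densities relative to seawater. The short sketch below is not SEA's method; it simply illustrates the idea using typical textbook density ranges (in g/cm³), and it shows why surface tows tend to catch polyethylene and polypropylene while denser plastics such as PET sink.

```python
# A minimal sketch of density-based sorting, not SEA's actual method. The polymer
# density ranges are typical textbook values (g/cm^3); seawater is about 1.025.

POLYMER_DENSITIES = {
    "polyethylene (bags, milk jugs)": (0.91, 0.97),
    "polypropylene (caps, containers)": (0.89, 0.92),
    "PET (drink bottles)": (1.33, 1.40),
    "PVC": (1.30, 1.45),
}
SEAWATER = 1.025

def classify_fragment(density):
    """Return candidate polymers for a measured fragment density."""
    matches = [name for name, (lo, hi) in POLYMER_DENSITIES.items()
               if lo <= density <= hi]
    floats = density < SEAWATER   # lighter than seawater -> stays near the surface
    return matches, floats

for d in (0.93, 1.38):
    matches, floats = classify_fragment(d)
    label = ", ".join(matches) if matches else "unknown polymer"
    print(f"{d} g/cm^3 -> {label}; {'floats' if floats else 'sinks'} in seawater")
```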
科技
2016-40/3983/en_head.json.gz/8581
NASA ready to fly the Front Range as part of pollution test Planes will pass over Longmont By Scott Rochat, Times-Call staff writer A P-3B aircraft will also be part of the month-long NASA Front Range air pollution study, taking samples from 15,000 feet and 1,000 feet. The NASA planes will pass over Longmont as part of their flight plan, which covers an area between Castle Rock and Fort Collins. (Courtesy of NASA Wallops Flight / Longmont Times-Call) NASA wants to see where the Front Range is having a bad air day. Starting July 16, the space agency will fly a pair of airplanes over the Front Range to distinguish between high-altitude air pollution and the sort found closer to the ground. The information will be used to improve how satellites monitor pollution. The flights will continue through Aug. 20 and would pass over Longmont three times a day, according to NASA. "What they're trying to do is paint a three-dimensional picture over that area," said Michael Finneran, news chief for the agency's Langley Research Center in Hampton, Va. "That area contains a diverse mix of air pollution sources: transportation, power generation, oil and gas activity, agriculture, natural emissions from vegetation and some episodic wildfires. There's a lot of stuff going on there." One of NASA's planes will be too high to draw much attention — a King Air B200 that will operate at about 27,000 feet — but the other, a NASA P-3B Orion, could drop from 15,000 feet down to 1,100 feet as it spirals over sites on the ground. The Orion won't be the only low flier, either. During the same period, the National Science Foundation and the National Center for Atmospheric Research will fly a third plane, a C-130 Hercules, over the area for its own air pollution study — FRAPPE, the Front Range Air Pollution and Photochemistry Experiment — and could get as low as 1,000 feet above ground level, according to NCAR. "Both of these aircraft are large and noisy and we can expect some calls from residents, not just about noise, but from a safety perspective as well," Barth wrote in an email. FRAPPE will run through Aug. 16. The Hercules follows a different flight plan than NASA, Finneran said, and may range upwind and downwind of the Northern Front Range area to collect its samples. The planes are too heavy to land at Vance Brand but will instead base out of Rocky Mountain Metropolitan Airport in Broomfield, covering an area from Castle Rock to Fort Collins. Barth will work with traffic controllers in Denver to avoid interference with other flights over Longmont. This is the fourth and final set of flights for the NASA project, called "DISCOVER-AQ." Earlier flights covered the Baltimore and Washington, D.C. area in 2011, the San Joaquin Valley in California in January and February of 2013, and the Houston region last September. Finneran said he was surprised at how few complaints the Maryland flights drew three years ago. "We were really expecting a lot more reaction," he said. "We were flying in a high-service airspace, with millions of people below us, flying over major highways. But there wasn't really any reaction." The planes will enter from the west side of town and head east as they go to and from Fort Collins. NASA expects to get in at least 10 flights and maybe as many as 14 over the month-long project, Finneran said, but flights could be canceled due to weather or mechanical problems. Airplane noise issues have been controversial for Longmont in recent years.
In 2013, a group called Citizens for Quiet Skies sued Mile Hi Skydiving — which is based out of Vance Brand — over what it called the planes' incessant "drone." The case is currently scheduled for trial next April. Contact Times-Call staff writer Scott Rochat at 303-684-5220 or srochat@times-call.com A C-130 Hercules will also be taking air quality measurements along the Front Range for a study by the National Science Foundation and the National Center for Atmospheric Research. The study and flight plan are separate from NASA's but will take place over the same time period. (Courtesy of NSF/NCAR / Longmont Times-Call)
科技
2016-40/3983/en_head.json.gz/8600
Scientist Stephen Hawking Believes Humans Must go Into Space Michael Hoffman (Blog) - June 15, 2006 6:24 AM 90 comment(s) - last by RMTimeKill.. on Jun 30 at 3:10 PM Time for humans to start thinking about moving says Hawking For many years humans have dreamed of one day colonizing other planets and moons. Although research would be an important reason for the foreign bases, could the survival of the human race depend on whether or not we can colonize other planets? World-renowned astrophysicist Stephen Hawking recently said that humans need to colonize a planet or moon because the Earth might face destruction -- A man made disaster -- global warming being a good example -- or natural disaster could potentially destroy the planet. Although he believes humans can colonize the moon within 20 years, and establish a sufficient base on Mars within 40 years, humans "won't find anywhere as nice as Earth," unless we visit another solar system. The moon looks to be like an ideal place for a potential new colony. Not only does it appear to have everything needed to sustain humans, ice has also been found at its poles.Nations have been thinking about colonizing other planets for years. DailyTech earlier reported that NASA is working towards a permanent moon base that would be a stepping stone to allow astronauts to explore Mars firsthand. Swedish researchers are also studying different ways to have a self-sustaining colony on the moon. RE: WHY INeedCache "My religion consists of a humble admiration of the illimitable superior spirit who reveals himself in the slight details we are able to perceive with our frail and feeble mind." -Albert Einstein Just think about that for a bit, instead of some of the useless drivel that has been spewed forth in this forum. Parent Decaydence What is this quote meant to rebut? This is simply Einsteins affirmation of Deism or some variation of it. Just because you both choose to personify the forces that created the universe doesn't mean doing so is any less arbitrary. There is no evidence showing that an intelligent force created the universe therefore believing that is an excercise in creativity, not analysis. Parent There is no irrefutable evidence to support evolution, either. Yet many here want to make it fact. The quote was not meant to rebut, but to merely give many who have posted here something else to ponder, especially the part about our frail and feeble minds. Evolution is a fact? I guess for some, ignorance truly is bliss. Parent I didn't say "irrefutable evidence", I said there was no evidence whatsoever. There is evidence to support evolution. You can trace slight changes between species of different eras and even track their movement across the globe. I suppose you could say the evidence isn't irrefutable because we don't have video of it happening, but the evidence is certainly overwhelming and there is absolutely no evidence to the contrary or to any alternate explanations. If we can tell certain species didn't exist at a certain time, yet do now, how did they come to exist? Spontaneous generation? Magic? Did aliens bring them? Whatever the explanation you bring to bear other than evolution, it is going to be one with far less evidence to support it than evolution. If you don't agree with the statement that certain species didn't exist at a certain time yet do now, then you must believe that all the species we know to have existed at one point in time all existed at the same time, which is the dumbest thing I have ever heard in my life. 
There would obviously not be enough space on the planet for that to have ever been the case. Evolution is not a scientific fact simply because of a semantic distinction. Make no mistake about however, evolution has occurred on this planet. Parent > "There is no irrefutable evidence to support evolution, either" No, there is a vast amount of irrefutable proof for evolution itself. It occurs...and we have daily proof of it...where do you think new flu viruses come from each year? We've even proven that man himself has evolved, and is continually doing so. The only thing without irrefutable proof is the origin of mankind (and, by extension, of life itself). We have evidence...some of the strongest evidence in all science, in fact. But until we build a time machine and go back and take pictures of the event, it still remains "theory", no matter how much data we have to support it. NASA Works On Permanent Moon Base Swedish Plan to Colonise Space
科技
2016-40/3983/en_head.json.gz/8627
U.S. & World Plan to streamline solar development in West OK'd By Jason Dearen SAN FRANCISCO — Federal officials on Friday approved a plan that sets aside 285,000 acres of public land for the development of large-scale solar power plants, cementing a new government approach to renewable energy development in the West after years of delays and false starts. At a news conference in Las Vegas, Interior Secretary Ken Salazar called the new plan a "roadmap ... that will lead to faster, smarter utility-scale solar development on public lands." The plan replaces the department's previous first-come, first-served system of approving solar projects, which let developers choose where they wanted to build utility-scale solar sites and allowed for land speculation. The department no longer will decide projects on a case-by-case basis as it had since 2005, when solar developers began filing applications. Instead, the department will direct development to land it has identified as having fewer wildlife and natural-resource obstacles. The government is establishing 17 new "solar energy zones" on 285,000 acres in six states: California, Nevada, Arizona, Utah, Colorado and New Mexico. Most of the land — 153,627 acres — is in Southern California. The Obama administration has authorized 10,000 megawatts of solar, wind and geothermal projects that, when built, would provide enough energy to power more than 3.5 million homes, Salazar said. Secretary of Energy Steven Chu said the effort will help the U.S. stay competitive. "There is a global race to develop renewable energy technologies — and this effort will help us win this race by expanding solar energy production while reducing permitting costs," Chu said in a statement. The new solar energy zones were chosen because they are near existing power lines, allowing for quick delivery to energy-hungry cities. Also, the chosen sites have fewer of the environmental concerns — such as endangered desert tortoise habitat — that have plagued other projects. Environmental groups like the Nature Conservancy who had been critical of the federal government's previous approach to solar development in the desert applauded the new plan. "We can develop the clean, renewable energy that is essential to our future while protecting our iconic desert landscapes by directing development to areas that are more degraded," said Michael Powelson, the conservancy's North American director of energy programs. Some solar developers who already are building projects were complimentary of the new approach, saying it will help diversify the country's energy portfolio more quickly. Still, some cautioned that the new plan could still get mired in the same pattern of delay and inefficiency that hampered previous efforts, and urged the government to continue pushing solar projects forward. "The Bureau of Land Management must ensure pending projects do not get bogged down in more bureaucratic processes," said Rhone Resch, president of the Solar Energy Industries Association. Salazar said the country four years ago was importing 60 percent of its oil, and that today that number has dropped to 45 percent. "We can see the energy independence of the United States within our grasp," he said. A map of the new solar energy zones: http://on.doi.gov/SWf5y1 Jason Dearen can be reached at www.twitter.com/JHDearen
科技
2016-40/3983/en_head.json.gz/8642
Sony ‘subtitle glasses’ could be a hit with deaf moviegoers For deaf people who like to enjoy films on the big screen, choice is often limited when it comes to the offerings of the local movie theater. It’s usually only foreign-language movies that have subtitles, leaving the hard of hearing with little choice but to wait for the DVD release of other movies they want to see. And even then, who wants to watch a blockbuster on a small TV screen? In a short film on the BBC website, reporter Graham Satchell talked to Brit Charlie Swinbourne, who is hard of hearing, about the problem. “One in six people have some level of deafness and currently that audience isn’t being served well,” he said, adding: ”If you did serve them well, you could well be making more money out of them so there’s good reason for improving the service.” The solution could come in the form of a special pair of glasses being developed by Sony in the UK. Sony’s Tim Potter, who is helping with the design of the ‘subtitle glasses’, explained what they’re about. “What we do is put the closed captions or the subtitles onto the screen of the glasses so it’s super-imposed on the cinema screen, [making it look] like the actual subtitles are on the cinema screen,” he said. After trying them out, Charlie Swinbourne seemed pretty pleased with the effectiveness of the special specs. “The good thing about them is that you’re not refocusing. It doesn’t feel like the words are really near and the screen is far away. It feels like they’re together.” He continued: “It was a great experience. I think it’s a massive opportunity to improve deaf people’s lives and I think there’s great hope that this would give us a cinema-going future.” According to the BBC report, the glasses should become available in UK movie theaters next year, with presumably wider availability in the near future if they prove popular.
科技
2016-40/3983/en_head.json.gz/8643
US to test fastest aircraft ever, moves at 13,000mph Update: The HTV-2 has been lost in flight, click here for the full story. An aircraft will take to the skies on Thursday that flies so fast it would only take 12 minutes for it to travel from LA to New York. But before you start getting excited about the prospect of the technology being incorporated into the next generation of Boeing or Airbus aircraft, be aware that this is an unmanned machine being developed by the Pentagon’s Defense Advanced Research Projects Agency for military use. A Wall Street Journal report says that if all goes to plan on Thursday’s flight, the Pentagon’s Falcon Hypersonic Technology Vehicle 2, or HTV-2, could reach speeds of around 13,000mph – that’s 3.6 miles a second. Weather permitting (a flight scheduled for Wednesday was scrapped due to bad weather), the test flight will begin on Thursday morning at Vandenberg Air Force Base in California when an Air Force Minotaur IV rocket takes the HTV-2 to the edge of space. From there, the HTV-2 should separate from the rocket before flying over the Pacific at lightning speeds of up to Mach 20. At Mach 20, you could travel between London and Sydney in under 60 minutes. The US military is interested in developing the hypersonic aircraft so that it can possess a machine which would be capable of reaching any part of the world in less than an hour. Presumably it would be armed to the teeth with weapons rather than delivering flowers. Let’s hope Thursday’s test flight goes better than last year’s attempt when controllers lost contact with the craft just a few minutes after launch. The event won’t be shown online, though you can follow news and updates about it on the agency’s Twitter feed. Image: DARPA
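The article's travel-time claims are easy to sanity-check. The figures below use approximate great-circle distances assumed here (about 2,450 miles for Los Angeles to New York and roughly 10,600 miles for London to Sydney), not numbers from DARPA.

```python
# Rough back-of-the-envelope check of the article's figures. The great-circle
# distances are approximate values assumed here, not taken from the article.

SPEED_MPH = 13000            # reported top speed of the HTV-2
ROUTES = {
    "Los Angeles -> New York": 2450,    # miles, approximate
    "London -> Sydney": 10600,          # miles, approximate
}

for route, miles in ROUTES.items():
    minutes = miles / SPEED_MPH * 60
    print(f"{route}: about {minutes:.0f} minutes at {SPEED_MPH:,} mph")

# 13,000 mph divided by 3,600 seconds per hour is about 3.6 miles per second,
# matching the article's figure.
print(f"{SPEED_MPH / 3600:.1f} miles per second")
```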
科技
2016-40/3983/en_head.json.gz/8646
Best augmented-reality apps Augmented reality has long sounded like a wild futuristic concept, but the technology has actually been around for years. It becomes more robust and seamless with each passing decade, providing an astonishing means of superimposing computer-generated images atop a user’s view of reality, thus creating a composite view rooted in both real and virtual worlds. Although AR apps run the gamut, from interactive map overlays and virtual showrooms to massive multiplayer skirmishes, each piece of software homes in on smartphone GPS and camera functionality to create a more immersive experience. Related: Want to master Pokémon Go? Here’s every tip you need to know The available selection of augmented reality apps is diverse, encompassing both premium and freemium offerings from a variety of big and no-name developers, but sometimes choosing which apps are worth your smartphone or tablet’s precious memory is tougher than using the apps themselves. Here are our top picks for the best augmented reality apps available, whether you’re searching for iOS or Android apps. The best AR apps Pokémon GO (free) It wouldn’t be a list of the best AR apps without mentioning Niantic’s Pokémon Go, a game that has quickly captured everyone’s attention and given them a reason to go out into the world, walk around, and catch Pokémon. The game uses GPS to mark your location, and move your in-game avatar, while your smartphone camera is used to show Pokémon in the real world. For the most part, it works, provided the game hasn’t crashed or frozen. There aren’t a lot of instructions when you first start, or information regarding game mechanics like the colored rings around wild Pokémon, but thanks to the nature of the internet, figuring out what to do isn’t that tough. Players of Ingress, another creation from Niantic, will see many similarities between the developer’s two games, right down to the locations marked as PokéStops and Gyms. The implementation of the original 150 Pocket Monsters is definitely the biggest thing Pokémon Go has, well, going for it compared to its predecessor. Niantic is set to continue updating the game to improve its performance, however, and add new features like trading, so hopefully Pokémon Go will stick around for a good, long while. Download now for: Android iOS Ink Hunter (free) Ink Hunter is the app you should use when deciding on a tattoo and where to put it. The app lets you try out pre-made tattoos, as well as your own designs, and they can be oriented in whatever position you like and placed on any part of the body. Tattoos placed on the body using the camera look as close to real life as you’re going to get — without actually going under the needle that is — and that’s all thanks to the in-app editor and the way Ink Hunter renders tattoos. The app previously only supported black-and-white tattoos, but its latest update added support for color tattoos as well, meaning you can get a better understanding of what the design will look like before you make it permanent. Currently, Ink Hunter is only available on iOS, but there are plans for Android and Windows Phone versions. So, if you’re planning to get a tattoo sometime soon, but don’t have an Apple device, maybe consider holding off on that tattoo for a bit, or borrow a friend’s iPhone or iPad. WallaMe (free) WallaMe lets you leave hidden messages in various locations around the world that can only be read by other people using the WallaMe app.
When using the app, you can take a picture of a nearby wall, street, or sign, then use the in-app drawing and painting tools to create your own special messages. You can also attach pictures to the areas you’ve chosen, if only to prove you were actually there. The augmented reality really comes into play when you’re in a location that has a hidden message, but it can only be found by using WallaMe and your device’s camera. Messages can be made private, too, so that only friends using the app can see them, or they can be made public for everyone to discover. WallaMe’s biggest strength also works against it, in a way. Those that aren’t aware of the app’s existence, or those that don’t regularly use it, may never see the clever messages created by others. That being said, fans of the app may want to keep it that way, in order to maintain the feel of exclusivity. Star Chart (free) Star Chart may be an educational app, but it’s a really cool one that’s sure to appeal to people of all ages. When Star Chart is opened on your Android or iOS device and pointed at the sky, the app will inform you of what stars or planets you’re currently facing, even during the day when the stars are at their hardest to see. It does it all in real-time, too, without you having to press a button to initiate it. Functions don’t stop there, either, because the app can even let you know what the night sky looks like on the other side of Earth, as well as show you where in the sky your star sign is located. If that wasn’t enough, there’s a feature called Time Shift, which allows you to move up to 10,000 years forward or backward in time to see where the stars once were or will be located. Google Translate (free) Google Translate isn’t strictly an AR app, but it does have one AR feature that’s incredibly useful for translating text. That particular feature is part of the app’s camera mode. Simply snap a photo of the text you don’t understand, and the app will translate the text in your photo in real time. When connected to Wi-Fi, the app supports a vast number of languages — 13 of which were added in a recent update — but users can also download a number of language packs if they want to continue using the instant translation feature while offline or without a cellular connection. Next time you take a trip to a country with a language you don’t fluently speak, Google Translate could be your best friend and the very thing that will keep you from getting lost in a strange land.
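Several of these apps, Pokémon Go and Star Chart in particular, rest on the same basic trick: compare the device's GPS position and compass heading with the direction of a geo-anchored target, and draw the overlay only when that target falls inside the camera's field of view. The sketch below is a simplified, flat-earth approximation of that idea, not any vendor's actual implementation, and the coordinates are hypothetical.

```python
import math

# Simplified illustration of the core of location-based AR (the general idea
# behind apps like Pokemon Go or Star Chart), not any vendor's actual code.
# A flat-earth approximation is fine for nearby points of interest.

def bearing_and_distance(lat1, lon1, lat2, lon2):
    """Approximate bearing (deg from north) and distance (m) to a nearby target."""
    metres_per_deg = 111_320                  # metres per degree of latitude
    dx = (lon2 - lon1) * metres_per_deg * math.cos(math.radians(lat1))
    dy = (lat2 - lat1) * metres_per_deg
    return math.degrees(math.atan2(dx, dy)) % 360, math.hypot(dx, dy)

def on_screen(device_heading, target_bearing, fov=60.0):
    """Is the target inside the camera's horizontal field of view?"""
    offset = (target_bearing - device_heading + 180) % 360 - 180
    return abs(offset) <= fov / 2, offset

# Hypothetical device location and point of interest.
bearing, dist = bearing_and_distance(47.6205, -122.3493, 47.6215, -122.3480)
visible, offset = on_screen(device_heading=30.0, target_bearing=bearing)
if visible:
    print(f"target {dist:.0f} m away at bearing {bearing:.0f} deg; "
          f"draw overlay {offset:.0f} deg from screen centre")
else:
    print("target is off screen; show a direction hint instead")
```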
科技
2016-40/3983/en_head.json.gz/8647
Facebook owes you up to $10, here’s why (and how) you should sign up to claim it By Kwame Opam In the last few weeks, you, your friends, and family may have received a decidedly spammy-sounding email alerting you to the fact that Facebook potentially owes you money under a proposed $20 million settlement. The good news is that the email is completely legitimate, and, if you’ve received it, you’re eligible to cash in on the settlement. The not-so-great news is you’ll only receive up to $10. At best. At this point, you may be wondering if partaking in this thing is even worth your time. We’re here to tell you that it is. The settlement comes as a result of a recent class action lawsuit, Angel Fraley v. Facebook, Inc., which alleged that Facebook used the likenesses of its users in Sponsored Stories prior to December 2, 2012. While Facebook initially denied responsibility, Zuck and company opted to settle. Thus, you and any number of the 150 million Facebook users in the States are eligible for up to… 10 bucks. At most. All you have to do is fill out a claim form by May 2 and wait for judgement to pass in June. Simple. Bear in mind, however, that the chances of you receiving even a single Hamilton depend on how many people make a claim. According to the suit’s framers, if the final settlement comes to $12 million and 1.2 million authorized claims get filed, then you’ll get $10. If 2.4 million claims are filed against that sum, you’ll get $5. But the real kicker is that, in all likelihood, you won’t see a dime. With 150 million potential claimants waiting in the wings, Facebook may simply opt to divide that total among a number of non-profit organizations. Among them are the Center for Democracy and Technology, Electronic Frontier Foundation, MacArthur Foundation, Joan Ganz Cooney Center, and Berkman Center for Internet and Society, among others. All are organizations devoted to teaching children and adults how to use social media safely while keeping companies like Facebook honest. To us, neither eventuality sounds that bad. On top of all this, Facebook will also be forced to adjust its terms of service to make clear when and how it uses our information for Sponsored Stories even if you don’t do a thing. In the end, this lawsuit was designed to hold Facebook accountable for using data for advertising. That the company is willing to set aside money to express that they take the matter seriously is pretty cool on its part – so sign up to claim what’s yours. And even if you don’t get any money, some of it will be going to organizations that look out for your privacy. It’s all reason enough to take five minutes and get involved.
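The payout arithmetic in the article is simple division against the settlement fund, capped at $10 per claimant. Here is a small sketch; the $5 cut-off below which the money would instead go to the non-profits is treated as an assumption for illustration, since the article only says Facebook "may" divide the total among them.

```python
# Recreates the article's payout arithmetic. The $5 minimum used for the
# charity fallback is an assumption for illustration only.

def payout_per_claim(net_fund, claims, cap=10.0, charity_floor=5.0):
    """Per-claimant payout, or None if the money would go to non-profits instead."""
    if claims == 0:
        return None
    share = min(cap, net_fund / claims)
    return share if share >= charity_floor else None

for claims in (1_200_000, 2_400_000, 10_000_000):
    result = payout_per_claim(12_000_000, claims)
    if result is not None:
        print(f"{claims:,} claims -> ${result:.2f} each")
    else:
        print(f"{claims:,} claims -> fund goes to the non-profits instead")
```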
科技
2016-40/3983/en_head.json.gz/8697
VDSL standard needed, says AT&T By Rick Merritt, 2/28/2007 05:00 AM EST SAN JOSE, Calif. — AT&T is calling for interoperability standards in VDSL and developing a new class of low loss splitters as two steps forward in home networking. While the telecom giant is betting on phone-line technology, it readily admits there is no silver bullet for home nets. Those were some of the observations from a presentation by Vernon Reed, principal member of technical staff at AT&T Labs. Reed talked about the thorny problems of home networking and defended AT&T's choice of technology from the Home Phoneline Networking Association (HPNA) in a talk at the IPTV 2007 conference here Tuesday (Feb. 27). All home network technologies have their pros and cons and none is a silver bullet. In part, that's because all technologies face a difficult set of conditions operating in the digital home, Reed said. Addressing just one of those problems, AT&T Labs is submitting a request for a patent on a new low-loss splitter. Today's splitters can create signal loss of 20-35 dB, Reed said, but the new AT&T design has a signal loss of just 8 dB. The company expects to license the technology to manufacturers in Asia. While AT&T is firmly committed to HPNA as its primary home net delivery vehicle, it recognizes the technology has at least one major drawback. Unlike powerline plugs sold at retail today, HPNA services for AT&T's VDSL-based video and telephony service are not something users will be able to install themselves anytime soon. The industry needs to set interoperability standards for VDSL, then work on a few generations of more simplified systems based on those standards before AT&T could deliver an IPTV set-top users could set up themselves, Reed said. "It's do-able but it could take three to five years," Reed said. "We're still a long way from VDSL interoperability. The Ikanos, Broadcom and Infineon chip sets don't talk to each other," he added. Installing an external VDSL termination box outside a user's home for video and voice services today takes as much as five hours, Reed said. With better interoperability, the job could be cut to about two hours, he added. Reed said the ideal home network could be used over any medium—coax, twisted pair, powerline or wireless. Standards groups such as the Digital Living Network Alliance could do the industry a great service by benchmarking various home nets and defining a best physical layer, media access controller and remote management, though that would be a difficult job, he added.
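To put the quoted splitter figures in perspective, insertion loss in decibels converts to a power ratio as 10^(-dB/10). Only the 8 dB and 20-35 dB numbers come from the article; the conversion below is just the standard formula.

```python
# Standard dB-to-power-ratio conversion, used here to compare the quoted splitter
# losses. Only the 8 dB and 20-35 dB figures come from the article.

def power_retained(loss_db):
    """Fraction of signal power that survives a given insertion loss."""
    return 10 ** (-loss_db / 10)

for loss in (8, 20, 35):
    print(f"{loss:>2} dB loss -> {power_retained(loss) * 100:.2f}% of power retained")
```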
科技
2016-40/3983/en_head.json.gz/8705
EMBO Gold Medal 2011 awarded to Simon Boulton Groundbreaking research on DNA repair, genome integrity and cancer HEIDELBERG, Germany, 20 April 2011 – The European Molecular Biology Organization (EMBO) today announced Simon Boulton of Cancer Research UK’s London Research Institute, Clare Hall Laboratories as the winner of the 2011 EMBO Gold Medal. Awarded annually, the EMBO Gold Medal recognizes the outstanding contributions of young researchers in the molecular life sciences. Boulton receives the award in recognition of his groundbreaking research on DNA repair mechanisms. The election committee was particularly impressed by his pioneering role in establishing the nematode worm, C. elegans, as a model system to study genome instability. “I am delighted and honoured to receive such a prestigious award,” said Boulton upon hearing the news. Throughout his career, 38-year-old Boulton and his research team have exploited the experimental strengths of several complementary systems, including C. elegans and mouse genetics, proteomics in mammalian cells and in vitro biochemistry. Some of their most important discoveries have come from contrasting the results obtained in different systems and cellular contexts. Boulton’s PhD supervisor Stephen P. Jackson described him as an “absolutely outstanding scientist” and praised his unique combination of approaches that allowed him to make seminal contributions to the field encompassing DNA repair, genome instability and cancer. Simon Boulton’s research highlights include:
• Discovering the gene RTEL1 as an anti-recombinase that impacts on genome stability and cancer and counteracts toxic recombination, which is also required in meiosis to execute non-crossover repair.
• Discovering the PBZ motif and establishing that ALC1 (Amplified in Liver Cancer 1) is a poly(ADP-ribose)-activated chromatin-remodelling enzyme required for DNA repair. Poly(ADP-ribosyl)ation (PAR) is a post-translational modification of proteins that plays an important role in mediating protein interactions and the recruitment of specific protein targets. These results provided new insights into the mechanisms by which PAR regulates DNA repair.
• Discovering that the Fanconi Anemia proteins FANCM and FAAP24 are required for checkpoint-kinase signalling (ATR) in response to DNA damage and establishing that DNA repair defects of Fanconi Anemia cells can be suppressed by blocking error-prone repair by non-homologous end joining.
Potential opportunities for cancer treatment These discoveries gave rise to novel therapeutic approaches. Boulton’s laboratory demonstrated that cells that over-express the ALC1 enzyme are highly susceptible to eradication by the chemotherapeutic Bleomycin. Since ALC1 is amplified in over 50 percent of human liver cancers, these findings may have important implications for liver cancer treatment. The prizewinner also showed that DNA repair defects of Fanconi Anemia cells can be suppressed by blocking non-homologous end joining (NHEJ). This observation raises the possibility that NHEJ inhibitors could be used to suppress cancer predisposition in Fanconi Anemia patients. Career stages The UK-born scientist started his quest to investigate mechanisms of DNA repair while studying for his PhD at the University of Cambridge from 1994-1998.
He describes his first exposure to the highly competitive world-class research environment in Cambridge as “extremely influential”. Boulton also recognizes that establishing his own research group at the world-renowned Cancer Research UK’s London Research Institute (LRI), Clare Hall Laboratories in 2002 was a key step in his scientific career. Boulton, while still a young researcher, has been recognized with awards from both UK and international organizations, including the Colworth Medal from the Biochemistry Society and the Eppendorf/Nature Award. He became an EMBO Young Investigator in 2007 and an EMBO Member in 2009. This year he received a Wolfson Research Merit Award from the Royal Society and an Advanced Investigator Award from the ERC. Simon Boulton will receive the EMBO Gold Medal and an award of 10,000 euros on 12 September 2011 at The EMBO Meeting in Vienna where he will give a lecture about his research. Tilmann Kießling, Head, Communications, T. +49 160 9019 3839
科技
2016-40/3983/en_head.json.gz/8707
Jun.9.2016 Eminem Partners with Detroit-based StockX StockX, the world’s first online consumer “stock market of things” for high-demand, limited edition products, today announced a strategic partnership with hip-hop icon and Detroit native Marshall Mathers (a.k.a. Eminem). The company has targeted the sneaker resale market as its first vertical, and the partnership will continue to accelerate StockX’s growth by building on the rapper’s long and personal passion for sneakers to create exclusive content and access to rare sneakers from Mathers’ personal collection. As part of the partnership, Mathers and longtime manager Paul Rosenberg will be investing in the online trading platform. Detroit-based StockX is unlike any traditional e-commerce or auction website. StockX is a live ‘bid/ask’ marketplace that allows buyers to place bids, sellers to place asks, and execute a trade when the seller’s ask price crosses with a buyer’s bid. "Sneakers have always been a huge interest of mine, for at least as long as I’ve been rapping, and I’m proud of the fact that I’ve had so many collaborations with Nike and Jordan Brand," said Mathers. "I really like the fact that sneakers are a big part of what StockX is doing. When I found out that they happen to be doing it from downtown Detroit, it made even more sense to get involved." “We believe in the power of StockX to change the way people buy and sell things online and look forward to helping Josh Luber and the StockX team make that future a reality,” added Rosenberg. StockX, which launched in February of 2016, was co-founded by Luber, who serves as the company’s CEO, and Dan Gilbert, founder and chairman of Quicken Loans and majority owner of the Cleveland Cavaliers. Recently, StockX closed an investment round that included Silicon Valley investor Ron Conway and his SV Angel fund, and Detroit Venture Partners (DVP). Eminem is the first of what is expected to be several high profile investors. In the four months since launch, StockX has reached major milestones with the launch of its iOS and Android apps. While the multi-billion dollar sneaker trading market is StockX’s first vertical, there are numerous other categories that the ‘stock market of things’ is planning to add to its platform in the near future. The platform is open in the United States, but sneakerheads in other countries will very soon be able to use StockX. StockX provides a visible, liquid, anonymous and authentic online experience. Participants in the StockX exchange can find historical price and volume metrics, real-time bids and offers (asks), time-stamped trades and additional analytics on virtually every sneaker model. In addition, buyers of sneakers on StockX are assured of the authenticity of sneakers purchased on the exchange. StockX has developed a proprietary process to verify the authenticity of each pair of shoes traded on the platform and buyers can purchase with confidence because StockX will stand behind their trades. “StockX is already proving that a ‘stock market of things’ is a viable and better way to both buy and sell certain product categories,” said Luber. “Eminem’s partnership is particularly meaningful in that he’s a true sneakerhead who genuinely understands the market and is excited to help grow the community.
Beyond that, he’s created one of the world’s iconic brands, so it’s an honor to have him and Paul as both investors and strategic partners.” To kick off the partnership, StockX is giving away three coveted prize packages:
• Grand Prize: Air Jordan 4 Retro Eminem Carhartt (priceless; pairs have sold for as much as $30,000)
• Second Prize: Yeezy Boost Pack - Yeezy Boost 350, 750 and 950 (approximate value $3,000)
• Third Prize: Jordan 1 Pack – Jordan 1 Retro Bred 2013, Retro Chicago 2015, Retro UNC, Retro Family Forever and Retro LA (approximate value $1,750)
Any StockX participant who executes a sneaker trade on either the buy or sell side and/or refers a new participant to the StockX platform through Thursday, June 23 will receive one entry into the contest. For more information on the promotion click here. About StockX StockX is the world’s first online consumer “stock market of things” for high-demand, limited edition products. Participants buy and sell authenticated products in a live marketplace where they anonymously trade with stock market-like visibility. The StockX exchange offers buyers and sellers historical price and volume metrics, real-time bids and offers (asks), time-stamped trades, individualized portfolio tracking and metrics, as well as in-depth market analysis and news. StockX launched its inaugural marketplace in the secondary sneaker space with plans to expand to additional consumer product segments that have a natural need for a live secondary market. Comment May.23.2016 The Marshall Mathers LP Cassette Re-Release - SOLD OUT In honor of today’s 16th anniversary of the release of his seminal The Marshall Mathers LP, Eminem is offering a series of limited-edition collectible items that are now available at shop.eminem.com. A portion of the proceeds from the sale of these items will be donated to the Marshall Mathers Foundation, which provides funds for organizations working with at-risk youth in Michigan and throughout the U.S. The collectible items include an authentic brick from the rescued remains of Eminem’s childhood home that was featured on the covers of The Marshall Mathers LP and its sequel, 2013’s The Marshall Mathers LP 2. Each of the 700 bricks comes with a numbered Certificate of Authenticity featuring Eminem’s handwritten signature, and a display stand with enclosure that features a commemorative plaque on the side. All of the bricks come in sleek black packaging with custom artwork and a description of the project. A cassette re-release of the iconic album featuring a 3-D motion printed cover is also available. Finally, in collaboration with Brooklyn-based mill Good Wood, Marshall Mathers dog tags created from re-purposed wood from Eminem’s childhood home are available in extremely limited quantities. Eminem’s The Marshall Mathers LP is a cornerstone of his celebrated body of work. As his sophomore album, it cemented his identity as one of the most compelling and unique artists of his time, which set him off on a career that still holds strong today. While the project was met with its share of controversy, there was an underlying foundation of a very complex and honest personal story fittingly contained within an album whose title carried his birth name, Marshall Mathers.
Since its release over 15 years ago, MMLP has been recognized by many as one of hip-hop’s greatest albums. In 2013, the album’s importance was reinforced when Eminem announced that his next record would be titled The Marshall Mathers LP 2, which was labeled by the rapper as more of a “revisitation” than a sequel. In comparing the two records, the 13-year separation of their individual creation is apparent, but the deeply personal reflection remains the same. A common anchor between the two albums is symbolized by Mathers’ choice to feature his childhood home on both records’ cover art. The now iconic imagery of the humble Detroit home symbolizes the honest, personal reflection that has been the root of what has driven both Eminem’s music and the powerful connection to his fan base. Ironically, during the same month of the release of MMLP2 in 2013, the State of Michigan ordered the house at 19946 Dresden Street to be knocked down due to structural safety issues. By reacting quickly, Eminem and his team were able to salvage the brick and wood materials from the home. From this, a painstakingly detailed process was started to turn those raw materials into special, one-of-a-kind collectible items. These offerings are a once-in-a-lifetime opportunity for Eminem fans to own a piece of Eminem history. Comment May.20.2016 19946 Dresden Outtakes Check out some outtakes from Jeremy Deputat's 2013 shoot at 19946 Dresden Street for MMLP2. Stay in the loop for the MMLP Anniversary here. More details coming soon. Comment
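The StockX announcement earlier on this page describes the core marketplace mechanic: buyers place bids, sellers place asks, and a trade executes when an ask crosses a bid. The toy order book below illustrates that mechanic in general terms; it is not StockX's actual matching engine, and the prices are invented.

```python
import heapq

# A toy illustration of the bid/ask mechanic described in the StockX post: a
# trade executes when the highest bid meets or exceeds the lowest ask. This is
# a generic order-book sketch, not StockX's actual matching engine.

class Book:
    def __init__(self):
        self.bids = []   # max-heap via negated prices
        self.asks = []   # min-heap

    def place_bid(self, price):
        heapq.heappush(self.bids, -price)
        return self._match()

    def place_ask(self, price):
        heapq.heappush(self.asks, price)
        return self._match()

    def _match(self):
        if self.bids and self.asks and -self.bids[0] >= self.asks[0]:
            bid, ask = -heapq.heappop(self.bids), heapq.heappop(self.asks)
            return f"trade executed at ${ask} (bid ${bid} crossed ask ${ask})"
        return "no trade yet"

book = Book()
print(book.place_ask(220))   # no trade yet
print(book.place_bid(195))   # no trade yet
print(book.place_bid(225))   # trade executed at $220
```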
科技
2016-40/3983/en_head.json.gz/8722
NTT America Enables Growth of IPv6 Internet from Switch and Data Network Neutral Data Center TAMPA, Fla. -- January 20, 2009 -- Switch and Data (NASDAQ: SDXC), a leading provider of data center and Internet exchange services, today announced that NTT America, a wholly owned U.S. subsidiary of NTT Communications Corporation (NTT Com) and a Tier-1 global IP network services provider, has completed an expansion across Switch and Data’s colocation facilities in Atlanta, Dallas, New York, Seattle, and Palo Alto, CA. Enterprise and service provider customers in Switch and Data’s sites in these markets will now have direct access to the world’s only commercial grade Global Tier-1 IPv6 backbone operating across four continents, the NTT Communications Global IP Network, operated by NTT America. Heralded as the designated successor to IPv4, the current version of Internet protocol, IPv6 provides a much larger Internet address space, among other benefits, and in doing so eliminates the need to use network address translation to avoid address exhaustion. As of September 2008, there are 39 /8s remaining in the IPv4 free address pool. The Regional Internet Registries (RIRs) have collectively allocated about ten to twelve /8s of IPv4 address space each year, on average. If that trend continues unchanged, by mid-2012 the American Registry for Internet Numbers (ARIN) and the other RIRs will no longer be able to allocate large new blocks of IPv4 address space. This scenario assumes that demand does not increase – which is unlikely, given the ever increasing number of Internet-enabled devices. This scenario also assumes no industry panic (hoarding, withholding, etc.), no Internet Assigned Numbers Authority (IANA) or RIR policy changes, and no other external factors influencing address space allocations, any of which could push the IPv4 depletion date earlier. Once IPv4 address space is depleted, Internet growth cannot be sustained without adopting IPv6. "The new Internet protocol, IPv6, significantly improves the function and commercial use of the Internet in terms of scalability, security, mobility and network management," said Kazuhiro Gomi, CTO of NTT America and vice president of the NTT Communications Global IP Network. "Even with the current economic situation we continue to see traffic growth above 70% annually. With more and more people using the Internet, more mobile devices accessing the Internet and social networking sites becoming ever more popular, the time to upgrade to the new Internet protocol is now. We are very pleased to have an Internet exchange partner like Switch and Data, a company that shares our vision for IPv6 and can help foster its adoption in the United States." "IPv6 connectivity is critical for U.S. firms conducting international business, acquiring or merging with foreign companies, and working with foreign governments, and we’re excited that NTT America is rapidly growing its network with customers across our footprint," said Ernie Sampera, chief marketing officer for Switch and Data.
"Our long experience offering IPv6 peering on our PAIX® Internet exchange combined with our broad footprint of sites and high customer densities make Switch and Data the natural partner to help NTT America achieve its growth objectives." NTT America reports demand for IPv6 services from downstream Internet service providers, universities, research institutions, next generation application providers and organizations that focus on wireless technologies. The company already counts a number of customers in Switch and Data’s facilities. NTT America’s award winning IPv6 transit service is available in native, tunneled or dual stack modes on different interfaces up to 10GigE. An early adopter of IPv6, Switch and Data has incorporated IPv6 services into its Layer 2 PAIX® Internet exchanges at all sites. Customers can implement IPv4, IPv6, or dual IPv4/IPv6 peering using a single peering port with no extra charges and no special usage or setup fee. About Switch and Data Switch and Data is a premier provider of network-neutral data centers that house, power, and interconnect the Internet. Leading content companies, enterprises, and communications service providers rely on Switch and Data to connect to customers and exchange Internet traffic. Switch and Data has built a reputation for world-class service, delivered across the broadest colocation footprint and richest network of interconnections in North America. Switch and Data operates 34 sites in the U.S. and Canada, provides one of the highest customer satisfaction scores for technical and engineering support in the industry, and is home to PAIX® – the world's first commercial Internet exchange. For more information, please visit http://www.switchanddata.com/. About NTT America NTT America is North America’s natural gateway to the Asia-Pacific region, with strong capabilities in the U.S. market. NTT America is the U.S. subsidiary of NTT Communications Corporation, the global data and IP services arm of the Fortune Global 500 telecom leader: Nippon Telegraph & Telephone Corporation (NTT). NTT America provides world-class Enterprise Hosting, managed network, and IP networking services for enterprise customers and service providers worldwide. For additional information on NTT America, visit us on the Web at www.nttamerica.com. U.S. product information regarding the NTT Communications Global IP Network and its award winning IPv6 transit services may be found at http://www.us.ntt.net/, by calling 877-8NTT-NET (868-8638), or by emailing sales@us.ntt.net. About NTT Communications Corporation NTT Com delivers high-quality voice, data and IP services to customers around the world. The company is renowned for its diverse information and communication services, expertise in managed networks, hosting and IP networking services, and industry leadership in IPv6 transit technology. The company’s extensive global infrastructure includes Arcstar™ private networks and a Tier 1 IP backbone (connected with major ISPs worldwide), both reaching more than 150 countries, as well as secure data centers in Asia, North America and Europe. NTT Com is the wholly owned subsidiary of Nippon Telegraph and Telephone Corporation, one of the world’s largest telecoms with listings on the Tokyo, London and New York stock exchanges. Please visit www.ntt.com. NTT, NTT Communications, and the NTT Communications logo are registered trademarks or trademarks of NIPPON TELEGRAPH AND TELEPHONE CORPORATION and/or its affiliates. 
All other referenced product names are trademarks of their respective owners. © 2009 NTT Communications Corporation.
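The depletion arithmetic quoted in the release above is straightforward: 39 /8 blocks remaining as of September 2008, consumed at roughly ten to twelve per year, puts exhaustion of the free pool in the 2011-2012 range. The sketch below reproduces that linear projection; as the release itself cautions, it ignores demand growth, hoarding and policy changes.

```python
# The linear projection implied by the release: 39 /8 blocks left as of
# September 2008, consumed at roughly 10-12 per year. A /8 contains 2**24
# (about 16.7 million) IPv4 addresses.

REMAINING_SLASH_8S = 39
ADDRESSES_PER_SLASH_8 = 2 ** 24

for rate_per_year in (10, 12):
    years_left = REMAINING_SLASH_8S / rate_per_year
    print(f"at {rate_per_year} /8s per year: exhausted in about "
          f"{years_left:.1f} years (counting from September 2008)")

print(f"one /8 = {ADDRESSES_PER_SLASH_8:,} addresses; "
      f"39 /8s = {REMAINING_SLASH_8S * ADDRESSES_PER_SLASH_8:,} addresses")
```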
科技
2016-40/3983/en_head.json.gz/8738
10 Things Microsoft Must Do to Stay Relevant for the Next 40 Years
By Don Reisinger | Posted 2015-04-06

Mobile Is the Future
Microsoft CEO Satya Nadella has clearly stated the company's future growth will be on mobile devices. In a long manifesto last year, he said that mobile would play a crucial role in his company's future, and he even indicated that Microsoft must accept market dynamics and appeal to those working with devices running iOS and Android. So far, Microsoft has done that, and according to the company, it's working out well.

Windows 10 Will Right the Sins of the Past
Windows 10 will right the sins of Microsoft's past. Windows 8 has been called one of the most disastrous Windows versions of all time. The operating system was too different for those who have been using Windows for a long time. So, Microsoft has gone back to the future a bit, providing updates that bring back the Start Menu, make it feel more like Windows 7 and strip away many of the Windows 8 features that turned off users.

Azure as a Cloud Computing Platform
Microsoft's Azure cloud computing platform is crucial to the company's future. Microsoft is engaged in an intense competition with the likes of Amazon, Google, IBM and a number of other top companies to be a leader in delivering cloud computing services. Microsoft has steadily expanded its Azure service offerings to help enterprises go virtual, reduce their IT data center overhead and dedicate more of their budgets to cloud software that enhances productivity. Azure has proven popular so far and likely will remain so in the coming years.

Microsoft Software, Services Must Work With Android, iOS
As noted, Nadella has said that he wants to be mobile-friendly, no matter the platform. But as time goes on, Microsoft needs to be even friendlier to iOS and Android users, ensuring that they get updates to Office first and that they have the ability to use apps that link their devices and Microsoft's other platforms, and more. Being mobile-friendly is one thing; accepting that iOS and Android are far ahead of Windows Phone, and should therefore be treated with special care, is crucial to Microsoft's success.

It Must Convert More Customers to Office 365
Microsoft has made a lot of progress getting enterprises to move to Office 365, the company's office productivity software-as-a-service suite that allows individuals and companies to run a wide range of applications and services both on the desktop and online. But Microsoft has to continue to win market acceptance of this cloud suite or face the prospect that Google or Zoho will steal away more of its productivity suite market.

Microsoft Must Shed Its Image as a Plodding, Unresponsive Giant
Microsoft has made several mistakes over the past several years in the mobile, operating system and applications markets. It reacted slowly to the growth of cloud computing and generally looked like a plodding hulk that clung to the past and failed to respond to customer needs. Microsoft must project an image of being nimble and innovative, as it was in the company's early pioneering days.

Keep PC Vendors Happy
PC vendors are still crucial to Microsoft's success in the software space. While the company will be offering free upgrades to Windows 10 to users, Microsoft will still derive much of its Windows revenue through PC sales.
If PC vendors gladly buy Windows 10 and bundle it with their products, Microsoft can bolster its financials. If Microsoft turns away PC vendors by offering a product in Windows 10 that people don't want, it could be a huge issue for a crucial part of Microsoft's business.

Make Sure Windows 10 Works Effectively on All Platforms
Microsoft has said that Windows 10 will work effectively across all types of products, including mobile and desktop, but that needs to actually happen when the operating system launches later this year. Microsoft is only making promises right now. When customers actually get their hands on Windows 10, they will need to see that it works just as well on a tablet as it would on a laptop. If Windows 10 can't hold up in mobile, the operating system will fail.

Hang On to the Xbox Division
While there are reports that Microsoft could be considering selling its Xbox division, most of those rumors have been brushed aside by analysts who say the gaming division is crucial to Microsoft's future. The Xbox is the gateway to the living room for Microsoft, which is the place to be to engage younger users who might otherwise move to Macs. Xbox is also the platform to integrate gaming across Windows 10. Don't kid yourself: Xbox is very important to Microsoft's future.

Move In on the Wearables Space With a Splash
Market analysts are convinced the wearables market is set to explode, with revenue generated from that space jumping billions of dollars each year over the next several years. That's precisely why Microsoft must invest heavily in wearables. The company has already unveiled a device, called the Microsoft HoloLens, that provides a holographic virtual-reality experience for users. HoloLens relies on Windows 10 to work, but it could be one of the most important devices that Microsoft markets in the coming years.

After 40 years in business, Microsoft has made a number of changes to try to keep the company growing and relevant in the 21st century. The company has a chief executive not named Steve Ballmer or Bill Gates. It will offer its upcoming Windows upgrade for free and has embraced the idea of supporting other operating systems, such as Android and iOS. But perhaps the most surprising news to come out of the tech giant recently is the announcement from Microsoft Technical Fellow Mark Russinovich that the company was at least considering the idea of making Windows an open-source operating system. It's unlikely Microsoft will make this move any time soon. But the very fact that Microsoft is even thinking about making Windows, which was for years the company's crown jewel and cash cow, an open-source platform, is nothing short of astonishing.
It's more proof that the company realizes that technological advances and competitive pressures require it to make moves that would have been unthinkable a few years ago. This slide show looks at a number of things that Microsoft has to do to remain successful in the years ahead.
科技
2016-40/3983/en_head.json.gz/8860
Biomarket Trends: A Myriad of Approaches Fosters Growth in Early Tox Testing
Market Expands as Researchers Increasingly Rely on In Vivo, In Vitro, and In Silico Tools
Steven Heffner

At first glance, drug discovery appears to be a straightforward process of screening compound candidates, optimizing leads, performing preclinical evaluations, and then, with these steps completed, beginning clinical trials. The reality is not so neat. These activities often overlap, merge, and conflict with each other. Parameters such as absorption, distribution, metabolism, excretion (ADME), and toxicology are not simply steps in lead optimization, they are actually an integral part of an ongoing process.

While far from perfect, a wide range of assays have made early ADME useful in providing predictive information. It is in the toxicology stage of ADME/Tox where the pitfalls lie. The downside of unfavorable toxicity is a lot more serious than ADME. Unfortunately, pharma companies have found this out the hard way.

Within the past few years, we have seen the crash of several major drugs in diabetes, CNS, cardiovascular, and weight control even after these drugs reached the market. The toxicity problems with these drugs should have been caught before they reached clinical trials, but they weren't. This was not because of ineptitude of the companies involved, it was because better tools for determining toxicity were not available. This is why pharmas are desperately seeking approaches that will screen out these compounds during the discovery or development stages. There are many different approaches to improving early tox, but no clear winning strategies. The issues are multifaceted and complex. It will take time, talent, and a lot of money to develop them. This creates opportunities for pharmas and suppliers alike.

Just 20 years ago, toxicology studies were high-information but low-throughput estimates of a candidate drug's toxicology. Failures were common and expensive. The goal to find better approaches led to the development of cellular assays performed in vitro. Companies like Cellomics developed platforms that would allow investigators to treat live cells with candidate compounds and examine effects of these compounds on cellular structure and cell viability. These assays were faster than in vivo experiments and could be run with much less compound.

Pressure continued for the creation of assays that could be used earlier and at even higher throughput. If high-throughput screening generated several hundred leads per day, high-throughput toxicology was needed to evaluate this output. This was a challenge that was difficult to meet. This is because tox is not a single parameter or even a group of parameters: it is a result or an outcome. Without a single silver bullet to hit the tox target, the best that can be achieved is a group of alternative approaches that can provide partial prediction of a compound's toxicity. These approaches are similar to ones already in use in other areas of drug discovery but they have been modified to describe the unique characteristics of a toxic response. Early tox work can be divided into three general approaches: in silico tools, in vitro assays, and in vivo models. These three approaches to early tox are not mutually exclusive. In silico models are verified with experimental results, and this information is then used to change the model. In vivo techniques can be used with in vitro automated systems.
Biological pathways uncovered by one model can be used to identify biomarkers that are utilized with a different technology. This illustrates the nature of early tox testing. One approach seldom provides a complete answer about a compound's toxicity, but the judicious use of complementary approaches can provide a wealth of useful information.

Products and Markets
Previous in silico models for early tox promised too much and delivered too little. Today's best in silico models are based on experimental results and closely connected to specific compounds of interest. Models like Bio-Rad's KnowItAll, GeneGo's MetaDrug, and IDBS' PredictionBase are efficient and easy to use, while Gene Logic, Ingenuity Systems, and Iconix offer comprehensive databases. Sharing of information through government agency initiatives and industry consortia has led to more complete databases. The market for in silico models and databases for early toxicology totaled over $130 million in 2006, and an 11% growth rate is expected for the next five years.

In vitro testing for early tox is the largest and best established of the three product areas. Cell-based assays using high-content screening are becoming more widely used, providing much more content than traditional biochemical assays. Toxicogenomics is a useful tool, although its importance has been exaggerated. It plays a role but is not a substitute for other methods used in early tox. Traditional major players like BD Biosciences, Invitrogen, and Promega are leading suppliers of assays. Instrument makers such as Beckman Coulter, Caliper, and GE Healthcare provide useful platforms for cellular assays. The market for in vitro testing for early toxicology totaled nearly $400 million in 2006, and an annual growth rate of 10% is expected for the next five years.

The smallest but fastest growing area is in vivo testing for early tox. Zebrafish are the most promising model organism since they provide whole-animal information from assays that can be run easily in a high-throughput format. Certain specially designed rodent models are also being used early in the drug development process. Charles River Laboratories is a dominant provider of animal models, but smaller companies like Phylonix and Zygogen have a focus on zebrafish and have demonstrated the versatility of this animal model. Xceleron is a leader in human microdosing, which is the administration of a subclinical dose of a drug directly to humans. Using an ultrasensitive measurement called AMS, the effects of the drug on humans can be determined. This approach (Phase 0 human trials) is still experimental but is gaining wider use and acceptance. While not strictly an early-tox test, microdosing may have a major influence on this market. The market for in vivo testing in early toxicology is projected to grow at more than 20% annually starting from a modest $71 million in revenues in 2006.

In addition to the markets for early-tox testing tools, there is an important market for outsourcing early-tox testing services. Service suppliers have expanded their range of offerings into the discovery area now that pharmas have become more willing to have these tests run by others. Large CROs like Covance, Evotec, and Quintiles offer some early toxicology services, although this is a small part of their total business. Small to mid-size service suppliers have found early toxicology to be a profitable niche, and many of these service suppliers sell tools as well.
Some start-ups use services as a way to leverage their product sales business. Leading providers of in silico services include Accelrys and Gene Logic. Albany Molecular and CeeTox are among the many providers of in vitro services, and Charles River and Caliper provide animal testing services as well as products. The market for services in early tox was significant in 2006 and will continue to grow at an annual rate of nearly 15% over the next five years.

Achieving success in developing early tox tests is a major challenge. Current tools are not perfect, but they are getting better. Managing information obtained from tests requires excellent software, since the processes being investigated are quite complex. There is a major difference between ADME and Tox tests. ADME properties are physical measurements and can be determined directly. Toxicity properties are biological and descriptive; they are not as easy to quantify. Many different technology approaches have been taken for early tox. Combinations of different approaches are more useful than any single approach taken alone, and in the marketplace, suppliers and clients will have to work with each other to produce meaningful results.

Steven Heffner is the publisher of Kalorama Information. Early Toxicology: Approaches and Markets was published in June. Web: www.kaloramainformation.com
科技
2016-40/3983/en_head.json.gz/8891
TVs get smarter with $99 Kogan Android stick New Kogan Smart TV dongle competes with Sony Google TV, Apple TV Campbell Simpson (Good Gear Guide) on 20 June, 2012 12:00 Cut-price electronics manufacturer Kogan has announced a $99 Smart TV dongle. The device lets users browse the Internet on their TV, watch YouTube videos, play apps and games and media files.Like the recently unveiled Sony Google TV set-top box, the Kogan Agora Smart TV runs on Android 4.0 Ice Cream Sandwich, the latest version of Google’s smartphone- and tablet-focused operating system.The USB flash drive-sized Agora dongle plugs into a HDMI port, so it’s compatible with any recent television. It also requires an external power source, but can be powered by a TV USB port — the Agora Smart TV uses a mini-USB port for power, so there’s a mini-USB to USB cable in the box as well as a regular power adapter.The Agora Smart TV also has a full-size USB 2.0 port and a microSD slot, each of which will accept up to 32GB of external storage — Kogan suggests using these to store downloaded HD movies and photos for viewing on the device.The bundled remote control of the Agora Smart TV uses infrared, so an additional infrared receiver must be plugged in. Alternatively, a wireless keyboard and mouse can be plugged into the USB port. Another option is the $39 Kogan wireless trackpad and keyboard, also announced today. The wireless trackpad and keyboard uses RF wireless to communicate with its receiver, so line-of-sight is not needed (unlike with IR devices). Kogan is taking pre-orders for the Agora Smart TV dongle, which will ship from the end of July.Kogan founder and entrepreneur Ruslan Kogan told GoodGearGuide that the company was also working on TVs with integrated Smart TV features: “Of course. Internet TV is the future, it's one of our main focuses at Kogan. People are sick and tired of being told what to watch... [they] want to watch what they want, when they want. Internet TV allows that.”Kogan also dished out simultaneous praise and criticism to competitors LG and Samsung, talking about those companies’ attempts at voice and motion control on their Smart TVs, saying, “I think that it's great that these companies are experimenting with new technologies and not resting on their laurels.”He wasn’t entirely enthused about the gesture- and speech-based controls of products like the LG LM9600 and Samsung Series 8, though: “I've played with some of it, and it seems a bit gimmicky to me. I find it much quicker and easier to flick to what I want using a well designed remote control, rather than waving my arms around the room.”He didn’t rule out any possibility of co-opting the technology into future Kogan products, though: “That said, I'm sure the technology is just in its early stages, and will see a lot of improvement. If customers decide that they want this technology and there is demand for it, that is when Kogan will implement it into our TVs.”Smart TV interfaces, integrated into high-end TVs from Samsung, Sony, as well as in the form of the Apple TV and Google TV, rely heavily upon app integration for services like video on demand.Kogan said that while streaming internet TV is the current darling, big-screen games and other apps are the next big thing: “[Video on demand] is currently the main advantage of Internet TV. Big screen apps and games will come shortly after, but there has been very little development in this space.“It will need a lot of developers around the world to recognise that this is the future. 
When they do, they will dedicate time and energy to the big screen.”
科技
2016-40/3983/en_head.json.gz/8943
Thanks To Harun Yahya, Turkey Has Become The Center Of Creationism Harun Yahya On The Sombreval Publishing Website The Impact Of Atlas Of Creation in France Darwinist Panic Harun Yahya’s Atlas Of Creation Dünyadan Yankılar-Haberler In his newly published book, Taner Edis, an associate professor of physics from Truman State University, considers the theory of evolution and belief in creation in the Islamic world. The book discusses the change in the views regarding the theory of evolution of Muslims living in Turkey in particular, and states that since the 1990s Turkey has become the center of Islamic creationism and has gone way beyond the Christian world on this subject. Some of the statements in the book on this subject read: The work of “Harun Yahya,” said to be a pseudonym for Adnan Oktar, was central to the newest wave of creationism. . . . around 1997, he reappeared as the leading figure of Turkish creationism. A number of books under the name of Harun Yahya hit the shelves, promoting creationism alongside some other preoccupations of Islamic conservatives in Turkey. An organization called the Science Research Foundation promoted the Yahya books and made creationism a centerpiece of its views on science and culture. These efforts tied in with a series of “international conferences” promoting creationism, in which Turkish creationist academics shared the stage with American creationists from ICR and similar organizations. From the beginning, the distinguishing feature of Harun Yahya’s creationism was its very modern, media-savvy nature. Previously, Turkish creationism was a low-budget operation, even when it found official endorsement. . . . Yahya’s operation changed all this. The books that appear under Harun Yahya’s name are attractive, well produced, lavishly illustrated, on good quality paper. . . . this means that creationist literature looks better packaged than books popularizing mainstream science. Moreover, Yahya did not stop at books, or even at advertising creationism through “conferences,” op-eds, and media events. Soon well-made videos and slick monthly magazines promoting Yahya’s creationism appeared on the market. Indeed, there must have been few forms of media that escaped Harun Yahya’s attention. For the many Turks who cannot afford DVDs, there are creationist videos in the cheaper and quite popular VCD (Video CD-ROM) format. For those put off by the price tag on slick books—though their prices are artificially low—there are cheap booklets on low-quality paper, giving abridged versions of Yahya’s prodigious output. . . . Those online can visit one of the many Web sites devoted to Harun Yahya and creationism, from those that claim to expose the many lies of Darwinist media to the main site that makes practically everything written under the Yahya name available at no cost. Yahya’s creationism appeals beyond the core audience of conservative religious believers. There are many pious but also modernized people, many who work in a high-tech world but seek to anchor themselves in tradition and spirituality. So the Harun Yahya material is distributed in secular book and media outlets, not just in religious bookstores or stalls adjoining mosques. They are available in some supermarket chains, just like Christian inspirational books are found in Wal-Marts across the United States. Even the way the Yahya material uses the Turkish language indicates a desire to reach a broader audience. . . . The Yahya material uses a simpler, less Arabized everyday Turkish. . .. 
The way Adnan Oktar and others associated with the Yahya material present themselves also reinforces the modern image of the new creationism. . . . they conspicuously endorse modern clothing and modern lifestyles. . . . They are … leaders who have a key to reconciling science and religion . . . . . . Yahya touches on just about all the typical creationist themes, alleging that transitional fossils do not exist, that functioning intermediate forms are impossible anyway, that the evidence for human evolution is fraudulent, that radiometric dating methods are unreliable, that physical cosmology produces clear signs that the universe is a divine design, and that evolution at the molecular level is statistically impossible. Yahya also explains why Western scientists and Turkish fellow-travelers are so enamored of evolution if it is so clearly false. Like Christian creationists, Yahya thinks that beguiled by the secular philosophies of the European Enlightenment, scientists got caught up in a long war against God. . . . Harun Yahya favorably cites old-earth creationists who proclaim that the big bang proves the existence of God, and enthusiastically adopts the [creation] view that physical constants are fine-tuned to produce intelligent life and that this fine-tuning cannot have any naturalistic explanation. . . . He uses any suggestion that Darwin was wrong or that the universe is a divine design . . . As a growing media operation, the natural next step for Harun Yahya was to go global. Harun Yahya books, articles, videos, and Web materials were made available first in English, French, German, Malay, Russian, Italian, Spanish, Serbo-Croat (Bosniak), Polish, and Albanian. Interestingly, Western languages and languages used in the periphery of the Islamic world preceded languages of the Islamic heartland. This is not a great surprise—creationism finds its largest market in partially westernized countries like Turkey and in the Muslim immigrant communities in the West. . . . Still, translations into Urdu and Arabic soon followed, as did Indonesian, Estonian, Hausa, Bulgarian, Uighur, Kiswahili, Bengali, and more. Harun Yahya books are now available in many Islamic bookstores around the world, especially as English translations have been printed in London, the global center of Islamic publishing. This global venture appears to be another success. Harun Yahya has become popular throughout the Muslim world; he is no longer just a Turkish phenomenon. Articles under Yahya’s name regularly appear in Islamic publications all over the world. Even in the United States, mass-market introductory books . . . present Yahya as a “top” Muslim scientist with a worthwhile critique of evolution. From small meetings in San Francisco to a series of public presentations in Indonesia, from books to videos to the small Creation Museums that opened in Istanbul in 2006, the gospel of Yahya’s Islamic creationism continues to spread. The popularity of creationism might be a sign of modernization in the Islamic world . . . Harun Yahya does not have to work hard to convince readers that nature is [created]. He just presents the wondrous interlocking complexity of nature, and the conclusion becomes obvious. 2008-01-06 03:25:23
科技
2016-40/3983/en_head.json.gz/9015
Giant Squid Captured On Camera In Japanese Bay, Swimming Alongside Divers
ANNnewsCH/YouTube
By Tom Hale

Giant squids usually remain in the dark depths of the ocean or in pirate stories. However, late last week on Christmas Eve, a squid from this elusive species was spotted swimming at the sea's surface. The amazing footage was captured on December 24, 2015, in Toyama Bay on the west coast of Japan. It's believed the beast is a juvenile, measuring an estimated 3.7 meters (12.1 feet) in length, compared to the 13 meters (43 feet) of a fully matured adult. As you can see in the video below, the cephalopod was joined by Akinobu Kimura, the owner of a diving shop, who dived into the bay and swam alongside the giant squid. "This squid was not damaged and looked lively, spurting ink and trying to entangle his tentacles around me," Kimura told CNN. "I guided the squid toward the ocean, several hundred meters from the area it was found in, and it disappeared into the deep sea." No one is really sure why the squid ended up in Toyama Bay. But, then again, scientists know very little about the behavior or physiology of these mysterious sea creatures. Until 2012, this species had not even been caught on camera in its natural habitat.
科技
2016-40/3983/en_head.json.gz/9026
Alarm grows over genetic 'dynamite'
NICHOLAS SCHOON, Environment Correspondent

The Government's green advisers voiced alarm yesterday about the release of genetically engineered life forms into the environment. The five-strong panel, set up five years ago as a follow-up to the Rio de Janeiro Earth Summit, called for much wider and more careful Government thinking on the rules allowing releases of such plants, animals and micro-organisms and for better scrutiny of the results. "We are playing not just with fire but with dynamite," said Sir Crispin Tickell, the former ambassador to the United Nations, who chairs the panel. Its remit is to give the Prime Minister advice on achieving sustainable development. Britain, in partnership with other European Union countries, must consider developing emergency procedures before any major commercial releases of genetically modified organisms (GMOs) take place, in case there are unforeseen repercussions. In its second annual report, the panel makes the comparison with CFCs, pesticides and thalidomide, new products which were thought to be safe and of great benefit, but found to cause severe damage after their release. "People simply haven't understood the effects of their actions," said Sir Crispin, the warden of an Oxford post-graduate college. Genetic engineering is seen as one of the next century's biggest industries, bringing great advances in agriculture, medicine and other fields. Genes from fungi, bacteria and viruses can be stitched into the genetic material of other micro-organisms or higher plants and animals, giving them abilities foreign to their nature. The genes being transferred can even be designed and created in laboratories. The technology is still mainly at the experimental stage but there have already been hundreds of releases of altered plants and animals into the environment, in Britain and other countries. Giving a crop plant improved resistance to a particular weed-killer is one common example. This weed-killer can then be used to allow the farmer a higher yield. But there are fears that the "foreign" genes could spread into other micro-organisms in the wild. Unlike higher animals, bacteria and viruses have the ability to swap genes between quite different species and they can multiply their numbers very rapidly. There is the possibility that the ability to resist pesticides might be transferred to destructive pests and disease species. Current controls on releases depend on expert committees covering medicine, agriculture and food giving advice to ministers on whether particular experiments should go ahead, case by case. Sir Crispin said the arrangements were "messy and badly co-ordinated". Central to its proposals, the panel asks the Government to bring together industrialists, academics, doctors, representatives of consumer and environmental groups, and independent experts, to consider a broader control regime covering both medicine and industrial/agricultural applications.
科技
2016-40/3983/en_head.json.gz/9043
3 Lessons From 5 Years Of Federal Data Center Consolidation Delays
Michael Biddick | 6/3/2013 05:20 PM

Maybe you don't need to close down 40% of your facilities, like the federal government. But you can learn from Uncle Sam's missteps. Download the entire June 10, 2013, issue of InformationWeek, distributed in an all-digital format (registration required).

As the largest IT spender globally, the U.S. government has amassed more than 3,100 data centers and, as of January, about $9.1 billion worth of applications -- with almost no sharing of resources across or even within agencies. Besides the cost to support and maintain these data centers, they consume an eye-popping amount of energy, initially estimated at 100 billion kilowatt-hours, or 2.5% of total U.S. electricity consumption. The good news is that the government is aware of the problem and has been working on it for years, via the Federal Data Center Consolidation Initiative, a plan launched in 2010 (and recently panned by the Government Accountability Office) to chip away at data center energy consumption and the cost of operations. The current plan is to close 1,253 facilities by the end of 2015, but this goal is far from reality -- unfortunately, a lot more time is still being spent on inventories and plans than on actually making data centers more efficient. That's not to say there's no progress; the government expects to save $3 billion within the next few years. But every agency is off to a slow start. In fact, after years of planning, only the Department of Commerce has been deemed to have a complete plan to tackle the problem. Obviously, data center consolidation is more difficult than anyone expected it to be, a reality I've seen firsthand while working on the effort. The problems aren't unique to the public sector. A lack of record keeping and plain old resistance to change are universal human traits. I've learned a few lessons that may help make enterprise data center consolidation efforts more efficient than the feds have thus far been.

1. Allocate time and resources to an inventory, but don't stop the presses.
Not surprisingly, one of the biggest technical challenges for the feds is figuring out what's inside those 3,000 or so data centers -- one agency had more than 8,000 applications. Decades of record keeping neglect have resulted in massive data-collection exercises that involve a combination of paper surveys (yes, really) and automated tools. The lesson is to put a detailed inventory process in place from the get-go to stop unmanaged growth before it eats your budget. But if it's too late for that, the next best thing is to start chipping away. Tempting as it is, don't wait for a perfect holistic picture that may never come into focus. You don't need a complete inventory before beginning a consolidation project or migrating some apps to the cloud. Early, steady progress helps justify the cost of the effort. Once you have an application mapped, decide immediately what to do with it. Can it be decommissioned and the function eliminated? If not, can it be purchased in a software-as-a-service model? Can it be run on a virtual machine in-house or in the public cloud?
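One way to picture that triage step: the sketch below is not from the article; the record fields, rules, and example applications are illustrative assumptions, but it shows how an inventoried application record could be run through exactly those three questions the moment it is mapped.

    # Illustrative triage sketch in Python -- field names and rules are
    # assumptions for illustration, not part of any federal template.
    def triage(app):
        """Return a disposition for one inventoried application record."""
        if not app.get("business_critical", False):
            return "decommission"             # retire the app and eliminate the function
        if app.get("saas_equivalent_available", False):
            return "replace with SaaS"        # buy the capability instead of hosting it
        if app.get("supports_virtualization", False):
            return "migrate to VM or cloud"   # consolidate onto shared infrastructure
        return "keep on dedicated hardware, revisit at next refresh"

    inventory = [
        {"name": "legacy COBOL reporting", "business_critical": False},
        {"name": "agency email", "business_critical": True, "saas_equivalent_available": True},
        {"name": "case management", "business_critical": True, "supports_virtualization": True},
    ]

    for app in inventory:
        print(app["name"], "->", triage(app))

Even a crude rule set like this enforces the "decide immediately" discipline the author recommends, instead of letting mapped applications pile up while the inventory grows.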
The government has developed an approach based on the Federal Enterprise Architecture that involves collecting inventory using a Software Asset Template (Word document download) to capture critical technical characteristics for major systems, from servers, operating systems and platforms to software engineering and databases/storage to delivery servers. Paper-based data-collection exercises can be effective in small projects, but they don't scale. More progressive agencies are using application-discovery or dependency-mapping tools, such as BMC's Atrium Discovery and Dependency Mapping and Riverbed's AppMapper Xpert, that connect at the network layer, do packet inspections for a few weeks and produce automated application inventory reports. This can cut months off the discovery process. Government agencies also tend to lack monitoring tools to track data center performance and application and energy usage. Don't make this mistake. Visibility into metrics is critical to relentlessly improving performance. 2. Aim for meaningful consolidation. Shuffling gear from one facility to another might save space, but there's a big difference between simple physical consolidation and truly making a data center more efficient. Most government facilities have way too many physical servers and very low utilization because they're packed with legacy applications and databases that don't support virtualization and can't be run in the cloud. Sound familiar? While targeting those applications for elimination makes sense, it's a wretched process. Never underestimate end users' desire to hang on to a legacy Cobol application. To achieve real efficiency, someone has to make the hard decision to retire applications that, while helpful, aren't critical. And that brings us to the most challenging angle. 3. Get personal. People hate change, especially when it puts them out of a job. In the past five years, I've seen both passive and active resistance to data center consolidation. It's a bigger obstacle than technology. Passive resistance might mean an admin is slow to respond to requests or raises objections, such as iffy security concerns, that limit the ability to capture inventory, migrate applications or execute plans. Worse, agency divisions vie to control the new consolidated data center and maintain control over their servers and apps. These turf battles are going to escalate as use of software-defined networking and private clouds increases. The hard reality is that when data centers shut down, part of the cost savings comes from reducing head count. CIOs must use budgets as a strategic tool. Avoid recapitalizing equipment so that when hardware dies, it's not replaced. Along the way, reduce the workforce voluntarily. Yes, you may lose top-notch people, who can easily find new jobs. Use short-term contracts to fill gaps or provide surge resources. This approach will also bring in some fresh perspectives. Eliminating a data center is a massive and traumatic undertaking. Instead of consolidation for its own sake, focus on making IT delivery as efficient as possible. @mbushong, User Rank: Strategist6/10/2013 | 5:29:45 PM re: 3 Lessons From 5 Years Of Federal Data Center Consolidation Delays Interesting to see the kind of scale the government operates at. These numbers were eye popping for me, and I have some experience working with government agencies on their networking purchases.I couldn't agree more with the need to end-of-life applications and gear. 
I believe that the industry position on perpetual growth, incrementally adding and adding stuff, is a killer. There is rarely a crisp business case for decommissioning things, but this article ought to be a cautionary tale of why you need to.Well done.-Mike BushongPlexxi Next-Gen Messaging & Team Collaboration - New Track!Making Skype for Business Work For Your EnterpriseGet Prepared for Big Data Breaches at GTEC
科技
2016-40/3983/en_head.json.gz/9059
IPCC Fourth Assessment Report: Climate Change 2007
Climate Change 2007: Working Group III: Mitigation of Climate Change
2.6.4 Equity consequences of different policy instruments

All sorts of climate change policies related to vulnerabilities, adaptation, and mitigation will have impacts on intra- and inter-generational equity. These equity impacts apply at the global, international, regional, national and sub-national levels. Article 3 of the UNFCCC (1992, sometimes referred to as 'the equity article') states that Parties should protect the climate system on the basis of equity and in accordance with their common but differentiated responsibilities and respective capabilities. Accordingly, the developed country Parties should take the lead in combating climate change and the adverse effects thereof. Numerous approaches exist in the climate change discourse on how these principles can be implemented. Some of these have been presented to policymakers (both formally and informally) and have been subject to rigorous analysis by academics, civil society and policymakers over long periods of time. The equity debate has major implications for how different stakeholders judge different instruments for reducing greenhouse gases (GHG) and for adapting to the inevitable impacts of climate change. With respect to the measures for reducing GHGs, the central equity question has focused on how the burden should be shared across countries (Markandya and Halsnaes, 2002b; Agarwal and Narain, 1991; Baer and Templet, 2001; Shukla, 2005). On a utilitarian basis, assuming declining marginal utility, the case for the richer countries undertaking more of the burden is strong – they are the ones for whom the opportunity cost of such actions has the smallest welfare impact. However, assuming constant marginal utility, one could come to the conclusion that the costs of climate change mitigation that richer countries will face are very large compared with the benefits of the avoided climate change damages in poorer countries. In this way, utilitarian-based approaches can lead to different conclusions, depending on how welfare losses experienced by poorer people are represented in the social welfare function. Using a 'rights' basis it would be difficult to make the case for the poorer countries to bear a significant share of the burden of climate change mitigation costs. Formal property rights for GHG emissions allowances are not defined, but based on justice arguments equal allocation to all human beings has been proposed. This would give more emissions rights to developing countries – more than the level of GHGs they currently emit. Hence such a rights-based allocation would impose more significant costs on the industrialized countries, although now, as emissions in the developing world increased, they too, at some point in time, would have to undertake some emissions reductions. The literature includes a number of comparative studies on equity outcomes of different international climate change agreements. Some of these studies consider equity in terms of the consequences of different climate change policies, while others address equity in relation to rights that nations or individuals should enjoy in relation to GHG emission and the global atmosphere. Equity concerns have also been addressed in a more pragmatic way as a necessary element in international agreements in order to facilitate consensus.
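To make the role of the welfare function concrete, consider an illustrative utilitarian formulation (a sketch for exposition only, not a formula used in this chapter): social welfare is

W = \sum_i N_i \, \frac{c_i^{1-\eta}}{1-\eta}, \qquad \eta > 0,\ \eta \neq 1,

where c_i is per capita consumption in region i, N_i its population, and \eta the elasticity of marginal utility. With \eta > 0 (declining marginal utility), a unit of mitigation cost borne by a high-consumption region reduces W by less than the same cost borne by a low-consumption region, which is the formal sense in which the utilitarian case favours richer countries carrying more of the burden. Setting \eta = 0 (constant marginal utility, U(c) = c) collapses W to an unweighted sum of consumption, so only aggregate costs and benefits matter, and mitigation whose costs fall mainly on rich countries while the avoided damages accrue mainly to poorer ones can then appear unattractive.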
Müller (2001) discusses fairness of emission allocations and that of the burden distribution that takes all climate impacts and reduction costs into consideration and concludes that there is no solution that can be considered as the right and fair one far out in the future. The issue is rather to agree on an acceptable ‘fairness harmonization procedure’, where an emission allocation is initially chosen and compensation payments are negotiated once the costs and benefits actually occur. Rose et al. (1998) provide reasons why equity considerations are particularly important in relation to climate change agreements. First, country contributions will depend on voluntary compliance and it must therefore be expected that countries will react according to what they consider to be fair,[25] which will be influenced by their understanding of equity. Second, appealing to global economic efficiency is not enough to get countries together, due to the large disparities in current welfare and in welfare changes implied by efficient climate policies. Studies that focus on the net costs of climate change mitigation versus the benefits of avoided climate change give a major emphasis to the economic consequences of the policies, while libertarian-oriented equity studies focus on emission rights, rights of the global atmosphere, basic human living conditions etc. (Wesley and Peterson, 1999). Studies that focus on the net policy costs will tend to address equity in terms of a total outcome of policies, while the libertarian studies focus more on initial equity conditions that should be applied to ex ante emission allocation rules, without explicitly taken equity consequences into consideration. Given the uncertainties inherent in climate change impacts and their economic and social implications, it is difficult to conduct comprehensive and reliable consequence studies that can be used for an ex ante determination of equity principles for climate change agreements. Furthermore, social welfare functions and other value functions, when applied to the assessment of the costs and benefits of global climate change policies, run into a number of crucial equity questions. These include issues that are related to the asymmetry between the concentration of major GHG emission sources in industrialized countries and the relatively large expected damages in developing countries, the treatment of individuals with different income levels in the social welfare function, and a number of inter-generational issues. Rights-based approaches have been extensively used as a basis for suggestions on structuring international climate change agreements around emission allocation rules or compensation mechanisms. Various allocation rules have been examined, including emissions per capita principles, emissions per GDP, grandfathering, liability-based compensation for climate change damages etc. These different allocation rules have been supported with different arguments and with reference to equity principles. An overview and assessment of the various rights-based equity principles and their consequences on emission allocations and costs are included in Rose et al. (1998), Valliancourt and Waaub (2004), Leimbach (2003), Tol and Verheyen (2004) and Panayotou et al. (2002). While there is consensus in the literature about how rules should be assessed in relation to specific moral criteria, there is much less agreement on what criteria should apply (e.g. 
should they be based on libertarian or egalitarian rights-based approaches, or on utilitarian approaches). A particular difficulty in establishing international agreements on emission allocation rules is that the application of equity in this ex ante way can imply the very large transfer of wealth across nations or other legal entities that are assigned emission quotas, at a time where abatement costs, as well as climate change impacts, are relatively uncertain (Halsnæs and Olhoff, 2005). These uncertainties make it difficult for different parties to assess the consequences of accepting given emission allocation rules and to balance emission allocations against climate damages suffered in different parts of the world (Panayotou et al., 2002). Practical discussions about equity questions in international climate change negotiations have reflected, to a large extent, specific interests of various stakeholders, more than principal moral questions or considerations about the vulnerability of poorer countries. Arguments concerning property rights, for example, have been used by energy-intensive industries to advocate emission allocations based on grandfathering principles that will give high permits to their own stakeholders (that are large past emitters), and population-rich countries have, in some cases, advocated that fair emission allocation rules imply equal per capita emissions, which will give them high emission quotas. Vaillancourt and Waaub (2004) suggest designing emission allocation criteria on the basis of the involvement of different decision-makers in selecting and weighing equity principles for emission allocations, and using these as inputs to a multi-criteria approach. The criteria include population basis, basic needs, polluter pays, GDP intensity, efficiency and geographical issues, without a specified structure on inter-relationships between the different areas. In this way, the approach primarily facilitates the involvement of stakeholders in discussions about equity. ^ What countries consider as ‘fair’ may be in conflict with their narrow self-interest. Hence there is a problem with resolving the influence of these two determinants of national contributions to reducing GHGs. One pragmatic element in the resolution could be that the difference between the long-term self interest and what is fair is much smaller than that between narrow self-interest and fairness.AR4 Reports | Contents | Top of page | Previous page | Next page
科技
2016-40/3983/en_head.json.gz/9071
LG's G Watch is serviceable, but not a standout

The LG G Watch is a simple slab of metal with slightly rounded-off edges. Image credit: IDG News Service/Yewon Kang

LG's new entrant into the burgeoning smartwatch market boasts simple-to-use design

Buying a smartwatch means adding yet another gadget to the arsenal of devices you already use on a daily basis. But we should expect a well-designed smartwatch to make our hyperconnected lives more manageable by giving us access to a range of features and applications in a more readily accessible manner than smartphones, tablets or PCs do.

LG's G Watch may not be as stylish as other entrants in the smartwatch market, but by pairing it with a mobile phone, I found that it does a serviceable job of offering alerts and app notifications, relieving me of, for example, having to call up email to see whether the full text of an incoming message needs to be read right away.

The G Watch, which runs Android Wear and is priced at $229 in the U.S., is available for order in 27 different countries starting Tuesday, though actual shipping dates vary depending on the market (in the U.S., availability is set for July 11).

The watch is a squarish 37.9 millimeter-by-46.5 millimeter (about 1.5 inch by 1.8 inch) slab with a black screen -- which could be considered either boring or nice and simple, depending on your taste. Even though it's fairly lightweight, weighing in at 63 grams (2.2 ounces), it felt a bit bulky, as if a 9.95 millimeter thick, flat battery pack was sitting on my wrist.

The colors available are "Black Titan" or "White Gold." If you don't like the feel of the rubber-based strap, it can be replaced with other standard 22-mm watch bands.

The watch runs on a 400mAh battery, which lasted a whole day for me. It would be great if battery life were longer but the G Watch has a nice charging solution: it snaps onto a magnetic cradle, which plugs into a wall socket with a microUSB cable.

The display stays always-on unless you power it off. Although the G Watch's 280 pixel by 280 pixel display is not up to the specs of rivals like the Samsung Gear Live, I didn't find that too bothersome. That's because, for how I used the watch, I didn't really end up having to closely read the screen that often. One of the main reasons I'd wear a smartwatch is to screen email. For example, when a new-email notification appears on the watch, I glance at a line of the email to see whether I need to read it right away. When I do end up reading the full text of the email, I do it on my phone.

Probably the top selling point of the G Watch is that it is one of the first entrants into the smartwatch market that takes advantage of the advanced functions of Android Wear, the new extension to Google's mobile operating system that's been customized for smaller screens. For example, you can use the "OK Google" command to access a voice-controlled intelligent assistant similar to Apple's Siri.

However, if you're thinking about buying a smartwatch, there are some caveats you should be aware of if you're considering the G Watch.
OK Google worked fairly well for simple requests such as asking what the weather will be or what time the next World Cup game starts. For some other tasks, though, it did not quite work. For example, when I tried to create and send email using voice commands, a pause or a stutter seemed to throw it off; often it sent messages before I was actually finished with them.

In addition, it didn't recognize any foreign names on my contact list, and often misunderstood them.

If you don't pair the G Watch with an Android-powered smartphone, it is essentially just another digital watch. It pairs with mobile phones via Bluetooth so if you want access to the full range of Android Wear features, you need to carry the paired mobile phone in your pocket, or leave it on the desk when you're working. I found that if I left my paired mobile phone on a table in the living room, the watch lost connectivity when I walked into another room.

LG says the watch is waterproof, a claim I tested by running water on the watch for a few minutes. The watch survived but lost contact with the smartphone during the time it came into contact with water.

Ultimately, if you want a device that taps Android Wear's basic voice control functions and its ability to serve app notifications and Google Now alerts, the G Watch does the job, and offers specs similar to those of the Samsung Gear. The G Watch's design and feel may not be to everyone's taste, though, and for the price, you might do better looking elsewhere, especially as more smartwatches come onto the market.

Yewon Kang
科技
2016-40/3983/en_head.json.gz/9082
Robot startup co-founder eyes hidden local talent
Bloomberg | Jan 10, 2014

Takashi Kato, co-founder of the robot venture Schaft Inc. bought by Google Inc. in November, has opened a fund to invest in technologies from Japanese startups and universities that have been overlooked by investors. Kato's 246 Capital plans to raise about ¥2 billion in the next six months mostly from wealthy Japanese, he said during an interview Thursday in Tokyo. The fund will focus on biotechnology and energy efficiency. Kato, 35, struggled to get funding for Schaft even though it had some of the best humanoid robot technology in the world, which persuaded him there are other similar opportunities in Japan. He was turned down by 10 Japanese investment firms and ultimately got funding from the U.S. government. "A slew of intelligent scientists and engineers is out there in Japan, thirsty for money to develop next-generation technologies that could make our lives easier," said Kato. Kato's 246 Capital may seek to raise as much as ¥10 billion for an additional fund in the next five years. U.S.-based Google acquired seven companies for a robotics project led by Andy Rubin, former head of the Android software unit, as the world's largest online search provider pushes beyond its roots. The effort, which included the purchase of Tokyo-based Schaft, came after Rubin stepped down in March from Android, which he built into a leading smartphone operating system.
科技
2016-40/3983/en_head.json.gz/9118
Water surplus in Israel? With desalination, once unthinkable is possible
By Ben Sales | May 28, 2013 3:46pm

Water from the Mediterranean Sea rushes through pipes en route to being filtered for use across Israel in a process called desalination, which could soon account for 80 percent of the country's potable water. (Ben Sales/JTA)

PALMACHIM, Israel (JTA) – As construction workers pass through sandy corridors between huge rectangular buildings at this desalination plant on Israel's southern coastline, the sound of rushing water resonates from behind a concrete wall. Drawn from deep in the Mediterranean Sea, the water has flowed through pipelines reaching almost 4,000 feet off of Israel's coast and, once in Israeli soil, buried almost 50 feet underground. Now, it rushes down a tube sending it through a series of filters and purifiers. After 90 minutes, it will be ready to run through the faucets of Tel Aviv. Set to begin operating as soon as next month, Israel Desalination Enterprises Technologies' Sorek Desalination Plant will provide up to 26,000 cubic meters – or nearly 7 million gallons – of potable water to Israelis every hour. When it's at full capacity, it will be the largest desalination plant of its kind in the world. "If we didn't do this, we would be sitting at home complaining that we didn't have water," said Raphael Semiat, a member of the Israel Desalination Society and professor at Israel's Technion-Israel Institute of Technology. "We won't be dependent on what the rain brings us. This will give a chance for the aquifers to fill up." The new plant and several others along Israel's coast are part of the country's latest tactic in its decades-long quest to provide for the nation's water needs. Advocates say desalination – the removal of salt from seawater – could be a game-changing solution to the challenges of Israel's famously fickle rainfall. Instead of the sky, Israel's thirst may be quenched by the Mediterranean's nearly infinite, albeit salty, water supply. Until the winter of 2011-'12, water shortages were a dire problem for Israel; the country had experienced seven straight years of drought beginning in 2004. The Sea of Galilee (also known as Lake Kinneret), a major freshwater source and barometer of sorts for Israel's water supply, fell to dangerous lows. The situation got so severe that the government ran a series of commercials featuring celebrities, their faces cracking from dryness, begging Israelis not to waste any water. Even as the Sea of Galilee has returned almost to full volume this year, Israeli planners are looking to desalination as a possible permanent solution to the problem of drought. Some even anticipate an event that was once unthinkable: a water surplus in Israel. IDE opened the first major desalination plant in the country in the southern coastal city of Ashkelon in 2005, following success with a similar plant in nearby Cyprus. With Sorek, the company will own three of Israel's four plants, and 400 plants in 40 countries worldwide. The company's U.S. subsidiary is designing a new desalination plant in San Diego, the $922 million Carlsbad Desalination Project, which will be the largest desalination plant in America. In Israel, desalination provides 300 million cubic meters of water per year – about 40 percent of the country's total water needs.
That number will jump to 450 million when Sorek opens, and will hit nearly 600 million as plants expand in 2014, providing up to 80 percent of Israel’s potable water. Like Israel’s other plants, Sorek will work through a process called Seawater Reverse Osmosis that removes salt and waste from the Mediterranean’s water. A prefiltration cleansing process clears waste out of the flow before the water enters a series of smaller filters to remove virtually all the salt. After moving through another set of filters that remove boron, the water passes through a limestone filter that adds in minerals. Then, it enters Israel’s water pipes. Semiat says desalination is a virtually harmless process that can help address the water needs prompted by the world’s growing population and rising standard of living. “You take water from the deep sea, from a place that doesn’t bother anyone,” he said. But desalination is not without its critics. Some environmentalists question whether the process is worth its monetary and environmental costs. One cubic meter of desalinated water takes just under 4 kWh to produce – that’s the equivalent of burning 40 100-watt light bulbs for one hour to produce the equivalent of five bathtubs full of water. Freshwater doesn’t have that cost. Giora Shaham, a former long-term planner at Israel’s Water Authority and a critic of Israel’s current desalination policy, said that factories like Sorek could be a waste because if there is adequate rainfall the desalination plants will produce more water than Israel needs at a cost that is too high. Then, surplus water may be wasted, or international bodies like the United Nations could pressure Israel to distribute it for free to unfriendly neighboring countries, Shaham said. Rows of filters at the Sorek Desalination Plant in Israel remove salt from water flowing in from the Mediterranean Sea. (Ben Sales/JTA) “There was a long period of drought where there wasn’t a lot of rain, so everyone was in panic,” Shaham said. “Instead of cutting back until there is rain, they made decisions to produce too much.” Fredi Lokiec, executive vice president for special projects at IDE, says the risks are greater without major desalination efforts. Israel is perennially short on rainfall, and depending on freshwater could further deplete Israel’s rivers. “We’ll always be in the shadow of the drought,” Lokiec said, but drawing from the Mediterranean is like taking “a drop from the ocean.” Some see a water surplus as an opportunity. Orit Skutelsky, water division manager at the Society for the Protection of Nature in Israel, says desalinated water could free up freshwater to refill Israel’s northern streams and raise the level of the Sea of Galilee. “There’s no way we couldn’t have done this,” she said of desalination. “It was the right move. Now we need to let water flow again to the streams.” Never miss breaking news and other must-read features. Like JTA on Facebook » Next: Syrian rebels apparently fire on Hezbollah > Featured Stories
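As a rough cross-check of those figures, the arithmetic below uses only the numbers quoted in this article, with "just under 4 kWh" rounded up to 4, so it is an upper-end estimate rather than an official projection:

    # Back-of-the-envelope energy estimate from the article's figures.
    kwh_per_m3 = 4.0            # "just under 4 kWh" per cubic meter of desalinated water
    annual_m3 = 600_000_000     # ~600 million cubic meters per year once plants expand in 2014

    annual_kwh = kwh_per_m3 * annual_m3
    print(f"Roughly {annual_kwh / 1e9:.1f} billion kWh per year")   # about 2.4 billion kWh

At that scale, desalination's electricity demand runs to billions of kilowatt-hours per year, which is the trade-off Shaham and the environmental critics are weighing against the security of a rain-independent supply.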
科技