| text | id | metadata |
|---|---|---|
Each year more than 4 million homeless pets are killed as a result of overpopulation, but families who adopt from animal shelters or rescue groups can help preserve these lives and support the growing trend of socially responsible holiday shopping. Best Friends Animal Society encourages families this holiday season to give the precious gift of life by adopting homeless pets rather than buying from breeders, pet stores or online retailers.
Also, resist the urge to surprise a friend or family member with a living gift. Choosing the right pet is an extremely personal decision, one that should be made carefully by the adults who will be caring for the animal for its 15- to 20-year lifetime. Instead, offer an adoption gift certificate paired with a basket of pet care items or stuffed animal for the holiday itself, and then let the person or family choose the actual pet that feels right to them.
Once you’ve decided to adopt, keep in mind that welcoming a pet into your life is a big decision and requires important preparation. Best Friends offers tips and advice to help make a smooth transition at home:
* Determine roles and responsibilities – Before bringing home a new pet, discuss what roles and responsibilities each family member will take on. Who will be in charge of feeding, walks, changing the litter box and taking your pet for regular visits to the vet? Giving each family member a specific task will help everyone feel involved, especially young children.
* Prep the house – Adding a pet to the house means adding new items to your shopping lists. For dogs, the basics are a collar and leash, chew toys, a kennel and dog bed. Cats need a litter box and litter, a scratching post and a carrying crate for transportation. Also don’t forget food and toys.
* Have your pet spayed/neutered – Spaying or neutering is one of the greatest gifts you can provide your pet and community. It not only helps control the overabundance of pets, but can also help prevent medical and behavioral problems from developing. Most shelters include this with the adoption package or can recommend a local veterinarian in your area, so check with the staff at the shelter before you leave.
* Research community rules and resources – Do a little research on what identification (tags, microchips, etc.) you might need for your pet. Scout out the local dog parks and runs for future outdoor fun, and make sure you know where emergency vet clinics or animal hospitals are located.
* Set limits – Having pre-determined rules will create consistency in training and help make the home a pleasant environment for you and your pet. Will your pet be allowed to snuggle with you in bed or curl up with you on your furniture? Will treats be limited to one a day? It’s important to discuss these questions as a family before your new family member arrives.
An estimated 17 million people will be adding pets to their families this year, so this season, help bring some holiday cheer to a homeless pet by adopting your newest companion. | <urn:uuid:cefe5610-f57c-40d2-938e-5283619dbcb4> | {
"date": "2013-05-18T06:29:12",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9344787001609802,
"score": 2.59375,
"token_count": 614,
"url": "http://queensledger.com/view/full_story/20992104/article-The-holiday-gift-that-keeps-on-giving--opt-to-adopt-a-pet--save-a-life?instance=pets"
} |
Taking Play Seriously
By ROBIN MARANTZ HENIG
Published: February 17, 2008
On a drizzly Tuesday night in late January, 200 people came out to hear a psychiatrist talk rhapsodically about play -- not just the intense, joyous play of children, but play for all people, at all ages, at all times. (All species too; the lecture featured touching photos of a polar bear and a husky engaging playfully at a snowy outpost in northern Canada.) Stuart Brown, president of the National Institute for Play, was speaking at the New York Public Library's main branch on 42nd Street. He created the institute in 1996, after more than 20 years of psychiatric practice and research persuaded him of the dangerous long-term consequences of play deprivation. In a sold-out talk at the library, he and Krista Tippett, host of the public-radio program ''Speaking of Faith,'' discussed the biological and spiritual underpinnings of play. Brown called play part of the ''developmental sequencing of becoming a human primate. If you look at what produces learning and memory and well-being, play is as fundamental as any other aspect of life, including sleep and dreams.''
The message seemed to resonate with audience members, who asked anxious questions about what seemed to be the loss of play in their children's lives. Their concern came, no doubt, from the recent deluge of eulogies to play. Educators fret that school officials are hacking away at recess to make room for an increasingly crammed curriculum. Psychologists complain that overscheduled kids have no time left for the real business of childhood: idle, creative, unstructured free play. Public health officials link insufficient playtime to a rise in childhood obesity. Parents bemoan the fact that kids don't play the way they themselves did -- or think they did. And everyone seems to worry that without the chance to play stickball or hopscotch out on the street, to play with dolls on the kitchen floor or climb trees in the woods, today's children are missing out on something essential.
The success of ''The Dangerous Book for Boys'' -- which has been on the best-seller list for the last nine months -- and its step-by-step instructions for activities like folding paper airplanes is testament to the generalized longing for play's good old days. So were the questions after Stuart Brown's library talk; one woman asked how her children will learn trust, empathy and social skills when their most frequent playing is done online. Brown told her that while video games do have some play value, a true sense of ''interpersonal nuance'' can be achieved only by a child who is engaging all five senses by playing in the three-dimensional world.
This is part of a larger conversation Americans are having about play. Parents bobble between a nostalgia-infused yearning for their children to play and fear that time spent playing is time lost to more practical pursuits. Alarming headlines about U.S. students falling behind other countries in science and math, combined with the ever-more-intense competition to get kids into college, make parents rush to sign up their children for piano lessons and test-prep courses instead of just leaving them to improvise on their own; playtime versus résumé-building.
Discussions about play force us to reckon with our underlying ideas about childhood, sex differences, creativity and success. Do boys play differently than girls? Are children being damaged by staring at computer screens and video games? Are they missing something when fantasy play is populated with characters from Hollywood's imagination and not their own? Most of these issues are too vast to be addressed by a single field of study (let alone a magazine article). But the growing science of play does have much to add to the conversation. Armed with research grounded in evolutionary biology and experimental neuroscience, some scientists have shown themselves eager -- at times perhaps a little too eager -- to promote a scientific argument for play. They have spent the past few decades learning how and why play evolved in animals, generating insights that can inform our understanding of its evolution in humans too. They are studying, from an evolutionary perspective, to what extent play is a luxury that can be dispensed with when there are too many other competing claims on the growing brain, and to what extent it is central to how that brain grows in the first place.
Scientists who study play, in animals and humans alike, are developing a consensus view that play is something more than a way for restless kids to work off steam; more than a way for chubby kids to burn off calories; more than a frivolous luxury. Play, in their view, is a central part of neurological growth and development -- one important way that children build complex, skilled, responsive, socially adept and cognitively flexible brains.
Their work still leaves some questions unanswered, including questions about play's darker, more ambiguous side: is there really an evolutionary or developmental need for dangerous games, say, or for the meanness and hurt feelings that seem to attend so much child's play? Answering these and other questions could help us understand what might be lost if children play less. | <urn:uuid:316c7af5-14e1-4d0b-9576-753e17ef2cc5> | {
"date": "2013-05-18T06:21:47",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9614589214324951,
"score": 2.5625,
"token_count": 1055,
"url": "http://query.nytimes.com/gst/fullpage.html?res=9404E7DA1339F934A25751C0A96E9C8B63&scp=2&sq=taking%20play%20seriously&st=cse"
} |
Financial Accounting - CH 1 & 2
|Four Principal Activities of Business Firms:|| 1. Establishing goals and strategies|
2. Obtaining financing
3. Making investments
4. Conducting operations
|What are the 2 sources Financing comes from?|| 1. Owners|
2. Creditors
|Investments are made in the following:|| 1. Land, buildings, equipment|
2. Patents, licenses, contractual rights
3. Stock and bonds of other organizations
5. Accounts Receivable
|What are the 4 areas for conducting operations?|| 1. Purchasing|
|What are the 4 commonly used conventions in financial statements?|| 1. The accounting period|
2. The number of reporting periods
3. The monetary amounts
4. The terminology and level of detail in the financial statements
|Common Financial Reporting Conventions, Accounting Period||The length of time covered by the financial statements. (The most common interval for external reporting is the fiscal year).|
|Common Financial Reporting Conventions, Number of reporting periods||The number of reporting periods included in a given financial statement presentation, Both U.S. GAAP and IFRS require firms to include results for multiple reporting periods in each report.|
|Common Financial Reporting Conventions, Monetary amounts||This includes measuring units, like thousands, millions, or billions, and the currency, such as dollars ($), euros (€), or Swedish kronor (SEK)|
|Common Financial Reporting Conventions, Terminology and level of detail in the financial statements||U.S. GAAP and IFRS contain broad guidance on what the financial statements must contain, but neither system completely specifies the level of detail or the names of accounts. Therefore, some variation occurs.|
|Characteristics of a Balance Sheet||A Balance Sheet:|
1. is also known as a statement of financial position;
2. provides information at a point in time;
3. lists the firm's assets, liabilities, and shareholders' equity and provides totals and subtotals; and
4. can be represented as the Basic Accounting Equation.
Assets = Liabilities + Shareholders' Equity
|Accounting Equation Components|| 1. Assets|
2. Liabilities
3. Shareholders' Equity
|Assets|| Assets are economic resources with the potential to provide future economic benefits to a firm. |
Examples: Cash, Accounts Receivable, Inventories, Buildings, Equipment, intangible assets (like Patents)
|Liabilities|| Liabilities are creditors' claims for funds, usually because they have provided funds, or goods and services, to the firm.|
Examples: Accounts Payable, Unearned Income, Notes Payable, Accrued Salaries
|Shareholders' Equity|| Shareholders' Equity shows the amounts of funds owners have provided and, in parallel, their claims on the assets of a firm. |
Examples: Common Stock, Contributed Capital, Retained Earnings
|What are the separate sections on a Balance Sheet (Balance sheet classification)||1. Current assets represent assets that a firm expects to turn into cash, or sell, or consume within approximately one year from the date of the balance sheet (i.e., accounts receivable and inventory).|
2. Current liabilities represent obligations a firm expects to pay within one year (i.e., accounts payable and salaries payable).
3. Non-current assets are typically held and used for several years (i.e., land, buildings, equipment, patents, long-term security investments).
4. Noncurrent liabilities and shareholders' equity are sources of funds where the supplier of funds does not expect to receive them all back within the next year.
|Income Statement||1. Sometimes called the statement of profit and loss by firms applying IFRS|
2. Provides information on profitability
3. May use the terms net income, earnings, and profit interchangeably
4. Reports amounts for a period of time
5. Typically one year
6. Is represented by the Basic Income Equation:
Net Income = Revenues - Expenses
|Revenues||(also known as sales, sales revenue, or turnover, a term used by some firms reporting under IFRS) measure the inflows of assets (or reductions in liabilities) from selling goods and providing services to customers.|
|Expenses||measure the outflow of assets (or increases in liabilities) used in generating revenues.|
|Relationship between the Balance Sheet and the Income Statement|| 1. The income statement links the balance sheet at the beginning of the period with the balance sheet at the end of the period.|
2. Retained Earnings is increased by net income and decreased by dividends.
|Statement of Cash Flows|| The statement of cash flows (also called the|
cash flow statement) reports information about
cash generated from or used by:
2. investing, and
3. financing activities during specified time periods.
The statement of cash flows shows where the firm obtains or generates cash and where it spends or uses cash.
|Classification of Cash Flows|| 1. Operations: |
cash from customers less cash paid in carrying out the firm's operating activities
2. Investing:
cash paid to acquire noncurrent assets less amounts from any sale of noncurrent assets
3. Financing:
cash from issues of long-term debt or new capital less dividends
|Inflows and Outflows of Cash|
|The Relationship of the Statement of Cash Flows to the Balance Sheet and Income Statement||-The statement of cash flows explains the change in cash between the beginning and the end of the period, and separately displays the changes in cash from operating, investing, and financing activities.|
-In addition to sources and uses of cash, the statement of cash flows shows the relationship between net income and cash flow from operations.
|Statement of Shareholders' Equity||This statement displays components of shareholders' equity, including common shares and retained earnings, and changes in those components.|
|Other Items in Annual Reports||Financial reports provide additional explanatory material in the schedules and notes to the financial statements.|
|Who are the 4 main groups of people involved with the Financial Reporting Process|| 1. Managers and governing boards of reporting entities.|
2. Accounting standard setters
and regulatory bodies.
3. Independent external auditors.
4. Users of financial statements.
|What is the Securities and Exchange Commission (SEC)?||An agency of the federal government, that has the legal authority to set acceptable accounting standards and enforce securities laws.|
|What is the Financial Accounting Standards Board (FASB)?||a private-sector body comprising five voting members, to whom the SEC has delegated most tasks of U.S. financial accounting standard-setting.|
|GAAP||1. Common terminology includes the pronouncements of the FASB (and its predecessors) in the compilation of accounting rules, procedures, and practices known as generally accepted accounting principles (GAAP).|
2. Recently, the FASB launched its codification project which organizes all of U.S GAAP by topic (for example, revenues), eliminates duplications, and corrects inconsistencies.
|FASB board members make standard-setting decisions guided by a conceptual framework that addresses:|| 1. Objectives of financial reporting.|
2. Qualitative characteristics of accounting information including the relevance, reliability, and comparability of data.
3. Elements of the financial statements.
4. Recognition and measurement issues.
|Sarbanes-Oxley Act of 2002.|| Concerns over the quality of financial reporting have led, and continue to lead, to government initiatives in the United States.|
Sarbanes-Oxley Act of 2002 established the Public Company Accounting Oversight Board (PCAOB), which is responsible for monitoring the quality of audits of SEC registrants.
|International Financial Reporting Standards (IFRS)||-The International Accounting Standards Board (IASB) is an independent accounting standard-setting entity with 14 voting members from a number of countries. Standards set by the IASB are International Financial Reporting Standards (IFRS).|
-The FASB and IASB Boards are working toward converging their standards, based on an agreement reached in 2002 and updated since then.
|Auditor's Opinion||Firms whose common stock is publicly traded are required to get an opinion by an independent auditor who:|
1.Assesses the effectiveness of the firm's internal control system for measuring and reporting business transactions
2.Assesses whether the financial statements and notes present fairly a firm's financial position, results of operations, and cash flows in accordance with generally accepted accounting principles
|Basic Accounting Conventions and Concepts||1. Materiality is the qualitative concept that financial reports need not include items that are so small as to be meaningless to users of the reports.|
2. The accounting period convention refers to the uniform length of accounting reporting periods.
3. Interim reports are often prepared for periods shorter than a year. However, preparing interim reports does not eliminate the need to prepare an annual report.
|Cash vs. Accrual Accounting||Cash basis|
A firm measures performance from selling goods and providing services as it receives cash from customers and makes cash expenditures to providers of goods and services.
Accrual basis
A firm recognizes revenue when it sells goods or renders services and recognizes expenses in the period when the firm recognizes the revenues that the costs helped produce.
|What Is an Account? How Do You Name Accounts?||-An account represents an amount on a line of a balance sheet or income statement (i.e., cash, accounts receivable, etc.).|
-There is not a master list to define these accounts since they are customized to fit each specific business's needs.
-Accountants typically follow a conventional naming system for accounts, which increases communication.
|What Accounts Make up the Typical Balance Sheet?|
|Current assets and current liabilities (Balance Sheet Classifications)||Receipt or payment of assets that the firm expects will occur within one year or one operating cycle.|
|Noncurrent assets and noncurrent liabilities (Balance Sheet Classifications)||Firm expects to collect or pay these more than one year after the balance sheet date.|
|Duality Effects of the Balance Sheet Equation (Assets = Liabilities + Shareholders' Equity)||Any single event or transaction will have one of the following four effects or some combination of these effects (a small worked sketch follows this list):|
1.INCREASE an asset and INCREASE either a liability or shareholders' equity.
2.DECREASE an asset and DECREASE either a liability or shareholders' equity.
3.INCREASE one asset and DECREASE another asset.
4.INCREASE one liability or shareholders' equity and DECREASE another liability or shareholders' equity.
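The sketch below is not part of the original flashcards; it uses invented amounts to show that each of the four duality effects above leaves Assets = Liabilities + Shareholders' Equity in balance.

```python
# Toy general ledger: verify the balance sheet equation after each duality effect.
accounts = {
    "Cash": 50_000, "Equipment": 0,        # assets
    "Notes Payable": 20_000,               # liabilities
    "Common Stock": 30_000,                # shareholders' equity
}
ASSETS = ("Cash", "Equipment")
CLAIMS = ("Notes Payable", "Common Stock")  # liabilities + shareholders' equity

def in_balance() -> bool:
    return sum(accounts[a] for a in ASSETS) == sum(accounts[c] for c in CLAIMS)

def post(**changes) -> None:
    """Apply a transaction (account name -> change) and assert the equation still holds."""
    for name, amount in changes.items():
        accounts[name.replace("_", " ")] += amount
    assert in_balance(), accounts

post(Cash=+10_000, Notes_Payable=+10_000)        # 1. asset up, liability up (borrow on a note)
post(Cash=-5_000, Notes_Payable=-5_000)          # 2. asset down, liability down (repay part of it)
post(Equipment=+8_000, Cash=-8_000)              # 3. one asset up, another asset down (buy equipment for cash)
post(Notes_Payable=-3_000, Common_Stock=+3_000)  # 4. one claim down, another claim up (convert debt to stock)

print(accounts, "balanced:", in_balance())
```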
A T-account is a device or convention for organizing and accumulating the accounting entries of transactions that affect an individual account, such as Cash, Accounts Receivable, Bonds Payable, or Additional Paid-in Capital.
|T-Account Conventions: Assets|
|T-Account Conventions: Liabilities|
|T-Account Conventions: Shareholders' Equity|
|Debit vs. Credit|
While T-accounts are useful to help analyze how individual transactions flow and accumulate within various accounts, journal entries formalize the reasoning that supports the transaction.
The attached standardized format indicates the accounts and amounts, with debits on the first line and credits (indented) on the second line:
| Revenue or Sales:|
(Common Income Statement Terms)
|Assets received in exchange for goods sold and services rendered.|
| Cost of Goods Sold:|
(Common Income Statement Terms)
|The cost of products sold.|
| Selling, General, and Administrative (SG&A):|
(Common Income Statement Terms)
|Costs incurred to sell products/services as well as costs of administration.|
| Research and Development (R&D) Expense:|
(Common Income Statement Terms)
|Costs incurred to create/develop new products, processes, and services.|
| Interest Income:|
(Common Income Statement Terms)
|Income earned on amounts lent to others or from investments in interest-yielding securities.|
|Unique Relationships Exist Between the Balance Sheet and the Income Statement|
|Important Account Differences||1. Balance sheet accounts are permanent accounts in the sense that they remain open, with nonzero balances, at the end of the reporting period.|
2. In contrast, income statement accounts are temporary accounts in the sense that they start a period with a zero balance, accumulate information during the reporting period, and have a zero balance at the end of the reporting period.
|The Financial Statement Relationships can be summarized as:|
-After preparing the end-of-period income statement, the accountant transfers the balance in each temporary revenue and expense account to the Retained Earnings account.
-This procedure is called closing the revenue and expense accounts. After transferring to Retained Earnings, each revenue and expense account is ready to begin the next period with a zero balance.
|Expense and Revenue Transactions|
|Dividend Declaration and Payment|
|Issues of Capital Stock|
|Posting||1. After each transaction is recognized by a journal entry, the information is transferred in the accounting system via an activity known as posting.|
2. The balance sheet ledger accounts (or permanent accounts) where these are posted begin each period with a balance equal to the ending balance of the previous period.
3.The income statement ledger accounts (or temporary accounts) have zero beginning balances.
|Adjusting Entries|| There are some journal entries that are not triggered by a transaction or exchange.|
-Rather, journal entries known as adjusting entries, result from the passage of time at the end of an accounting period or are used to correct errors (more commonly known as correcting entries).
|Four Basic Types of Adjusting Entries|| 1.Unearned Revenues|
|Closing Process||1. After adjusting and correcting entries are made, the income statement can be prepared.|
2. Once completed, it is time to transfer the balance in each temporary revenue and expense account to the Retained Earnings account. This is known as the closing process.
3. Each revenue account is reduced to zero by debiting it and each expense account is reduced to zero by crediting it.
4. The offset account—Retained Earnings—is credited for the amount of total revenues and debited for the amount of total expenses.
5. Thus, the balance of ending Retained Earnings for a period shows the difference between total revenues and total expenses.
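A minimal sketch of these closing entries, using invented balances (not from the flashcards):

```python
# Temporary (income statement) accounts at period end -- all amounts invented.
revenues = {"Sales Revenue": 120_000, "Interest Income": 2_000}
expenses = {"Cost of Goods Sold": 70_000, "SG&A Expense": 30_000}
retained_earnings = 45_000                       # balance before closing

# Debit each revenue to zero and credit each expense to zero,
# with Retained Earnings as the offset account.
retained_earnings += sum(revenues.values())      # total revenues credited to Retained Earnings
retained_earnings -= sum(expenses.values())      # total expenses debited to Retained Earnings
revenues = {name: 0 for name in revenues}        # temporary accounts begin the next period at zero
expenses = {name: 0 for name in expenses}

print("Retained Earnings after closing:", retained_earnings)   # 45,000 + 122,000 - 100,000 = 67,000
```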
|Preparation of the Balance Sheet||1. After the closing process is completed, the accounts with nonzero balances are all balance sheet accounts.|
2. We can use these accounts to prepare the balance sheet as at the end of the period.
3. The Retained Earnings account will appear with all other balance sheet accounts and now reflects the cumulative effect of transactions affecting that account.
|Final Step in Preparing Financial Statements: The Cash Flow Statement||1. The statement of cash flows describes the sources and uses of cash during a period and classifies them into operating, investing, and financing activities.|
2. It provides a detailed explanation for the change in the balance of the Cash account during that period.
3. Two approaches can be used to prepare this statement: Direct and Indirect | <urn:uuid:03b12dbf-26a3-4290-b6e6-e08368916e2a> | {
"date": "2013-05-18T06:03:37",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9089588522911072,
"score": 3.375,
"token_count": 3227,
"url": "http://quizlet.com/12638820/financial-accounting-ch-1-2-flash-cards/"
} |
Intel demonstrated a wireless electric power system that could revolutionize modern life by eliminating chargers, wall outlets and eventually batteries altogether by 2050. Intel chief technology officer Justin Rattner demonstrated a Wireless Energy Resonant Link at Intel’s 2008 developer’s forum.
During the demo electricity was sent wirelessly to a lamp on stage, lighting a 60-watt bulb that uses more power than a typical laptop computer. Most importantly, the electricity was transmitted without zapping anything or anyone that got between the sending and receiving units. “The trick with wireless power is not can you do it; it’s can you do it safely and efficiently,” according to Intel researcher Josh Smith. “It turns out the human body is not affected by magnetic fields; it is affected by electric fields. So what we are doing is transmitting energy using the magnetic field not the electric field.”
Examples of potential applications include airports, offices or other buildings that could be rigged to supply power to laptops, mobile telephones or other devices toted into them. The technology could also be built into plugged-in computer components, such as monitors, to enable them to broadcast power to devices left on desks or carried into rooms, according to Mr. Smith.
- Duracell, Energizer, Texas Instruments and Motorola Mobility in Attendance at the International Wireless Power Summit (prweb.com)
- British Start-Up Working to Bring Wireless Charging to the Racetrack (wheels.blogs.nytimes.com) | <urn:uuid:43d4b460-a138-4381-addd-d0464250f94f> | {
"date": "2013-05-18T08:01:59",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9474734663963318,
"score": 2.84375,
"token_count": 312,
"url": "http://rbach.net/blog/index.php/wireless-electricity/"
} |
What is an estimate?
“An 'Estimate' is a computer-generated approximation of a property's market value calculated by means of the Automated Value Model (AVM). As such, an Estimate is calculated on the basis of:
- Publicly available tax assessment records for the property
- Recent sale prices of comparable properties in the same area
There are many additional factors that determine a property's actual market value, including its condition, house style, layout, special features, quality of workmanship, and so on. For this reason, an Estimate should not be viewed as an appraisal, but rather as an approximate basis for making comparisons, and as a starting point for further inquiry. A REALTOR® who specializes in the given area will be able to provide a more accurate valuation based upon current market trends, as well as specific property and neighborhood characteristics.”
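For illustration only, the sketch below blends a tax-assessment-implied value with recent comparable sales; it is not Realtor.com's actual AVM, and the assessment ratio and weighting are invented assumptions.

```python
from statistics import median

def toy_avm_estimate(assessed_value, comparable_sale_prices,
                     assessment_ratio=0.9, comp_weight=0.7):
    """Toy AVM-style estimate (illustrative only; parameters are invented)."""
    assessment_implied = assessed_value / assessment_ratio    # back out a market value from the tax roll
    typical_comp = median(comparable_sale_prices)             # typical recent sale of a comparable property
    return comp_weight * typical_comp + (1 - comp_weight) * assessment_implied

# Invented inputs: tax-assessed at $270k, three nearby comparable sales.
print(toy_avm_estimate(270_000, [305_000, 298_000, 312_000]))  # 303500.0
```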
In some parts of the country, Realtor.com does not have access to public records data or the available estimates are not considered accurate. In these instances, the company does not display an estimated value. | <urn:uuid:cddfcf13-751b-4468-a44a-9a3cb46519c1> | {
"date": "2013-05-18T06:53:12",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.946603536605835,
"score": 2.6875,
"token_count": 224,
"url": "http://realestate.aol.com/homes-for-sale-detail/2202-Lee-Ln_Sarasota_FL_34231_M62531-69124"
} |
By Roger Fox
I doubt the Keystone project is even a real long-term goal for TransCanada. Certainly in the big picture Keystone is only a single chapter in a much larger book. If you read this diary you will risk information overload; you will be offered numerous disparate data points that at first glance may seem unconnected. You will need to digest all the information offered, and then analyze.
Crude is classified by the American Petroleum Institute (API) into light, medium, heavy and extra heavy crudes, by API gravity. If its API gravity is greater than 10, it is lighter and floats on water; if less than 10, it is heavier and sinks. The Alberta Tar Sands contain crudes of API 10 or less, called extra heavy crude or bitumen. Heavy oil is defined as having an API gravity below 22.3, medium oil is defined as having an API gravity between 22.3 °API and 31.1 °API, and light crude oil is defined as having an API gravity higher than 31.1.
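A small sketch of this classification; the thresholds are the ones quoted above, while the conversion from specific gravity to API gravity is the standard industry formula rather than something stated in this diary.

```python
def api_gravity(specific_gravity):
    """Standard API gravity formula (141.5 / SG at 60 deg F - 131.5); not quoted in the diary."""
    return 141.5 / specific_gravity - 131.5

def classify_crude(api):
    """Classify crude oil using the API-gravity thresholds quoted above."""
    if api <= 10:
        return "extra heavy (bitumen)"   # sinks in water
    if api < 22.3:
        return "heavy"
    if api < 31.1:
        return "medium"
    return "light"

for sg in (1.03, 0.93, 0.88, 0.82):      # invented specific gravities for illustration
    api = api_gravity(sg)
    print(f"SG {sg:.2f} -> API {api:.1f} -> {classify_crude(api)}")
```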
At a production rate of 3 million barrels a day the tar sands can last for 170 years. This would also mean a hole in the ground visible from orbit.
The Keystone pipeline is only one of a couple of handfuls of pipeline proposals over the last decade in the Western US, Canada and Alaska.
Alaskan nat gas is largely unexploited, and is used locally on the North Slope. It's estimated that 70 trillion cubic feet of nat gas can be found in Alaska, a lot of it in the North Slope area. There are at least 3 major proposals for nat gas pipelines from the North Slope area and the adjacent Mackenzie River Delta in Canada. 2 of these projects point right at Alberta.
TransCanada and Exxon Mobil are partnered in the Alaska gas pipeline proposal that will directly link nat gas production in the North Slope of Alaska through Alberta to the US Midwest. This project may be the same as the Denali proposal, and was reintroduced to the Senate in February 2011. There are also at least 2 variations. Additionally there is the Dempster Lateral.
-> Next page: Follow the routes south | <urn:uuid:ba49b9f2-b352-4b64-9909-4785d5b6527b> | {
"date": "2013-05-18T06:42:27",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9430086612701416,
"score": 2.59375,
"token_count": 447,
"url": "http://redgreenandblue.org/2011/08/24/keystone-xl-tar-sands-pipeline-a-small-part-of-a-bigger-strategy/"
} |
|Easton's Bible Dictionary|
Baalah of the well (Joshua 19:8), probably the same as Baal, mentioned in 1 Chronicles 4:33, a city of Simeon.
Int. Standard Bible Encyclopedia
ba'-a-lath-be'-er ba`alath be'er "lady (mistress) of the well"; (Joshua 19:8 (in 1 Chronicles 4:33, Baal)): In Jos this place is designated "Ramah of the South," i.e. of the Negeb, while in 1 Samuel 30:27 it is described as Ramoth of the Negeb. It must have been a prominent hill (ramah = "height") in the far south of the Negeb and near a well be'er. The site is unknown though Conder suggests that the shrine Kubbet el Baul may retain the old name.
Baalath-beer (2 Occurrences)
Joshua 19:8 and all the villages that were round about these cities to Baalath-beer, Ramah of the South. This is the inheritance of the tribe of the children of Simeon according to their families. (ASV BBE DBY JPS WBS YLT NAS)
1 Chronicles 4:33 And all the small places round these towns, as far as Baalath-beer, the high place of the South. These were their living-places, and they have lists of their generations. (BBE) | <urn:uuid:38961357-af4d-4931-975b-8aa63d52a04c> | {
"date": "2013-05-18T06:26:43",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9573608040809631,
"score": 2.515625,
"token_count": 310,
"url": "http://refbible.com/b/baalath-beer.htm"
} |
OBSOLETE UNITS PACKAGE SYMBOL
As of version 9.0, unit functionality is built into Mathematica.
Gram is the fundamental CGS unit of mass.
- To use Gram, you first need to load the Units Package using Needs["Units`"].
- Gram is equivalent to Kilogram/1000 (SI units).
- Convert[n Gram, newunits] converts n Gram to a form involving units newunits.
- Gram is typically abbreviated as g.
"date": "2013-05-18T05:26:42",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.7426242828369141,
"score": 2.671875,
"token_count": 102,
"url": "http://reference.wolfram.com/mathematica/Units/ref/Gram.en.html"
} |
6 Series with Tags:
Recession Indicators Series
These time series are an interpretation of US Business Cycle Expansions and Contractions data provided by The National Bureau of Economic Research (NBER) at http://www.nber.org/cycles/cyclesmain.html and Organisation of Economic Development (OECD) Composite Leading Indicators: Reference Turning Points and Component Series data provided by the OECD at http://www.oecd.org/document/6/0,3746,en_2649_34349_35726918_1_1_1_1,00.html. Our time series are composed of dummy variables that represent periods of expansion and recession. The NBER identifies months and quarters, while the OECD identifies months, of turning points without designating a date within the period that turning points occurred. The dummy variable adopts an arbitrary convention that the turning point occurred at a specific date within the period. The arbitrary convention does not reflect any judgment on this issue by the NBER's Business Cycle Dating Committee or the OECD. A value of 1 is a recessionary period, while a value of 0 is an expansionary period. | <urn:uuid:fb670c36-a8a2-490f-86d6-7d758632a7c7> | {
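As an illustration of the dummy-variable convention described above, the pandas sketch below builds a monthly 0/1 series from a single peak/trough pair (the NBER's December 2007 peak and June 2009 trough); as noted above, the exact month assigned to each turning point is an arbitrary convention.

```python
import pandas as pd

# One peak/trough pair; which month "starts" and "ends" the recession is arbitrary.
peak = pd.Period("2007-12", freq="M")
trough = pd.Period("2009-06", freq="M")

months = pd.period_range("2006-01", "2010-12", freq="M")
recession_dummy = pd.Series(0, index=months, name="recession_dummy")

# 1 = recessionary period, 0 = expansionary period; here the recession runs from
# the month after the peak through the trough month, one common convention.
recession_dummy[(months > peak) & (months <= trough)] = 1

print(recession_dummy[pd.Period("2007-12", freq="M")])   # 0 -- peak month counted as expansion here
print(recession_dummy[pd.Period("2008-06", freq="M")])   # 1 -- mid-recession
print("months flagged as recession:", int(recession_dummy.sum()))   # 18
```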
"date": "2013-05-18T08:04:41",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.829829752445221,
"score": 2.734375,
"token_count": 237,
"url": "http://research.stlouisfed.org/fred2/release?rid=242&t=japan%3Boecd&at=nsa&ob=pv&od=desc"
} |
Fully revised and updated for the 21st century, 365 Manners Kids Should Know tackles one manner a day. It suggests many games, exercises, and activities that parents, teachers, and grandparents can use to teach children and teens essential etiquette and at what age to present them. Some of the manners covered are when and where to text, how to handle an online bully, how to write a thank-you note, and proper behavior and dress for special events such as weddings, birthday parties, and religious services.
Customer Reviews for 365 Manners Kids Should Know - revised and updated
This product has not yet been reviewed. Click here to continue to the product details page. | <urn:uuid:9e429843-e203-44f4-b932-d13b9513660a> | {
"date": "2013-05-18T08:12:14",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9423240423202515,
"score": 2.78125,
"token_count": 136,
"url": "http://reviews.christianbook.com/2016/88825X/three-rivers-press-365-manners-kids-should-know-revised-and-updated-reviews/reviews.htm"
} |
This work is licensed under the GPLv2 license. See License.txt for details
Autobuild imports, configures, builds and installs various kinds of software packages. It can be used in software development to make sure that nothing is broken in the build process of a set of packages, or can be used as an automated installation tool.
Autobuild config files are Ruby scripts which configure rake to:
- import the package from a SCM or (optionally) update it
- configure it. This phase can handle code generation, configuration (for instance for autotools-based packages), …

It takes the dependencies between packages into account in its build process, and updates the needed environment variables.
"date": "2013-05-18T06:43:43",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8147034049034119,
"score": 2.84375,
"token_count": 144,
"url": "http://rubygems.org/gems/autobuild/versions/1.2.15"
} |
Why is it important for scientists to contribute to science education?
Our nation has failed to meet important educational challenges, and our children are ill-prepared to respond to the demands of today's world. Results of the Third International Mathematics and Science Study (TIMSS)--and its successor, TIMSS-R--show that the relatively strong international performance of U.S. 4th graders successively deteriorates across 8th- and 12th-grade cohorts. Related studies indicate that U.S. PreK-12 curricula lack coherence, depth, and continuity and cover too many topics superficially. By high school, unacceptably low numbers of students show motivation or interest in enrolling in physics (only one-quarter of all students) or chemistry (only one-half).
We are rapidly approaching universal participation at the postsecondary level, but we still have critical science, technology, engineering, and mathematics (STEM) workforce needs and too few teachers who have studied science or mathematics. Science and engineering degrees as a percentage of the degrees conferred each year have remained relatively constant at about 5%. In this group, women and minorities are gravely underrepresented.
The consequences of these conditions are serious. The U.S. Department of Labor estimates that 60% of the new jobs being created in our economy today will require technological literacy, yet only 22% of the young people entering the job market now actually possess those skills. By 2010, all jobs will require some form of technological literacy, and 80% of those jobs haven?t even been created yet. We must prepare our students for a world that we ourselves cannot completely anticipate. This will require the active involvement of scientists and engineers.
How is NSF seeking to encourage scientists to work on educational issues?
The NSF Strategic Plan includes two relevant goals: to develop "a diverse, internationally competitive, and globally engaged workforce of scientists, engineers, and well-prepared citizens" and to support "discovery across the frontiers of science and engineering, connected to learning, innovation, and service to society." To realize both of these goals, our nation's scientists and engineers must care about the educational implications of their work and explore educational issues as seriously and knowledgeably as they do their research questions. The phrase "integration of research and education" conveys two ideas. First, good research generates an educational asset, and we must effectively use that asset. Second, we need to encourage more scientists and engineers to pursue research careers that focus on teaching and learning within their own disciplines.
All proposals submitted to NSF for funding must address two merit criteria: intellectual merit and broader impacts.
In everyday terms, our approach to evaluating the broader impact of proposals is built on the philosophy that scientists and engineers should pay attention to teaching and value it, and that their institutions should recognize, support, and reward faculty, as well as researchers in government and industry, who take their role as educators seriously and approach instruction as a scholarly act. We think of education very broadly, including formal education (K-graduate and postdoctoral study) and informal education (efforts to promote public understanding of science and research outside the traditional educational environment).
What does it mean to take education seriously and explore it knowledgeably?
Any scholarly approach to education must be intentional, be based on a valid body of knowledge, and be rigorously assessed. That is, our approach to educational questions must be a scholarly act. NSF actively invests in educational reform and models that encourage scientists and engineers to improve curriculum, teaching, and learning in science and mathematics at all levels of the educational system from elementary school to graduate study and postdoctoral work.
We recognize that to interest faculty and practicing scientists and engineers in education, we must support research that generates convincing evidence that changing how we approach the teaching of science and mathematics will pay off in better learning and deeper interest in these fields.
Here are a few of the most recent efforts to stimulate interest in education that might be of interest to Next Wave readers. (For more information, go to the NSF Education and Human Resources directorate's Web site.)
The GK-12 program supports fellowships and training to enable STEM graduate students and advanced undergraduates to serve in K-12 schools as resources in STEM content and applications. Outcomes include improved communication and teaching skills for the Fellows, increased content knowledge for preK-12 teachers, enriched preK-12 student learning, and stronger partnerships between higher education and local schools.
The Centers for Learning and Teaching (CLT) program is a "comprehensive, research-based effort that addresses critical issues and national needs of the STEM instructional workforce across the entire spectrum of formal and informal education." The goal of the CLT program is to support the development of new approaches to the assessment of learning, research on learning within the disciplines, the design and development of effective curricular materials, and research-based approaches to instruction--and through this work to increase the number of people who do research on education in the STEM fields. This year (FY 02) we are launching some prototype higher education centers to reform teaching and learning in our nation's colleges and universities through a mix of research, faculty development and exploration of instructional practices that can promote learning. Like other NSF efforts, the Centers incorporate a balanced strategy of attention to people, ideas and tools. We hope to encourage more science and engineering faculty to work on educational issues in both K-12 and in postsecondary education.
If you are interested in these issues and want to pursue graduate or postdoctoral study, or want to develop a research agenda on learning in STEM fields, find the location and goals of the currently funded centers and also check later this summer to find out which higher education CLT prototypes are funded.
The following solicitations all involve the integration of research and education as well as attention to broadening participation in STEM careers:
The Science, Technology, Engineering, and Mathematics Talent Expansion Program (STEP) program seeks to increase the number of students (U.S. citizens or permanent residents) pursuing and receiving associate or baccalaureate degrees in established or emerging fields within STEM.
The Faculty Early Career Development (CAREER) program recognizes and supports the early career development activities of those teacher-scholars who are most likely to become the academic leaders of the 21st century.
The Course, Curriculum, and Laboratory Improvement (CCLI) program seeks to improve the quality of STEM education for all students and targets activities affecting learning environments, course content, curricula, and educational practices. CCLI offers three tracks: educational materials development , national dissemination , and adaptation and implementation .
The Integrative Graduate Education and Research Training (IGERT) program addresses the challenges of preparing Ph.D. scientists and engineers with the multidisciplinary backgrounds and the technical, professional, and personal skills needed for the career demands of the future.
The Vertical Integration of Research and Education in the Mathematical Sciences (VIGRE) program supports institutions with Ph.D.-granting departments in the mathematical sciences in carrying out innovative educational programs, at all levels, that are integrated with the department's research activities.
The Increasing the Participation and Advancement of Women in Academic Science and Engineering Careers (ADVANCE) program seeks to increase the participation of women in the scientific and engineering workforce through the increased representation and advancement of women in academic science and engineering careers.
The Science, Technology, Engineering and Mathematics Teacher Preparation (STEMTP) program involves partnerships among STEM and education faculty working with preK-12 schools to develop exemplary preK-12 teacher education models that will improve the science and mathematics preparation of future teachers.
The Noyce Scholarship Supplements program supports scholarships and stipends for STEM majors and STEM professionals seeking to become preK-12 teachers.
The views expressed are those of the authors and do not necessarily reflect those of the National Science Foundation. | <urn:uuid:21bc3a09-e45d-497b-add1-4880324aff25> | {
"date": "2013-05-18T06:21:16",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9502217769622803,
"score": 3.03125,
"token_count": 1620,
"url": "http://sciencecareers.sciencemag.org/print/career_magazine/previous_issues/articles/2002_07_12/nodoi.4298361476632626608"
} |
File compression applies an algorithm to a file to reduce its size; reversing the algorithm returns the file to its original form. For data files, compression and decompression must be lossless, which means the data must be returned to its exact form. There are various methods to do this: some hardware implementations and some software. The most popular hardware implementations usually use a Lempel-Ziv algorithm to look for repeating sequences over a set span of data (the run) and replace them with special identifying information. Compression does save space but may add extra time (latency).
Video and music data are typically already compressed. The compression rates are usually very high because of the nature of the data and the fact that a lossy compression algorithm is used. It can be lossy (meaning that not every bit is reproduced exactly on decompression) because the loss is generally not noticeable with video or music.
Zip files are the result of software compression. Another compression round on already compressed data will probably not yield any substantial gain.
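Both points are easy to see by running a lossless compressor twice; the Python sketch below uses zlib's DEFLATE, which belongs to the Lempel-Ziv family.

```python
import zlib

repetitive = b"ABCABCABC" * 10_000          # 90,000 bytes full of repeating sequences
once = zlib.compress(repetitive)            # lossless compression of highly repetitive data
twice = zlib.compress(once)                 # a second round over already-compressed data

print(len(repetitive), len(once), len(twice))   # the second round yields no substantial gain
assert zlib.decompress(once) == repetitive      # lossless: the exact original bytes come back
```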
Evaluator Group, Inc.
Editor's note: Do you agree with this expert's response? If you have more to share, post it in our Storage Networking forum at http://searchstorage.discussions.techtarget.com/WebX?50@@.ee83ce4 or e-mail us directly at firstname.lastname@example.org.
This was first published in December 2001 | <urn:uuid:89cf800c-2b77-4614-98f0-40e3877e109f> | {
"date": "2013-05-18T05:54:32",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9317011833190918,
"score": 3.75,
"token_count": 298,
"url": "http://searchstorage.techtarget.com/answer/What-is-compression"
} |
Hershey, PA : Information Science Reference, c2009.
xxi, 417 p. : ill. ; 29 cm.
"Premier reference source"--Cover.
Includes bibliographical references (p. 362-407) and index.
Now established as an effective tool in the instructional process, multimedia has penetrated educational systems at almost every level of study. In their quest to maximize educational outcomes and identify best practices, multimedia researchers are now expanding their examinations to extend towards the cognitive functionality of multimedia."Cognitive Effects of Multimedia Learning" identifies the role and function of multimedia in learning through a collection of research studies focusing on cognitive functionality. An advanced collection of critical theories and practices, this much needed contribution to the research is an essential holding for academic libraries, and will benefit researchers, practitioners and students in basic and applied fields ranging from education to cognitive sciences. (source: Nielsen Book Data) | <urn:uuid:d4ea54a6-6831-40cc-b677-7da62fccecf5> | {
"date": "2013-05-18T05:24:50",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8840771913528442,
"score": 2.578125,
"token_count": 185,
"url": "http://searchworks.stanford.edu/view/7815633"
} |
May 20, 2009
The Cook Islands are closely associated with New Zealand. Air New Zealand is the only air carrier that flies directly from the U.S. to the Cook Islands. As you will see below, the Cook Islands use the NZD as their currency.
Despite some 90,000 visitors a year to the capital island, Rarotonga, the Cook Islands are largely unspoiled by tourism. There are no high-rise hotels, only four beach buggies and very little hype. The Cook Islands offer a rare opportunity for an authentic island holiday.
There are a total of 15 islands in the heart of the South Pacific spread over 850,000 square miles with a population of approximately 15,000. The Islands most visited are Rarotonga and Aitutaki which are only 140 miles apart.
Cook Island History
Ru, from Tupua’i in French Polynesia, is believed to have landed on Aitutaki, and Tangiia, also from French Polynesia, is believed to have arrived on Rarotonga around 800 AD. Similarly, the northern islands were probably settled by expeditions from Samoa and Tonga.
Cook Island Climate
Cooled by the gentle breezes of the Pacific, the climate of these islands is sunny and pleasant. Roughly speaking, there are two seasons: from November through May the climate is hot and humid, and from June through October the climate is warm and dry. Most of the rain falls during the hot season, but there are also many lovely sunny days during these months, with refreshing trade-winds.
Cook Island Geography
The Cook Islands consists of two main groups, one in the north and one in the south. The southern group is nine “high” islands mainly of volcanic origin although some are virtually atolls. The majority of the population lives in the southern group. The northern group comprises six true atolls.
Cook Island Southern Group
Aitutaki, Atiu, Mangaia, Manuae, Mauke, Mitiaro, Palmerston, Rarotonga (the capital island), Takutea.
Cook Island Northern Group
Manihiki, Nassau, Tongareva (Penrhyn) also known as Mangarongaro, Pukapuka, Rakahanga, Suwarrow
Cook Island Time Zones
Rarotonga and Aitutaki are in the same time zone.
Cook Island Currency
New Zealand dollar.
Cook Island Language
English and Cook Island Maori.
Call the “Island Travel Gal” at 800 644-6659 or email email@example.com
to secure your seats to the idyllic Cook Islands
If you enjoyed this post, make sure you subscribe to my RSS feed! | <urn:uuid:38abe3cc-4023-4e24-87a6-103174cbb29c> | {
"date": "2013-05-18T06:29:50",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9481472969055176,
"score": 2.671875,
"token_count": 576,
"url": "http://seethesouthpacific.com/tag/cook-island-geography/"
} |
- Exam wrappers. As David Thompson describes the process, "exam wrappers required students to reflect on their performance before and after seeing their graded tests." The first four questions, completed just prior to receiving their graded test, asked students to report the time they spent preparing for the test, their methods of preparation, and their predicted test grade. After reviewing their graded test, students completed the final three reflection questions, including a categorization of test mistakes and a list of changes to implement in preparation for the next test. Thompson then collected and made copies of the wrappers, then returned them to the students several days later, reminding them to consider what they planned to do differently or the same in preparation for the upcoming test. Thompson reports that each reflection exercise required only 8-10 minutes of class time. Clara Hardy and others also describe uses of exam wrappers.
- Reading Reflections. As Karl Wirth writes, reading reflections, effectively outlined by David Bressoud (2008), are designed to address some of the challenges students face with college-level reading assignments. Students submit online reading reflections (e.g., using Moodle or Blackboard) after completing each reading assignment and before coming to class. In each reflection, students summarize the important concepts of the reading and describe what was interesting, surprising, or confusing to them. The reading reflections not only encourage students to read regularly before class, but they also promote content mastery and foster student development of monitoring, self-evaluation, and reflection skills. For the instructor, reading reflections facilitate "just-in-time" teaching and provide invaluable insights into student thinking and learning. According to Wirth, expert readers are skilled at using a wide range of strategies during all phases of reading (e.g., setting goals for learning, monitoring comprehension during reading, checking comprehension, and self-reflection), but most college instruction simply assumes the mastery of such metacognitive skills.
- Knowledge surveys. Many members of the group were influenced by Karl Wirth's work on "knowledge surveys" as a central strategy for helping students think about their thinking. Knowledge surveys involve simple self-reports from students about their knowledge of course concepts and content. In knowledge surveys, students are presented with different facets of course content and are asked to indicate whether they know the answer, know some of the answer, or don't know the answer. Faculty can use these reports to gauge how confident students feel in their understanding of course material at the beginning or end of a course, before exams or papers, or even as graduating seniors or alumni.
Kristin Bonnie's report relates how her students completed a short knowledge survey (6-12 questions) online (via Google forms) on the material covered in class that week. Rather than providing the answer to each question, students indicated their confidence in their ability to answer the question correctly (I know; I think I know; I don't know). Students received a small amount of credit for completing the knowledge survey. She used the information to review material that students seemed to struggle with. In addition, a subset of these questions appeared on their exam – the knowledge survey therefore served as a review sheet.Wirth notes that the surveys need not take much class time and can be administered via paper or the web. The surveys can be significant for clarifying course objectives, structure, and design. For students, knowledge surveys achieve several purposes: they help make clear course objectives and expectations, are useful as study guides, can serve as a formative assessment tool, and, perhaps most critically, aid in their development of self-assessment and metacognitive skills. For instructors, the surveys help them assess learning gains, instructional practices, and course design. | <urn:uuid:9d6abf05-e21c-4917-ab95-cb9f3fd306aa> | {
"date": "2013-05-18T06:51:37",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9568482041358948,
"score": 3.578125,
"token_count": 746,
"url": "http://serc.carleton.edu/acm_teagle/interventions"
} |
Free the Cans! Working Together to Reduce Waste
In a blog about how people share, it’s worth the occasional reference to the bizarre ways that people DON’T SHARE. Is it safe to say we live in a society that places great value on independence, private property, personal space, and privacy? Even sometimes extreme value? Is that why people at an 8-unit apartment building in Oakland, CA have separate caged stalls for eight separate trash cans? I know it’s not nice to stare, but I walked by these incarcerated cans and could not help myself. I returned with my camera, so that I could share my question with the world: Why can’t people share trash cans or a single dumpster? Or, at the very least, why can’t the cans share driveway space?
The Zero Waste Movement has come to the Bay Area and it calls for a new use for these eight cages. Here are my suggestions:
- Turn two of those cages into compost bins. Fill one with grass, leaves, and vegetable scraps, let it decompose for six months, then start filling the second bin in the meantime.
- Put in a green can, which is what Oakland uses to collect milk cartons, pizza boxes, yard trimmings, and all food to send it to the municipal composting facility. If your city doesn’t do this yet, tell them it’s a great idea and they could be as cool and cutting edge as Oakland.
- Put in one or two recycling cans for glass, plastic, cardboard, paper, aluminum, etc.
- Put out a FREE STUFF box for unwanted clothing and household items. The neighbors could sort through it each week, and later put it out on the curb for passers-by to explore. Take what’s left to Goodwill or a comparable donation spot.
- Put in a few small bins for various items that can be recycled, such as batteries and electronics, which can then be taken to an electronics recycling center every month or two. Styrofoam can be brought to a local packaging store or ceramics business that accepts used packaging material. Or, if you accumulate a bunch of plastic bags, take them to a store or to some other place that accepts used ones.
- Put in ONE trash can. By the time you compost, recycle, re-use, redistribute, and take a few other measures to reduce your waste, you’ll have almost no trash each week.
- Install a bicycle rack or locked bicycle cage.
- With the leftover space, put in a container garden and a bench where neighbors can gather and chat. A much more pleasant alternative to the garbage can jailhouse ambiance, wouldn’t you agree? | <urn:uuid:c970d9a2-a5ce-4050-9ea3-58d7bbd609a8> | {
"date": "2013-05-18T05:49:03",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9323602318763733,
"score": 2.8125,
"token_count": 575,
"url": "http://sharingsolution.com/2009/05/23/free-the-cans-working-together-to-reduce-waste/"
} |
Excerpts for Thames : The Biography
The River as Fact
It has a length of 215 miles, and is navigable for 191 miles. It is the longest river in England but not in Britain, where the Severn is longer by approximately 5 miles. Nevertheless it must be the shortest river in the world to acquire such a famous history. The Amazon and the Mississippi cover almost 4,000 miles, and the Yangtze almost 3,500 miles; but none of them has arrested the attention of the world in the manner of the Thames.
It runs along the borders of nine English counties, thus reaffirming its identity as a boundary and as a defence. It divides Wiltshire from Gloucestershire, and Oxfordshire from Berkshire; as it pursues its way it divides Surrey from Middlesex (or Greater London as it is inelegantly known) and Kent from Essex. It is also a border of Buckinghamshire. It guarded these once tribal lands in the distant past, and will preserve them into the imaginable future.
There are 134 bridges along the length of the Thames, and forty-four locks above Teddington. There are approximately twenty major tributaries still flowing into the main river, while others such as the Fleet have now disappeared under the ground. Its "basin," the area from which it derives its water from rain and other natural forces, covers an area of some 5,264 square miles. And then there are the springs, many of them in the woods or close to the streams beside the Thames. There is one in the wood below Sinodun Hills in Oxfordshire, for example, which has been described as an "everlasting spring" always fresh and always renewed.
The average flow of the river at Teddington, chosen because it marks the place where the tidal and non-tidal waters touch, has been calculated at 1,145 millions of gallons (5,205 millions of litres) each day or approximately 2,000 cubic feet (56.6 cubic metres) per second. The current moves at a velocity between ½ and 2¾ miles per hour. The main thrust of the river flow is known to hydrologists as the "thalweg"; it does not move in a straight and forward line but, mingling with the inner flow and the variegated flow of the surface and bottom waters, takes the form of a spiral or helix. More than 95 per cent of the river's energy is lost in turbulence and friction.
The direction of the flow of the Thames is therefore quixotic. It might be assumed that it would move eastwards, but it defies any simple prediction. It flows north-west above Henley and at Teddington, west above Abingdon, south from Cookham and north above Marlow and Kingston. This has to do with the variegated curves of the river. It does not meander like the Euphrates, where according to Herodotus the voyager came upon the same village three times on three separate days, but it is circuitous. It specialises in loops. It will take the riparian traveller two or three times as long to cover the same distance as a companion on the high road. So the Thames teaches you to take time, and to view the world from a different vantage.
The average "fall" or decline of the river from its beginning to its end is approximately 17 to 21 inches (432 to 533 mm) per mile. It follows gravity, and seeks out perpetually the simplest way to the sea. It falls some 600 feet (183 m) from source to sea, with a relatively precipitous decline of 300 feet (91.5 m) in the first 9 miles; it falls 100 (30.4 m) more in the next 11 miles, with a lower average for the rest of its course. Yet averages may not be so important. They mask the changeability and idiosyncrasy of the Thames. The mean width of the river is given as 1,000 feet (305 m), and a mean depth of 30 feet (9 m); but the width varies from 1 or 2 feet (0.3 to 0.6 m) at Trewsbury to 51Ú2 miles at the Nore.
The tide, in the words of Tennyson, is that which "moving seems asleep, too full for sound and foam." On its flood inward it can promise benefit or danger; on its ebb seaward it suggests separation or adventure. It is one general movement but it comprises a thousand different streams and eddies; there are opposing streams, and high water is not necessarily the same thing as high tide. The water will sometimes begin to fall before the tide is over. The average speed of the tide lies between 1 and 3 knots (1.15 and 3.45 miles per hour), but at times of very high flow it can reach 7 knots (8 miles per hour). At London Bridge the flood tide runs for almost six hours, while the ebb tide endures for six hours and thirty minutes. The tides are much higher now than at other times in the history of the Thames. There can now be a difference of some 24 feet (7.3 m) between high and low tides, although the average rise in the area of London Bridge is between 15 and 22 feet (4.5 and 6.7 m). In the period of the Roman occupation, it was a little over 3 feet (0.9 m). The high tide, in other words, has risen greatly over a period of two thousand years.
The reason is simple. The south-east of England is sinking slowly into the water at the rate of approximately 12 inches (305 mm) per century. In 4000 BC the land beside the Thames was 46 feet (14 m) higher than it is now, and in 3000 BC it was some 31 feet (9.4 m) higher. When this is combined with the water issuing from the dissolution of the polar ice-caps, the tides moving up the lower reaches of the Thames are increasing at a rate of 2 feet (0.6 m) per century. That is why the recently erected Thames Barrier will not provide protection enough, and another barrier is being proposed.
The tide of course changes in relation to the alignment of earth, moon and sun. Every two weeks the high "spring" tides reach their maximum two days after a full moon, while the low "neap" tides occur at the time of the half-moon. The highest tides occur at the times of equinox; this is the period of maximum danger for those who live and work by the river. The spring tides of late autumn and early spring are also hazardous. It is no wonder that the earliest people by the Thames venerated and propitiated the river.
The general riverscape of the Thames is varied without being in any sense spectacular, the paraphernalia of life ancient and modern clustering around its banks. It is in large part now a domesticated river, having been tamed and controlled by many generations. It is in that sense a piece of artifice, with some of its landscape deliberately planned to blend with the course of the water. It would be possible to write the history of the Thames as a history of a work of art.
It is a work still in slow progress. The Thames has taken the same course for ten thousand years, after it had been nudged southward by the glaciation of the last ice age. The British and Roman earthworks by the Sinodun Hills still border the river, as they did two thousand years before. Given the destructive power of the moving waters, this is a remarkable fact. Its level has varied over the millennia--there is a sudden and unexpected rise at the time of the Anglo-Saxon settlement, for example--and the discovery of submerged forests testifies to incidents of overwhelming flood. Its appearance has of course also altered, having only recently taken the form of a relatively deep and narrow channel, but its persistence and identity through time are an aspect of its power.
Yet of course every stretch has its own character and atmosphere, and every zone has its own history. Out of oppositions comes energy, out of contrasts beauty. There is the overwhelming difference of water within it, varying from the pure freshwater of the source through the brackish zone of estuarial water to the salty water in proximity to the sea. Given the eddies of the current, in fact, there is rather more salt by the Essex shore than by the Kentish shore. There are manifest differences between the riverine landscapes of Lechlade and of Battersea, of Henley and of Gravesend; the upriver calm is in marked contrast to the turbulence of the long stretches known as River of London and then London River. After New Bridge the river becomes wider and deeper, in anticipation of its change.
The rural landscape itself changes from flat to wooded in rapid succession, and there is a great alteration in the nature of the river from the cultivated fields of Dorchester to the thick woods of Cliveden. From Godstow the river becomes a place of recreation, breezy and jaunty with the skiffs and the punts, the sports in Port Meadow and the picnic parties on the banks by Binsey. But then by some change of light it becomes dark green, surrounded by vegetation like a jungle river; and then the traveller begins to see the dwellings of Oxford, and the river changes again. Oxford is a pivotal point. From there you can look upward and consider the quiet source; or you can look downstream and contemplate the coming immensity of London.
In the reaches before Lechlade the water makes its way through isolated pastures; at Wapping and Rotherhithe the dwellings seem to drop into it, as if overwhelmed by numbers. The elements of rusticity and urbanity are nourished equally by the Thames. That is why parts of the river induce calm and forgetfulness, and others provoke anxiety and despair. It is the river of dreams, but it is also the river of suicide. It has been called liquid history because within itself it dissolves and carries all epochs and generations. They ebb and flow like water.
The River as Metaphor
The river runs through the language, and we speak of its influence in every conceivable context. It is employed to characterise life and death, time and destiny; it is used as a metaphor for continuity and dissolution, for intimacy and transitoriness, for art and history, for poetry itself. In The Principles of Psychology (1890) William James first coined the phrase "stream of consciousness" in which "every definite image of the mind is steeped . . . in the free water that flows around it." Thus "it flows" like the river itself. Yet the river is also a token of the unconscious, with its suggestion of depth and invisible life.
The river is a symbol of eternity, in its unending cycle of movement and change. It is one of the few such symbols that can readily be understood, or appreciated, and in the continuing stream the mind or soul can begin to contemplate its own possible immortality.
In the poetry of John Denham's "Cooper's Hill" (1642), the Thames is a metaphor for human life. How slight its beginning, how confident its continuing course, how ineluctable its destination within the great ocean:
Hasting to pay his tribute to the sea,
Like mortal life to meet eternity.
The poetry of the Thames has always emphasised its affiliations with human purpose and with human realities. So the personality of the river changes in the course of its journey from the purity of its origins to the broad reaches of the commercial world. The river in its infancy is undefiled, innocent and clear. By the time it is closely pent in by the city, it has become dank and foul, defiled by greed and speculation. In this regress it is the paradigm of human life and of human history. Yet the river has one great advantage over its metaphoric companions. It returns to its source, and its corruption can be reversed. That is why baptism was once instinctively associated with the river. The Thames has been an emblem of redemption and of renewal, of the hope of escaping from time itself.
When Wordsworth observed the river at low tide, with the vista of the "mighty heart" of London "lying still," he used the imagery of human circulation. It is the image of the river as blood, pulsing through the veins and arteries of its terrain, without which the life of London would seize up. Sir Walter Raleigh, contemplating the Thames from the walk by his cell in the Tower, remarked that the "blood which disperseth itself by the branches or veins through all the body, may be resembled to these waters which are carried by brooks and rivers over all the earth." He wrote his History of the World (1610) from his prison cell, and was deeply imbued with the current of the Thames as a model of human destiny. It has been used as the symbol for the unfolding of events in time, and carries the burden of past events upon its back. For Raleigh the freight of time grew ever more complex and wearisome as it proceeded from its source; human life had become darker and deeper, less pure and more susceptible to the tides of affairs. There was one difference Raleigh noticed in his history, when he declared that "for this tide of man's life, after it once turneth and declineth, ever runneth with a perpetual ebb and falling stream, but never floweth again."
The Thames has also been understood as a mirror of morality. The bending rushes and the yielding willows afford lessons in humility and forbearance; the humble weeds along its banks have been praised for their lowliness and absence of ostentation. And who has ventured upon the river without learning the value of patience, of endurance, and of vigilance? John Denham makes the Thames the subject of native discourse in a further sense:
Though deep, yet clear; though gentle, yet not dull;
Strong without rage; without o'erflowing, full.
This suggests that the river represents an English measure, an aesthetic harmony to be sought or wished for, but in the same breath Denham seems to be adverting to some emblem of Englishness itself. The Thames is a metaphor for the country through which it runs. It is modest and moderate, calm and resourceful; it is powerful without being fierce. It is not flamboyantly impressive. It is large without being too vast. It eschews extremes. It weaves its own course without artificial diversions or interventions. It is useful for all manner of purposes. It is a practical river.
When Robert Menzies, an erstwhile Australian prime minister, was taken to Runnymede he was moved to comment upon the "secret springs" of the "slow English character." This identification of the land with the people, the characteristics of the earth and water with the temperament of their inhabitants, remains a poignant one. There is an inward and intimate association between the river and those who live beside it, even if that association cannot readily be understood.
From the Hardcover edition. | <urn:uuid:c8589dab-6a33-4d56-9c69-99faf059b9e4> | {
"date": "2013-05-18T08:09:29",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9624767303466797,
"score": 3.390625,
"token_count": 3140,
"url": "http://sherloc.imcpl.org/enhancedContent.pl?contentType=ExcerptDetail&isbn=9780385528474"
} |
Teach your child the importance of good sportsmanship. Not too long ago, my 10-year-old daughter's indoor soccer team finished their game and lined up to do the traditional end-of-game walk with the other team. If your own child has ever played in a team sport, you likely have seen this walk a hundred times before. Win or lose, each member of the team is expected to essentially tell the other players they did well and good game. This is a classic way to end a game on a positive note and to exhibit good sportsmanship, win or lose.
The opposing team in this case, however, had a unique way of showing their good sportsmanship. They all licked their hands before holding them out for our own girls to "low-five" as they walked down the line. Our girls saw this, and they refused to touch the other girls' slimy, slobbery, germ-ridden hands. You may be wondering if our girls' team beat this other team. The truth is that they beat the other team pretty harshly, but there is no score that would justify the level of poor sportsmanship that the other team exhibited.
As a parent, I can only hope the parents or coach on the other team reprimanded their girls for this unsportsmanlike behavior. This is not the kind of behavior any parent would be proud to see in their own child. However, this is just one of many ways unsportsmanlike behavior is exhibited. From tears on the field to pushing, shoving, "trash talking" and more, there are many different behaviors that are associated with poor sportsmanship.
The fact is that good sportsmanship is a quality that can play a role in your child's ability to react to other situations throughout life. Competition may occur on the field, but it also plays a part in the college admission process, a run for a place on the school board, the job application process and so much more. Teaching your child how to be a good sport now can help him or her to handle wins and losses throughout life with grace. So how can you help your child build a healthy "win-or-lose" attitude?
A Positive Parental Role Model
No parent takes pride in seeing other players, either from their child's own team or on the opposing team, be better than their own child. Parents simply want their child to be the best. However, somewhere between the desire to see your kid to aim for the stars and the truth of reality is the fact that there always will be someone or some team that is better. As a parent, you can talk negatively about these better players or better teams, or you can talk positively about them. You can use these interactions with better competition to point out areas where your own child can improve and to teach your child to respect those with skills and talents that are worthy of respect. This is a great opportunity to teach your child to turn lemons into lemonade.
You Win Some, You Lose Some
Very few children really are the best at what they do. There is always someone who either is better now or who is working hard to be better in the near future. A team that was on top this season may not be the top team the next season. While you want your child to work hard and strive to win, it is unrealistic to expect a child or his or her team to win all of the time. Children will inevitably be disappointed after a loss. This is understandable and justified, especially if he or she has been working hard and did his or her personal best. As a parent, your response to a loss is every bit as important as your response to a win. The fact is that an entire team can play their best, and they may simply be out-matched. Teaching kids that losses do happen, even when they try their hardest, can help them to cope with their defeat. Show them that you are proud of their performance and effort at each game rather than letting the tally mark under the "W" column dictate this.
A Lesson Learned
The fact is that a child or a team simply will not improve very quickly when they are blowing out the competition on a regular basis. To be the best, you have to play the best. You have to be challenged by the best, and sometimes this means a loss will occur. Within each game, whether a win or loss, lies an opportunity for growth, development and improvement. After each game, regardless of the outcome, talk to your child about what he or she did well and what he or she thinks could have been done better. Rather than tell your child what you think, ask your child his or her personal opinion on the matter and what the coach said. Then, remind your child that these are areas that he or she can work on for the next game.
Nobody likes to lose, but challenge and loss are the motivators that make us all better. Whether on the field, in the workplace or any number of other environments, challenge and loss are vital to developing that ever-important trait that true winners in life have. That trait is perseverance. Content by Kim Daugherty.
"date": "2013-05-18T05:34:10",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9801746010780334,
"score": 2.75,
"token_count": 1056,
"url": "http://shine.yahoo.com/team-mom/building-healthy-win-lose-attitude-161700902.html?.tsrc=attcf"
} |
Groundhogs, as a species, have a large range in size. There are the medium-sized rodents I grew up with, averaging around 4 kg, and groundhogs—like a certain Phil—that are probably more like 14 kg. This is the likely source of my earlier confusion, as that's a huge discrepancy in size. Evidently, it's all in the diet, much like humans.
Where I grew up, in rural Northern Minnesota, we called the groundhog a woodchuck; I thought that the groundhog was some fat cat, East Coast, liberal rodent. As it would turn out, they are actually one and the same creature—Marmota monax, a member of the squirrel family. Woodchucks spend a lot of their time in burrows. It is their safe haven from their many predators, and they are quick to flee to it at the first sign of danger. They will sometimes emit a loud whistle on their way to alert others in the area that something is awry. Groundhogs enjoy raiding our gardens and digging up sod, thereby destroying what we've spent countless hours toiling upon.
Look for groundhog signs. You might not even know there is a groundhog around until your garden has been devoured or your tractor damaged by a collapsed groundhog den. Things to look for are large nibble marks on your prized veggies, gnaw marks on the bark of young fruit trees, root vegetables pulled up (or their tops trimmed off), groundhog-sized holes (25–30 cm) anywhere near your garden, or mounds of dirt near said holes. If you see these signs, take action. Don't wait or it will be too late! If you know it will be a problem and do nothing, you can't blame the animal.
Set groundhog traps. This technique takes some skill as you need to be able to pick a spot in the path of the animal, camouflage it, and mask your strong human scent. Setting a spring trap, whether coil or long-spring, is usually just a matter of compressing the springs and setting a pin that keeps the jaws open into the pan or trigger. Make sure your trap is anchored securely with a stake. Check your traps often, and dispatch the animal quickly and humanely. Shooting them in the head or a hearty whack to the head with club will do the trick. If you can't deal with this, you have no business setting traps. Call a professional.
Guns kill groundhogs. I have never shot a groundhog. I rarely have had problems with them, and they move so damned fast it is difficult to get a shot off. If I had to, I know how I would do it. First, be sure it is legal in your area, and be sure to follow gun safety protocols. After that, it's just a matter of learning where your target is going to be comfortable and let their guard down. I would follow their tracks back to their den, find a spot downwind to sit with a clear shooting lane, and make sure nothing you shouldn't hit with a bullet is down range. Then, I would wait, my sights set on the den, until the groundhog stuck its head up—quick and easy.
Demolish the groundhog burrows. If you find a couple holes around your yard, they are likely the entrances to an elaborate tunnel maze carved into the earth beneath you. About all you can do, short of digging the whole mess up, is to try and fill it in from the top side. First, fill it with a bunch of rocks and then soil—make sure to really pack it in. This will make it difficult for the groundhog to reclaim its hole without a lot of work. You probably want to do this in tandem with other control methods such as trapping, shooting, or fumigating to prevent the groundhog from just digging a new hole.
Do some landscaping and build barriers. As with the control of many pests, it is advisable to keep a yard free of brush, undercover, and dead trees. These types of features are attractive to groundhogs as cover, and without it, they are less likely to want to spend time there. If you want to keep a groundhog out of an area, consider a partially buried fence. This will require a lot of work, but it is going to help a lot. Make sure it extends up at least a meter, and that it is buried somewhere around 30 cm deep. Angle the fencing outward 90 degrees when you bury it, and it will make digging under it a very daunting task for your furry friend.
Try using fumigants to kill groundhogs. What is nice about this product is that you can kill the animal and bury it all in one stroke. The best time to do this is in the spring when the mother will be in the den with her still helpless young. Also, the soil will likely be damp, which helps a lot. You should definitely follow the directions on the package, but the way they usually work is that you cover all but one exit, set off the smoke bomb, shove it down the hole, and quickly cover it up. Check back in a day or two to see if there is any sign of activity, and if so, do it again or consider a different control method. It is important that you don't do this if the hole is next to your house or if there is any risk of a fire.
Poisons are a last resort. I am not a fan of poisons because it is difficult to target what will eat said poison in the wild. Also, you are left with the issue of where the groundhog will die and how bad it will smell if it is somewhere under your house. Or, if it is outside somewhere, who will be affected by eating the dead animal? Where does it end? If you want to use poison, you're on your own.
Use live traps. This is a good option for those of you not too keen on killing things. Try jamming the door open and leaving bait inside for the taking a couple of times so they get used to it. Then, set it normally and you've got your groundhog (or a neighborhood cat). Now what? The relocation is just as important; you need to choose a place that is far away from other humans and can likely support a groundhog. Good luck.
Predator urine. The idea is simple: form a perimeter around an area you want to protect. If the groundhog doesn't recognize the smell as a natural predator, it is probably not going to work too well. Look for brands that have wolf and bobcat urine. Apply regularly, or as the manufacturer recommends. Remember, if it rains, the urine has probably washed away.
Repellents. Another popular method involves pepper-based repellents. These deter groundhogs by tasting horrible and burning their mucous membranes. You can do a perimeter with powdered cayenne pepper or just apply it to the things you want spared in your garden. Be sure to wash your vegetables off before using them (which you should be doing anyway). | <urn:uuid:077623b0-183e-4d40-bf5d-168228829785> | {
"date": "2013-05-18T08:01:50",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9678292274475098,
"score": 3.03125,
"token_count": 1467,
"url": "http://simplepestcontrol.com/groundhog-control.htm"
} |
In my next few blogs, I will provide an overview of Voltage Source Converter (VSC) HVDC technology and its suitability for Smart Grids operation and control discussed.
VSC HVDC is based upon transistor technology and was developed in the 1990s. The switching element is the Insulated Gate Bipolar Transistor (IGBT), which can be switched on and off by applying a suitable voltage to the gate (steering electrode). Because of the greater number of switching operations, and the nature of the semiconductor devices themselves, the converter losses are generally higher than those of HVDC classic converters.
VSC HVDC is commonly used with underground or submarine cables with a transfer capacity in the range of 10 – 1000 MW, and is suitable to serve as a connection to a wind farm or supply a remote load. VSC HVDC technology has very fast steering and control functionality and is suitable for meshed networks. It is characterised by compactness of the converter stations, due to the reduced need for AC harmonic filters and reactive power compensation. Power flow reversal in VSC systems is achieved by reversal of the current, whereas in HVDC classic systems the voltage polarity has to change. An important consequence of this voltage source behavior is the ability to use cheaper and easier-to-install XLPE cables, instead of the mass-impregnated cables that are needed for HVDC classic.
Currently, only twelve VSC HVDC projects are in service. A few examples include Estlink, which connects Estonia to Finland (350 MW), and BorWin1, connecting an offshore wind farm to Northern Germany (400 MW); both are equipped with ±150 kV submarine cables. Another is the Trans Bay project in California (400 MW), which consists of 90 km of ±200 kV submarine cable.
Most projects have submarine cable, but some projects include long lengths of underground cable, such as Murraylink (220 MW, 177 km underground cable), and Nord E.On 1 (400 MW, 75km underground cable).
The 500 MW East-West interconnector between Ireland and Great Britain, operating at ±200 kV, is scheduled to go into service in 2012. A 2,000 MW, 65 km, ±320 kV cable interconnector, part of the Trans European Network between Spain and France, is scheduled for commissioning in 2013 and will represent the highest power rating for a VSC HVDC system installed at this time.
Make sure to check back next Tuesday for my next blog on the comparison between HVDC classic and VSC HVDC.
By: Peter Vaessen | <urn:uuid:7b22bc99-05e4-4dcf-9143-3715cd5724d7> | {
"date": "2013-05-18T07:24:59",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9366721510887146,
"score": 3.234375,
"token_count": 536,
"url": "http://smartgridsherpa.com/blog/voltage-source-converter-hvdc-a-key-technology-for-smart-grids"
} |
The Operations Layer defines the operational processes and procedures necessary to deliver Information Technology (IT) as a Service. This layer leverages IT Service Management concepts that can be found in prevailing best practices such as ITIL and MOF. The main focus of the Operations Layer is to execute the business requirements defined at the Service Delivery Layer. Cloud-like service attributes cannot be achieved through technology alone and require a high level of IT Service Management maturity.
The Change Management process is responsible for controlling the life cycle of all changes. The primary objective of Change Management is to eliminate, or at least minimize, disruption while desired changes are made to services. Change Management focuses on understanding and balancing the cost and risk of making the change versus the benefit of the change to either the business or the service. Driving predictability and minimizing human involvement are the core principles for achieving a mature Service Management process and ensuring changes can be made without impacting the perception of continuous availability.
Standard (Automated) Change
Non-Standard (Mechanized) Change
It is important to note that a record of all changes must be maintained, including Standard Changes that have been automated. The automated process for Standard Changes should include the creation and population of the change record per standard policy in order to ensure auditability.
Automating changes also enables other key principles, such as driving predictability and minimizing human involvement.
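As a rough illustration of the auditability point above, the sketch below shows how an automated Standard Change might still create and persist a change record. The `ChangeRecord` fields, the automation account name, and the log file path are illustrative assumptions, not part of any particular Service Management product.

```python
import json
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ChangeRecord:
    """Audit record created for every change, including automated Standard Changes."""
    summary: str
    change_type: str          # "standard" or "non-standard"
    initiated_by: str         # automation account or person
    change_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    created_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    status: str = "implemented"

def record_standard_change(summary: str, automation_account: str, log_path: str) -> ChangeRecord:
    """Create and persist the change record as part of an automated workflow."""
    record = ChangeRecord(summary=summary, change_type="standard", initiated_by=automation_account)
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(asdict(record)) + "\n")
    return record

if __name__ == "__main__":
    # Example: an automated memory adjustment still leaves an auditable trail.
    record_standard_change(
        summary="Added 2 GB memory to VM web-042 per standard change policy",
        automation_account="svc-fabric-automation",
        log_path="change_log.jsonl",
    )
```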
The Service Asset and Configuration Management process is responsible for maintaining information on the assets, components, and infrastructure needed to provide a service. Critical configuration data for each component, and its relationship to other components, must be accurately captured and maintained. This configuration data should include past and current states and future-state forecasts, and be easily available to those who need it. Mature Service Asset and Configuration Management processes are necessary for achieving predictability.
A virtualized infrastructure adds complexity to the management of Configuration Items (CIs) due to the transient nature of the relationship between guests and hosts in the infrastructure. How is the relationship between CIs maintained in an environment that is potentially changing very frequently?
A service comprises software, platform, and infrastructure layers. Each layer provides a level of abstraction that is dependent on the layer beneath it. This abstraction hides the implementation and composition details of the layer. Access to the layer is provided through an interface and as long as the fabric is available, the actual physical location of a hosted VM is irrelevant. To provide Infrastructure as a Service (IaaS), the configuration and relationship of the components within the fabric must be understood, whereas the details of the configuration within the VMs hosted by the fabric are irrelevant.
The Configuration Management System (CMS) will need to be partitioned, at a minimum, into physical and logical CI layers. Two Configuration Management Databases (CMDBs) might be used; one to manage the physical CIs of the fabric (facilities, network, storage, hardware, and hypervisor) and the other to manage the logical CIs (everything else). The CMS can be further partitioned by layer, with separate management of the infrastructure, platform, and software layers. The benefits and trade-offs of each approach are summarized below.
CMS Partitioned by Layer
CMS Partitioned into Physical and Logical
Table 2: Configuration Management System Options
Partitioning logical and physical CI information allows for greater stability within the CMS, because CIs will need to be changed less frequently. This means less effort will need to be expended to accurately maintain the information. During normal operations, mapping a VM to its physical host is irrelevant. If historical records of a VM's location are needed (for example, for auditing or Root Cause Analysis), they can be traced through change logs.
The physical or fabric CMDB will need to include a mapping of fault domains, upgrade domains, and Live Migration domains. The relationship of these patterns to the infrastructure CIs will provide critical information to the Fabric Management System.
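A minimal sketch of such a partitioned CMS is shown below, with separate stores for physical (fabric) CIs, including their fault, upgrade, and Live Migration domains, and for logical CIs, and with VM-to-host placements traced through change logs rather than held as static relationships. The class names and fields are assumptions made for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class PhysicalCI:
    """Fabric configuration item tracked in the physical CMDB."""
    name: str
    ci_type: str                 # e.g. "host", "storage-array", "switch"
    fault_domain: str
    upgrade_domain: str
    live_migration_domain: str

@dataclass
class LogicalCI:
    """Logical configuration item (VM, server role, platform) tracked in the logical CMDB."""
    name: str
    ci_type: str                 # e.g. "vm", "server-role"
    service: str

@dataclass
class ConfigurationManagementSystem:
    """CMS partitioned into physical and logical stores, with placements traced via change logs."""
    physical: dict = field(default_factory=dict)    # name -> PhysicalCI
    logical: dict = field(default_factory=dict)     # name -> LogicalCI
    change_log: list = field(default_factory=list)  # (timestamp, vm_name, host_name)

    def record_placement(self, timestamp: str, vm_name: str, host_name: str) -> None:
        # The VM-to-host relationship changes frequently, so it is logged rather than modelled statically.
        self.change_log.append((timestamp, vm_name, host_name))

    def host_history(self, vm_name: str) -> list:
        # Reconstruct where a VM has run, e.g. for auditing or Root Cause Analysis.
        return [(ts, host) for ts, vm, host in self.change_log if vm == vm_name]
```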
The Release and Deployment Management processes are responsible for making sure that approved changes to a service can be built, tested, and deployed to meet specifications with minimal disruption to the service and production environment. Where Change Management is based on the approval mechanism (determining what will be changed and why), Release and Deployment Management will determine how those changes will be implemented.
The primary focus of Release and Deployment Management is to protect the production environment. The less variation is found in the environment, the greater the level of predictability – and, therefore, the lower the risk of causing harm when new elements are introduced. The concept of homogenization of physical infrastructure is derived from this predictability principle. If the physical infrastructure is completely homogenized, there is much greater predictability in the release and deployment process.
While complete homogenization is the ideal, it may not be achievable in the real world. Homogenization is a continuum. The closer an environment gets to complete homogeneity, the more predictable it becomes and the fewer the risks. Full homogeneity means not only that identical hardware models are used, but all hardware configuration is identical as well. When complete hardware homogeneity is not feasible, strive for configuration homogeneity wherever possible.
Figure 2: Homogenization Continuum
The Scale Unit concept drives predictability in Capacity Planning and agility in the release and deployment of physical infrastructure. The hardware specifications and configurations have been pre-defined and tested, allowing for a more rapid deployment cycle than in a traditional data center. Similarly, known quantities of resources are added to the data center when the Capacity Plan is triggered. However, when the Scale Unit itself must change (for example, when a vendor retires a hardware model), a new risk is introduced to the private cloud.
There will likely be a period where both n and n-1 versions of the Scale Unit exist in the infrastructure, but steps can be taken to minimize the risk this creates. Work with hardware vendors to understand the life cycle of their products and coordinate changes from multiple vendors to minimize iterations of the Scale Unit change. Also, upgrading to the new version of the Scale Unit should take place one Fault Domain at a time wherever possible. This will make sure that if an incident occurs with the new version, it can be isolated to a single Fault Domain.
Homogenization of the physical infrastructure means consistency and predictability for the VMs regardless of which physical host they reside on. This concept can be extended beyond the production environment. The fabric can be partitioned into development, test, and pre-production environments as well. Eliminating variability between environments enables developers to more easily optimize applications for a private cloud and gives testers more confidence that the results reflect the realities of production, which in turn should greatly improve testing efficiency.
The virtualized infrastructure enables workloads to be transferred more easily between environments. All VMs should be built from a common set of component templates housed in a library, which is used across all environments. This shared library includes templates for all components approved for production, such as VM images, the gold OS image, server role templates, and platform templates. These component templates are downloaded from the shared library and become the building blocks of the development environment. From development, these components are packaged together to create a test candidate package (in the form of a virtual hard disk (VHD) that is uploaded to the library. This test candidate package can then be deployed by booting the VHD in the test environment. When testing is complete, the package can again be uploaded to the library as a release candidate package – for deployment into the pre-production environment, and ultimately into the production environment.
Since workloads are deployed by booting a VM from a VHD, the Release Management process occurs very quickly through the transfer of VHD packages to different environments. This also allows for rapid rollback should the deployment fail; the current release can be deleted and the VM can be booted off the previous VHD.
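The sketch below illustrates this release-by-VHD idea: promotion copies a packaged VHD from the shared library into an environment, and rollback deletes the failed release and re-promotes the previous version. The library layout, the file naming, and the assumption that booting the VM is a separate fabric operation are all illustrative, not a description of a specific product.

```python
import shutil
from pathlib import Path

LIBRARY = Path("library")  # shared library of approved VHD packages (assumed layout)

def promote(package: str, version: str, environment: str) -> Path:
    """Copy a packaged VHD from the shared library into the target environment."""
    source = LIBRARY / f"{package}-{version}.vhd"
    target = Path(environment) / f"{package}-{version}.vhd"
    target.parent.mkdir(parents=True, exist_ok=True)
    shutil.copyfile(source, target)
    # In the real fabric, a VM would now be booted from the copied VHD.
    return target

def roll_back(package: str, failed_version: str, previous_version: str, environment: str) -> Path:
    """Rapid rollback: remove the failed release and boot the previous VHD again."""
    failed = Path(environment) / f"{package}-{failed_version}.vhd"
    if failed.exists():
        failed.unlink()
    return promote(package, previous_version, environment)
```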
Virtualization and the use of standard VM templates allow us to rethink software updates and patch management. As there is minimal variation in the production environment and all services in production are built with a common set of component templates, patches need not be applied in production. Instead, they should be applied to the templates in the shared library. Any services in production using that template will require a new version release. The release package is then rebuilt, tested, and redeployed, as shown below.
Figure 3: The Release Process
This may seem counter-intuitive for a critical patch scenario, such as when an exploitable vulnerability is exposed. But with virtualization technologies and automated test scripts, a new version of a service can be built, tested, and deployed quite rapidly.
Variation can also be reduced through standardized, automated test scenarios. While not every test scenario can or should be automated, tests that are automated will improve predictability and facilitate more rapid test and deployment timelines. Test scenarios that are common for all applications, or the ones that might be shared by certain application patterns, are key candidates for automation. These automated test scripts may be required for all release candidates prior to deployment and would ensure a further reduction in variation in the production environment.
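A simple sketch of such a shared, automated test gate might look like the following; the scenario names and the dictionary-based release candidate are placeholders rather than a real test framework.

```python
def common_test_scenarios() -> dict:
    """Automated checks shared by all release candidates (names are illustrative)."""
    return {
        "boots_from_vhd": lambda candidate: candidate.get("boot_ok", False),
        "responds_on_expected_port": lambda candidate: candidate.get("port_open", False),
        "matches_baseline_configuration": lambda candidate: candidate.get("baseline_ok", False),
    }

def gate_release(candidate: dict) -> bool:
    """A release candidate is promoted only when every automated scenario passes."""
    failures = [name for name, check in common_test_scenarios().items() if not check(candidate)]
    if failures:
        print("Release blocked by failed scenarios:", ", ".join(failures))
        return False
    return True

# Example: gate_release({"boot_ok": True, "port_open": True, "baseline_ok": False}) returns False.
```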
Knowledge Management is the process of gathering, analyzing, storing, and sharing knowledge and information within an organization. The goal of Knowledge Management is to make sure that the right people have access to the information they need to maintain a private cloud. As operational knowledge expands and matures, the ability to intelligently automate operational tasks improves, providing for an increasingly dynamic environment.
An immature approach to Knowledge Management costs organizations in terms of slower, less-efficient problem solving. Every problem or new situation that arises becomes a crisis that must be solved. A few people may have the prior experience to resolve the problem quickly and calmly, but their knowledge is not shared. Immature knowledge management creates greater stress for the operations staff and usually results in user dissatisfaction with frequent and lengthy unexpected outages. Mature Knowledge Management processes are necessary for achieving a service provider's approach to delivering infrastructure. Past knowledge and experience is documented, communicated, and readily available when needed. Operating teams are no longer crisis-driven as service-impacting events grow less frequent and are quickly resolved when they do occur.
When designing a private cloud, development of the Health Model will drive much of the information needed for Knowledge Management. The Health Model defines the ideal states for each infrastructure component and the daily, weekly, monthly, and as-needed tasks required to maintain this state. The Health Model also defines unhealthy states for each infrastructure component and actions to be taken to restore their health. This information will form the foundation of the Knowledge Management database.
Aligning the Health Model with alerts allows these alerts to contain links to the Knowledge Management database describing the specific steps to be taken in response to the alert. This will help drive predictability as a consistent, proven set of actions will be taken in response to each alert.
The final step toward achieving a private cloud is the automation of responses to each alert as defined in the Knowledge Management database. Once these responses are proven successful, they should be automated to the fullest extent possible. It is important to note, though, that automating responses to alerts does not make them invisible and forgotten. Even when alerts generate a fully automated response they must be captured in the Service Management system. If the alert indicates the need for a change, the change record should be logged. Similarly, if the alert is in response to an incident, an incident record should be created. These automated workflows must be reviewed regularly by Operations staff to make sure the automated action achieves the expected result. Finally, as the environment changes over time, or as new knowledge is gained, the Knowledge Management database must be updated along with the automated workflows that are based on that knowledge.
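To make the idea concrete, here is a minimal sketch of an alert handler that looks up the Knowledge Management entry for an alert, runs the associated automated response where one exists, and still logs a record in the Service Management system. The knowledge base entries, the `restart_vm` action, and the record format are all hypothetical.

```python
# Hypothetical knowledge base keyed by alert name; each entry links a Health Model
# alert to a knowledge article and, where proven reliable, an automated response.
KNOWLEDGE_BASE = {
    "vm-heartbeat-lost": {
        "kb_article": "KB-0042: Restart guest and verify host health",
        "automated_response": "restart_vm",
    },
}

def restart_vm(ci_name: str) -> str:
    # Placeholder remediation; a real workflow would call the fabric management system.
    return f"restarted {ci_name}"

ACTIONS = {"restart_vm": restart_vm}

def handle_alert(alert_name: str, ci_name: str, service_records: list) -> None:
    """Automated responses are still captured in the Service Management system for review."""
    entry = KNOWLEDGE_BASE.get(alert_name)
    if entry is None:
        service_records.append({"type": "incident", "alert": alert_name, "ci": ci_name,
                                "note": "no knowledge entry, manual triage required"})
        return
    outcome = ACTIONS[entry["automated_response"]](ci_name)
    # Log the incident and link the knowledge article so the automated workflow can be reviewed.
    service_records.append({"type": "incident", "alert": alert_name, "ci": ci_name,
                            "kb": entry["kb_article"], "outcome": outcome})
```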
The goal of Incident Management is to resolve events that are impacting, or threaten to impact, services as quickly as possible with minimal disruption. The goal of Problem Management is to identify and resolve root causes of incidents that have occurred as well as identify and prevent or minimize the impact of incidents that may occur.
Pinpointing the root cause of an incident can become more challenging when workloads are abstracted from the infrastructure and their physical location changes frequently. Additionally, incident response teams may be unfamiliar with virtualization technologies (at least initially) which could also lead to delays in incident resolution. Finally, applications may have neither a robust Health Model nor expose all of the health information required for a proactive response. All of this may lead to an increase in reactive (user initiated) incidents which will likely increase the Mean-Time-to-Restore-Service (MTRS) and customer dissatisfaction.
This may seem to go against the resiliency principle, but note that virtualization alone will not achieve the desired resiliency unless accompanied by a high level of IT Service Management (ITSM) maturity and a robust automated health monitoring system.
The drive for resiliency requires a different approach to troubleshooting incidents. Extensive troubleshooting of incidents in production negatively impacts resiliency. Therefore, if an incident cannot be quickly resolved, the service can be rolled back to the previous version, as described under Release and Deployment. Further troubleshooting can be done in a test environment without impacting the production environment. Troubleshooting in the production environment may be limited to moving the service to different hosts (ruling out infrastructure as the cause) and rebooting the VMs. If these steps do not resolve the issue, the rollback scenario could be initiated.
Minimizing human involvement in incident management is critical for achieving resiliency. The troubleshooting scenarios described earlier could be automated, which will allow for identification and possible resolution of the root cause much more quickly than non-automated processes. But automation may mask the root cause of the incident. Careful consideration should be given to determining which troubleshooting steps should be automated and which require human analysis.
Human Analysis of Troubleshooting
If a compute resource fails, it is no longer necessary to treat the failure as an incident that must be fixed immediately. It may be more efficient and cost effective to treat the failure as part of the decay of the Resource Pool. Rather than treat a failed server as an incident that requires immediate resolution, treat it as a natural candidate for replacement on a regular maintenance schedule, or when the Resource Pool reaches a certain threshold of decay. Each organization must balance cost, efficiency, and risk as it determines an acceptable decay threshold – and choose among these courses of action:
1. Replace failed resources as part of a regularly scheduled maintenance visit.
2. Replace failed resources when the Resource Pool reaches its decay threshold.
3. Replace the entire Scale Unit when its decay threshold is reached.
4. Treat each failure as an incident and replace the failed resource immediately.
The benefits and trade-offs of each of the options are listed below:
Option 4 is the least desirable, as it does not take advantage of the resiliency and cost reduction benefits of a private cloud. A well-planned Resource Pool and Reserve Capacity strategy will account for Resource Decay.
Option 1 is the most recommended approach. A predictable maintenance schedule allows for better procurement planning and can help avoid conflicts with other maintenance activities, such as software upgrades. Again, a well-planned Resource Pool and Reserve Capacity strategy will account for Resource Decay and minimize the risk of exceeding critical thresholds before the scheduled maintenance.
Option 3 will likely be the only option for self-contained Scale Unit scenarios, as the container must be replaced as a single Scale Unit when the decay threshold is reached.
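A small sketch of how a decay threshold might be evaluated is shown below; the 10 percent threshold and the 90-day maintenance interval are arbitrary examples, since each organization must set its own balance of cost, efficiency, and risk.

```python
def decay_fraction(total_hosts: int, failed_hosts: int) -> float:
    """Fraction of the Resource Pool lost to hardware failures ('decay')."""
    return failed_hosts / total_hosts

def maintenance_due(total_hosts: int, failed_hosts: int,
                    days_since_last_visit: int,
                    decay_threshold: float = 0.10,
                    visit_interval_days: int = 90) -> bool:
    """Replace failed resources on the regular schedule, or earlier if decay crosses the threshold."""
    over_threshold = decay_fraction(total_hosts, failed_hosts) >= decay_threshold
    schedule_reached = days_since_last_visit >= visit_interval_days
    return over_threshold or schedule_reached

# Example: a 200-host pool with 12 failed hosts (6% decay) 30 days after the last visit
# is not yet due; the same pool with 22 failures (11% decay) would trigger a visit early.
```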
The goal of Request Fulfillment is to manage requests for service from users. Users should have a clear understanding of the process they need to initiate to request service and IT should have a consistent approach for managing these requests.
Much like any service provider, IT should clearly define the types of requests available to users in the service catalog. The service catalog should include an SLA on when the request will be completed, as well as the cost of fulfilling the request, if any.
The types of requests available and their associated costs should reflect the actual cost of completing the request and this cost should be easily understood. For example, if a user requests an additional VM, its daily cost should be noted on the request form, which should also be exposed to the organization or person responsible for paying the bill.
It is relatively easy to see the need for adding resources, but more difficult to see when a resource is no longer needed. A process for identifying and removing unused VMs should be put into place. There are a number of strategies to do this, depending on the needs of a given organization, such as:
1. Assign every VM an expiration date and reclaim it automatically when the date passes.
2. Assign an expiration date, but send the owner a renewal reminder before reclaiming the VM.
3. Set no expiration date, but monitor utilization and notify owners of underutilized VMs.
4. Let the user choose among these options when the VM is requested.
The benefits and trade-offs of each of these approaches are detailed below:
Option 4 affords the greatest flexibility, while still working to minimize server sprawl. When a user requests a VM, they have the option of setting an expiration date with no reminder (for example, if they know they will only be using the workload for one week). They could set an expiration deadline with a reminder (for example, a reminder that the VM will expire after 90 days unless they wish to renew). Lastly, the user may request no expiration date if they expect the workload will always be needed. If the last option is chosen, it is likely that underutilized VMs will still be monitored and owners notified.
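The sketch below shows how the expiration option chosen at request time might drive a periodic reclamation review; the seven-day reminder window and the action names are assumptions for illustration.

```python
from datetime import date, timedelta
from typing import Optional

def review_vm(requested_on: date, lifetime_days: Optional[int], reminder: bool, today: date) -> str:
    """Decide what to do with a VM based on the expiration option chosen at request time."""
    if lifetime_days is None:
        # No expiration chosen: underutilized VMs are still monitored and owners notified.
        return "monitor-utilization"
    expires_on = requested_on + timedelta(days=lifetime_days)
    if today >= expires_on:
        return "reclaim"
    if reminder and today >= expires_on - timedelta(days=7):
        return "send-renewal-reminder"
    return "keep"

# Example: review_vm(date(2011, 1, 1), 90, True, date(2011, 3, 28)) returns "send-renewal-reminder".
```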
Finally, self-provisioning should be considered, if appropriate, when evaluating request fulfillment options to drive towards minimal human involvement. Self-provisioning allows great agility and user empowerment, but it can also introduce risks depending on the nature of the environment in which these VMs are introduced.
For an enterprise organization, the risk of bypassing formal build, stabilize, and deploy processes may or may not outweigh the agility benefits gained from the self-provisioning option. Without strong governance to make sure each VM has an end-of-life strategy, the fabric may become congested with VM server sprawl. The pros and cons of self-provisioning options are listed in the next diagram:
The primary decision point for determining whether to use self-provisioning is the nature of the environment. Allowing developers to self-provision into the development environment greatly facilitates agile development, and allows the enterprise to maintain release management controls as these workloads are moved out of development and into test and production environments.
A user-led community environment isolated from enterprise mission-critical applications may also be a good candidate for self-provisioning. As long as user actions are isolated and cannot impact mission critical applications, the agility and user empowerment may justify the risk of giving up control of release management. Again, it is essential that in such a scenario, expiration timers are included to prevent server sprawl.
The goal of Access Management is to make sure authorized users have access to the services they need while preventing access by unauthorized users. Access Management is the implementation of security policies defined by Information Security Management at the Service Delivery Layer.
Maintaining access for authorized users is critical for achieving the perception of continuous availability. Besides allowing access, Access Management defines users who are allowed to use, configure, or administer objects in the Management Layer. From a provider’s perspective, it answers questions like:
From a consumer’s perspective, it answers questions such as:
Access Management is implemented at several levels and can include physical barriers to systems such as requiring access smartcards at the data center, or virtual barriers such as network and Virtual Local Area Network (VLAN) separation, firewalling, and access to storage and applications.
Taking a service provider’s approach to Access Management will also make sure that resource segmentation and multi-tenancy is addressed.
Resource Pools may need to be segmented to address security concerns around confidentiality, integrity, and availability. Some tenants may not wish to share infrastructure resources to keep their environment isolated from others. Access Management of shared infrastructure requires logical access control mechanisms such as encryption, access control rights, user groupings, and permissions. Dedicated infrastructure also relies on physical access control mechanisms, where infrastructure is not physically connected, but is effectively isolated through a firewall or other mechanisms.
The goal of systems administration is to make sure that the daily, weekly, monthly, and as-needed tasks required to keep a system healthy are being performed.
Regularly performing ongoing systems administration tasks is critical for achieving predictability. As the organization matures and the Knowledge Management database becomes more robust and increasingly automated, systems administration tasks are no longer performed manually as part of the job role. It is important to keep this in mind as an organization moves to a private cloud. Staff once responsible for systems administration should refocus on automation and scripting skills – and on monitoring the fabric to identify patterns that indicate possibilities for ongoing improvement of existing automated workflows.
"date": "2013-05-18T07:19:42",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9264841675758362,
"score": 2.640625,
"token_count": 4173,
"url": "http://social.technet.microsoft.com/wiki/contents/articles/4518.private-cloud-planning-guide-for-operations.aspx"
} |
Let $f$ and $g$ be two differentiable functions. We will say that $f$ and $g$ are proportional if and only if there exists a constant $C$ such that $g = C\,f$. Clearly any function is proportional to the zero-function. If the constant $C$ is not important in nature and we are only interested in the proportionality of the two functions, then we would like to come up with an equivalent criterion. The following statements are equivalent:
- $f$ and $g$ are proportional;
- $f\,g' - g\,f' = 0$.
Therefore, we have the following: $f$ and $g$ are proportional if and only if $f\,g' - g\,f' = 0$.
Define the Wronskian $W(f,g)$ of $f$ and $g$ to be $W(f,g) = f\,g' - g\,f'$, that is
$$W(f,g) = \begin{vmatrix} f & g \\ f' & g' \end{vmatrix} = f\,g' - g\,f' .$$
The following formula is very useful (see the reduction of order technique):
$$W(f,g) = f^{2}\,\left(\frac{g}{f}\right)' .$$
Remark: Proportionality of two functions is equivalent to their linear dependence. Following the above discussion, we may use the Wronskian to determine the dependence or independence of two functions. In fact, the above discussion cannot be reproduced as is for more than two functions while the Wronskian does.... | <urn:uuid:b7bc34b8-0f1f-4df8-8e8d-e56fc9c8fec5> | {
"date": "2013-05-18T08:07:38",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9316020011901855,
"score": 2.6875,
"token_count": 180,
"url": "http://sosmath.com/diffeq/second/linearind/wronskian/wronskian.html"
} |
Published May 2008
Properly located digital signage in high traffic areas on school campuses provides students and faculty with a convenient resource to stay up to date about the latest school news and activities.
Signage in Education
By Anthony D. Coppedge
Technology gets high marks.
Digital media and communications have come to play a vital role in people’s everyday lives, and a visit to the local K-12 school, college or university campus quickly illustrates the many ways in which individuals rely on audio and visual technologies each day. The shift from analog media to digital, represented by milestones ranging from the replacement of the Walkman by the MP3 player to the DTV transition currently enabling broadcasts beyond the home to mobile devices, has redefined the options that larger institutions, including those in our educational system, have for sharing information across the campus and facilities.
Flexible And Efficient
Digital signage, in particular, is proving to be a flexible and efficient tool for delivering specific and up-to-date information within the educational environment. As a high-resolution, high-impact medium, it lives up to the now-widespread expectation that visual media be crisp and clear, displayed on a large screen. Although the appeal of implementing digital signage networks does stem, in part, from plummeting screen prices and sophisticated content delivery systems, what’s equally or more important is that digital signage provides valuable information to the people who need it, when and where they need it. On school campuses—whether preschool, elementary, high school or post-secondary institutions—it does so effectively, for both educational purposes and for the security and safety of staff, administration and the student body as a whole.
School campuses have begun leveraging digital signage technology in addition to, or in place of, printed material, such as course schedules, content and location; time-sensitive school news and updates; maps and directions; welcome messages for visitors and applicants; and event schedules. Digital signage simplifies creation and delivery of multiple channels of targeted content to different displays on the network. Although a display in the college admissions office might provide prospective students with a glimpse into student life, for example, another display outside a lab or seminar room might present the courses or lectures scheduled for that space throughout the day.
This model of a distribution concept illustrates a school distributing educational content over a public TV broadcast network.
At the K-12 level, digital signage makes it easy to deliver information such as team or band practice schedules, or to post the cafeteria menu and give students information encouraging sound food choices. Digital signage in the preschool and daycare setting makes it easy for teachers and caregivers to share targeted educational programming with their classes.
Among the most striking benefits of communicating through digital signage is the quality of the pictures and the flexibility with which images, text and video can be combined in one or more windows to convey information. Studies have shown that dynamic signage is noticed significantly more often than are static displays and, furthermore, that viewers are more likely to remember that dynamic content.
Though most regularly updated digital signage content tends to be text-based, digital signage networks also have the capacity to enable the live campus-wide broadcast of key events: a speech by a visiting dignitary, the basketball team’s first trip to the state or national tournament, or even the proceedings at commencement and graduation. Such broadcasts are especially valuable when time is short, when it’s impractical to gather the entire student body in one place, or when there simply isn’t the means to deliver the live message in any other way.
The ability to share critical information to the entire school community, clearly and without delay, has made digital signage valuable as a tool for emergency response and communications. Parents, administrators, teachers and students today can’t help but be concerned about the school’s ability to respond quickly and effectively to a dangerous situation, whether the threat be from another person, an environmental hazard, an unpredictable weather system or some other menace.
Digital signage screens installed across a school campus can be updated immediately to warn students and staff of the danger, and to provide unambiguous instructions for seeking shelter or safety: where to go and what to do.
Although early digital signage systems relied on IP-based networks and point-to-point connections between a player and each display, current solutions operate on far less costly and much more scalable platforms. Broadcast-based digital signage models allow content to be distributed remotely from a single data source via transport media, such as digital television broadcast, satellite, broadband and WiMAX.
The staff member responsible for maintaining the digital signage network can use popular content creation toolsets to populate both dynamic and static displays. This content is uploaded to a server that, in turn, feeds the digital signage network via broadcast, much like datacasting, to the receive site for playout. By slotting specific content into predefined display templates, each section with its own playlist, the administrator can schedule display of multiple elements simultaneously or a single-window static, video or animated display.
The playlist enables delivery of the correct elements to the targeted display both at the scheduled time and in the appropriate layout. In networks with multicast-enabled routers, the administrator can schedule unique content for displays in different locations.
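To make the scheduling idea concrete, here is a rough sketch of how a display’s playlist might be represented and resolved at playout time (the data layout, field names, and content names are invented for illustration; real digital signage products each define their own schema):

    from datetime import time

    # Each display template section carries its own playlist of scheduled items.
    playlists = {
        "lobby-screen": [
            {"section": "main",   "content": "campus_news.mp4", "start": time(8, 0),   "end": time(17, 0)},
            {"section": "ticker", "content": "event_schedule",  "start": time(0, 0),   "end": time(23, 59)},
        ],
        "cafeteria-screen": [
            {"section": "main",   "content": "lunch_menu.png",  "start": time(10, 30), "end": time(14, 0)},
        ],
    }

    def items_to_show(display, now):
        """Return the playlist items scheduled for a display at a given time of day."""
        return [item for item in playlists.get(display, [])
                if item["start"] <= now <= item["end"]]

    print(items_to_show("cafeteria-screen", time(12, 15)))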
In the case of delivering emergency preparedness or response information across a campus, content can be created through the same back-office software used for day-to-day digital signage displays. Within the broadcast-based model, three components ensure the smooth delivery of content to each display.
A transmission component serves as a content hub, allocating bandwidth and inserting content into the broadcast stream based on the schedule dictated by the network’s content management component. Content is encapsulated into IP packets that, in turn, are encapsulated into MPEG2 packets for delivery.
Generic content distribution model for digital signage solution.
The content management component of the digital signage network provides for organization and scheduling of content, as well as targeting of that content to specific receivers. Flexibility in managing the digital signage system enables distribution of the same emergency message across all receivers and associated displays, or the delivery of select messages to particular displays within the larger network.
With tight control over the message being distributed, school administrators can immediately provide the information that students and staff in different parts of the campus need to maintain the safest possible environment. Receivers can be set to confirm receipt of content, in turn assuring administrative and emergency personnel that their communications are, in fact, being conveyed as intended. On the receiving end, the third component of the system, the receiver, extracts content from the digital broadcast stream and feeds it to the display screen.
The relationships that many colleges and universities share with public TV stations provide an excellent opportunity for establishing a digital signage network. Today, the deployed base of broadcast-based content distribution systems in public TV stations is capable of reaching 50% of the US population. These stations’ DTV bandwidth is used not only for television programming, but also to generate new revenues and aggressively support public charters by providing efficient delivery of multimedia content for education, homeland security and other public services.
Educational institutions affiliated with such broadcasters already have the technology, and much of the necessary infrastructure, in place to launch a digital signage network. In taking advantage of the public broadcaster’s content delivery system, the college or university also can tap into the station’s existing links with area emergency response agencies.
As digital signage technology continues to evolve, educational institutions will be able to extend both urgent alerts and more mundane daily communications over text and email messaging. Smart content distribution systems will push consistent information to screens of all sizes, providing messages not only to displays, but also to the cell phones and PDAs so ubiquitous in US schools.
The continued evolution of MPH technology will support this enhancement in delivery of messages directly to each student. MPH in-band mobile DTV technology leverages ATSC DTV broadcasts to enable extensions of digital signage and broadcast content directly to personal devices, whether stationary or on the move. Rather than rely on numerous unrelated systems, such as ringing bells, written memos and intercom announcements, schools can unify messaging and its delivery, in turn reducing the redundancy involved in maintaining communications with the student body.
An effective digital signage network provides day-to-day benefits for an elementary school, high school, college or university while providing invaluable emergency communications capabilities that increasingly are considered a necessity, irrespective of whether they get put to the test. The selection of an appropriate digital signage model depends, of course, on the needs of the organization.
Educational institutions share many of the same concerns held by counterparts in the corporate world, and key among those concerns is the simple matter of getting long-term value and use out of their technical investments. However, before even addressing the type of content the school wishes to create and distribute, the systems integrator, consultant or other AV and media professional should work with the eventual operators of the digital signage network to identify and map out the existing workflow. Once the system designer, integrator or installer has evaluated how staff currently work in an emergency to distribute information, he then can adjust established processes and adapt them to the digital signage model.
The administrative staff who will be expected to update or import schedules to the digital signage system will have a much lower threshold of acceptance for a workflow that is completely unfamiliar or at odds with all their previous experience. An intuitive, easy-to-use system is more likely to be used in an emergency if it has become familiar in everyday practice.
Turnkey digital signage solutions provide end-to-end functionality without forcing users and integrators to work with multiple systems and interfaces. The key in selecting a vendor lies in ensuring that they share the same vision and are moving in the same direction as the end user.
In addition to providing ease of use, digital signage solutions for the education market also must provide a high level of built-in security, preventing abuse or misuse by hackers, or by those without the knowledge, experience or authority to distribute content over the network. Because the network is a conduit for emergency messaging, its integrity must be protected. So, the installer must not only identify the number of screens to be used and where, but also determine who gets access to the system and how that access remains secure.
Scalable systems that can grow in number of displays or accommodate infrastructure improvements and distribution of higher-bandwidth content will provide the long-term utility that makes the investment worthwhile. By going into the project with an understanding of existing infrastructure, such as cabling, firewalls, etc., and the client’s goals, the professional is equipped to advise the customer as to the necessity, options and costs for enhancing or improving on that infrastructure. As with any other significant deployment of AV technology, the installation of a digital signage network also requires knowledge of the site, local building codes, the availability of power and so forth.
Ralph Bachofen, senior director of Product Management and Marketing, Triveni Digital, has more than 15 years of experience in voice and multimedia over Internet Protocol (IP), telecommunications and the semiconductor business.
The infrastructure requirements of a school in deploying a digital signage network will vary, depending on the type of content being delivered through the system. HD and streaming content clearly are bandwidth hogs, whereas tickers and other text-based messages put a low demand on bandwidth. Most facilities today are equipped with Gigabit Ethernet networks that can handle the demands of live video delivery and lighter content.
However, even bandwidth-heavy video can be delivered by less robust networks, as larger clips can be “trickled” over time to the site, as long as storage on the unit is adequate. There is no set standard for the bandwidth required, just as there is no single way to use a digital signage solution. It all depends on how the system will be used, and that’s an important detail to address up front.
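As a back-of-the-envelope illustration of that "trickling" point (the file size, link speed, and bandwidth share below are arbitrary assumptions, not recommendations), the time to move a large clip over a constrained share of a link can be estimated directly:

    def transfer_time_seconds(size_mb, link_mbps, share=1.0):
        """Time to move size_mb megabytes over a link_mbps link, using only a
        fraction `share` of it so day-to-day traffic is not disturbed."""
        size_megabits = size_mb * 8
        return size_megabits / (link_mbps * share)

    # A 700 MB video clip trickled over 10% of a 100 Mbit/s link:
    hours = transfer_time_seconds(700, 100, share=0.10) / 3600
    print(f"about {hours:.2f} hours")  # roughly 0.16 hours, i.e. under ten minutes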
Most digital signage solutions feature built-in content-creation tools and accept content from third-party applications, as well. Staff members who oversee the system thus can use familiar applications to create up-to-date content for the school’s digital signage network. This continuity in workflow adds to the value and efficiency of the network in everyday use, reducing the administrative burden while serving as a safeguard in the event of an emergency.
For educational institutions, the enormous potential of the digital signage network can open new doors for communicating with students and staff, but only if it is put to use effectively. Comprehensive digital signage solutions offer ease of use to administration, deliver clear and useful messaging on ordinary days and during crises, and feature robust design and underlying technology that supports continual use well into the future. | <urn:uuid:4f104f5b-67cc-4de8-87a7-62cb466de5d1> | {
"date": "2013-05-18T05:27:50",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9226340055465698,
"score": 2.5625,
"token_count": 2596,
"url": "http://soundandcommunications.com/archive_site/video/2008_05_video.htm"
} |
How We Found the Missing Memristor
The memristor--the functional equivalent of a synapse--could revolutionize circuit design
Image: Bryan Christie Design
THINKING MACHINE This artist's conception of a memristor shows a stack of multiple crossbar arrays, the fundamental structure of R. Stanley Williams's device. Because memristors behave functionally like synapses, replacing a few transistors in a circuit with memristors could lead to analog circuits that can think like a human brain.
It’s time to stop shrinking. Moore’s Law, the semiconductor industry’s obsession with the shrinking of transistors and their commensurate steady doubling on a chip about every two years, has been the source of a 50-year technical and economic revolution. Whether this scaling paradigm lasts for five more years or 15, it will eventually come to an end. The emphasis in electronics design will have to shift to devices that are not just increasingly infinitesimal but increasingly capable.
Earlier this year, I and my colleagues at Hewlett-Packard Labs, in Palo Alto, Calif., surprised the electronics community with a fascinating candidate for such a device: the memristor. It had been theorized nearly 40 years ago, but because no one had managed to build one, it had long since become an esoteric curiosity. That all changed on 1 May, when my group published the details of the memristor in Nature.
Combined with transistors in a hybrid chip, memristors could radically improve the performance of digital circuits without shrinking transistors. Using transistors more efficiently could in turn give us another decade, at least, of Moore’s Law performance improvement, without requiring the costly and increasingly difficult doublings of transistor density on chips. In the end, memristors might even become the cornerstone of new analog circuits that compute using an architecture much like that of the brain.
For nearly 150 years, the known fundamental passive circuit elements were limited to the capacitor (discovered in 1745), the resistor (1827), and the inductor (1831). Then, in a brilliant but underappreciated 1971 paper, Leon Chua, a professor of electrical engineering at the University of California, Berkeley, predicted the existence of a fourth fundamental device, which he called a memristor. He proved that memristor behavior could not be duplicated by any circuit built using only the other three elements, which is why the memristor is truly fundamental.
Memristor is a contraction of ”memory resistor,” because that is exactly its function: to remember its history. A memristor is a two-terminal device whose resistance depends on the magnitude and polarity of the voltage applied to it and the length of time that voltage has been applied. When you turn off the voltage, the memristor remembers its most recent resistance until the next time you turn it on, whether that happens a day later or a year later.
Think of a resistor as a pipe through which water flows. The water is electric charge. The resistor’s obstruction of the flow of charge is comparable to the diameter of the pipe: the narrower the pipe, the greater the resistance. For the history of circuit design, resistors have had a fixed pipe diameter. But a memristor is a pipe that changes diameter with the amount and direction of water that flows through it. If water flows through this pipe in one direction, it expands (becoming less resistive). But send the water in the opposite direction and the pipe shrinks (becoming more resistive). Further, the memristor remembers its diameter when water last went through. Turn off the flow and the diameter of the pipe ”freezes” until the water is turned back on.
That freezing property suits memristors brilliantly for computer memory. The ability to indefinitely store resistance values means that a memristor can be used as a nonvolatile memory. That might not sound like very much, but go ahead and pop the battery out of your laptop, right now—no saving, no quitting, nothing. You’d lose your work, of course. But if your laptop were built using a memory based on memristors, when you popped the battery back in, your screen would return to life with everything exactly as you left it: no lengthy reboot, no half-dozen auto-recovered files.
But the memristor’s potential goes far beyond instant-on computers to embrace one of the grandest technology challenges: mimicking the functions of a brain. Within a decade, memristors could let us emulate, instead of merely simulate, networks of neurons and synapses. Many research groups have been working toward a brain in silico: IBM’s Blue Brain project, Howard Hughes Medical Institute’s Janelia Farm, and Harvard’s Center for Brain Science are just three. However, even a mouse brain simulation in real time involves solving an astronomical number of coupled partial differential equations. A digital computer capable of coping with this staggering workload would need to be the size of a small city, and powering it would require several dedicated nuclear power plants.
Memristors can be made extremely small, and they function like synapses. Using them, we will be able to build analog electronic circuits that could fit in a shoebox and function according to the same physical principles as a brain.
A hybrid circuit—containing many connected memristors and transistors—could help us research actual brain function and disorders. Such a circuit might even lead to machines that can recognize patterns the way humans can, in those critical ways computers can’t—for example, picking a particular face out of a crowd even if it has changed significantly since our last memory of it.
The story of the memristor is truly one for the history books. When Leon Chua, now an IEEE Fellow, wrote his seminal paper predicting the memristor, he was a newly minted and rapidly rising professor at UC Berkeley. Chua had been fighting for years against what he considered the arbitrary restriction of electronic circuit theory to linear systems. He was convinced that nonlinear electronics had much more potential than the linear circuits that dominate electronics technology to this day.
Chua discovered a missing link in the pairwise mathematical equations that relate the four circuit quantities—charge, current, voltage, and magnetic flux—to one another. These can be related in six ways. Two are connected through the basic physical laws of electricity and magnetism, and three are related by the known circuit elements: resistors connect voltage and current, inductors connect flux and current, and capacitors connect voltage and charge. But one equation is missing from this group: the relationship between charge moving through a circuit and the magnetic flux surrounded by that circuit—or more subtly, a mathematical doppelgänger defined by Faraday’s Law as the time integral of the voltage across the circuit. This distinction is the crux of a raging Internet debate about the legitimacy of our memristor [see sidebar, ”Resistance to Memristance ”].
Chua’s memristor was a purely mathematical construct that had more than one physical realization. What does that mean? Consider a battery and a transformer. Both provide identical voltages—for example, 12 volts of direct current—but they do so by entirely different mechanisms: the battery by a chemical reaction going on inside the cell and the transformer by taking a 110 V ac input, stepping that down to 12 V ac, and then transforming that into 12 V dc. The end result is mathematically identical—both will run an electric shaver or a cellphone, but the physical source of that 12 V is completely different.
Conceptually, it was easy to grasp how electric charge could couple to magnetic flux, but there was no obvious physical interaction between charge and the integral over the voltage.
Chua demonstrated mathematically that his hypothetical device would provide a relationship between flux and charge similar to what a nonlinear resistor provides between voltage and current. In practice, that would mean the device’s resistance would vary according to the amount of charge that passed through it. And it would remember that resistance value even after the current was turned off.
He also noticed something else—that this behavior reminded him of the way synapses function in a brain.
Even before Chua had his eureka moment, however, many researchers were reporting what they called ”anomalous” current-voltage behavior in the micrometer-scale devices they had built out of unconventional materials, like polymers and metal oxides. But the idiosyncrasies were usually ascribed to some mystery electrochemical reaction, electrical breakdown, or other spurious phenomenon attributed to the high voltages that researchers were applying to their devices.
As it turns out, a great many of these reports were unrecognized examples of memristance. After Chua theorized the memristor out of the mathematical ether, it took another 35 years for us to intentionally build the device at HP Labs, and we only really understood the device about two years ago. So what took us so long?
It’s all about scale. We now know that memristance is an intrinsic property of any electronic circuit. Its existence could have been deduced by Gustav Kirchhoff or by James Clerk Maxwell, if either had considered nonlinear circuits in the 1800s. But the scales at which electronic devices have been built for most of the past two centuries have prevented experimental observation of the effect. It turns out that the influence of memristance obeys an inverse square law: memristance is a million times as important at the nanometer scale as it is at the micrometer scale, and it’s essentially unobservable at the millimeter scale and larger. As we build smaller and smaller devices, memristance is becoming more noticeable and in some cases dominant. That’s what accounts for all those strange results researchers have described. Memristance has been hidden in plain sight all along. But in spite of all the clues, our finding the memristor was completely serendipitous.
In 1995, I was recruited to HP Labs to start up a fundamental research group that had been proposed by David Packard. He decided that the company had become large enough to dedicate a research group to long-term projects that would be protected from the immediate needs of the business units. Packard had an altruistic vision that HP should ”return knowledge to the well of fundamental science from which HP had been withdrawing for so long.” At the same time, he understood that long-term research could be the strategic basis for technologies and inventions that would directly benefit HP in the future. HP gave me a budget and four researchers. But beyond the comment that ”molecular-scale electronics” would be interesting and that we should try to have something useful in about 10 years, I was given carte blanche to pursue any topic we wanted. We decided to take on Moore’s Law.
At the time, the dot-com bubble was still rapidly inflating its way toward a resounding pop, and the existing semiconductor road map didn’t extend past 2010. The critical feature size for the transistors on an integrated circuit was 350 nanometers; we had a long way to go before atomic sizes would become a limitation. And yet, the eventual end of Moore’s Law was obvious. Someday semiconductor researchers would have to confront physics-based limits to their relentless descent into the infinitesimal, if for no other reason than that a transistor cannot be smaller than an atom. (Today the smallest components of transistors on integrated circuits are roughly 45 nm wide, or about 220 silicon atoms.)
That’s when we started to hang out with Phil Kuekes, the creative force behind the Teramac (tera-operation-per-second multiarchitecture computer)—an experimental supercomputer built at HP Labs primarily from defective parts, just to show it could be done. He gave us the idea to build an architecture that would work even if a substantial number of the individual devices in the circuit were dead on arrival. We didn’t know what those devices would be, but our goal was electronics that would keep improving even after the devices got so small that defective ones would become common. We ate a lot of pizza washed down with appropriate amounts of beer and speculated about what this mystery nanodevice would be.
We were designing something that wouldn’t even be relevant for another 10 to 15 years. It was possible that by then devices would have shrunk down to the molecular scale envisioned by David Packard or perhaps even be molecules. We could think of no better way to anticipate this than by mimicking the Teramac at the nanoscale. We decided that the simplest abstraction of the Teramac architecture was the crossbar, which has since become the de facto standard for nanoscale circuits because of its simplicity, adaptability, and redundancy.
The crossbar is an array of perpendicular wires. Anywhere two wires cross, they are connected by a switch. To connect a horizontal wire to a vertical wire at any point on the grid, you must close the switch between them. Our idea was to open and close these switches by applying voltages to the ends of the wires. Note that a crossbar array is basically a storage system, with an open switch representing a zero and a closed switch representing a one. You read the data by probing the switch with a small voltage.
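A toy software model (written here purely for illustration, not a description of HP’s actual hardware) shows how a crossbar behaves as storage, with each junction holding one bit:

    class Crossbar:
        """Toy crossbar memory: a closed switch stores a 1, an open switch a 0."""

        def __init__(self, rows, cols):
            self.switches = [[0] * cols for _ in range(rows)]  # all switches start open

        def write(self, row, col, bit):
            # Closing or opening the switch where one horizontal and one
            # vertical wire cross stores a single bit.
            self.switches[row][col] = 1 if bit else 0

        def read(self, row, col):
            # In hardware this is done by probing the junction with a small voltage.
            return self.switches[row][col]

    cb = Crossbar(8, 8)
    cb.write(2, 5, 1)
    print(cb.read(2, 5), cb.read(0, 0))  # prints: 1 0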
Like everything else at the nanoscale, the switches and wires of a crossbar are bound to be plagued by at least some nonfunctional components. These components will be only a few atoms wide, and the second law of thermodynamics ensures that we will not be able to completely specify the position of every atom. However, a crossbar architecture builds in redundancy by allowing you to route around any parts of the circuit that don’t work. Because of their simplicity, crossbar arrays have a much higher density of switches than a comparable integrated circuit based on transistors.
But implementing such a storage system was easier said than done. Many research groups were working on such a cross-point memory—and had been since the 1950s. Even after 40 years of research, they had no product on the market. Still, that didn’t stop them from trying. That’s because the potential for a truly nanoscale crossbar memory is staggering; picture carrying around the entire Library of Congress on a thumb drive.
One of the major impediments for prior crossbar memory research was the small off-to-on resistance ratio of the switches (40 years of research had never produced anything surpassing a factor of 2 or 3). By comparison, modern transistors have an off-to-on resistance ratio of 10 000 to 1. We calculated that to get a high-performance memory, we had to make switches with a resistance ratio of at least 1000 to 1. In other words, in its off state, a switch had to be 1000 times as resistive to the flow of current as it was in its on state. What mechanism could possibly give a nanometer-scale device a three-orders-of-magnitude resistance ratio?
We found the answer in scanning tunneling microscopy (STM), an area of research I had been pursuing for a decade. A tunneling microscope generates atomic-resolution images by scanning a very sharp needle across a surface and measuring the electric current that flows between the atoms at the tip of the needle and the surface the needle is probing. The general rule of thumb in STM is that moving that tip 0.1 nm closer to a surface increases the tunneling current by one order of magnitude.
We needed some similar mechanism by which we could change the effective spacing between two wires in our crossbar by 0.3 nm. If we could do that, we would have the 1000:1 electrical switching ratio we needed.
Our constraints were getting ridiculous. Where would we find a material that could change its physical dimensions like that? That is how we found ourselves in the realm of molecular electronics.
Conceptually, our device was like a tiny sandwich. Two platinum electrodes (the intersecting wires of the crossbar junction) functioned as the ”bread” on either end of the device. We oxidized the surface of the bottom platinum wire to make an extremely thin layer of platinum dioxide, which is highly conducting. Next, we assembled a dense film, only one molecule thick, of specially designed switching molecules. Over this ”monolayer” we deposited a 2- to 3-nm layer of titanium metal, which bonds strongly to the molecules and was intended to glue them together. The final layer was the top platinum electrode.
The molecules were supposed to be the actual switches. We built an enormous number of these devices, experimenting with a wide variety of exotic molecules and configurations, including rotaxanes, special switching molecules designed by James Heath and Fraser Stoddart at the University of California, Los Angeles. The rotaxane is like a bead on a string, and with the right voltage, the bead slides from one end of the string to the other, causing the electrical resistance of the molecule to rise or fall, depending on the direction it moves. Heath and Stoddart’s devices used silicon electrodes, and they worked, but not well enough for technological applications: the off-to-on resistance ratio was only a factor of 10, the switching was slow, and the devices tended to switch themselves off after 15 minutes.
Our platinum devices yielded results that were nothing less than frustrating. When a switch worked, it was spectacular: our off-to-on resistance ratios shot past the 1000 mark, the devices switched too fast for us to even measure, and having switched, the device’s resistance state remained stable for years (we still have some early devices we test every now and then, and we have never seen a significant change in resistance). But our fantastic results were inconsistent. Worse yet, the success or failure of a device never seemed to depend on the same thing.
We had no physical model for how these devices worked. Instead of rational engineering, we were reduced to performing huge numbers of Edisonian experiments, varying one parameter at a time and attempting to hold all the rest constant. Even our switching molecules were betraying us; it seemed like we could use anything at all. In our desperation, we even turned to long-chain fatty acids—essentially soap—as the molecules in our devices. There’s nothing in soap that should switch, and yet some of the soap devices switched phenomenally. We also made control devices with no molecule monolayers at all. None of them switched.
We were frustrated and burned out. Here we were, in late 2002, six years into our research. We had something that worked, but we couldn’t figure out why, we couldn’t model it, and we sure couldn’t engineer it. That’s when Greg Snider, who had worked with Kuekes on the Teramac, brought me the Chua memristor paper from the September 1971 IEEE Transactions on Circuit Theory. ”I don’t know what you guys are building,” he told me, ”but this is what I want.”
To this day, I have no idea how Greg happened to come across that paper. Few people had read it, fewer had understood it, and fewer still had cited it. At that point, the paper was 31 years old and apparently headed for the proverbial dustbin of history. I wish I could say I took one look and yelled, ”Eureka!” But in fact, the paper sat on my desk for months before I even tried to read it. When I did study it, I found the concepts and the equations unfamiliar and hard to follow. But I kept at it because something had caught my eye, as it had Greg’s: Chua had included a graph that looked suspiciously similar to the experimental data we were collecting.
The graph described the current-voltage (I-V) characteristics that Chua had plotted for his memristor. Chua had called them ”pinched-hysteresis loops”; we called our I-V characteristics ”bow ties.” A pinched hysteresis loop looks like a diagonal infinity symbol with the center at the zero axis, when plotted on a graph of current against voltage. The voltage is first increased from zero to a positive maximum value, then decreased to a minimum negative value and finally returned to zero. The bow ties on our graphs were nearly identical [see graphic, ”Bow Ties”].
That’s not all. The total change in the resistance we had measured in our devices also depended on how long we applied the voltage: the longer we applied a positive voltage, the lower the resistance until it reached a minimum value. And the longer we applied a negative voltage, the higher the resistance became until it reached a maximum limiting value. When we stopped applying the voltage, whatever resistance characterized the device was frozen in place, until we reset it by once again applying a voltage. The loop in the I-V curve is called hysteresis, and this behavior is startlingly similar to how synapses operate: synaptic connections between neurons can be made stronger or weaker depending on the polarity, strength, and length of a chemical or electrical signal. That’s not the kind of behavior you find in today’s circuits.
Looking at Chua’s graphs was maddening. We now had a big clue that memristance had something to do with our switches. But how? Why should our molecular junctions have anything to do with the relationship between charge and magnetic flux? I couldn’t make the connection.
Two years went by. Every once in a while I would idly pick up Chua’s paper, read it, and each time I understood the concepts a little more. But our experiments were still pretty much trial and error. The best we could do was to make a lot of devices and find the ones that worked.
But our frustration wasn’t for nothing: by 2004, we had figured out how to do a little surgery on our little sandwiches. We built a gadget that ripped the tiny devices open so that we could peer inside them and do some forensics. When we pried them apart, the little sandwiches separated at their weakest point: the molecule layer. For the first time, we could get a good look at what was going on inside. We were in for a shock.
What we had was not what we had built. Recall that we had built a sandwich with two platinum electrodes as the bread and filled with three layers: the platinum dioxide, the monolayer film of switching molecules, and the film of titanium.
But that’s not what we found. Under the molecular layer, instead of platinum dioxide, there was only pure platinum. Above the molecular layer, instead of titanium, we found an unexpected and unusual layer of titanium dioxide. The titanium had sucked the oxygen right out of the platinum dioxide! The oxygen atoms had somehow migrated through the molecules and been consumed by the titanium. This was especially surprising because the switching molecules had not been significantly perturbed by this event—they were intact and well ordered, which convinced us that they must be doing something important in the device.
The chemical structure of our devices was not at all what we had thought it was. The titanium dioxide—a stable compound found in sunscreen and white paint—was not just regular titanium dioxide. It had split itself up into two chemically different layers. Adjacent to the molecules, the oxide was stoichiometric TiO2, meaning the ratio of oxygen to titanium was perfect, exactly 2 to 1. But closer to the top platinum electrode, the titanium dioxide was missing a tiny amount of its oxygen, between 2 and 3 percent. We called this oxygen-deficient titanium dioxide TiO2-x, where x is about 0.05.
Because of this misunderstanding, we had been performing the experiment backward. Every time I had tried to create a switching model, I had reversed the switching polarity. In other words, I had predicted that a positive voltage would switch the device off and a negative voltage would switch it on. In fact, exactly the opposite was true.
It was time to get to know titanium dioxide a lot better. They say three weeks in the lab will save you a day in the library every time. In August of 2006 I did a literature search and found about 300 relevant papers on titanium dioxide. I saw that each of the many different communities researching titanium dioxide had its own way of describing the compound. By the end of the month, the pieces had fallen into place. I finally knew how our device worked. I knew why we had a memristor.
The exotic molecule monolayer in the middle of our sandwich had nothing to do with the actual switching. Instead, what it did was control the flow of oxygen from the platinum dioxide into the titanium to produce the fairly uniform layers of TiO2 and TiO2-x. The key to the switching was this bilayer of the two different titanium dioxide species [see diagram, ”How Memristance Works”]. The TiO2 is electrically insulating (actually a semiconductor), but the TiO2-x is conductive, because its oxygen vacancies are donors of electrons, which makes the vacancies themselves positively charged. The vacancies can be thought of like bubbles in a glass of beer, except that they don’t pop—they can be pushed up and down at will in the titanium dioxide material because they are electrically charged.
Now I was able to predict the switching polarity of the device. If a positive voltage is applied to the top electrode of the device, it will repel the (also positive) oxygen vacancies in the TiO2-x layer down into the pure TiO2 layer. That turns the TiO2 layer into TiO2-x and makes it conductive, thus turning the device on. A negative voltage has the opposite effect: the vacancies are attracted upward and back out of the TiO2, and thus the thickness of the TiO2 layer increases and the device turns off. This switching polarity is what we had been seeing for years but had been unable to explain.
On 20 August 2006, I solved the two most important equations of my career—one equation detailing the relationship between current and voltage for this equivalent circuit, and another equation describing how the application of the voltage causes the vacancies to move—thereby writing down, for the first time, an equation for memristance in terms of the physical properties of a material. This provided a unique insight. Memristance arises in a semiconductor when both electrons and charged dopants are forced to move simultaneously by applying a voltage to the system. The memristance did not actually involve magnetism in this case; the integral over the voltage reflected how far the dopants had moved and thus how much the resistance of the device had changed.
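That description can be turned into a small numerical sketch. The code below implements the widely cited linear dopant-drift memristor model associated with the HP device, in which the state variable w is the thickness of the conductive, oxygen-deficient region; the parameter values are illustrative guesses rather than measured device data. Driving the model with a sine-wave voltage and plotting current against voltage traces out the pinched hysteresis loop (the ”bow tie”) described earlier:

    import math

    # Illustrative parameters only -- not measured values for any real device.
    R_ON, R_OFF = 100.0, 16000.0   # ohms: resistance when fully doped vs. fully undoped
    D = 10e-9                      # m: total film thickness
    MU_V = 1e-14                   # m^2/(V*s): dopant (oxygen-vacancy) mobility
    dt, steps = 1e-4, 20000        # two seconds of simulated time
    w = 0.1 * D                    # initial thickness of the conductive TiO2-x region

    voltages, currents = [], []
    for n in range(steps):
        t = n * dt
        v = math.sin(2 * math.pi * 1.0 * t)        # 1 V amplitude, 1 Hz drive
        x = w / D
        m = R_ON * x + R_OFF * (1.0 - x)           # memristance for the current state
        i = v / m
        w += MU_V * R_ON / D * i * dt              # linear drift of the doped region
        w = min(max(w, 0.0), D)                    # keep the state physically meaningful
        voltages.append(v)
        currents.append(i)

    # Plotting currents against voltages (for example with matplotlib) traces the
    # pinched hysteresis loop: the curve always passes through the origin.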
We finally had a model we could use to engineer our switches, which we had by now positively identified as memristors. Now we could use all the theoretical machinery Chua had created to help us design new circuits with our devices.
Triumphantly, I showed the group my results and immediately declared that we had to take the molecule monolayers out of our devices. Skeptical after years of false starts and failed hypotheses, my team reminded me that we had run control samples without molecule layers for every device we had ever made and that those devices had never switched. And getting the recipe right turned out to be tricky indeed. We needed to find the exact amounts of titanium and oxygen to get the two layers to do their respective jobs. By that point we were all getting impatient. In fact, it took so long to get the first working device that in my discouragement I nearly decided to put the molecule layers back in.
A month later, it worked. We not only had working devices, but we were also able to improve and change their characteristics at will.
But here is the real triumph. The resistance of these devices stayed constant whether we turned off the voltage or just read their states (interrogating them with a voltage so small it left the resistance unchanged). The oxygen vacancies didn’t roam around; they remained absolutely immobile until we again applied a positive or negative voltage. That’s memristance: the devices remembered their current history. We had coaxed Chua’s mythical memristor off the page and into being.
Emulating the behavior of a single memristor, Chua showed, requires a circuit with at least 15 transistors and other passive elements. The implications are extraordinary: just imagine how many kinds of circuits could be supercharged by replacing a handful of transistors with one single memristor.
The most obvious benefit is to memories. In its initial state, a crossbar memory has only open switches, and no information is stored. But once you start closing switches, you can store vast amounts of information compactly and efficiently. Because memristors remember their state, they can store data indefinitely, using energy only when you toggle or read the state of a switch, unlike the capacitors in conventional DRAM, which will lose their stored charge if the power to the chip is turned off. Furthermore, the wires and switches can be made very small: we should eventually get down to a width of around 4 nm, and then multiple crossbars could be stacked on top of each other to create a ridiculously high density of stored bits.
Greg Snider and I published a paper last year showing that memristors could vastly improve one type of processing circuit, called a field-programmable gate array, or FPGA. By replacing several specific transistors with a crossbar of memristors, we showed that the circuit could be shrunk by nearly a factor of 10 in area and improved in terms of its speed relative to power-consumption performance. Right now, we are testing a prototype of this circuit in our lab.
And memristors are by no means hard to fabricate. The titanium dioxide structure can be made in any semiconductor fab currently in existence. (In fact, our hybrid circuit was built in an HP fab used for making inkjet cartridges.) The primary limitation to manufacturing hybrid chips with memristors is that today only a small number of people on Earth have any idea of how to design circuits containing memristors. I must emphasize here that memristors will never eliminate the need for transistors: passive devices and circuits require active devices like transistors to supply energy.
The potential of the memristor goes far beyond juicing a few FPGAs. I have referred several times to the similarity of memristor behavior to that of synapses. Right now, Greg is designing new circuits that mimic aspects of the brain. The neurons are implemented with transistors, the axons are the nanowires in the crossbar, and the synapses are the memristors at the cross points. A circuit like this could perform real-time data analysis for multiple sensors. Think about it: an intelligent physical infrastructure that could provide structural assessment monitoring for bridges. How much money—and how many lives—could be saved?
I’m convinced that eventually the memristor will change circuit design in the 21st century as radically as the transistor changed it in the 20th. Don’t forget that the transistor was lounging around as a mainly academic curiosity for a decade until 1956, when a killer app—the hearing aid—brought it into the marketplace. My guess is that the real killer app for memristors will be invented by a curious student who is now just deciding what EE courses to take next year.
About the Author
R. STANLEY WILLIAMS, a senior fellow at Hewlett-Packard Labs, wrote this month’s cover story, ”How We Found the Missing Memristor.” Earlier this year, he and his colleagues shook up the electrical engineering community by introducing a fourth fundamental circuit design element. The existence of this element, the memristor, was first predicted in 1971 by IEEE Fellow Leon Chua, of the University of California, Berkeley, but it took Williams 12 years to build an actual device. | <urn:uuid:fc30d469-0a3c-4993-a11a-95b648c6e637> | {
"date": "2013-05-18T08:12:30",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9618540406227112,
"score": 3.171875,
"token_count": 6717,
"url": "http://spectrum.ieee.org/semiconductors/processors/how-we-found-the-missing-memristor/5"
} |
Mercury in the Morning
The planet Mercury -- the planet closest to the Sun -- is just peeking into view in the east at dawn the next few days. It looks like a fairly bright star. It's so low in the sky, though, that you need a clear horizon to spot it, and binoculars wouldn't hurt.
Mercury is a bit of a puzzle. It has a big core that's made mainly of iron, so it's quite dense. Because Mercury is so small, the core long ago should've cooled enough to form a solid ball. Yet the planet generates a weak magnetic field, hinting that the core is still at least partially molten.
The solution to this puzzle may involve an iron "snow" deep within the core.
The iron in the core is probably mixed with sulfur, which has a lower melting temperature than iron. Recent models suggest that the sulfur may have kept the outer part of the core from solidifying -- it's still a hot, thick liquid.
As this mixture cools, though, the iron "freezes" before the sulfur does. Small bits of solid iron fall toward the center of the planet. This creates convection currents -- like a pot of boiling water. The motion is enough to create a "dynamo" effect. Like a generator, it produces electrical currents, which in turn create a magnetic field around the planet.
Observations earlier this year by the Messenger spacecraft seem to support that idea. But Messenger will provide much better readings of what's going on inside Mercury when it enters orbit around the planet in 2011.
Script by Damond Benningfield, Copyright 2008
For more skywatching tips, astronomy news, and much more, read StarDate magazine. | <urn:uuid:d0a1999f-a775-4afc-bcfd-ee6ff6243a0b> | {
"date": "2013-05-18T06:50:10",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9436610341072083,
"score": 4,
"token_count": 357,
"url": "http://stardate.org/radio/program/2008-10-20"
} |
Teacher Tip: instead of addressing the class as...
Maps: Behold ORBIS, a Google Maps for the Roman... →
Have you ever wondered how much it would cost to travel from Londinium to Jerusalem in February during the heyday of the Roman Empire? Thanks to a project helmed by historian Walter Scheidel and developer Elijah Meeks of Stanford University, all of your pressing queries about Roman roadways can be answered! This is ORBIS, an online simulation (and thoroughly brainy time sink) that allows you to...
"Telling the Time" presentation →
Don't Insist on English →
chris-english-daily: A TED talk presentation about the spread and usage of the English language - some interesting ideas here kids …. | <urn:uuid:848c99d1-4b57-4148-90f9-66ad418ba789> | {
"date": "2013-05-18T08:08:42",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8552047610282898,
"score": 2.546875,
"token_count": 159,
"url": "http://stepladderteaching.tumblr.com/archive"
} |
Requirements for proficiency in Norwegian
The Norwegian language is the primary language of instruction at Norwegian institutions of higher education. Some foreign students learn Norwegian before they continue with further studies in Norway. Below is an overview of the language requirements for foreign students applying for courses where the language of instruction is Norwegian.
If applying for a course taught in Norwegian, or for general acceptance into an institution, applicants outside of the Nordic countries must meet one of the following requirements:
- Successfully passed 'Norwegian as a second language' from upper secondary school.
- Successfully passed Level 3 in Norwegian at a university.
- Successfully passed one-year study in Norwegian language and society for foreign students from a university college.
- Successfully passed test in Norwegian at higher level, 'Bergenstesten', with a minimum score of 450.
In certain cases, institutions may accept other types of documentation. Please contact the institutions directly for details. | <urn:uuid:de1e1761-77e7-432f-856b-39b8c69b3443> | {
"date": "2013-05-18T08:01:47",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9260422587394714,
"score": 2.90625,
"token_count": 191,
"url": "http://studyinnorway.no/Study-in-Norway/Admission-Application/Requirements-for-proficiency-in-Norwegian"
} |
Monitor drivers vs. video adapter drivers: How are they different and which do I need?
Monitor drivers are specific to the monitor. They are usually text files that tell the operating system what the monitor is and what it is capable of. They are not required for the monitor to function.
Video adapter drivers
Your video adapter lets your computer communicate with a monitor by sending images, text, graphics, and other information. Better video adapters provide higher-quality images on your screen, but the quality of your monitor plays a large role as well. For example, a monochrome monitor cannot display colors no matter how powerful the video adapter is.
A video driver is a file that allows your operating system to work with your video adapter. Each video adapter requires a specific video driver. When you update your video adapter, your operating system will provide a list and let you pick the appropriate video driver for it. If you do not see the video driver for your adapter in the list, contact the manufacturer of your video adapter to get the necessary video driver. | <urn:uuid:73cae941-2fef-484a-8887-1a20cde01d2f> | {
"date": "2013-05-18T05:34:40",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9252598881721497,
"score": 2.75,
"token_count": 212,
"url": "http://support.lenovo.com/en_GB/research/hints-or-tips/detail.page?LegacyDocID=MIGR-4MVU4G"
} |
In 1962 President John F. Kennedy’s administration narrowly averted possible nuclear war with the USSR, when CIA operatives spotted Soviet surface-to-surface missiles in Cuba, after a six-week gap in intelligence-gathering flights.
In their forthcoming book Blind over Cuba: The Photo Gap and the Missile Crisis, co-authors David Barrett and Max Holland make the case that the affair was a close call stemming directly from a decision made in a climate of deep distrust between key administration officials and the intelligence community.
Using recently declassified documents, secondary materials, and interviews with several key participants, the authors weave a story of intra-agency conflict, suspicion, and discord that undermined intelligence-gathering, adversely affected internal postmortems conducted after the crisis peaked, and resulted in keeping Congress and the public in the dark about what really happened.
We asked Barrett, a professor of political science at Villanova University, to discuss the actual series of events and what might have happened had the CIA not detected Soviet missiles on Cuba.
The Actual Sequence of Events . . .
“Some months after the Cuban Missile Crisis, an angry member of the Armed Services Committee of the House of Representatives criticized leaders of the Kennedy administration for having let weeks go by in September and early October 1962, without detecting Soviet construction of missile sites in Cuba. It was an intelligence failure as serious as the U.S. ignorance that preceded the Japanese attack on Pearl Harbor in 1941, he said.
Secretary of Defense Robert McNamara aggressively denied that there had been an American intelligence failure or ineptitude with regard to Cuba in late summer 1962. McNamara and others persuaded most observers the administration’s performance in the lead-up to the Crisis had been almost flawless, but the legislator was right: The CIA had not sent a U-2 spy aircraft over western Cuba for about a six week period.
There were varying reasons for this, but the most important was that the Kennedy administration did not wish to have a U-2 “incident.” Sending that aircraft over Cuba raised the possibility that Soviet surface-to-air missiles might shoot one down. Since it was arguably against international law for the U.S. to send spy aircraft over another country, should one be shot down, there would probably be the same sort of uproar as happened in May 1960, when the Soviet Union shot down an American U-2 flying over its territory.
Furthermore, most State Department and CIA authorities did not believe that the USSR would put nuclear-armed missiles into Cuba that could strike the U.S. Therefore, the CIA was told, in effect, not even to request permission to send U-2s over western Cuba. This, at a time when there were growing numbers of reports from Cuban exiles and other sources about suspicious Soviet equipment being brought into the country.

As we now know, the Soviets WERE constructing missile sites on what CIA deputy director Richard Helms would call “the business end of Cuba,” i.e., the western end, in the summer/autumn of 1962. Fortunately, by mid-October, the CIA’s director, John McCone, succeeded in persuading President John F. Kennedy to authorize one U-2 flight over that part of Cuba and so it was that Agency representatives could authoritatively inform JFK on October 16th that the construction was underway.

The CIA had faced White House and State Department resistance for many weeks about this U-2 matter."
What Could Have Happened . . .
“What if McCone had not succeeded in persuading the President that the U.S. needed to step up aerial surveillance of Cuba in mid-October? What if a few more weeks had passed without that crucial October 14 U-2 flight and its definitive photography of Soviet missile site construction?
Remember to check out Blind over Cuba: The Photo Gap and the Missile Crisis, which is being published this fall!
If McCone had been told “no” in the second week of October, perhaps it would have taken more human intelligence, trickling in from Cuba, about such Soviet activity before the President would have approved a risky U-2 flight.
The problem JFK would have faced then is that there would have been a significant number of operational medium-range missile launch sites. Those nuclear-equipped missiles could have hit the southern part of the U.S. Meanwhile, the Soviets would also have progressed further in construction of intermediate missile sites; such missiles could have hit most of the continental United States.
If JFK had not learned about Soviet nuclear-armed missiles until, say, November 1st, what would the U.S. have done?
There is no definitive answer to that question, but I think it’s fair to say that the President would have been under enormous pressure to authorize--quickly--a huge U.S. air strike against Cuba, followed by an American invasion. One thing which discovery of the missile sites in mid-October gave JFK was some time to negotiate effectively with the Soviet Union during the “Thirteen Days” of the crisis. I don’t think there would have been such a luxury if numerous operational missiles were discovered a couple weeks later.
No wonder President Kennedy felt great admiration and gratitude toward those at the CIA (with its photo interpreters) and the Air Force (which piloted the key U-2 flight). The intelligence he received on October 16th was invaluable. I think he knew that if that intelligence had not come until some weeks later, there would have been a much greater chance of nuclear war between the U.S. and the Soviet Union.” | <urn:uuid:7da5e687-07e2-4c8f-9fac-fe3f58c7017a> | {
"date": "2013-05-18T06:56:29",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9771577715873718,
"score": 2.828125,
"token_count": 1150,
"url": "http://tamupress.blogspot.com/2012/07/close-call-what-if-cia-had-not-spotted.html"
} |
If you have ever used the Windows Copy (Ctrl+C) to copy objects to the clipboard and then the Windows Paste (Ctrl+V) to copy/paste AutoCAD object(s), then you know that those clipboard object(s) will have the lower left-hand corner of their extents as the base point (not very precise)... and this always reminds me of some of the graphic editing applets (e.g.: Paint or even the wonderful AutoCAD Button Editor!) that have you draw a circle like a rectangle. (annoying to say the least!)
With AutoCAD you can use the keyboard shortcut of (Ctrl+Shft+C) to pick a base point for your clipboard object(s). COPYBASE is the actual command, and then you can paste to a precise point in the destination AutoCAD DWG file using the keyboard shortcut of (Ctrl+Shift+V). This is the PASTEBLOCK command or you can also use the PASTEORIG command if the COPYBASEd object(s) go in the same exact spot in the receiving DWG file.
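To make the workflow concrete, here is a rough sketch of the command sequence (the exact prompt wording varies a bit between AutoCAD releases, so treat this as an illustration rather than a verbatim transcript):
  Command: COPYBASE
  Specify base point: (snap to a precise point, e.g. an ENDpoint or INTersection)
  Select objects: (pick the geometry, then press Enter)
  ... switch to the destination drawing ...
  Command: PASTEBLOCK
  Specify insertion point: (snap to the matching point in the destination DWG)
If the geometry belongs at the same coordinates in the destination file, PASTEORIG skips the insertion-point prompt entirely and drops the object(s) at their original location.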
Also, it is important to note: if you do use the Ctrl+Shift+V PASTEBLOCK method and want to leave it as a block, AutoCAD will assign a name for the block, something like "A$C11A06AFD" or "A$C1F7A5022"... Either use the RENAME command, or use EXPLODE or XPLODE. Also watch your layers, with regard to the object(s)' original layers, where this new "block" is being INSERTed, and where the objects go if they are EXPLODEd vs. XPLODEd. (I will save that for a whole different post.) | <urn:uuid:ab6efc59-895c-4b89-90f1-13b1d77a46de> | {
"date": "2013-05-18T07:14:21",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.854438841342926,
"score": 2.859375,
"token_count": 381,
"url": "http://tlconsulting.blogspot.com/2005_07_01_archive.html"
} |
On January 16, 1863, Walt Whitman wrote a pained letter to his brother, Thomas Jefferson Whitman, in which he bemoaned the Union’s recent defeat at Fredericksburg as the most “complete piece of mismanagement perhaps ever yet known in the earth's wars.”
While Whitman today is celebrated as one of America’s greatest poets, works like Leaves of Grass, penned in the 1850s, were seen as scandalous by an American reading public unready for Whitman’s unconventional lifestyle. An opponent of slavery, Whitman supported the Union with the poem Beat! Beat! Drums! and volunteered as a nurse in army hospitals.
After Lincoln’s assassination in 1865, Whitman penned O Captain! My Captain!, eulogizing the President for having navigated the ship of state through the storm of war, only to meet a violent end. | <urn:uuid:1c1ea546-62ba-4fed-b94c-3c8787377d6e> | {
"date": "2013-05-18T06:50:34",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9769465327262878,
"score": 2.59375,
"token_count": 180,
"url": "http://tpr.org/post/week-civil-war-485?ft=1&f=168896410,168978447,169074256,169170371,169367809,169981090,169981167,169981640,169982186,169983052,169983689,170665935,170667035,170667818,173159441"
} |
The Neighbor Squirrel
These busy fluffballs have lost their fear of most predators - and they help plant pecan trees.
By Sheryl Smith-Rodgers
Have you ever watched an eastern fox squirrel (Sciurus niger) bury an acorn or pecan? A nuzzle here, another there, then he hurriedly pushes the leaves and grass over the site before scampering up the closest tree. Minutes later, he's back with another nut. Over the course of three months, that industrious squirrel can bury several thousand pecans. Come winter, when food's scarce, he'll find them again with his excellent sense of smell. Some will escape his appetite, though, and sprout into saplings, which is how many native nut trees get planted.
Eastern fox squirrels - the state's most common and wide-ranging squirrel and a popular game animal, too - occur in forests and riparian habitats. They also easily adapt to cities and neighborhoods, where they've lost most of their fear of natural predators.
"Playing the call of a red-tailed hawk didn't phase squirrels on campus," reports Bob McCleery, a wildlife lecturer at Texas A&M University, who has studied urban squirrels in College Station. "When we played a coyote call in the Navasota river bottom, a squirrel immediately flattened itself in the crotch of a tree for a good five minutes."
When agitated, fox squirrels - whose fur closely resembles that of a gray fox - bark and jerk their long, bushy tails, which they use for balance when scampering on utility lines and other high places. Tails provide warmth and protection, too. "In the summer, I've seen them lying down with their tails over their heads to block the sun," McCleery says. | <urn:uuid:3cb858ec-4357-48a5-9912-c7929ec225af> | {
"date": "2013-05-18T05:11:59",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9645190238952637,
"score": 3.296875,
"token_count": 370,
"url": "http://tpwmagazine.com/archive/2008/jan/scout3/"
} |
Ragtime and blues fused ‘All That Jazz’
By Laura Szepesi
Published: Sunday, March 17, 2013, 7:09 p.m.
Updated: Monday, March 18, 2013
EDITOR'S NOTE: Thursday marks the 85th birthday of well-known Connellsville jazz trombonist Harold Betters. We salute him with this four-part series, starting today with a brief history of jazz music.
In 1979, actor Roy Scheider brought the life of Broadway dancer / director Bob Fosse to the big screen in the film “All That Jazz.”
“All” is the perfect way to describe jazz music.
Jazz was born around 1900 in New Orleans — about the same time as the earliest music recordings became available to the public. It grew out of ragtime, which many sources claim is the first true American music.
Like jazz, ragtime has Southern roots, but was also flavored by the southern Midwest. It was popular from the late 1800s to around 1920. It developed in African American communities, a mix of march music (from composers such as John Philip Sousa), black songs and dances including the cakewalk.
Ragtime: Dance on
Eventually, ragtime spread across the United States via printed sheet music, but its roots were as live dance music in the red light districts of large cities such as St. Louis and New Orleans. Ernest Hogan is considered ragtime's father. He named it ragtime because of the music's lively ragged syncopation.
Ragtime faded as jazz's following grew. However, composers enjoyed major success in ragtime's early years. Scott Joplin's 1899 "Maple Leaf Rag" was a hit, as was his "The Entertainer," which was resurrected as a Top 5 hit when it was featured in the 1973 movie "The Sting" starring Robert Redford and Paul Newman.
Born of ragtime, jazz was also heavily influenced by the blues. Blues originated in the late 1800s, but in the deep South. It is an amalgam of Negro spirituals, work songs, shouts, chants and narrative lyrics.
Fused with blues
Like jazz, the blues comes in many forms: delta, piedmont, jump and Chicago blues. Its popularity grew after World War II when electric guitars — rather than acoustic guitars — became popular. By the early 1970s, blues had formed another hybrid: blues rock.
While ragtime is jangly and spirited, the blues takes after its name: blue, or melancholy. Its name is traced to 1912 when Hart Wand copyrighted the first blues song, "Dallas Blues."
Jazz — as a mix of ragtime and blues — has fused into many styles since its emergence.
In the 1910s, New Orleans jazz was the first to take off. In the 1930s and 1940s, Big Band swing, Kansas City jazz and bebop prevailed. Other forms include cool jazz and jazz rock; today, there's even cyber jazz.
Jazz: Always changing
The late jazz trombone player J.J. Johnson summed jazz up as restless. “It won't stay put ... and never will,” he was quoted as saying, according to various sources.
Johnson's sentiment is heartily endorsed by Connellsville jazz trombonist Harold Betters. Betters turns 85 years old this week. He will share decades of his memories about music and growing up in Connellsville as his March 21 birthday approaches.
Laura Szepesi is a freelance writer.
Tuesday: Just how did Harold Betters decide to play the trombone?
| <urn:uuid:9adf6dc2-8439-48ef-addd-49274751b0af> | {
"date": "2013-05-18T05:09:22",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9664452075958252,
"score": 2.75,
"token_count": 959,
"url": "http://triblive.com/news/fayette/3678122-74/jazz-blues-ragtime"
} |
The diagnosis of Trichotillomania (TM) is synonymous with the act of recurrently pulling one’s own body hair resulting in noticeable thinning or baldness. (American Psychiatric Association, Diagnostic and statistical manual of mental disorders, 2000, p. 674) Sites of hair pulling can include any area of the body in which hair is found, but the most common sites are the scalp, eyelashes, eyebrows, and the pubis area. (Kraemer, 1999, p. 298) The disorder itself is categorized in the DSM-IV-TR as an “Impulse Control Disorder Not Elsewhere Classified” along with disorders like Pathological Gambling, Pyromania, Kleptomania, and Intermittent Explosive Disorder. Although TM was previously considered to be a rare disorder, more recent research indicates that prevalence rates of TM may be as high as 2% of the general population. (Kraemer, 1999, p. 298) This prevalence rate is significantly higher than the lifetime prevalence rate of .6% that is cited as a potential baseline among college students in the DSM-IV-TR. (4th ed., text rev.; DSM-IV-TR; American Psychiatric Association, 2000, p. 676) The condition appears to be more common among women and the period of onset is typically in childhood or adolescence. (Kraemer, 1999, p. 298) As is customary with most DSM-IV-TR diagnoses, the act of hair pulling cannot be better accounted for by another mental disorder (like delusions, for example) or a general medical condition. Like every disorder in the DSM-IV-TR, the disturbance must cause significant distress or impairment in functioning. (4th ed., text rev.; DSM-IV-TR; American Psychiatric Association, 2000, p. 675)
Alopecia is a key concept that must be understood in order to complete the differential diagnosis of TM. Alopecia is a condition of baldness in the most general sense. (Shiel, Jr. & Stoppler, 2008, p. 14) Other medically related causes of alopecia should be considered in the differential diagnosis of TM, especially when working with an individual who denies pulling their hair. The common suspects include male-pattern baldness, Discoid Lupus Erythematosus (DLE), Lichen Planopilaris (also known as Acuminatus), Folliculitis Decalvans, Pseudopelade of Brocq, and Alopecia Mucinosa (Follicular Mucinosis). (4th ed., text rev.; DSM-IV-TR; American Psychiatric Association, 2000, p. 676) Comprehensive coverage of these medical conditions is beyond the scope of this article – all of the aforementioned confounding variables can be eliminated by a general practitioner.
There are a number of idiosyncratic features associated with TM that bear mentioning. Although the constellation of features covered here is not sufficient to warrant a diagnosis in isolation, they can aid in the differential diagnosis process. Alopecia, regardless of the cause, has been known to lead sufferers to tremendous feats of avoidance so that the hair loss remains undetected. Simply avoiding social functions or other events where the individual (and their attendant hair loss) might be uncovered is a common occurrence. In cases where individual’s focus of attention is on the head or scalp, it is not uncommon for affected individuals to attempt to hide hair loss by adopting complimentary hair styles or wearing other headwear (e.g., hats, wigs, etc). These avoidance behaviors will be the target of exposure and response prevention later in this article.
In addition to avoidant behavior and elaborate attempts to “cover it up,” individuals with TM frequently present with clinically significant difficulty in areas such as self-esteem and mood. Comorbidity, or the presence of one or more disorders in addition to a primary diagnosis, is the rule not the exception in the stereotypical presentation of TM. Mood disorders (like depression) are the most common (65%) – anxiety (57%), chemical use (22%), and eating disorders (20%) round out the top four most likely candidates for comorbidity. (Kraemer, 1999, p. 298) These comorbidity rates are not overly surprising since they parallel prevalence rates across the wider population – perhaps with the notable exception of the high rate of comorbid eating disorders. We can speculate about the source of comorbidity – one possible hypothesis is that a few people who suffer TM also suffer from a persistent cognitive dissonance associated with having a happy-go-lucky personality trait which leads them to “let the chips fall where they may.” They are individuals prone to impulsivity, but they are subdued and controlled by the shame, guilt, frustration, fear, rage, and helplessness associated with the social limitations placed on them by the disorder. (Ingram, 2012, p. 269) On the topic of personality, surprisingly enough, research suggests that personality disorders do not share significant overlap with TM. This includes Borderline Personality Disorder (BPD) despite the fact that BPD is often associated with self-harming behavior. (Kraemer, 1999, p. 299)
Differentiating TM from Obsessive-Compulsive Disorder (OCD) can be challenging in some cases. TM is similar to OCD because there is a “sense of gratification” or “relief” when pulling the hair out. Unlike individuals with OCD, individuals with TM do not perform their compulsions in direct response to an obsession and/or according to rules that must be rigidly adhered to. (4th ed., text rev.; DSM-IV-TR; American Psychiatric Association, 2000, p. 676) There are, however, observed similarities between OCD and TM regarding phenomenology, neurological test performance, response to SSRI’s, and contributing elements of familial and/or genetic factors. (Kraemer, 1999, p. 299) Due to the large genetic component contributions of both disorders, obtaining a family history (vis-à-vis a detailed genogram) is highly recommended. The comprehensive genogram covering all mental illness can be helpful in the discovery the comorbid conditions identified above as well.
There is some suggestion that knowledge of events associated with onset is “intriguing, but unnecessary for successful treatment.” (Kraemer, 1999, p. 299) I call shenanigans. There is a significant connection between the onset of TM and the patient enduring loss, perceived loss, and/or trauma. Time is well spent exploring the specific environmental stressors that precipitated the disorder. Although ignoring circumstances surrounding onset might be prudent when employing strict behavioral treatment paradigms, it seems like a terrible waste of time to endure suffering without identifying some underlying meaning or purpose that would otherwise be missed if we overlook onset specifics. “Everything can be taken from a man but one thing: the last of human freedoms – to choose one’s attitude in any given set of circumstances, to choose one’s own way.” (Frankl, 1997, p. 86) If we acknowledge that all behavior is purposeful, then we must know and understand the circumstances around onset if we will ever understand the purpose of said behavior. I liken this to a difference in professional opinion and personal preference because either position can be reasonably justified, but in the end the patient should make the ultimate decision about whether or not to explore onset contributions vis-à-vis “imagery dialogue” or a similar technique. (Young, Klosko, & Weishaar, 2003, p. 123) If such imagery techniques are unsuccessful or undesired by the client, a psychodynamic conversation between “internal parts of oneself” can add clarity to the persistent inability of the client to delay gratification. (Ingram, 2012, p. 292) Such explorations are likely to be time consuming, comparatively speaking, and should not be explored with patients who are bound by strict EAP requirements or managed care restrictions on the type and length of treatment. Comorbid developmental disabilities and cognitive deficits may preclude this existential exploration. I employ the exploration of existential issues of origin in the interest of increasing treatment motivation, promoting adherence, enhancing the therapeutic milieu, and thwarting subsequent lapses by anchoring cognitive dissonance to a concrete event.
TM represents a behavioral manifestation of fixed action patterns (FAPs) that are rigid, consistent, and predictable. FAPs are generally thought to have evolved from our most primal instincts as animals – they are believed to contain fundamental behavioral ‘switches’ that enhance the survivability of the human species. (Lambert & Kinsley, 2011, p. 232) The nature of FAPs that leads some researchers to draw parallels to TM is that FAPs appear to be qualitatively “ballistic.” It’s an “all or nothing” reaction that is comparable to an action potential traveling down the axon of a neuron. Once they are triggered they are very difficult to suppress and may have a tendency to “kindle” other effects. (Lambert & Kinsley, 2011, p. 233)
There are some unique considerations when it comes to assessing a new patient with TM. Because chewing on or ingesting the hair is reported in nearly half of TM cases, the attending clinician should always inquire about oral manipulation and about gastrointestinal pain associated with a connected hair mass in the stomach or bowel (trichobezoar). Motivation for change should be assessed and measured because behavioral interventions inherently require a great deal of effort. Family and social systems should not be ignored since family dynamics can exacerbate symptomatology vis-à-vis pressure to change (negative reinforcement), excessive attention (positive reinforcement), or both. (Kraemer, 1999, p. 299)
What remains to be seen is the role of stress in the process of “triggering” a TM episode. Some individuals experience an “itch like” sensation as a physical antecedent that remits once the hair is pulled. This “itch like” sensation is far from universal. Some clinicians and researchers believe that the abnormal grooming behavior found in TM is “elicited in response to stress” with the necessary but not sufficient condition of “limited options for motoric behavior and tension release.” (Kraemer, 1999, p. 299) Although this stress hypothesis may materialize as a tenable hypothesis in some cases, it’s by no means typical. Most people diagnosed with TM report that the act of pulling typically occurs during affective states of relaxation and distraction. Most individuals who suffer from TM do not report clinically significant levels of anxiety as the “trigger” of bouts of hair pulling. We could attribute this to an absence of insight regarding anxiety related triggers, or perhaps anxiety simply does not play a significant role in the onset and maintenance of hair pulling episodes. Regardless of the factors that trigger episodes, a comprehensive biopsychosocial assessment that includes environmental stressors (past, present and anticipated) should be explored.
The options for treatment of TM are limited at best. SSRIs have demonstrated some potential in the treatment of TM, but more research is needed before we can consider SSRIs as a legitimate first-line treatment. SSRIs are worth a shot as an adjunct treatment in cases of chronic, refractory, or treatment resistant TM. I would consider recommending a referral to a psychiatrist (not a general practitioner) for a medication review due in part to the favorable risk profile of the most recent round of SSRIs. Given the high rate of comorbidity with mood and anxiety disorders – if either is anxiety or depression are comorbid, SSRIs will likely be recommended regardless. Killing two birds with one stone is the order of the day, but be mindful that some medication can interfere with certain treatment techniques like imaginal or in vivo exposure. (Ledley, Marx, & Heimberg, 2010, p. 141) Additional research is needed before anxiolytic medications can be recommended in the absence of comorbid anxiety disorders (especially with children). Hypnosis and hypnotic suggestion in combination with other behavioral interventions may be helpful for some individuals, but I don’t know enough about it at this time to recommend it. Call me skeptical, or ignorant, but I prefer to save the parlor tricks for the circus…
Habit reversal is no parlor trick. My goal isn’t to heal the patient; that would create a level of dependence I am not comfortable with… my goal is to teach clients how to heal themselves. Okay, but how? The combination of Competing Response Training, Awareness/Mindfulness Training, Relaxation Training, Contingency Management, Cognitive Restructuring, and Generalization Training is the best hope for someone who seeks some relief from TM. Collectively I will refer to this collection of techniques as Habit Reversal.
Competing Response Training is employed in direct response to hair pulling or in situations where hair pulling might be likely. In the absence of “internal restraints to impulsive behavior,” artificial circumstances are created by identifying substitute behaviors that are totally incompatible with pulling hair. (Ingram, 2012, p. 292) Just like a compulsive gambling addict isn’t in any danger if spends all his money on rent, someone with TM is much less likely to pull hair if they are doing something else with their hands.
Antecedents, or triggers, are sometimes referred to as discriminative stimuli. (Ingram, 2012, p. 230) “We sense objects in a certain way because of our application of a priori intuitions…” (Pirsig, 1999, p. 133) Altering the underlying assumptions entrenched in maladaptive a priori intuitions is the core purpose of Awareness and Mindfulness Training. “There is a lack of constructive self-talk mediating between the trigger event and the behavior. The therapist helps the client build intervening self-messages: Slow down and think it over; think about the consequences.” (Ingram, 2012, p. 221) The connection to contingency management should be self-evident. Utilizing a customized self-monitoring record, the patient begins to acquire the necessary insight to “spot” maladaptive self talk. “Spotting” is not a new or novel concept – it is a central component of Abraham Low’s revolutionary self help system Recovery International. (Abraham Low Self-Help Systems, n.d.) The customized self-monitoring record should invariably include various data elements such as precursors, length of episode, number of hairs pulled, and a subjective unit of distress representing the level of “urge” or desire to pull hair. (Kraemer, 1999) The act of recording behavior (even in the absence of other techniques) is likely to produce significant reductions in TM symptomatology. (Persons, 2008, p. 182-201) Perhaps more importantly, associated activities, thoughts, and emotions that may be contributing to the urge to pull should be codified. (Kraemer, 1999, p. 300) In session, this record can be reviewed and subsequently tied to “high risk circumstances” and “a priori intuitions” involving constructs such as anger, frustration, depression, and boredom.
Relaxation training is a critical component if we subscribe to the “kindling” hypothesis explained previously. Relaxation is intended to reduce the urges that inevitably trigger the habit. Examples abound, but diaphragmatic breathing, progressive relaxation, and visualization are all techniques that can be employed in isolation or in conjunction with each other.
Contingency Management is inexorably tied to the existential anchor of cognitive dissonance described above. My emphasis on this element is where my approach might differ from some other clinicians. “You are free to do whatever you want, but you are responsible for the consequences of everything that you do.” (Ingram, 2012, p. 270) This might include the client writing down sources of embarrassment, advantages of controlling the symptomatology of TM, etc. (Kraemer, 1999) The moment someone with pyromania decides that no fire is worth being imprisoned for, they will stop starting fires. The same holds true with someone who acknowledges the consequences of pulling their hair.
How do we define success? Once habit reversal is successfully accomplished in one setting or situation, the client needs to be taught how to generalize that skill to other contexts. A hierarchical ranking of anxiety provoking situations can be helpful in this process since self-paced graduated exposure is likely to increase tolerability for the anxious client. (Ingram, 2012, p. 240) If skills are acquired, and generalization occurs, we can reasonably expect a significant reduction in TM symptomatology. The challenges are significant; cognitive behavioral therapy is much easier said than done. High levels of treatment motivation are required for the behavioral elements, and moderate to high levels of insight are exceptionally helpful for the cognitive elements. In addition, this is an impulse control disorder… impulsivity leads to treatment noncompliance and termination. The combination of all the above, in addition to the fact that TM is generally acknowledged as one of the more persistent and difficult to treat disorders, prevents me from providing any prognosis other than “this treatment will work as well as the client allows it to work.”
Abraham Low Self-Help Systems. (n.d.). Recovery international terms and definitions. Retrieved August 2, 2012, from http://www.lowselfhelpsystems.org/system/recovery-international-language.asp
American Psychiatric Association. (2000). Diagnostic and statistical manual of mental disorders (4th ed., text rev.). Washington, DC: Author.
Frankl, V. E. (1997). Man’s search for meaning (rev. ed.). New York, NY: Pocket Books.
Ingram, B. L. (2012). Clinical case formulations: Matching the integrative treatment plan to the client (2nd ed.). Hoboken, NJ: John Wiley & Sons.
Kraemer, P. A. (1999). The application of habit reversal in treating trichotillomania. Psychotherapy: Theory, Research, Practice, Training, 36(3), 298-304. doi: 10.1037/h0092314
Lambert, K. G., & Kinsley, C. H. (2011). Clinical neuroscience: Psychopathology and the brain (2nd ed.). New York: Oxford University Press.
Ledley, D. R., Marx, B. P., & Heimberg, R. G. (2010). Making cognitive-behavioral therapy work: Clinical process for new practitioners (2nd ed.). New York, NY: Guilford Press.
Persons, J. B. (2008). The case formulation approach to cognitive-behavior therapy. New York, NY: Guilford Press.
Pirsig, R. M. (1999). Zen and the art of motorcycle maintenance: An inquiry into values (25th Anniversary ed.). New York: Quill.
Shiel, W. C., Jr., & Stoppler, M. C. (Eds.). (2008). Webster’s new world medical dictionary (3rd ed.). Hoboken, NJ: Wiley Publishing.
Young, J. E., Klosko, J. S., & Weishaar, M. E. (2003). Schema therapy: A practitioner’s guide. New York: Guilford Press. | <urn:uuid:7947504d-63b5-4a37-bd58-19d265d90077> | {
"date": "2013-05-18T05:48:37",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9121842384338379,
"score": 2.796875,
"token_count": 4098,
"url": "http://try-therapy.com/2012/08/02/trichotillomania/"
} |
SCIENCE announced that it is taking viewers further inside NASA's latest mission to Mars with the exclusive world premiere of i.am.mars: REACH FOR THE STARS tonight, September 19, 2012, at 10 PM ET/PT. The special documents the artistic and technical process behind "Reach for the Stars," will.i.am's newest single that became the first song ever to be broadcast from another planet to Earth.
In what is being hailed as "the most complex Mars mission to date," NASA's Curiosity spacecraft successfully landed on the red planet on August 6, 2012. Since then the Curiosity rover has returned stunning photographs and valuable information about the Martian surface that is helping scientists determine if it has the ability to support life.
Recently, Curiosity also transmitted will.i.am's new song "Reach for the Stars," marking the first time in history that recorded music has been broadcast from another planet to Earth. i.am.mars: REACH FOR THE STARS profiles will.i.am's passion for science and his belief in inspiring the next generation of scientists through STEM (Science, Technology, Engineering and Math) education.
i.am.mars: REACH FOR THE STARS also gives viewers a window into his creative process, as well as the recording of the song with a full children's choir and orchestra. In addition, viewers also go inside the engineering challenges NASA faced in uploading the song to Curiosity, and the hard work required to make the historic 700 million mile interplanetary broadcast a reality.
"Between MARS LANDING 2012: THE NEW SEARCH FOR LIFE and i.am.mars: REACH FOR THE STARS, SCIENCE is consumed with the bold exploration of the red planet," said Debbie Myers, general manager and executive vice president of SCIENCE. "We hope our viewers are as inspired as we are by the creativity, imagination and daring of both will.i.am and NASA."
i.am.mars will be distributed to schools nationwide through Discovery Education's digital streaming services. SCIENCE and Discovery Education will also work with Affiliates to promote i.am.mars' educational resources for use in schools and with community organizations, bringing the magic of Mars to life. | <urn:uuid:cc8f8d73-112d-4456-ae20-1bdc4072b3b4> | {
"date": "2013-05-18T08:03:04",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.950738787651062,
"score": 2.5625,
"token_count": 463,
"url": "http://tv.broadwayworld.com/article/william-Featured-in-Sciences-iammars-REACH-FOR-THE-STARS-919-20120918"
} |
Forecast Texas Fire Danger (TFD)
The Texas Fire Danger (TFD) map is produced by the National Fire Danger Rating System (NFDRS). Weather information is provided by remote, automated weather stations and then used as an input to the Weather Information Management System (WIMS). The NFDRS processor in WIMS produces a fire danger rating based on fuels, weather, and topography. Fire danger maps are produced daily. In addition, the Texas A&M Forest Service, along with the SSL, has developed a five-day running average fire danger rating map.
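The five-day running average map is, at heart, a smoothing of the daily ratings. As a rough sketch of that idea in code (the station values, column names, and dates below are invented for illustration and are not taken from the actual WIMS/NFDRS feed):
import pandas as pd
# Hypothetical daily fire danger ratings for a single weather station
ratings = pd.DataFrame(
    {"rating": [12, 18, 25, 31, 28, 22, 35]},
    index=pd.date_range("2012-02-23", periods=7, freq="D"),
)
# Five-day running average, analogous to the smoothed statewide map
ratings["five_day_avg"] = ratings["rating"].rolling(window=5).mean()
print(ratings)
Applying the same smoothing to every station or map pixel keeps a single anomalous day of weather from dominating the displayed danger level.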
Daily RAWS information is derived from an experimental project - DO NOT DISTRIBUTE | <urn:uuid:a789fd8d-b873-45cf-b01d-af6eca242a5d> | {
"date": "2013-05-18T05:49:51",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8875352144241333,
"score": 3.015625,
"token_count": 136,
"url": "http://twc.tamu.edu/drought/tfdforecast?date=2/29/2012&type=tfdforecast"
} |
Welcome to Jane Addams Hull-House museum
The Jane Addams Hull-House Museum serves as a dynamic memorial to social reformer Jane Addams, the first American woman to receive the Nobel Peace Prize, and her colleagues whose work changed the lives of their immigrant neighbors as well as national and international public policy. The Museum preserves and develops the original Hull-House site for the interpretation and continuation of the historic settlement house vision, linking research, education, and social engagement.
The Museum is located in two of the original settlement house buildings- the Hull Home, a National Historic Landmark, and the Residents' Dining Hall, a beautiful Arts and Crafts building that has welcomed some of the world's most important thinkers, artists and activists.
The Museum and its many vibrant programs make connections between the work of Hull-House residents and important contemporary social issues.
Founded in 1889 as a social settlement, Hull-House played a vital role in redefining American democracy in the modern age. Addams and the residents of Hull-House helped pass critical legislation and influenced public policy on public health and education, free speech, fair labor practices, immigrants’ rights, recreation and public space, arts, and philanthropy. Hull-House has long been a center of Chicago’s political and cultural life, establishing Chicago’s first public playground and public art gallery, helping to desegregate the Chicago Public Schools, and influencing philanthropy and culture. | <urn:uuid:f01a8af6-2422-47f6-a2f8-477c864a7d08> | {
"date": "2013-05-18T06:30:48",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9451056718826294,
"score": 3.078125,
"token_count": 294,
"url": "http://uic.edu/jaddams/hull/_museum/visitors.html"
} |
Introduction to principles of chemistry and fundamentals of inorganic and biochemistry. Structure and chemistry of carbohydrates, lipids, proteins, biochemistry of enzymes, metabolism, body fluids and radiation effects. On-line materials include the course syllabus, copies of the lecture slides and animations, interactive Periodic Table, chapter summaries and practice exams. This course is targeted towards Health Science Majors.
Introduction to principles of chemistry. This course is targeted towards Chemistry Majors.
Laboratory experiments to develop techniques in organic chemistry and illustrate principles. On-line materials include step-by-step prelabs for many of the experiments that students will be conducting.
Theoretical principles of quantitative and instrumental analysis. Emphasis is placed on newer analytical tools and equipment.
Intermediate level course. Includes a discussion of the structure, function and metabolism of proteins, carbohydrates and lipids. In addition, there is a review of enzymes, DNA and RNA.
This course stresses theory and application of modern chromatographic methods. On-line materials include the course syllabus, copies of course lecture slides and animations.
A 'short course' covering the use of a mass spectrometer as a GC detector. Basic instrumentation, data treatment and spectral interpretation methods will be discussed. On-line materials include copies of course lecture slides and tables to assist in the interpretation of mass spectra.
Coverage of statistical methods in Analytical Chemistry. Course includes basic statistics, experimental design, modeling, exploratory data analysis and other multivariate techniques. On-line materials include the course syllabus, homework problems and copies of the lecture slides.
A survey of the basic equipment, data and methodology of Analytical methods that rely on radioisotopic materials. On-line materials include the course syllabus, homework problems. copies of the lecture slides and animations.
Why I missed the exam | <urn:uuid:841e4baa-add2-400d-b3cc-719e93276b6c> | {
"date": "2013-05-18T05:27:09",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8711559772491455,
"score": 2.609375,
"token_count": 381,
"url": "http://ull.chemistry.uakron.edu/classroom.html/genchem/gcms/genobc/periodic/excuses/organic_lab/analytical/chemsep/genobc/chemometrics/gcms/chemometrics/biochem/gcms/analytical/"
} |
Now that we’ve said a lot about individual operators on vector spaces, I want to go back and consider some other sorts of structures we can put on the space itself. Foremost among these is the idea of a bilinear form. This is really nothing but a bilinear function to the base field: $B: V \times V \to \mathbb{F}$. Of course, this means that it’s equivalent to a linear function from the tensor square: $B: V \otimes V \to \mathbb{F}$.
Instead of writing this as a function, we will often use a slightly different notation. We write a bracket $\langle v, w \rangle$, or sometimes $\langle v, w \rangle_B$, if we need to specify which of multiple different inner products is under consideration.
Another viewpoint comes from recognizing that we’ve got a duality for vector spaces. This lets us rewrite our bilinear form as a linear transformation $B_1: V \to V^*$. We can view this as saying that once we pick one of the vectors $v$, the bilinear form reduces to a linear functional $B_1(v) = \langle v, \cdot \rangle$, which is a vector in the dual space $V^*$. Or we could focus on the other slot and define $B_2(w) = \langle \cdot, w \rangle$.
We know that the dual space of a finite-dimensional vector space has the same dimension as the space itself, which raises the possibility that $B_1$ or $B_2$ is an isomorphism from $V$ to $V^*$. If either one is, then both are, and we say that the bilinear form is nondegenerate.
We can also note that there is a symmetry on the category of vector spaces. That is, we have a linear transformation $T: V \otimes V \to V \otimes V$ defined by $T(v \otimes w) = w \otimes v$. This makes it natural to ask what effect this has on our form. Two obvious possibilities are that $\langle v, w \rangle = \langle w, v \rangle$ and that $\langle v, w \rangle = -\langle w, v \rangle$. In the first case we’ll call the bilinear form “symmetric”, and in the second we’ll call it “antisymmetric”. In terms of the maps $B_1$ and $B_2$, we see that composing with the symmetry swaps the roles of these two functions. For symmetric bilinear forms, $B_1 = B_2$, while for antisymmetric bilinear forms we have $B_1 = -B_2$.
This leads us to consider nondegenerate bilinear forms a little more. If $B_1$ is an isomorphism it has an inverse $B_1^{-1}: V^* \to V$. Then we can form the composite $B_1^{-1} \circ B_2: V \to V$. If $B$ is symmetric then this composition is the identity transformation on $V$. On the other hand, if $B$ is antisymmetric then this composition is the negative of the identity transformation. Thus, the composite transformation measures how much the bilinear transformation diverges from symmetry. Accordingly, we call it the asymmetry of the form $B$.
Finally, if we’re working over a finite-dimensional vector space we can pick a basis $\{e_i\}$ for $V$, and get a matrix for $B$. We define the matrix entry $B_{ij} = \langle e_i, e_j \rangle$. Then if we have vectors $v = v^i e_i$ and $w = w^j e_j$ we can calculate
$\langle v, w \rangle = \langle v^i e_i, w^j e_j \rangle = v^i w^j \langle e_i, e_j \rangle = v^i B_{ij} w^j$
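To make the matrix picture concrete, here is a small numerical sketch (the particular matrix and vectors are arbitrary choices for illustration):
import numpy as np
# Matrix of the form in a chosen basis: B[i, j] = <e_i, e_j>
B = np.array([[2.0, 1.0],
              [1.0, 3.0]])
v = np.array([1.0, 2.0])
w = np.array([4.0, -1.0])
# <v, w> = v^i B_ij w^j
print(v @ B @ w)
# A symmetric form equals its transpose, so swapping the arguments changes nothing
print(np.allclose(B, B.T), np.isclose(v @ B @ w, w @ B @ v))
For an antisymmetric form the matrix would instead satisfy B = -B^T, and swapping the arguments would flip the sign of the result.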
In terms of this basis and its dual basis $\{\epsilon^j\}$, we find the image of the linear transformation $B_1$: it sends the basis vector $e_i$ to $B_1(e_i) = B_{ij} \epsilon^j$. That is, the matrix also can be used to represent the partial maps $B_1$ and $B_2$. If $B$ is symmetric, then the matrix is symmetric $B_{ij} = B_{ji}$, while if it’s antisymmetric then $B_{ij} = -B_{ji}$. | <urn:uuid:3bf09a24-c60d-45a0-b8e6-cc02ddac7ed6> | {
"date": "2013-05-18T06:02:00",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9191036224365234,
"score": 2.546875,
"token_count": 608,
"url": "http://unapologetic.wordpress.com/2009/04/14/bilinear-forms/?like=1&source=post_flair&_wpnonce=8cb43e0c56"
} |
The Gram-Schmidt Process
Now that we have a real or complex inner product, we have notions of length and angle. This lets us define what it means for a collection of vectors to be “orthonormal”: each pair of distinct vectors is perpendicular, and each vector has unit length. In formulas, we say that the collection $\{e_i\}$ is orthonormal if $\langle e_i, e_j \rangle = \delta_{ij}$.
It turns out that if we have a linearly independent collection of vectors $\{v_1, \dots, v_n\}$ then we can come up with an orthonormal collection $\{e_1, \dots, e_n\}$ spanning the same subspace of $V$. Even better, we can pick it so that the first $k$ vectors $\{e_1, \dots, e_k\}$ span the same subspace as $\{v_1, \dots, v_k\}$. The method goes back to Laplace and Cauchy, but gets its name from Jørgen Gram and Erhard Schmidt.
We proceed by induction on the number of vectors in the collection. If $n = 1$, then we simply set
$e_1 = \dfrac{v_1}{\lVert v_1 \rVert}$
This “normalizes” the vector to have unit length, but doesn’t change its direction. It spans the same one-dimensional subspace, and since it’s alone it forms an orthonormal collection.
Now, let’s assume the procedure works for collections of size $n-1$ and start out with a linearly independent collection of $n$ vectors. First, we can orthonormalize the first $n-1$ vectors using our inductive hypothesis. This gives a collection $\{e_1, \dots, e_{n-1}\}$ which spans the same subspace as $\{v_1, \dots, v_{n-1}\}$ (and so on down, as noted above). But $v_n$ isn’t in the subspace spanned by the first $n-1$ vectors (or else the original collection wouldn’t have been linearly independent). So it points at least somewhat in a new direction.
To find this new direction, we define
$w = v_n - \langle e_1, v_n \rangle e_1 - \dots - \langle e_{n-1}, v_n \rangle e_{n-1}$
This vector will be orthogonal to all the vectors from $e_1$ to $e_{n-1}$, since for any such $e_j$ we can check
$\langle e_j, w \rangle = \langle e_j, v_n \rangle - \sum_{i=1}^{n-1} \langle e_i, v_n \rangle \langle e_j, e_i \rangle = \langle e_j, v_n \rangle - \langle e_j, v_n \rangle = 0$
where we use the orthonormality of the collection to show that most of these inner products come out to be zero.
So we’ve got a vector orthogonal to all the ones we collected so far, but it might not have unit length. So we normalize it:
$e_n = \dfrac{w}{\lVert w \rVert}$
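The same induction translates directly into code. Here is a minimal sketch of the classical Gram-Schmidt procedure for real vectors using NumPy (the code and example vectors are my own illustration, written for clarity rather than numerical robustness, and the inputs are assumed to be linearly independent):
import numpy as np
def gram_schmidt(vectors):
    # Build an orthonormal list spanning the same subspace as the input vectors
    basis = []
    for v in vectors:
        w = v.astype(float)
        # Subtract the components along the directions we have already collected
        for e in basis:
            w = w - np.dot(e, v) * e
        # Normalize the genuinely new direction
        basis.append(w / np.linalg.norm(w))
    return basis
vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
for e in gram_schmidt(vs):
    print(np.round(e, 3))
Each output vector has unit length and is orthogonal to the others, and the first k outputs span the same subspace as the first k inputs, mirroring the inductive argument above.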
and we’re done. | <urn:uuid:4a2ad899-7ba0-4bfc-9276-c5c5c0845fe6> | {
"date": "2013-05-18T08:12:09",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.8971889019012451,
"score": 3.625,
"token_count": 447,
"url": "http://unapologetic.wordpress.com/2009/04/28/the-gram-schmidt-process/?like=1&source=post_flair&_wpnonce=fe7f791e1e"
} |
Sarin was developed in 1938 in Germany as a pesticide. Its name is derived from the names of the chemists involved in its creation: Schrader, Ambros, Rudriger and van der Linde. Sarin is a colorless non-persistent liquid. The vapor is slightly heavier than air, so it hovers close to the ground. Under wet and humid weather conditions, Sarin degrades swiftly, but as the temperature rises up to a certain point, Sarin's lethal duration increases, despite the humidity. Sarin is a lethal cholinesterase inhibitor. Doses which are potentially life threatening may be only slightly larger than those producing least effects.
Signs and Symptoms
Signs and symptoms of overexposure may occur within minutes or hours, depending upon the dose. They include: miosis (constriction of pupils) and visual effects, headaches and pressure sensation, runny nose and nasal congestion, salivation, tightness in the chest, nausea, vomiting, giddiness, anxiety, difficulty in thinking, difficulty sleeping, nightmares, muscle twitches, tremors, weakness, abdominal cramps, diarrhea, involuntary urination and defecation, with severe exposure symptoms progressing to convulsions and respiratory failure.
Inhalation: Hold breath until respiratory protective mask is donned. If severe signs of agent exposure appear (chest tightens, pupil constriction, incoordination, etc.), immediately administer, in rapid succession, all three Nerve Agent Antidote Kit(s), Mark I injectors (or atropine if directed by a physician). Injections using the Mark I kit injectors may be repeated at 5 to 20 minute intervals if signs and symptoms are progressing until three series of injections have been administered. No more injections will be given unless directed by medical personnel. In addition, a record will be maintained of all injections given. If breathing has stopped, give artificial respiration. Mouth-to-mouth resuscitation should be used when mask-bag or oxygen delivery systems are not available. Do not use mouth-to-mouth resuscitation when facial contamination exists. If breathing is difficult, administer oxygen. Seek medical attention immediately.
Eye Contact: Immediately flush eyes with water for 10-15 minutes, then don respiratory protective mask. Although miosis (pinpointing of the pupils) may be an early sign of agent exposure, an injection will not be administered when miosis is the only sign present. Instead, the individual will be taken immediately to a medical treatment facility for observation.
Skin Contact: Don respiratory protective mask and remove contaminated clothing. Immediately wash contaminated skin with copious amounts of soap and water, 10% sodium carbonate solution, or 5% liquid household bleach. Rinse well with water to remove excess decontaminant. Administer nerve agent antidote kit, Mark I, only if local sweating and muscular twitching symptoms are observed. Seek medical attention immediately.
Ingestion: Do not induce vomiting. First symptoms are likely to be gastrointestinal. Immediately administer Nerve Agent Antidote Kit, Mark I. Seek medical attention immediately.
Above information courtesy of the United States Army | <urn:uuid:7ba4236f-2dbb-4dce-b113-18fc0fa8af10> | {
"date": "2013-05-18T06:49:44",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8637948036193848,
"score": 3.328125,
"token_count": 680,
"url": "http://usmilitary.about.com/library/milinfo/blchemical-4.htm"
} |
Interagency Coordinating Council
"The mission of the Utah Interagency Coordinating Council for Infants and Toddlers with Special Needs is to assure that each infant and young child with special needs will have the opportunity to achieve optimal health and development within the context of the family."
Introduction to ICC: Interagency Coordinating Council for Infants and Toddlers with Disabilities and their Families
What is Early Intervention?
Baby Watch Early Intervention is a statewide, comprehensive, coordinated, interagency, multidisciplinary system, which provides early intervention services to infants and toddlers, younger than three years of age, with developmental delay or disability, and their families. Early intervention is the "baby" piece of Special Education. The program is authorized through the Individuals with Disabilities Education Act (IDEA), Part C (Early Intervention Program for Infants and Toddlers with Disabilities). In 1987, Utah's Governor designated the Department of Health (DOH) as the "Lead Agency" for the early intervention program. Utah was one of the very first states in the nation to fully implement its early intervention program after securing the approval of the State Legislature.
At present, there are 16 early intervention programs that serve more than 2,000 children per month in the state. It is anticipated that the demand for these services will continually increase.
What is an Interagency Coordinating Council (ICC)?
The creation of an ICC was established with the passage of federal law P.L. 99-457 in October 1986. Developers of the legislation recognized the need for a group outside of the Lead Agency to "advise and assist" in the development of such a system. The independent nature of the ICC is one feature that gives the group the potential for making a contribution to the development of the service system.
Another feature of the regulations is the multidisciplinary and the multi-constituency representation on the ICC. By specifying what types of members should be included on the ICC, the legislation enables states to bring together consumer, clinical, political, and administrative communities. This merging of a variety of communities facilitates the building of bridges between the involved agencies. In addition, the committee has provided a broader vision of the service system based upon the participation and contributions of all relevant providers and consumers.
The ICC, a body required by statute to be appointed by each state's Governor, is to be an important participant in the development of a well-coordinated service system (Federal Interagency Coordinating Council, June, 1989). Each state ICC determines, in conjunction with the Lead Agency, the nature of the roles and tasks it chooses to perform at various policy stages.
The Utah ICC is an interagency group whose membership represents the statewide early childhood services community. It is comprised of up to 25 members.
The purpose of the Utah ICC is to advise and assist the lead agency in the Division of Community and Family Health Services, Bureau of Children with Special Health Care Needs in the UDOH.
Much of the work of the ICC is accomplished in standing committees and ad hoc task force meetings that perform long range planning, study specific issues and make appropriate actions. A member of the ICC chairs each committee.
What role does the ICC play?
The Council functions as a planning body at the systems level and advocates for children birth to three years of age and their families with or at-risk for a developmental disability. The Council acts in three major roles:
(1) ADVISOR: Providing advice to the Lead Agency, Governor and the state legislature on issues related to the development of a coordinated system of early intervention services for children with or at-risk for a developmental disability and their families.
The federal law defines the Council membership and the program in order to give it a unique view of the "service systems".
The parent component of the Council gives it a perspective which may be different from that presented by state agencies which are represented on the Council.
The Council can use its special vantage point to be recognized as a source of information for the Lead Agency, Governor, and legislators, as well as other key decision makers in the state.
(2) NEGOTIATOR: Working as an advocate to encourage a particular course of action by the state.
A major activity of the Council is to "review and comment on the annual state plan for services for children birth to three years" as part of its overall responsibility to assess the service system as it exists in the state. This information-sharing, as well as interagency coordination (another important goal of the program), puts the Council in a position to be effective in making changes in how services are provided in the state. With agency and provider representatives on the Council, communication can more easily be effected and gaps between agencies can hopefully be bridged.
(3) CAPACITY BUILDER: Enhancing the ability of the overall service system to address service needs.
In this role, the Council works to increase the quality and quantity of desired supports and services from the public and private sectors, to ensure that all needy children and families will be provided early intervention services. | <urn:uuid:ca8c9151-949c-43e8-9c9b-d2e43029f3ed> | {
"date": "2013-05-18T06:20:12",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9540891051292419,
"score": 2.734375,
"token_count": 1031,
"url": "http://utahbabywatch.org/icc/index.htm"
} |
By JOHN CARTER
When Abraham Lincoln died from an assassin’s bullet on April 15, 1865, Edwin Stanton remarked to those gathered around his bedside, “Now he belongs to the ages.”
One of the meanings implied in Stanton’s famous statement is that Lincoln would not only be remembered as an iconic figure of the past, but that his spirit would also play a significant role in ages to come.
The Oscar-nominated movie “Lincoln,” which chronicles the struggle to pass the 13th amendment abolishing slavery, has turned our attention again to Lincoln’s legacy and his relevance amid our nation’s present divisions and growing pains.
Here is some of the wit and wisdom of Abraham Lincoln worth pondering:
“As for being president, I feel like the man who was tarred and feathered and ridden out of town on a rail. To the man who asked him how he liked it, he said, ‘If it wasn’t for the honor of the thing, I’d rather walk.’”
“I desire so to conduct the affairs of this administration that if at the end, when I come to lay down the reins of power, I have lost every other friend on earth, I shall at least have one friend left, and that friend shall be down inside of me.”
“Should my administration prove to be a very wicked one, or what is more probable, a very foolish one, if you the people are true to yourselves and the Constitution, there is but little harm I can do, thank God.”
“Bad promises are better broken than kept.”
“I am not at all concerned that the Lord is on our side in this great struggle, for I know that the Lord is always on the side of the right; but it is my constant anxiety and prayer that I and this nation may be on the Lord’s side.”
“I have never had a feeling, politically, that did not spring from the sentiments embodied in the Declaration of Independence.”
“Those who deny freedom to others deserve it not for themselves; and, under a just God, cannot long retain it.”
“As I would not be a slave, so I would not be a master. This expresses my idea of democracy.”
“The probability that we may fail in the struggle ought not to deter us from the support of a cause we believe to be just.”
“The true rule, in determining to embrace or reject anything, is not whether it have any evil in it, but whether it have more evil than good. There are few things wholly evil or wholly good.”
“Some of our generals complain that I impair discipline and subordination in the army by my pardons and respites, but it makes me rested, after a hard day’s work, if I can find some good excuse for saving a man’s life, and I go to bed happy as I think how joyful the signing of my name will make him (a deserter) and his family.”
“I have been driven many times to my knees by the overwhelming conviction that I had nowhere else to go.”
In addition, Lincoln’s Gettysburg Address and his second inaugural speech are ever relevant. And you may wish to add your own favorites to these.
Paul’s advice to us in Philippians 4:8 is to “fill your minds with those things that are good and deserve praise: things that are true, noble, right, pure, lovely, and honorable.”
As we celebrate his birthday on the 12th, Lincoln’s words more than meet this standard!
John Carter is a Weatherford resident whose column, “Notes From the Journey,” is published weekly in the Weatherford Democrat. | <urn:uuid:d53f9812-f42b-4039-a509-209a2d5aac9b> | {
"date": "2013-05-18T06:30:38",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9683926701545715,
"score": 3.390625,
"token_count": 821,
"url": "http://weatherforddemocrat.com/opinion/x1303543173/NOTES-FROM-THE-JOURNEY-Lincoln-is-still-one-for-the-ages"
} |
TAKING EVERY PRECAUTION
Japan Takes Measures to Prevent SARS (June 9, 2003)
As severe acute respiratory syndrome (SARS), a new type of pneumonia, rages in wide areas of Asia and other places, the Japanese government has been busy taking measures to prevent an outbreak from occurring in Japan. The government has urged people to take caution in traveling to affected areas, and it has been making every effort to prevent SARS from entering Japan. In addition, work is progressing on a system in which medical institutions, national and local governments, and corporations will act together to prevent the spread of SARS in the event of an outbreak in Japan. As a result of these efforts, as of June 9, there have been no confirmed or probable cases of SARS in Japan.
[Photo: Medical staff practice using an isolator. (Jiji)]
Plans Already Developed for Dealing with Patients
On May 1 the government brought the heads of the relevant ministries and agencies together for a first-ever meeting devoted to SARS in order to decide what measures should be taken in the event that someone in Japan is found to be infected with the virus. The group decided to call on people returning from China to stay at home for 10 days, which is believed to be the incubation period for the disease.
Taking this into consideration, the Ministry of Health, Labor, and Welfare made plans for taking action in the event of an outbreak. It decided to give local governments the authority to direct people believed likely to be infected, or "probable patients," to hospitalize themselves. In the event that a patient refuses, the local governments are empowered to forcibly hospitalize the person.
Local governments are readying themselves to accept patients. According to a survey conducted by the Nihon Keizai Shimbun in early May, all of the nation's 47 prefectures had already completed action plans spelling out what measures would be taken in the event of an outbreak. In addition, some 250 medical institutions around the country have made such preparations as setting up "negative air-pressure rooms" to prevent the virus from spreading within the hospital or to the outside. Local governments in such places as Kitakyushu City, Hokkaido, and Mie Prefecture have been purchasing capsules called isolators to be used when suspected SARS patients are moved, and they have conducted drills on how to use them with volunteers playing the role of patients.
In May a foreign traveler who had been to Japan was found to be infected with SARS. When this was discovered, the government and local authorities quickly implemented emergency measures, as a result of which no secondary infections occurred. According to a survey conducted by the Asahi Shimbun, 28 local governments out of the 47 prefectures and 13 major cities in Japan, nearly half the total, were rethinking their plans to cope with a potential SARS outbreak in light of this news. Fukushima Prefecture decided to check whether visitors from abroad have come from an area to which the World Health Organization recommends postponing travel. It will also make use of the local hotels association to determine the previous whereabouts of such guests. Kagawa Prefecture, meanwhile, which had previously only planned for people who had come in close contact with SARS patients, defined as having been within 2 meters, has created an action plan for checking on people who have had even a low possibility of coming in contact with a carrier.
Public and Private Sectors Taking Action
The Japanese government is stepping up its efforts to take rapid, nationwide measures to prevent SARS infection. The Ministry of Health, Labor, and Welfare has accelerated revision of the Infectious Disease Law, for example. And while local governments are the first line of defense in tracking the path of infection and following up on people who may have been exposed, the national government will become directly involved in the event that infection spreads outside of a local area. Japan is also actively engaged in international cooperation aimed at preventing the spread of the disease.
The private sector has also been taking action to prevent the spread of SARS and to reassure travelers. West Japan Railway Co. (JR West) has set up a SARS-response headquarters and is considering disinfecting affected carriages in the event that an infected person is found to have been onboard a certain train at a certain time. The company also decided to publicly release information on the time and route traveled by any SARS patients. Orient Ferry, which runs a ferry route from Shimonoseki to China's Qingdao, has since late April requested that all passengers and crew fill out health questionnaires, and the company has trained staff for what to do in the event that a passenger falls ill with SARS while onboard. The terminal in Qingdao, the shuttle bus, and the inside of the ship are all disinfected every day.
Meanwhile, some companies have taken the step of postponing scheduled business trips to affected areas, and, in response to requests by the government, airlines and ship operators whose vessels operate in Japan are distributing health questionnaires to their staff and passengers.
Japan has avoided SARS so far, and there is every reason to be confident that the country will remain free of the disease. Even if an outbreak did occur, the concerted efforts of local and national governments and private enterprises to prepare for such an eventuality suggest that it would be handled quickly and efficiently.
Note: The government's "Measures upon Entry/Return to Japan" for travelers
heading to Japan can be found here. (http://www.mofa.go.jp/policy/health_c/sars/measure0521.html)
Related Web Sites
the Ministry of Health, Labor, and Welfare
World Health Organization
West Japan Railway Co. (JR West)
Copyright (c) 2004 Web Japan. Edited by Japan Echo Inc. based on domestic Japanese news sources. Articles presented here are offered for reference purposes and do not necessarily represent the policy or views of the Japanese Government.
"date": "2013-05-18T08:01:30",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9509302377700806,
"score": 2.96875,
"token_count": 1318,
"url": "http://web-japan.org/trends/lifestyle/lif030609.html"
} |
Archaeological Site of Rehman Dheri
Department of Archaeology and Museums
Property names are listed in the language in which they have been submitted by the State Party.
The archaeological site of Rehman Dheri consists of a rectangular shaped mound covering some twenty two hectares and standing 4.5 metres above the surrounding field. The final occupational phase of the site is clearly visible on the surface of the mound by eye and also through air photographs. It consisted of a large walled rectangular area with a gridiron network of streets and lanes dividing the settlement into regular blocks. Walls delineating individual buildings and street frontages are clearly visible in the early morning dew or after rain and it is also possible to identify the location of a number of small-scale industrial areas within the site marked, as they are, by eroding kilns and scatters of slag. The surface of the mound is littered with thousands of sherds and artefacts, slowly eroding out of room fills.
The archaeological sequence at the site of Rehman Dheri is over 4.5 metres deep and covers a span of more than 1,400 years beginning at c.3300 BC. The site represents the following periods:
I c.3300-2850 BC
II c.2850-2500 BC
III c.2500-1900 BC
It is generally accepted that the settlement received its formal plan in its earliest phases and that subsequent phases replicated the plan over time. Although its excavators have cut a number of deep trenches or soundings into the lower levels, the areas exposed have been too limited to undertake a study of change in layout and the spatial distribution of craft activities. It was abandoned at the beginning of the mature Indus phase by the middle of the third millennium BC and subsequent activities, greatly reduced, are only recorded on the neighbouring archaeological mound, Hisam Dheri. The plan of the Early Harappan settlement is therefore undisturbed by later developments and, as such, represents the most exceptionally preserved example of the beginning of urbanisation in South Asia.
"date": "2013-05-18T07:14:54",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.960351288318634,
"score": 2.765625,
"token_count": 426,
"url": "http://whc.unesco.org/pg_friendly_print.cfm?cid=326&id=1877"
} |
- weak drug regulatory control and enforcement;
- scarcity and/or erratic supply of basic medicines;
- unregulated markets and distribution chains;
- high drug prices and/or
- significant price differentials.
At the national level, governments, law enforcement agencies, health professionals, the pharmaceutical industry, importers, distributors, and consumer organizations should adopt a shared responsibility in the fight against counterfeit drugs. Cooperation between countries, especially trading partners, is very useful for combating counterfeiting. Cooperation should include the timely and appropriate exchange of information and the harmonization of measures to prevent the spread of counterfeit medicines.
The World Health Organization has developed and published guidelines, Guidelines for the development of measures to combat counterfeit medicines. These guidelines provide advice on measures that should be taken by the various stakeholders and interested parties to combat counterfeiting of medicines. Governments and all stakeholders are encouraged to adapt or adopt these guidelines in their fight against counterfeiting of medicines.
- Guidelines for the development of measures to combat counterfeit medicines
- Rapid Alert System for counterfeit medicines
Communication and advocacy - creating public awareness
Patients and consumers are the primary victims of counterfeit medicines. In order to protect them from the harmful effects of counterfeit medicines it is necessary to provide them with appropriate information and education on the consequences of counterfeit medicines.
Patients and consumers expect to get advice from national authorities, health-care providers, health professionals and others on where they should buy or get their medicines, and on what measures they should take in case they come across such medicines or are affected by the use of such medicines.
Ministries of health, national medicines regulators, health professional associations, nongovernmental organizations and other stakeholders have the responsibility to participate in campaign activities targeting patients and consumers to promote awareness of the problem of counterfeit medicines. Posters, brochures, radio and television programmes are useful means for disseminating messages and advice. | <urn:uuid:3ffdac17-ada1-42bf-b987-66bc26ca97f6> | {
"date": "2013-05-18T07:20:16",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9296919703483582,
"score": 3.21875,
"token_count": 378,
"url": "http://who.int/impact/activities/en/"
} |
Detailed Distribution Map Information
This map reflects the specimen location information from the Wisconsin Botanical Information System database and attempts to line up the original Town-Range Survey map from 1833 to 1866 with a computer-generated table grid over the map of Wisconsin. Because the original Town-Range lines are inexact, these "dots" might be somewhat skewed. Also, townships near the borders of the state might only be partial, so the "dot" might center outside the state's boundary.
Holding the mouse over the "dot" identifies the Town-Range. Clicking(new window) on the "dot" will link to a list of all specimen accession numbers for this location. You can then link to the individual specimen's label data. Arrange this window side-by-side with the specimen-list window so you can easily go back and forth between this map and the specimen's data. | <urn:uuid:40292269-3406-46cf-84aa-4c4efc553ecc> | {
"date": "2013-05-18T08:11:49",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8508155345916748,
"score": 2.828125,
"token_count": 185,
"url": "http://wisplants.uwsp.edu/scripts/maps.asp?SpCode=BARVUL&bkg=s"
} |
Lipman Bers, always known as Lipa, was born into a Jewish family. His parents Isaac Bers and Bertha Weinberg were teachers, his mother being head at an elementary school in Riga where teaching was in Yiddish while his father was head at the Yiddish high school in Riga. Born in 1914, Lipa's early years were much affected by the political and military events taking place in Russia. Latvia had been under Russian imperial rule since the 18th century so World War I meant that there were evacuations from Riga. The Russian Revolution which began in October 1917 caused fighting between the Red Army and the White Army and for the next couple of years various parts of Russia came first under the control of one faction then of the other. Lipa's family went to Petrograd, the name that St Petersburg had been given in 1914 when there was strong anti-German feeling in Russia, but Lipa was too young to understand the difficulties that his parents went through at this time.
At the end of World War I in 1918, Latvia regained its independence although this was to be short-lived. Lipa spent some time back in Riga, but he also spent time in Berlin. His mother took him to Berlin while she was training at the Psychoanalytic Institute. During his schooling mathematics became his favourite subject and he decided that it was the subject he wanted to study at university. He studied at the University of Zurich, then returned to Riga and studied at the university there.
At this time Europe was a place of extreme politics and, in 1934, Latvia became ruled by a dictator. Lipa was a political activist, a social democrat who argued strongly for human rights. He was at this time a soap-box orator putting his views across strongly both in speeches and in writing for an underground newspaper. Strongly opposed to dictators and strongly advocating democracy it was clear that his criticism of the Latvian dictator could not be ignored by the authorities. A warrant was issued for his arrest and, just in time, he escaped to Prague. His girl friend Mary Kagan followed him to Prague where they married on 15 May 1938.
There were a number of reasons why Bers chose to go to Prague at this time. Firstly he had to escape from Latvia, secondly Prague was in a democratic country, and thirdly his aunt lived there so he could obtain permission to study at the Charles University without having to find a job to support himself. One should also not underestimate the fact that by this stage his mathematical preferences were very much in place and Karl Loewner in Prague looked the ideal supervisor.
Indeed Bers did obtain his doctorate which was awarded in 1938 from the Charles University of Prague where he wrote a thesis on potential theory under Karl Loewner's supervision. At the time Bers was rather unhappy with Loewner :-
Lipa spoke of feeling neglected, perhaps even not encouraged, by Loewner and said that only in retrospect did he understand Loewner's teaching method. He gave to each of his students the amount of support needed ... It is obvious that Lipa did not appear too needy to Loewner.
In 1938 Czechoslovakia became an impossible country for someone of Jewish background. Equally dangerous was the fact that Bers had no homeland since he was a wanted man in Latvia, and was a left wing academic. With little choice but to escape again, Bers fled to Paris where his daughter Ruth was born. However, the war followed him and soon the Nazi armies began occupying France. Bers applied for a visa to the USA and, while waiting to obtain permission, he wrote two papers on Green's functions and integral representations. Just days before Paris surrendered to the advancing armies, Bers and his family moved from Paris to a part of France not yet under attack from the advancing German armies. At last he received the news that he was waiting for, the issue of American visas for his family.
In 1940 Bers and his family arrived in the United States and joined his mother who was already in New York. There was of course a flood of well qualified academics arriving in the United States fleeing from the Nazis and there was a great scarcity of posts, even for the most brilliant, so he was unemployed until 1942, living with other unemployed refugees in New York. During this time he continued his mathematical researches. After this he was appointed Research Instructor at Brown University where, as part of work relevant to the war effort, he studied two-dimensional subsonic fluid flow. This was important at that time since aircraft wings were being designed for planes with jet engines capable of high speeds.
Between 1945 and 1949 Bers worked at Syracuse University, first at Assistant Professor, later as Associate Professor. Gelbart wanted to build up the department at Syracuse and attracting both Bers and Loewner was an excellent move. Here Bers began work on the problem of removability of singularities of non-linear elliptic equations. His major results in this area were announced by him at the International Congress of Mathematicians in 1950 and his paper Isolated singularities of minimal surfaces was published in the Annals of Mathematics in 1951. Courant writes:-
The nonparametric differential equation of minimal surfaces may be considered the most accessible significant example revealing typical qualities of solutions of non-linear partial differential equations. With a view to such a general objective, [Bers] has studied singularities, branch-points and behaviour in the large of minimal surfaces.
Abikoff writes that this paper is:-
... a magnificent synthesis of complex analytic techniques which relate the different parameterisations of minimal surfaces to the representations of the potential function for subsonic flow and thereby achieves the extension across the singularity.
Bers then became a member of the Institute for Advanced Study at Princeton where he began work on Teichmüller theory, pseudoanalytic functions, quasiconformal mappings and Kleinian groups. He was set in the right direction by an inequality he found in a paper of Lavrentev who attributed the inequality to Ahlfors. In a lecture he gave in 1986 Bers explained what happened next:-
I was in Princeton at the time. Ahlfors came to Princeton and announced a talk on quasiconformal mappings. He spoke at the University so I went there and sure enough, he proved this theorem. So I came up to him after the talk and asked him "Where did you publish it?", and he said "I didn't". "So why did Lavrentev credit you with it?" Ahlfors said "He probably thought I must know it and was too lazy to look it up in the literature".
When Bers met Lavrentev three years later he asked him the same questions and, indeed, Ahlfors had been correct in guessing why Lavrentev had credited him. Bers continued in his 1986 lecture:-
I immediately decided that, first of all, if quasiconformal mappings lead to such powerful and beautiful results and, secondly, if it is done in this gentlemanly spirit - where you don't fight over priority - this is something that I should spend the rest of my life studying.
It is ironic, given Bers strong political views on human rights, that he should find that Teichmüller, a fervent Nazi, had already made stunning contributions. In one of his papers on Teichmüller theory, Bers quotes Plutarch:-
It does not of necessity follow that, if the work delights you with its grace, the one who wrought it is worthy of your esteem.
In 1951 Bers went to the Courant Institute in New York, where he was a full professor, and remained there for 13 years. During this time he wrote a number of important books and surveys on his work. He published Theory of pseudo-analytic functions in 1953 which Protter, in a review, described as follows:-
The theory of pseudo-analytic functions was first announced by [Bers] in two notes. These lecture notes not only contain proofs and extensions of the results previously announced but give a self-contained and comprehensive treatment of the subject.
The author sets as his goal the development of a function theory for solutions of linear, elliptic, second order partial differential equations in two independent variables (or systems of two first-order equations). One of the chief stumbling blocks in such a task is the fact that the notion of derivative is a hereditary property for analytic functions while this is clearly not the case for solutions of general second order elliptic equations.
Another classic text was Mathematical aspects of subsonic and transonic gas dynamics published in 1958:-
It should be said, even though this is taken for granted by everybody in the case of Professor Bers, that the survey is masterly in its elegance and clarity.
In 1958 Bers addressed the International Congress of Mathematicians in Edinburgh, Scotland, where he lectured on Spaces of Riemann surfaces and announced a new proof of the measurable Riemann mapping theorem. In his talk Bers summarised recent work on the classical problem of moduli for compact Riemann surfaces and sketched a proof of the Teichmüller theorem characterizing extremal quasiconformal mappings. He showed that the Teichmüller space for surfaces of genus g is a (6g-6)-cell, and showed how to construct the natural complex analytic structure for the Teichmüller space.
Bers was a Guggenheim Fellow in 1959-60, and a Fulbright Fellow in the same academic year. From 1959 until he left the Courant Institute in 1964, Bers was Chairman of the Graduate Department of Mathematics.
In 1964 Bers went to Columbia University where he was to remain until he retired in 1984. He was chairman of the department from 1972 to 1975. He was appointed Davies Professor of Mathematics in 1972, becoming Emeritus Davies Professor of Mathematics in 1982. During this period Bers was Visiting Miller Research Professor at the University of California at Berkeley in 1968.
Tilla Weinstein describes Bers as a lecturer:-
Lipa's courses were irresistible. He laced his lectures with humorous asides and tasty tidbits of mathematical gossip. He presented intricate proofs with impeccable clarity, pausing dramatically at the few most critical steps, giving us a chance to think for ourselves and to worry that he might not know what to do next. Then, just as the silence got uncomfortable, he would describe the single most elegant way to complete the argument.
Jane Gilman describes Bers' character:-
Underneath the force of Bers' personality and vivacity was the force of his mathematics. His mathematics had a clarity and beauty that went beyond the actual results. He had a special gift for conceptualising things and placing them in the larger context.
Bers' life is summed up by Abikoff as follows:
Lipa possessed a joy of life and an optimism that is difficult to find at this time and that is sorely missed. Those of us who experienced it directly have felt an obligation to pass it on. That, in addition to the beauty of his own work, is Lipa's enduring gift to us.
We have yet to say something about Bers' great passion for human rights. In fact this was anything but a sideline in his life and one could consider that he devoted himself full-time to both his mathematical work and to his work as a social reformer. Perhaps his views are most clearly expressed by quoting from an address he gave in 1984 when awarded an honorary degree by the State University of New York at Stony Brook:-
By becoming a human rights activist ... you do take upon yourself certain difficult obligations. ... I believe that only a truly even-handed approach can lead to an honest, morally convincing, and effective human rights policy. A human rights activist who hates and fears communism must also care about the human rights of Latin American leftists. A human rights activist who sympathises with the revolutionary movement in Latin America must also be concerned about human rights abuses in Cuba and Nicaragua. A devout Muslim must also care about human rights of the Bahai in Iran and of the small Jewish community in Syria, while a Jew devoted to Israel must also worry about the human rights of Palestinian Arabs. And we American citizens must be particularly sensitive to human rights violations for which our government is directly or indirectly responsible, as well as to the human rights violations that occur in our own country, as they do.
Bers received many honours for his contributions in addition to those we have mentioned above. He was elected to the American Academy of Arts and Sciences, to the Finnish Academy of Sciences, and to the American Philosophical Society. He served the American Mathematical Society in several capacities, particularly as Vice-President (1963-65) and as President (1975-77). The American Mathematical Society awarded him their Steele Prize in 1975. He received the New York Mayor's award in Science and Technology in 1985. He was an honorary life member of the New York Academy of Sciences, and of the London Mathematical Society.
Article by: J J O'Connor and E F Robertson
Honours awarded to Lipman Bers:
AMS Colloquium Lecturer (1971)
AMS Steele Prize (1975)
American Maths Society President (1975-1976)
LMS Honorary Member (1984)
JOC/EFR © April 2002, School of Mathematics and Statistics, University of St Andrews, Scotland
"date": "2013-05-18T06:24:59",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9804872274398804,
"score": 3,
"token_count": 2948,
"url": "http://www-history.mcs.st-and.ac.uk/~history/Biographies/Bers.html"
} |
x^(2/3) + y^(2/3) = a^(2/3)
x = a cos^3(t), y = a sin^3(t)
The astroid only acquired its present name in 1836 in a book published in Vienna. It has been known by various names in the literature, even after 1836, including cubocycloid and paracycle.
The length of the astroid is 6a and its area is 3πa^2/8.
The gradient of the tangent T from the point with parameter p is -tan(p). The equation of this tangent T is
x sin(p) + y cos(p) = a sin(2p)/2
Let T cut the x-axis and the y-axis at X and Y respectively. Then the length XY is a constant and is equal to a.
It can be formed by rolling a circle of radius a/4 on the inside of a circle of radius a.
It can also be formed as the envelope produced when a line segment is moved with each end on one of a pair of perpendicular axes. It is therefore a glissette.
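The stated length and area are easy to check numerically. The short Python sketch below is my own addition (it is not part of the original page): it approximates the curve, using the parametrisation x = a cos^3(t), y = a sin^3(t) given above, by many small segments, and recovers the length 6a and the area 3πa^2/8.

import math

A = 1.0          # the scale parameter "a"; any positive value works
N = 200000       # number of small segments used to approximate the curve

def point(t):
    # the parametrisation given above: x = a cos^3(t), y = a sin^3(t)
    return A * math.cos(t) ** 3, A * math.sin(t) ** 3

length = 0.0
area = 0.0
for i in range(N):
    t0 = 2 * math.pi * i / N
    t1 = 2 * math.pi * (i + 1) / N
    x0, y0 = point(t0)
    x1, y1 = point(t1)
    length += math.hypot(x1 - x0, y1 - y0)   # polygonal approximation to arc length
    area += 0.5 * (x0 * y1 - x1 * y0)        # shoelace formula for the enclosed area

print("length:", round(length, 4), " expected:", 6 * A)
print("area:  ", round(abs(area), 4), " expected:", round(3 * math.pi * A ** 2 / 8, 4))

With a few hundred thousand segments both figures should agree with the exact values to several decimal places.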
"date": "2013-05-18T05:53:51",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8595336079597473,
"score": 2.71875,
"token_count": 409,
"url": "http://www-history.mcs.st-andrews.ac.uk/history/Curves/Astroid.html"
} |
The machete blades turned red with heat in the fire that the rubber workers built on a Liberia plantation, Thomas Unnasch remembers from a visit in the 1980s.
This was how the men tried to quell the intense itchiness that comes with river blindness, a rare tropical disease.
"You can imagine how bad the itching must be, that running a red-hot machete up and down your back would be a relief, but it was," said Unnasch, whose laboratory works on diagnostic tests for the disease.
About 18 million people have river blindness worldwide, according to the World Health Organization, but more than 99% of cases of this disease are found in Africa. It goes by the technical name "onchocerciasis," and it spreads through small black flies that breed in fast-flowing, highly oxygenated waters. When an infected fly bites a person, it drops worm larvae in the skin, which can then grow and reproduce in the body.
Unlike malaria, river blindness is not fatal, but it causes a "miserable life," said Moses Katabarwa, senior epidemiologist for the Atlanta-based Carter Center's River Blindness Program, which has been leading an effort to eliminate the disease in the Americas and several African countries.
Some strains cause blindness, while others come with more severe skin disease. With time, generally all strains of the disease can lead to rough "lizard" skin, depigmented "leopard skin" and hanging groins. Another big problem among patients is itching, which happens when the worms die inside a person.
In southwest Uganda, the locals call the disease "Obukamba," referring to the symptoms of distorted skin appearance and itchiness, Katabarwa said. In western Uganda, he said, "the fly is called 'Embwa fly' or dog fly, for it bites like a dog!"
There is no vaccine for river blindness, but there is a drug, called ivermectin that paralyzes and kills the offspring of adult worms, according to the Mayo Clinic. It may also slow the reproduction of adult female worms, so there are fewer of them in the skin, blood and eyes. The pharmaceutical company Merck has been donating the treatment, under the brand name Mectizan, since 1985.
Great strides have been made against this disease. In the Americas, it was eliminated in Colombia in 2007 and in Ecuador in 2009. | <urn:uuid:415cd5cc-0228-4449-a777-4a0bb2194449> | {
"date": "2013-05-18T05:56:45",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9593093395233154,
"score": 3.15625,
"token_count": 502,
"url": "http://www.4029tv.com/news/health/Fighting-river-blindness/-/8897344/18384038/-/item/0/-/o7f20pz/-/index.html"
} |
Attention Deficit Hyperactivity Disorder or ADHD is a common childhood illness. People who are affected can have trouble with paying attention, sitting still and controlling their impulses. There are three types of ADHD. The most common type of ADHD is when people have difficulties with both attention and hyperactivity. This is called ADHD combined type. Some people only have difficulty with attention and organization. This is ADHD inattentive subtype or Attention Deficit Disorder (ADD). Other people have only the hyperactive and impulsive symptoms. This is ADHD hyperactive subtype.
It is a health condition involving biologically active substances in the brain. Studies show that ADHD may affect certain areas of the brain that allow us to solve problems, plan ahead, understand others' actions, and control our impulses.
Many children and adults are easily distracted at times or have trouble finishing tasks. If you suspect that your child has ADHD, it is important to have your child evaluated by his or her doctor. In order for your child's doctor to diagnose your child with ADHD, the behaviors must appear before age 7 and continue for at least six months. The symptoms must also create impairment in at least two areas of the child's life - in the classroom, on the playground, at home, in the community, or in social settings. Many children have difficulties with their attention, but attention problems are not always due to ADHD. For example, stressful life events and other childhood conditions, such as problems with schoolwork caused by a learning disability or anxiety and depression, can interfere with attention.
According to the National Institute of Mental Health, ADHD occurs in an estimated 3 to 5 percent of preschool and school-age children. Therefore, in a class of 25 to 30 children, it is likely that at least one student will have this condition. ADHD begins in childhood, but it often lasts into adulthood. Several studies done in recent years estimate that 30 to 65 percent of children with ADHD continue to have symptoms into adolescence and adulthood.
No one knows exactly what causes ADHD. There appears to be a combination of causes, including genetics and environmental influences. Several different factors could increase a child's likelihood of having the disorder, such as gender, family history, prenatal risks, environmental toxins and physical differences in the brain.
A child with ADHD often shows some of the following:
Difficulties with attention:
- trouble paying attention
- inattention to details and makes careless mistakes
- easily distracted
- losing things such as school supplies
- forgetting to turn in homework
- trouble finishing class work and homework
- trouble listening
- trouble following multiple adult commands
- difficulty playing quietly
- inability to stay seated
- running or climbing excessively
- always "on the go"
- talks too much and interrupts or intrudes on others
- blurts out answers
The good news is that effective treatment is available. The first step is to have a careful and thorough evaluation with your child’s primary care doctor or with a qualified mental health professional. With the right treatment, children with ADHD can improve their ability to pay attention and control their behavior. The right care can help them grow, learn, and feel better about themselves.
Medications: Most children with ADHD benefit from taking medication. Medications do not cure ADHD. Medications can help a child control his or her symptoms on the day that the pills are taken.
Medications for ADHD are well established and effective. There are two main types: stimulant and non-stimulant medications. Stimulants include methylphenidate, and amphetamine salts. Non-stimulant medications include atomoxetine. For more information about the medications used to treat ADHD, please see the Parent Med Guide. Before medication treatment begins, your child's doctor should discuss the benefits and the possible side effects of these medications. Your child’s doctor should continue to monitor your child for improvement and side effects. A majority of children who benefit from medication for ADHD will continue to benefit from it as teenagers. In fact, many adults with ADHD also find that medication can be helpful.
Therapy and Other Support: A psychiatrist or other qualified mental health professional can help a child with ADHD. The psychotherapy should focus on helping parents provide structure and positive reinforcement for good behavior. In addition, individual therapy can help children gain a better self-image. The therapist can help the child identify his or her strengths and build on them. Therapy can also help a child with ADHD cope with daily problems, pay better attention, and learn to control aggression.
A therapist may use one or more of the following approaches: Behavior therapy, Talk therapy, Social skills training, Family support groups.
Sometimes children and parents wonder when children can stop taking ADHD medication. If you have questions about stopping ADHD medication, consult your doctor. Many children diagnosed with ADHD will continue to have problems with one or more symptoms of this condition later in life. In these cases, ADHD medication can be taken into adulthood to help control their symptoms.
For others, the symptoms of ADHD lessen over time as they begin to "outgrow" ADHD or learn to compensate for their behavioral symptoms. The symptom most apt to lessen over time is hyperactivity.
Some signs that your child may be ready to reduce or stop ADHD medication are:
- Your child has been symptom-free for more than a year while on medication,
- Your child is doing better and better, but the dosage has stayed the same,
- Your child's behavior is appropriate despite missing a dose or two,
- Or your child has developed a newfound ability to concentrate.
The choice to stop taking ADHD medication should be discussed with the prescribing doctor, teachers, family members, and your child. You may find that your child needs extra support from teachers and family members to reinforce good behavior once the medication is stopped.
Without treatment, a child with ADHD may fall behind in school and have trouble with friendships. Family life may also suffer. Untreated ADHD can increase strain between parents and children. Parents often blame themselves when they can't communicate with their child. The sense of losing control can be very frustrating. Teenagers with ADHD are at increased risk for driving accidents. Adults with untreated ADHD have higher rates of divorce and job loss, compared with the general population. Luckily, safe and effective treatments are available which can help children and adults help control the symptoms of ADHD and prevent the unwanted consequences. | <urn:uuid:40aae48c-b422-4ff3-a8dc-88f9431d1a4e> | {
"date": "2013-05-18T07:20:44",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9592067003250122,
"score": 3.71875,
"token_count": 1307,
"url": "http://www.aacap.org/cs/ADHD.ResourceCenter/adhd_faqs"
} |
Weights linked to lower diabetes risk
Weight gains Weight training, and not just cardio workouts, is linked to a lower risk of developing type 2 diabetes, according to a US study.
"We all know that aerobic exercise is beneficial for diabetes - many studies have looked at that - but no studies have looked at weight training," says study leader Frank Hu, at the Harvard School of Public Health.
"This study suggests that weight training is important for diabetes, and probably as important as aerobic training."
Hu and his colleagues, whose report was published in the Archives of Internal Medicine, used data on more than 32,000 male health professionals, who answered questionnaires every two years from 1990 to 2008.
On average, four out of 1000 men developed type 2 diabetes every year, the researchers found.
The risk of getting the blood sugar disorder was only half as high for men who did cardio, or aerobic, workouts - say brisk walking, jogging or playing tennis - at least 150 minutes a week, as for those who didn't do any cardio exercise.
Men who did weight training for 150 minutes or more had a risk reduction of a third compared to those who never lifted weights, independently of whether or not they did aerobic exercise.
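To make those relative figures concrete, here is a small back-of-envelope sketch of my own (not from the study) that converts them into approximate absolute numbers, treating the quoted average of four new cases per 1000 men per year as a rough baseline; strictly, that figure is the overall average rather than the no-exercise rate, so this is only an illustration.

BASELINE_PER_1000 = 4.0   # the quoted average of about 4 new cases per 1000 men per year
                          # (an overall average, used here only as a rough baseline)

scenarios = {
    "baseline (no regular exercise)": 1.0,
    "150+ min/week aerobic exercise": 0.5,        # "half as high"
    "150+ min/week weight training": 2.0 / 3.0,   # "a risk reduction of a third"
}

for label, relative_risk in scenarios.items():
    cases = BASELINE_PER_1000 * relative_risk
    print(f"{label}: about {cases:.1f} new cases per 1000 men per year")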
Exercise is beneficial
Whereas weight training increases muscle mass and can reduce abdominal obesity, it tends not to cut overall body mass, says Hu.
The results don't prove that working out staves off diabetes, because many men who stay fit may also be healthier in other ways, but the researchers did their best to account for such potential differences, including age, smoking and diet.
"I think the benefits of weight training are real," says Hu. "Any type of exercise is beneficial for diabetes prevention, but weight training can be incorporated with aerobic exercise to get the best results."
Along with an appropriate diet, exercise is also important for people who already have type 2 diabetes and can help control high blood sugar, he adds. | <urn:uuid:fa576bb7-fea9-461f-8ee3-962a8a33a7f9> | {
"date": "2013-05-18T06:22:58",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9761554002761841,
"score": 2.734375,
"token_count": 400,
"url": "http://www.abc.net.au/science/articles/2012/08/07/3562561.htm?topic=health"
} |
Arctic meltdown not caused by nature
Rapid loss of Arctic sea ice - 80 per cent has disappeared since 1980 - is not caused by natural cycles such as changes in the Earth's orbit around the Sun, says Dr Karl.
The situation is getting rather messy with regard to the ice melting in the Arctic. Now the volume of the ice varies throughout the year, rising to its peak after midwinter, and falling to its minimum after midsummer, usually in the month of September.
Over most of the last 1,400 years, the volume of ice remaining each September has stayed pretty constant. But since 1980, we have lost 80 per cent of that ice.
Now one thing to appreciate is that over the last 4.7 billion years, there have been many natural cycles in the climate — both heating and cooling. What's happening today in the Arctic is not a cycle caused by nature, but something that we humans did by burning fossil fuels and dumping slightly over one trillion tonnes of carbon into the atmosphere over the last century.
So what are these natural cycles? There are many many of them, but let's just look at the Milankovitch cycles. These cycles relate to the Earth and its orbit around the Sun. There are three main Milankovitch cycles. They each affect how much solar radiation lands on the Earth, and whether it lands on ice, land or water, and when it lands.
The first Milankovitch cycle is that the orbit of the Earth changes from mostly circular to slightly elliptical. It does this on a predominantly 100,000-year cycle. When the Earth is close to the Sun it receives more heat energy, and when it is further away it gets less. At the moment the orbit of the Earth is about halfway between "nearly circular" and "slightly elliptical". So the change in the distance to the Sun in each calendar year is currently about 5.1 million kilometres, which translates to about 6.8 per cent difference in incoming solar radiation. But when the orbit of the Earth is at its most elliptical, there will be a 23 per cent difference in how much solar radiation lands on the Earth.
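Those percentages follow from simple inverse-square geometry. The Python sketch below is my own illustration, not part of Dr Karl's column; the semi-major axis and eccentricity values are rounded assumptions, with the "most elliptical" value chosen so the flux spread matches the roughly 23 per cent quoted above.

AU_KM = 149.6e6    # Earth's semi-major axis in km (rounded, assumed)
ECC_NOW = 0.0167   # present-day orbital eccentricity (assumed)
ECC_HIGH = 0.052   # representative high eccentricity, chosen to match the ~23% figure (assumed)

def distance_swing_km(a_km, e):
    # difference between aphelion a(1 + e) and perihelion a(1 - e)
    return 2 * e * a_km

def flux_difference_percent(e):
    # solar flux falls off as 1/distance^2, so the perihelion/aphelion ratio is ((1 + e)/(1 - e))^2
    ratio = ((1 + e) / (1 - e)) ** 2
    return (ratio - 1) * 100

print(round(distance_swing_km(AU_KM, ECC_NOW) / 1e6, 1), "million km swing in Sun-Earth distance today")
print(round(flux_difference_percent(ECC_NOW), 1), "% difference in solar radiation today")
print(round(flux_difference_percent(ECC_HIGH), 1), "% difference when the orbit is most elliptical")

With these round numbers the script reproduces the roughly five-million-kilometre distance swing and the 6.8 and 23 per cent radiation differences quoted above.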
The second Milankovitch cycle affecting the solar radiation landing on our planet is the tilt of the north-south spin axis compared to the plane of the orbit of the Earth around the Sun. This tilt rocks gently between 22.1 degrees and 24.5 degrees from the vertical. This cycle has a period of about 41,000 years. At the moment we are roughly halfway in the middle — we're about 23.44 degrees from the vertical and heading down to 22.1 degrees. As we head to the minimum around the year 11,800, the trend is that the summers in each hemisphere will get less solar radiation, while the winters will get more, and there will be a slight overall cooling.
The third Milankovitch cycle that affects how much solar radiation lands on our planet is a little more tricky to understand. It's called 'precession'. As our Earth orbits the Sun, the north-south spin axis does more than just rock gently between 22.1 degrees and 24.5 degrees. It also — very slowly, just like a giant spinning top — sweeps out a complete 360 degrees circle, and it takes about 26,000 years to do this. So on January 4, when the Earth is at its closest to the Sun, it's the South Pole (yep, the Antarctic) that points towards the Sun.
So at the moment, everything else being equal, it's the southern hemisphere that has a warmer summer because it's getting more solar radiation, but six months later it will have a colder winter. And correspondingly, the northern hemisphere will have a warmer winter and a cooler summer.
But of course, "everything else" is not equal. There's more land in the northern hemisphere but more ocean in a southern hemisphere. The Arctic is ice that is floating on water and surrounded by land. The Antarctic is the opposite — ice that is sitting on land and surrounded by water. You begin to see how complicated it all is.
We have had, in this current cycle, repeated ice ages on Earth over the last three-million years. During an ice age, the ice can be three kilometres thick and cover practically all of Canada. It can spread through most of Siberia and Europe and reach almost to where London is today. Of course, the water to make this ice comes out of the ocean, and so in the past, the ocean level has dropped by some 125 metres.
From three million years ago to one million years ago, the ice advanced and retreated on a 41,000-year cycle. But from one million years ago until the present, the ice has advanced and retreated on a 100,000-year cycle.
What we are seeing in the Arctic today — the 80 per cent loss in the volume of the ice since 1980 — is an amazingly huge change in an amazingly short period of time. But it seems as though the rate of climate change is accelerating, and I'll talk more about that, next time …
Published 27 November 2012
© 2013 Karl S. Kruszelnicki Pty Ltd | <urn:uuid:3a4ac59c-d59d-470b-adad-88e5e1c8a45a> | {
"date": "2013-05-18T07:15:13",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9558244943618774,
"score": 3.5625,
"token_count": 1065,
"url": "http://www.abc.net.au/science/articles/2012/11/27/3640992.htm?topic=latest"
} |
Black holes growing faster than expected
Black hole find Existing theories on the relationship between the size of a galaxy and its central black hole are wrong according to a new Australian study.
The discovery by Dr Nicholas Scott and Professor Alister Graham, from Melbourne's Swinburne University of Technology, found smaller galaxies have far smaller black holes than previously estimated.
Central black holes, millions to billions of times more massive than the Sun, reside in the core of most galaxies, and are thought to be integral to galactic formation and evolution.
However astronomers are still trying to understand this relationship.
Scott and Graham combined data from observatories in Chile, Hawaii and the Hubble Space Telescope to develop a database listing the masses of 77 galaxies and their central supermassive black holes.
The astronomers determined the mass of each central black hole by measuring how fast stars are orbiting it.
Existing theories suggest a direct ratio between the mass of a galaxy and that of its central black hole.
"This ratio worked for larger galaxies, but with improved technology we're now able to examine far smaller galaxies and the current theories don't hold up," says Scott.
In a paper to be published in the Astrophysical Journal, they found that for each ten-fold decrease in a galaxy's mass, there was a one hundred-fold decrease in its central black hole mass.
"That was a surprising result which we hadn't been anticipating," says Scott.
The study also found that smaller galaxies have far denser stellar populations near their centres than larger galaxies.
According to Scott, this also means the central black holes in smaller galaxies grow much faster than their larger counterparts.
Black holes grow by merging with other black holes when their galaxies collide.
"When large galaxies merge they double in size and so do their central black holes," says Scott.
"But when small galaxies merge their central black holes quadruple in size because of the greater densities of nearby stars to feed on."
Somewhere in between
The findings also solve the long standing problem of missing intermediate mass black holes.
For decades, scientists have been searching for something in between stellar mass black holes formed when the largest stars die, and supermassive black holes at the centre of galaxies.
"If the central black holes in smaller galaxies have lower mass than originally thought, they may represent the intermediate mass black hole population astronomers have been hunting for," says Graham.
"Intermediate sized black holes are between ten thousand and a few hundred thousand times the mass of the Sun, and we think we've found several good candidates."
"These may be big enough to be seen directly by the new generation of extremely large telescopes now being built," says Graham. | <urn:uuid:e617c5fd-d556-4d43-be1f-042e7e7f2c60> | {
"date": "2013-05-18T06:23:22",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9486631155014038,
"score": 4.25,
"token_count": 552,
"url": "http://www.abc.net.au/science/articles/2013/01/17/3671551.htm?topic=enviro"
} |
Hoodoos may be seismic gurus
Hoodoo prediction Towering chimney-like sedimentary rock spires known as hoodoos may provide an indication of an area's past earthquake activity.
The research by scientists including Dr Rasool Anooshehpoor, from the United States Nuclear Regulatory Commission, may provide scientists with a new tool to test the accuracy of current hazard models.
Hoodoo formations are often found in desert regions, and are common in North America, the Middle East and northern Africa.
They are caused by the uneven weathering of different layers of sedimentary rocks, that leave boulders or thin caps of hard rock perched on softer rock.
By knowing the strengths of different types of sedimentary layers, scientists can determine the amount of stress needed to cause those rocks to fracture.
The United States Geological Survey (USGS) uses seismic hazard models to predict the type of ground motion likely to occur in an area during a seismic event. But, according to Anooshehpoor, these models lack long-term data.
"Existing hazard maps use models based on scant data going back a hundred years or so," says Anooshehpoor. "But earthquakes have return periods lasting hundreds or thousands of years, so there is nothing to test these hazard models against."
The researchers examined two unfractured hoodoos within a few kilometres of the Garlock fault, which is an active strike-slip fault zone in California's Red Rock Canyon.
Their findings are reported in the Bulletin of the Seismological Society of America.
"Although we can't put a precise age on hoodoos because of their erosion characteristics, we can use them to provide physical limits on the level of ground shaking that could potentially have occurred in the area," says Anooshehpoor.
The researchers developed a three-dimensional model of each hoodoo and determined the most likely place where each spire would fail in an earthquake.
They then tested rock samples similar to the hoodoo pillars to measure their tensile strength and compared their results with previously published data.
USGS records suggest at least one large magnitude earthquake occurred along the fault in the last 550 years, resulting in seven metres of slip, yet the hoodoos are still standing.
This finding is consistent with a median level of ground motion associated with the large quakes in this region, says Anooshehpoor.
"If an earthquake occurred with a higher level of ground motion, the hoodoos would have collapsed," he says.
"Nobody can predict earthquakes, but this will help predict what ground motions are associated with these earthquakes when they happen."
Dr Juan Carlos Afonso from the Department of Earth and Planetary Sciences at Sydney's Macquarie University says it's an exciting development.
"In seismic hazard studies, it's not just difficult to cover the entire planet, it's hard to cover even small active regions near populated areas," says Afonso.
"You need lots of instruments, so it's great if you can rely on nature and natural objects to help you."
He says while the work is still very new and needs to be proven, the physics seems sound. | <urn:uuid:85a979cb-9571-4e06-b38a-2f79912abb44> | {
"date": "2013-05-18T06:47:33",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9556187391281128,
"score": 4.3125,
"token_count": 644,
"url": "http://www.abc.net.au/science/articles/2013/02/05/3682324.htm?site=science&topic=enviro"
} |
Now, it is common knowledge these days that Hitler's final great offensive in the last years of WWII was the Ardennes Offensive of 1944/45, also known as the battle of the Bulge. What was not appreciated at the time by the Allied high command was just how desperately short of vital supplies the Third Reich armies actually were. The Ardennes Offensive was Hitler's bold attempt to capture and hold the Allied army's massive supply of Brussels sprouts, vital - of course - for the full functioning of any army.
German intelligence were aware that the American army was - in particular - massing huge quantities of the vital Brussels sprouts just behind their frontlines in preparedness for their own massive push - and - of course - in time for Christmas.
The German's audacious plan would have succeeded if the Allies had not quickly worked out that it was their stockpiles of Brussels sprouts that were under immediate threat. The bold plan put forward by the Allied Generals was a heavy gamble, but it paid off. They ordered their front-line chefs to begin boiling their entire stocks of Brussels sprouts, and - most importantly - to keep them boiling well past a state of fully preparedness.
So, when the weather altered and the wind direction changed, it blew the smell of over-cooked Brussels sprouts straight into the faces of the advancing Germans. Then the Reich troops knew that they would not be able to replenish their stocks of Brussels sprouts and any sprouts that they did capture from the Allied frontline kitchens would be overcooked to the point of inedibility.
Later in this series, we will discuss the major strategic role that Brussels sprouts have played in world history, such as Hadrian building a wall to protect the Roman Empire's most northern supplies of Brussels sprouts from the northern barbarians, thus thwarting the barbarians' fiendish plan to deep-fry the Romans' entire stockpiles of sprouts.
Then there was, also, Napoleon's retreat from Moscow when his over-long supply line of Brussels sprouts direct from France broke down. Even when his troops could get sprouts, they were of poor quality - dry, wizened and frozen solid. Of course, this led to a massive collapse of morale. Eventually, the lack of good quality sprouts forced a massive retreat where thousands of French troops died from a pitiful lack of sprouts.
And, of course, not forgetting - of course - how the Spanish conquest of the Americas was a result of the Spaniards overwhelming sprout superiority. | <urn:uuid:1e42f564-b487-459a-9400-a0404ff31bff> | {
"date": "2013-05-18T05:33:25",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9742326736450195,
"score": 2.859375,
"token_count": 518,
"url": "http://www.abctales.com/story/hadley/brussels-sprouts-and-their-role-history"
} |
Books Yellow, Red, and Green and Blue,
All true, or just as good as true,
And here's the Blue Book just for YOU!
Hard is the path from A to Z,
And puzzling to a curly head,
Yet leads to Books—Green, Yellow and Red.
For every child should understand
That letters from the first were planned
To guide us into Fairy Land
So labour at your Alphabet,
For by that learning shall you get
To lands where Fairies may be met.
And going where this pathway goes,
You too, at last, may find, who knows?
The Garden of the Singing Rose.
As to whether there are really any fairies or not, that is a difficult question. The Editor never saw any himself, but he knew several people who have seen them - in the Highlands - and heard their music.
If ever you are in Nether Lochaber, go to the Fairy Hill, and you may hear the music yourself, as grown-up people have done, but you must go on a fine day.
This book has been especially re-published to raise funds for:
The Great Ormond Street Hospital Children’s Charity
By buying this book you will be donating to this great charity that does so much good for ill children and which also enables families to stay together in times of crisis. And what better way to help children than to buy a book of fairy tales. Some have not been seen in print or heard for over a century. 33% of the Publisher’s profit from the sale of this book will be donated to the GOSH Children’s Charity.
YESTERDAYS BOOKS for TODAYS CHARITIES
LITTLE RED RIDING HOOD
Once upon a time there lived in a certain village a little country girl, the prettiest creature was ever seen. Her mother was excessively fond of her; and her grandmother doted on her still more. This good woman had made for her a little red riding-hood; which became the girl so extremely well that everybody called her Little Red Riding-Hood.
One day her mother, having made some custards, said to her:
"Go, my dear, and see how thy grandmamma does, for I hear she has been very ill; carry her a custard, and this little pot of butter."
Little Red Riding-Hood set out immediately to go to her grandmother, who lived in another village.
As she was going through the wood, she met with Gaffer Wolf, who had a very great mind to eat her up, but he dared not, because of some faggot-makers hard by in the forest. He asked her whither she was going. The poor child, who did not know that it was dangerous to stay and hear a wolf talk, said to him:
"I am going to see my grandmamma and carry her a custard and a little pot of butter from my mamma."
"Does she live far off?" said the Wolf.
"Oh! aye," answered Little Red Riding-Hood; "it is beyond that mill you see there, at the first house in the village."
"Well," said the Wolf, "and I'll go and see her too. I'll go this way and you go that, and we shall see who will be there soonest."
The Wolf began to run as fast as he could, taking the nearest way, and the little girl went by that farthest about, diverting herself in gathering nuts, running after butterflies, and making nosegays of such little flowers as she met with. The Wolf was not long before he got to the old woman's house. He knocked at the door—tap, tap.
"Your grandchild, Little Red Riding-Hood," replied the Wolf, counterfeiting her voice; "who has brought you a custard and a little pot of butter sent you by mamma."
The good grandmother, who was in bed, because she was somewhat ill, cried out:
"Pull the bobbin, and the latch will go up."The Wolf pulled the bobbin, and the door opened, and then presently he fell upon the good woman and ate her up in a moment, for it was above three days that he had not touched a bit. He then shut the door and went into the grandmother's bed, expecting Little Red Riding-Hood, who came some time afterward and knocked at the door—tap, tap.
"Who's there?"
Little Red Riding-Hood, hearing the big voice of the Wolf, was at first afraid; but believing her grandmother had got a cold and was hoarse, answered:
"’Tis your grandchild, Little Red Riding-Hood, who has brought you a custard and a little pot of butter mamma sends you."
The Wolf cried out to her, softening his voice as much as he could:
"Pull the bobbin, and the latch will go up."
Little Red Riding-Hood pulled the bobbin, and the door opened.
The Wolf, seeing her come in, said to her, hiding himself under the bed-clothes:
"Put the custard and the little pot of butter upon the stool, and come and lie down with me."
Little Red Riding-Hood undressed herself and went into bed, where, being greatly amazed to see how her grandmother looked in her night-clothes, she said to her:
"Grandmamma, what great arms you have got!"
"That is the better to hug thee, my dear."
"Grandmamma, what great legs you have got!"
"That is to run the better, my child."
"Grandmamma, what great ears you have got!"
"That is to hear the better, my child."
"Grandmamma, what great eyes you have got!"
"It is to see the better, my child."
"Grandmamma, what great teeth you have got!"
"That is to eat thee up."
And, saying these words, this wicked wolf fell upon Little Red Riding-Hood, and tried to start eating her. Red Riding Hood screamed “Someone Help Me!” over and over again.
The woodcutter, who was felling trees nearby, heard Red Riding Hood’s screams for help and ran to the cottage. He burst in to find the wolf trying to eat Red Riding Hood.
He swung his axe, and with one blow killed the bad wolf for which Red Riding Hood was ever so grateful.
Great Book! Really interesting read! Was great to see a published version of Jewish tales! Arrived very quickly too - great service!
A thrilling book about a chase across the US! A great story, my son loved it! Quick and Convenient delivery!
Stories of the famous spice route across Asia! Great to see a volume of Philippine Folklore Stories in print - the only one I've found on the web!
We deliver to destinations all over the world, and here at Abela, we have some of the best rates in the book industry.
We charge shipping dependent on the book you have ordered and where in the world you are ordering from. This will be shown below the price of the book.
The delivery time typically depends on where in the world you are ordering from. Should you need an estimated delivery time, please do not hesitate to contact us.
We pride ourselves on the quality of our packaging, and damage rates are very low. In the unlikely event there is damage, please contact us before returning your item, as you may have to pay for return shipping if you have not let us know.
Due to the nature of books being read then returned for a refund, unfortunately we do not accept returns unless the item is damaged and we are notified ON THE DAY OF DELIVERY. | <urn:uuid:417be69e-3827-4c17-971c-f3410cf2c856> | {
"date": "2013-05-18T08:11:03",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9766756892204285,
"score": 2.5625,
"token_count": 1657,
"url": "http://www.abelapublishing.com/the-blue-fairy-book_p23349351.htm"
} |
What exactly does "desecration" mean? Is it just flag burning — or does it also include smearing the flag with dirt? How about dropping it on the ground? And why should law enforcement get to decide who to arrest for such desecration? Free expression and the right to dissent are among the core principles which the American flag represents. The First Amendment must be protected most when it comes to unpopular speech. Failure to do so fails the very notion of freedom of expression.
Our democracy is strong because we tolerate all peaceful forms of expression, no matter how uncomfortable they make us feel, or how much we disagree. If we take away the right to dissent - no matter how unpopular - what freedom will be sacrificed next?
Make a Difference
Your support helps the ACLU defend free speech and a broad range of civil liberties.
Burn the Flag or Burn the Constitution? (2011 blog): Sadly, Congress is once again considering an amendment to the U. S. Constitution banning desecration of the American flag and, in doing so, testing our political leaders' willingness to defend what is arguably one of America's most sacred principles — protecting political speech.
Flag Amendment Defeated, First Amendment Stands Unscathed (2006): On June 27, 2006, the Senate voted down the proposed Flag Desecration Amendment by the slimmest margin ever. The vote was 66-34, just one vote short of the two-thirds needed to approve a constitutional amendment.
Reasons to Oppose the Flag Desecration Amendment (2004 resource): Talking Points on Opposing the Flag Desecration Amendment
Background on the Flag Desecration Amendment (2004 resource)
Fight for the Flag - Resources (2006 resource) | <urn:uuid:cbe1a6ba-f0ca-4f88-86c8-7605d31dcf07> | {
"date": "2013-05-18T06:56:03",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9041697382926941,
"score": 3.09375,
"token_count": 351,
"url": "http://www.aclu.org/free-speech/flag-desecration"
} |
First, an object is placed on the platform of the printer – a Petri dish, for example. Then the printer must check the height of the object to make sure everything is calibrated correctly. Mr. Carvalho placed a paper card on the platform of the 3D-Bioplotter to demonstrate how the machine works.
Mr. Carvalho then talked us through the printing process. To begin, a liquefied material – in this case a silicone paste – is pressed through a needle-like tip by applying air pressure. The needle moves in all three dimensions which means it is able to create a three dimensional object. The printer is called ‘Bioplotter’ because the unique aspect of this machine is its use of biomaterials to make implants or other objects for biomedical application.
Some of the implants which are made using the 3D Bioplotter are intended to dissolve in the body. The materials which are used in this application include PLLA, PLGA, and silicone.
Implants made with these thermoplastics – which break down largely into water and CO2 – are removed by the body naturally in around a week or two. Other materials, such as ceramic paste, may also be used to print implants. The implants printed using ceramic paste do not dissolve. Instead, the body uses this material to create new bone. This actually speeds up the process of the body’s regeneration.
The 3D-Bioplotter also prints hydrogels – such as collagen or alginate. These materials can have human cells added to them, so human cells may be printed directly with this machine.
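To make the strand-by-strand dispensing described above more concrete, here is a minimal path-generation sketch. It is purely illustrative and not EnvisionTEC's actual control software: the scaffold size, strand spacing and layer height are made-up parameters, and a real 3D-Bioplotter job would also specify air pressure, needle speed and temperature for the chosen biomaterial.

```python
def scaffold_paths(size_mm=10.0, strand_spacing_mm=1.0,
                   layer_height_mm=0.2, n_layers=10):
    """Generate needle waypoints for a simple porous scaffold.

    Each layer is a set of parallel strands; successive layers are rotated by
    90 degrees (a 0/90 lay-down pattern) so the strands cross and leave open
    pores. Returns a list of layers, where each layer is a list of strands and
    each strand is a pair of (x, y, z) end points in millimetres.
    """
    layers = []
    n_strands = int(size_mm / strand_spacing_mm) + 1
    for layer in range(n_layers):
        z = layer * layer_height_mm
        strands = []
        for i in range(n_strands):
            offset = i * strand_spacing_mm
            if layer % 2 == 0:   # even layers: strands run along the x-axis
                strands.append([(0.0, offset, z), (size_mm, offset, z)])
            else:                # odd layers: strands run along the y-axis
                strands.append([(offset, 0.0, z), (offset, size_mm, z)])
        layers.append(strands)
    return layers

paths = scaffold_paths()
print(f"{len(paths)} layers, {len(paths[0])} strands per layer")
print("first strand of layer 0:", paths[0][0])
```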
Every Thursday is #3dthursday here at Adafruit! The DIY 3D printing community has thrilled us at Adafruit with its passion and dedication to making solid objects from digital models. Recently, we have noticed that our community integrating electronics projects into 3D printed enclosures, brackets, and sculptures, so each Thursday we celebrate and highlight these bold pioneers!
Have you considered building a 3D project around an Arduino or other microcontroller? How about printing a bracket to mount your Raspberry Pi to the back of your HD monitor? And don’t forget the countless EL Wire and LED projects that are possible when you are modeling your projects!
The Adafruit Learning System has dozens of great tools to get you well on your way to creating incredible works of engineering, interactive art, and design with your 3D printer! If you have a cool project you’ve made that joins the traditions of 3D printing and electronics, be sure to send it in to be featured here! | <urn:uuid:73055da5-4336-4490-8edc-b8121ae20961> | {
"date": "2013-05-18T05:08:27",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9351903796195984,
"score": 3.421875,
"token_count": 539,
"url": "http://www.adafruit.com/blog/2012/11/29/the-3d-bioplotter-from-envisiontec-3dthursday/"
} |
Re-inventing the Planned City Monday, March 12, 2012
TAU and MIT launch pilot project to re-think 50's era "New Towns"
A bird's-eye view of Kiryat Gat
In response to population growth, many "new towns" or planned cities were built around the world in the 1950s. But according to Dr. Tali Hatuka, head of Tel Aviv University's Laboratory for Contemporary Urban Design (LCUD) at the Department of Geography and the Human Environment, these cities are a poor fit for modern lifestyles — and it's time to innovate.
TAU has launched a pilot project, in collaboration with a team from the Massachusetts Institute of Technology led by Prof. Eran Ben-Joseph, to revitalize this aging model. Last month, a team of five TAU and 11 MIT graduate students visited Kiryat Gat, a mid-sized town in the south of Israel. Home to branches of industrial giants Hewlett-Packard Company and Intel, Kiryat Gat was chosen as a "laboratory" for re-designing outmoded planned civic spaces.
Based on smart technologies, improved transportation, use of the city's natural surroundings, and a reconsideration of the current use of city space, the team's action plan is designed to help Kiryat Gat emerge as a new, technologically-advanced planned city — a prototype that could be applied to similar urban communities.
Planning a future for the mid-sized city
The project, jointly funded by TAU's Vice President for Research and MIT's MISTI Global Seed Funds, will create a new planning model that could reshape the future of Kiryat Gat and similar cities across the world which are often overlooked in academia and practical planning. "Our goal is to put a spotlight on these kinds of towns and suggest innovative ways of dealing with their problems," says TAU student Roni Bar.
MIT's Alice Shay, who visited Israel for the first time for the project, believes that Kiryat Gat, a city that massive urbanization has left behind, is an ideal place for the team to make a change. "The city is at a catalyst point — an exciting moment where good governance and energy will give it the capacity to implement some of these new projects."
To tackle the design and planning challenges of the city, the team of students focused on four themes: the "mobile city," which looked at transport and accessibility; the "mediated city," dealing with technological infrastructure; the "compact city," which reconsidered the use of urban space and population growth; and the "natural city," which integrated environmental features into the urban landscape.
Finding common ground
Ultimately, the team’s goal is to create a more flexible city model that encourages residents and workers to be a more active part of the urban fabric of the city, said Dr. Hatuka. The current arrangement of dedicated industrial, residential, and core zones is out of step with a 21st century lifestyle, in which people work, live, and spend their leisure time in the same environment.
"Much of the past discourse about the design of sustainable communities and 'eco-cities' has been premised on using previously undeveloped land," says Prof. Ben-Joseph. "In contrast, this project focuses on the 'retrofitting' of an existing environment — a more likely approach, given the extent of the world's already-built infrastructure."
The students from TAU and MIT have become a truly cohesive team, and their diversity of background helps challenge cultural preconceptions, Bar says. "They ask many questions that help us to rethink things we took for granted." Shay agrees. "Tali and Eran have created an incredible collaboration, encouraging us all to exchange ideas. Our contexts are different but there is a common urban design language."
The team estimates that they will be able to present the updated model of the city early next year. The next step is further exploring the project's key themes at a March meeting at MIT. And while the project has provided an exceptional educational experience for all involved, ideas are already leaping off the page and into the city's urban fabric. "In the next two months, the Mayor of Kiryat Gat would like to push this model forward and implement the initial steps that we have offered," says an enthusiastic Dr. Hatuka. | <urn:uuid:08c12af3-3a0e-45eb-852d-5a9c514658cb> | {
"date": "2013-05-18T05:50:40",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9586700797080994,
"score": 2.609375,
"token_count": 895,
"url": "http://www.aftau.org/site/News2/596546752?page=NewsArticle&id=16181&news_iv_ctrl=-1"
} |
Elderly people are at increased risk of food-borne illness because as they age, their immune systems become weaker. In fact, the website for the Centers for Disease Control estimates that each year about 48 million people get sick, 128,000 are hospitalized and 3,000 die from food-borne diseases. The most severe cases tend to occur in the very old.
The good news is that food poisoning can be prevented if you follow proper home food safety practices.
Ruth Frechman, a registered dietitian and spokesperson for the American Dietetic Association, spoke with AgingCare.com about home food safety for elderly people. "Since older adults are at particular risk for food-borne illness, good food safety habits are extremely crucial."
Ms. Frechman says three common cooking and food preparation mistakes can result in unsafe food and potential food poisoning.
Cross-contamination
Bacteria in raw meat and poultry juices can be spread to other foods, utensils and surfaces. "To prevent cross-contamination, keep raw foods separate from ready-to-eat foods and fresh vegetables," she says. "For example, use two cutting boards: one strictly for raw meat, poultry and seafood; the other for ready-to-eat foods like breads and vegetables."
She recommends washing cutting boards thoroughly in hot soapy water after each use or placing them in the dishwasher. Use a bleach solution or other sanitizing solution and rinse with clean water. Always wash your hands after handling raw meat.
Leaving food out too long
Leaving food out too long at room temperature can cause bacteria to grow to dangerous levels that can cause illness. "Many people think it's okay to leave food sitting out for a few hours," Ms. Frechman says. "But that's a dangerous habit. Food should not be left out for more than two hours. And if it's over 90 degrees, like at an outdoor summer barbecue, food should not be out for more than one hour."
Not re-heating food to the proper temperature
It's common knowledge that meat should be cooked to proper temperatures. However, most people don't know that even leftovers that were previously cooked should be re-heated to a certain temperature. Ms. Frechman says re-heating foods to the proper temperature can kill many harmful bacteria.
Leftovers should be re-heated to at least 165 degrees Fahrenheit. "Harmful bacteria are destroyed when food is cooked to proper temperatures," she says. "That's why a food thermometer comes in handy not only for preparing food, but also for re-heating."
How long is it safe to eat leftovers? Not as long as you would think, Ms. Frechman says. Chicken, fish and beef expire after three to four days in the refrigerator. To help seniors track if leftovers are still good, she recommends writing the date on the package of leftovers.
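The time and temperature rules quoted in this article are simple enough to capture in a short script. The sketch below only encodes the guidance given here (the 2-hour/1-hour rule, reheating to 165 degrees Fahrenheit, and a 3-4 day window for refrigerated leftovers); the function and variable names are our own, and it is no substitute for official food-safety advice.

```python
def safe_to_keep_out(hours_out, ambient_temp_f):
    """Rule of thumb: no more than 2 hours at room temperature,
    and no more than 1 hour when it is over 90 degrees Fahrenheit."""
    limit_hours = 1.0 if ambient_temp_f > 90 else 2.0
    return hours_out <= limit_hours

def leftovers_ok(days_in_fridge, reheat_temp_f):
    """Leftovers such as chicken, fish or beef should be eaten within about
    three to four days and re-heated to at least 165 degrees Fahrenheit."""
    return days_in_fridge <= 4 and reheat_temp_f >= 165

print(safe_to_keep_out(hours_out=1.5, ambient_temp_f=95))  # False: over 90 F, so the 1-hour limit applies
print(leftovers_ok(days_in_fridge=3, reheat_temp_f=170))   # True
print(leftovers_ok(days_in_fridge=5, reheat_temp_f=170))   # False: stored too long
```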
Seniors and their caregivers should take these preventive measures to avoid germs in food and contracting food poisoning. Pay attention to the foods that are eaten, how food is prepared, and properly maintain the food in the refrigerator, and you may avoid an illness that could cause great discomfort, weakening of the body or even death. | <urn:uuid:fa1546c3-bbdd-441b-a8ad-eb263ea80d04> | {
"date": "2013-05-18T06:29:09",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9592506885528564,
"score": 3.25,
"token_count": 656,
"url": "http://www.agingcare.com/Articles/Top-3-Food-Preparation-Mistakes-That-Cause-Food-borne-Illness-147181.htm"
} |
What is HIV? And what is AIDS? Find answers to some common questions in this section.
How is HIV transmitted - and how is it not transmitted? Find out the answers in this section.
Worried you might have HIV? Have an HIV test - it's the only way to know for sure.
HIV treatment is not a cure, but it is keeping millions of people well. Start learning about it in this section.
In this section we have answered some of the questions you might have if you have just found out you have HIV.
Find healthcare services and support.
A series of illustrated leaflets designed to support conversations between professionals and people with HIV.
Our award-winning series of patient information booklets. Each title provides a comprehensive overview of one aspect of living with HIV.
Twice-monthly email newsletter on the practical aspects of delivering HIV treatment in resource-limited settings.
Our regular newsletter, providing in-depth discussion of the latest research across the HIV sector. Free to people personally affected by HIV.
Find contact details for over 3000 key organisations in more than 190 countries
An instant guide to HIV & AIDS in countries and regions around the world
The most comprehensive listing of HIV-related services in the UK
Pre-exposure prophylaxis (PrEP) – free webinar (18 April 2013): As part of its European HIV prevention work, NAM is collaborating...
Learning the basics about hepatitis C (05 April 2013): If you are familiar with NAM’s patient information materials, hopefully you...
Treatment as prevention – free webinar (20 March 2013): As part of its European HIV prevention work, NAM is collaborating...
"date": "2013-05-18T08:08:43",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9356427788734436,
"score": 2.875,
"token_count": 342,
"url": "http://www.aidsmap.com/resources/treatmentsdirectory/drugs/iCombiviri-AZT3TC/page/1730921/"
} |
Select Preservation Resources
- Shocking Statistics: Reasons for Preservation Week: Facts that illustrate the need for national preservation awareness.
- PW Fact Sheet: More facts that discuss how items become damaged and simple steps to keep them safe.
- Preserving Your Memories: Organized by material type, these web sites, books, and other sources give useful information on caring for any kind of collection.
- Disaster Recovery: Information for before and after a disaster has damaged precious collections.
- Bibliographies & Indexes: A list of links to resources collected by professional preservation organizations
- Videos: Video resources depict ways and reasons to preserve collections
- Preservation for Children: Tools to help children understand the importance of preservation.
- Comprehensive Resources
- Resources in Other Languages: Spanish, French, Chinese, Italian, and Arabic resources for spreading the preservation message.
- Books of Fiction about Conservation
- Books of Fiction about Books for Book Groups | <urn:uuid:2ca2992e-4394-4461-91b7-99387774cf64> | {
"date": "2013-05-18T05:31:58",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8655974864959717,
"score": 2.765625,
"token_count": 190,
"url": "http://www.ala.org/alcts/confevents/preswk/tools/select"
} |
In preparation for Christmas, I read Stephen Nissenbaum's 1998 "The Battle for Christmas," a thorough exploration of this season.
The book's title will be deceiving, because it has nothing to do with the recent sacred-vs.-secular Christmas quarrels. Nissenbaum explores the myriad ways that Christmas has evolved in our nation. It turns out we've been jockeying for more than 300 years over what this holiday means.
In Colonial America our faith-filled ancestors banned Christmas altogether, outlawing it in some colonies. Until the 1760s, one could not even find an almanac that would print the word "Christmas" on the date Dec. 25.
This opposition was because Christmas had become a drunken spectacle where gangs of poor young men roamed the streets, making merry and engaging in acts of petty rowdyism, vaguely like today's New Year's Eve. It was customary and permissible for these gangs to knock on doors of strangers to demand gifts. ("So give us some figgy pudding....")
Our nation's first "battle" for Christmas was the movement to domesticate the holiday, a battle that Nissenbaum suggests involved merchants, the middle and upper classes and the church.
Merchants began linking Christmas and the purchase of manufactured gifts as early as the 1830s as society began to stress family celebrations in front of a tree and with Santa visiting every home. In case you think that your complaining will reverse the commercialism of this holiday, according to Nissenbaum that complaint first emerged in the 1830s. Complain if you must, but don't expect results.
Nissenbaum so thoroughly explores Clement Moore's "'Twas the Night before Christmas" that one learns why Saint Nick touches the side of his nose and why his pipe is a short one. Nissenbaum contends that the ascendance of Santa Claus, the emergence of the Christmas tree and even the giving of gifts contribute to this gradual process of making Christmas a less revolutionary, more predictable holiday. He explores Dickens and Scrooge, Christmas parties for poor children and even the complicated master-slave relationship at Christmas leading up to and immediately following the Civil War.
If you prefer to maintain that Christmas was a pure season of private devotion and public worship until Sears, Roebuck, Wal-Mart and the Supreme Court got involved, don't read this book. Ditto if you enjoy lamenting that "They've taken Christmas away from us." Nissenbaum might say that a pure, simple Christmas never existed. Rather, it has evolved since the first day the Colonists set foot on our shore, an evolution showing no sign of abating.
Nissenbaum's scholarly, heavily footnoted book is enlightening and readable. But his analysis of Christmas reminds me of a scientist who thoroughly explains the rainbow but never grasps its beauty. And so as this season continues to evolve, I'll enjoy my Christmas tree, sing both "White Christmas" and "Joy to the World," and be grateful again for the mystery of Bethlehem, which properly understood, is the most revolutionary act of history.
Contact columnist minister Creede Hinshaw at Wesley Monumental United Methodist Church in Savannah at email@example.com. | <urn:uuid:fc291606-3625-4033-97fb-f0d3e531c2bc> | {
"date": "2013-05-18T08:02:41",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9519102573394775,
"score": 2.640625,
"token_count": 664,
"url": "http://www.albanyherald.com/news/2009/dec/11/christmas-has-been-evolving-for-centuries/?features"
} |
Heal Our Planet Earth
Secondary and Universities
Educational Outreach: Secondary (High) Schools and Universities
The main point made by Anthony Marr from the HOPE Foundation was the plight of the wild tigers. He believes that something must be done to keep these animals alive. "If we let this go, life will be less beautiful and worth less living," Anthony Marr said. The money that he receives as a conservationist is donated to help the endangered species. He makes many trips to India to help them find other solutions to their problems. If the people living in India keep living the way they’ve done, India will soon become a desert. Changes need to be made and people need to adapt to these changes.
I agree with Anthony’s beliefs. Even if tigers are bred, it does not make a difference, because they cannot survive on their own. No matter what humans do, it still will not change the fact that one of God’s creations is being destroyed. No animals should be killed for the purpose of human needs. It is not necessary to kill tigers to sell products and make money because of silly beliefs that if they eat this then something will happen. There are so many alternatives. Humans need food, but they do not have to consume so much meat. Every time they eat meat, a precious animal is being killed. Animals do not kill us and eat us, so why should we do the same? More solutions need to be found and more people need to become more involved in saving the beauty of the world.
Go on to Student - 10 | <urn:uuid:285c3b9d-ca3c-48fd-8ece-77b958f31b4b> | {
"date": "2013-05-18T08:02:54",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.960506796836853,
"score": 2.6875,
"token_count": 335,
"url": "http://www.all-creatures.org/hope/edout-hs-20060928-09.htm"
} |
Science Fair Project Encyclopedia
The chloride ion is formed when the element chlorine picks up one electron to form the anion (negatively charged ion) Cl−. The salts of hydrochloric acid HCl contain chloride ions and are also called chlorides. An example is table salt, which is sodium chloride with the chemical formula NaCl. In water, it dissolves into Na+ and Cl− ions.
The word chloride can also refer to a chemical compound in which one or more chlorine atoms are covalently bonded in the molecule. This means that chlorides can be either inorganic or organic compounds. The simplest example of an inorganic covalently bonded chloride is hydrogen chloride, HCl. A simple example of an organic covalently bonded chloride is chloromethane (CH3Cl), often called methyl chloride.
Other examples of inorganic covalently bonded chlorides which are used as reactants are:
- phosphorus trichloride, phosphorus pentachloride, and thionyl chloride - all three are reactive chlorinating reagents which have been used in a laboratory.
- Disulfur dichloride (S2Cl2) - used for vulcanization of rubber.
Chloride ions have important physiological roles. For instance, in the central nervous system the inhibitory action of glycine and some of the action of GABA relies on the entry of Cl− into specific neurons.
The contents of this article is licensed from www.wikipedia.org under the GNU Free Documentation License. Click here to see the transparent copy and copyright details | <urn:uuid:4e76b8fd-c479-45d7-8ee7-faf61495aecb> | {
"date": "2013-05-18T08:08:06",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 5,
"language": "en",
"language_score": 0.8968929052352905,
"score": 4.59375,
"token_count": 320,
"url": "http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Chloride"
} |
Science Fair Project Encyclopedia
Industrial Design is an applied art whereby the aesthetics and usability of products may be improved. Design aspects specified by the industrial designer may include the overall shape of the object, the location of details with respect to one another, colors, texture, sounds, and ergonomic aspects of how the product is used. Additionally the industrial designer may specify aspects concerning the production process, choice of materials and the way the product is presented to the consumer at the point of sale. The use of industrial designers in a product development process may lead to added value through improved usability, lowered production costs and more appealing products.
Product Design is focused on products only, while industrial design has a broader focus on concepts, products and processes. In addition to considering aesthetics, usability, and ergonomics, it can also encompass the engineering of objects, usefulness as well as usability, market placement, and other concerns.
Product Design and Industrial Design can overlap into the fields of user interface design , information design and interaction design. Various schools of Industrial Design and/or Product Design may specialize in one of these aspects, ranging from pure art colleges (product styling) to mixed programs of engineering and design, to related disciplines like exhibit design and interior design.
In the US, the field of industrial design hit a high-water mark of popularity in the late 30's and early 40's, with several industrial designers becoming minor celebrities. Raymond Loewy, Norman bel Geddes, and Henry Dreyfuss remain the best known.
In the UK, the term "Industrial Design" increasingly implies design with considerable engineering and technology awareness alongside human factors - a "Total Design" approach, promoted by the late Stuart Pugh (University of Strathclyde) and others.
Famous industrial designers
- Egmont Arens (1888-1966)
- Norman bel Geddes (1893-1958)
- Henry Dreyfuss (1904-1972)
- Charles and Ray Eames (1907-1978) and (1912-1988)
- Harley J. Earl (1893-1969)
- Virgil Exner (1909-1973)
- Buckminster Fuller (1895-1983)
- Kenneth Grange (1929- )
- Michael Graves (1934- )
- Walter Adolph Gropius (1883-1969)
- Jonathan Ive (1967- )
- Arne Jacobsen (1902-1971)
- Raymond Loewy (1893-1986)
- Ludwig Mies van der Rohe (1886-1969)
- László Moholy-Nagy (1895-1946)
- Victor Papanek (1927-1999)
- Philippe Starck (1949- )
- Brooks Stevens (1911-1995)
- Walter Dorwin Teague (1883-1960)
- Eva Zeisel (1906- )
- Industrial design rights
- Design classics
- Interaction Design
- Automobile design
- Six Sigma
- Famous Industrial Designers
- Design Council on Product Design Design Council one stop shop information resource on Product Design by Dick Powell.
- Industrial Designers Society of America
- The Centre for Sustainable Design
- International Council of Societies of Industrial Designers
- U.S. Occupational Outlook Handbook: Designers
- Core77: Industrial Designers' Online Community
The contents of this article is licensed from www.wikipedia.org under the GNU Free Documentation License. Click here to see the transparent copy and copyright details | <urn:uuid:a466a758-3d7d-477a-8ae7-30c1404a9da8> | {
"date": "2013-05-18T05:26:33",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8557734489440918,
"score": 3.203125,
"token_count": 743,
"url": "http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Industrial_design"
} |
Help kids practice their counting skills with this printable counting to eight (8) worksheet that has a fun birds theme. This worksheet will be a great addition to any numbers or counting lesson plan as well as any birds themed lesson plan. On this worksheet, kids are asked to count the number of cardinals and circle the correct number (eight) at the bottom of the page.
View and Print Your Birds Themed Counting Worksheet
All worksheets on this site were done personally by our family. Please do not reproduce any of our content on your own site without direct permission. We welcome you to link directly to any pages on our site without specific permission. We also welcome any feedback, ideas or anything you want to share with us - just email us at firstname.lastname@example.org. | <urn:uuid:62414080-4b8a-49db-816c-5a3eca396b17> | {
"date": "2013-05-18T06:43:25",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9313437342643738,
"score": 3.515625,
"token_count": 168,
"url": "http://www.allkidsnetwork.com/worksheets/animals/birds/birds-worksheet-counting8.asp"
} |
Dr. Carl Auer von Welsbach (1858-1929) had a rare double talent: he understood how to pursue fundamental science and, at the same time, how to commercialize his work successfully as an inventor and discoverer.
He discovered 4 elements (Neodymium, Praseodymium, Ytterbium, and Lutetium).
He invented the incandescent mantle, which brought gas lighting a renaissance at the end of the 19th century.
He developed ferrocerium - it's still used as the flint in every disposable lighter.
He was an eminent authority and a great expert in the field of the rare earths (lanthanides).
He invented the electric metal filament light bulb which is used billions of times today.
Additionally, all his life he took an active part in different fields, from photography to ornithology. His personal qualities are remembered fondly by the people of Althofen: he not only had an excellent mind but also a big heart. These qualities ensured him a prominent and lasting place in Austria's scientific and industrial history, and beyond.
9th of Sept. 1858: Born in Vienna, son of Therese and Alois Ritter Auer von Welsbach (his father was director of the Imperial printing office, the "Staatsdruckerei").
1869-73: went to the secondary school in Mariahilf, then changed to the secondary school in Josefstadt.
1873-77: went to secondary school in Josefstadt, graduation.
1877-78: military service, became a second lieutenant.
1878-80: Enrolled at the Technical University of Vienna; studies in math, general organic and inorganic chemistry, technical physics and thermodynamics with Professors Winkler, Bauer, Reitlinger and Pierre.
1880-82: Changed to the University of Heidelberg; lectures on inorganic experimental chemistry and Lab. experiments with Prof. Bunsen, introduction to spectral analysis and the history of chemistry, mineralogy and physics.
5th of Feb. 1882: Promotion to Doctor of Philosophy at the Ruperta-Carola-University in Heidelberg.
1882: Return to Vienna as an unpaid assistant in Prof. Lieben's laboratory; work with chemical separation methods for investigations on rare earth elements.
1882-1884: Publications: "Ueber die Erden des Gadolinits von Ytterby", "Ueber die Seltenen Erden".
1885: The first separation of the element "Didymium", with the help of a separation method he had newly developed, based on the fractional crystallisation of a didymium ammonium nitrate solution. Based on their characteristic colouring, Auer gave the green component the name Praseodymium and the pink component the name Neodidymium. In time the latter element became more commonly known as Neodymium.
1885-1892: Work on gas mantles for incandescent lighting.
Development of a method to produce gas mantles ("Auerlicht") based on impregnating cotton tissue with liquids in which rare earths have been dissolved, then ashing the material in a subsequent glowing process.
Production of the first incandescent mantle out of lanthanum oxide, in which the gas flame is surrounded by a stocking; a definite improvement in light emission, but lacking stability in humidity.
Continuous improvements in the chemical composition of the incandescent mantle "Auerlicht", with experiments on lanthanum oxide-magnesium oxide variations.
18th of Sept. 1885: The patenting of a gas burner with an "Actinophor" incandescent mantle made up of 60% magnesium oxide, 20% lanthanum oxide and 20% yttrium oxide; in the same year, the magnesium oxide part was replaced with zirconium oxide, and a second patent was filed covering the additional use of the light body in a spirit flame.
9th of April 1886: Introduction of the name "Gasgluehlicht" by the journalist Moritz Szeps after the successful presentation of the Actinophors at the Lower Austrian Trade Association; regular production of the impregnation liquid, called "Fluid", at the Chemical Institute.
1887: The acquisition of the factory Würth & Co. for chemical-pharmaceutical products in Atzgersdorf and the industrial production of the light bodies.
1889: The beginning of sales problems because of the shortcomings of the earlier incandescent mantles, i.e. their fragility, short useful life, unpleasantly cold, green-coloured light, and relatively high price. The factory in Atzgersdorf closes.
The development of fractional crystallisation methods for the preparation of pure thorium oxide from plentiful and therefore cheap monazite sand.
The analysis of the connection between the purity of Thorium oxide and its light emission. The ascertainment of the optimal composition of the incandescent mantle in a long series of tests.
1891: Patenting of the incandescent mantle made of 99% thorium oxide and 1% cerium oxide, which at that time, because of its light emission, was a direct competitor to the electric carbon-filament lamp. The resumption of production in Atzgersdorf near Vienna and the quick spread of the incandescent mantle because of its long service life. The beginning of competition with electric lighting.
Work with high-melting heavy metals to raise the filament temperature, and therefore the light emission as well.
The development of the production of thin filaments.
The making of incandescent bodies from platinum threads covered with high-melting thorium oxide, whereby it was possible to use the lamps above the melting temperature of platinum.
This variation was discarded because when the platinum threads melted the coating would either burst or, on solidifying, rip apart.
The taking out of a patent for two manufacturing methods for filaments.
In the patent specification Carl Auer von Welsbach described the manufacturing of filaments through deposition of the high-melting element osmium onto the metal filament.
The development and testing of further production methods, such as the paste method for manufacturing suitable high-melting metal filaments. In this method osmium powder is mixed with a binder of rubber or sugar and kneaded into a paste. In manufacturing, the paste is pressed through a fine nozzle and the resulting filament subsequently dries and is sintered. This was the first commercial and industrial powder-metallurgy process for very high-melting metals.
1898: The acquisition of an industrial property in Treibach and the beginning of experimentation and discovery work at this location. The taking out of a patent for the metal-filament lamp with an osmium filament.
1899: Married Marie Nimpfer in Helgoland.
1902: Market introduction of the "Auer-Oslight", the first industrially produced osmium metal-filament lamp, made using the paste method.
The advantages of this metal-filament lamp over the then widely used carbon-filament lamp were:
57% less electricity consumption; less blackening of the glass; a "whiter" light because of the higher filament temperature; and a longer life span, making it more economical.
The beginning of the investigation of spark-giving metals, with the aim of developing ignition mechanisms for lighters, gas lighters and gas lamps as well as projectile and mine ignition.
Carl Auer von Welsbach knew from his teacher Prof. Bunsen of the possibility of producing sparks from cerium by mechanical means.
The ascertainment of the optimal composition of cerium-iron alloys for spark production.
1903: The taking out of a patent for his pyrophoric alloys (scratching them with a hard, sharp surface produces splinters which ignite spontaneously). In the patent specification 70% cerium and 30% iron was given as the optimal composition.
Further development of a method to produce the latter alloy cheaply.
The optimization of Bunsen, Hillebrand and Norton's procedure, used at that time mainly for producing cerium, was based on the fusion electrolysis of molten rare earth chlorides. The problem at that time lay in conducting the electrolysis so as to deposit a pore-free and long-lasting metal.
This was the first industrial process and commercial utilization of the rare earth metals.
30th of March 1905: A report to the "Akademie der Wissenschaften" in Vienna that the results of the spectroscopic analysis show that Ytterbium is made up of two elements. Auer named the elements Aldebaranium and Cassiopeium, after the stars. He omitted the publication of the obtained spectra and the ascertained atomic weights.
1907: The founding of the "Treibacher Chemische Werke GesmbH" in Treibach-Althofen for the production of ferrocerium lighter flints under the trade name "Original Auermetall".
The publication of the spectra and the atomic weights of both new elements separated from Ytterbium, completing his report to the Akademie der Wissenschaften.
Priority dispute with the French chemist Urbain concerning the analysis of Ytterbium.
1908: The solution of the electrolysis of fused salts (cerium chloride) problem, at which the minerals Cerit and Allanite are used as source substances.
1909: The adaptation of a procedure by his collaborator, Dr. Fattinger, to be able to use the monazite sand residue from incandescent mantle production for the production of cerium metal for the lighter flints.
The production of three different pyrophoric alloys:
"Cer" or Auermetall I : Alloy out of fairly pure Cerium and Iron. Used for igniting purposes.
"Lanthan" or Auermetall II : The Cerium-Iron alloy enriched with the element Lanthan. Used for light signals because of its particularly bright sparking power.
Erdmetall or Auermetall III : Alloy out of Iron and "natural" Cermischmetall; a rare earth metal alloy of corresponding natural deposits.
Both of the first alloys could not win its way through the market. only the easy to produce
Erdmetall, after the renaming it Auermetall I, obtained world wide status as the flint in the lighter industry.
1909: The International Atomic weight Commission decided in favour of Urbain´s publication instead of Auer´s because Urbain handed it in earlier. The Commission of the term from Urbain Neoytterbium- known today as Ytterbium and Lutetium for the new elements.
The carrying-out of large scale chemical separations in the field of radioactive substances.
The production of different preparations of uranium, ionium (known today as the isotope Th-230), a decay product in the uranium-radium series, polonium and actinium, which Auer made available for research use to such renowned institutions and scientists as F.W. Aston and Ernest Rutherford at the Cavendish Laboratory in Cambridge (1921) and the "Radiuminstitut der Akademie der Wissenschaften" in Vienna.
1922: A report on his spectroscopic discoveries to the "Akademie der Wissenschaften" in Vienna.
1929: World-wide production of lighter flints reached 100,000 kg.
8th of April 1929: Carl Auer von Welsbach died at the age of 70. | <urn:uuid:f684139c-4f94-4f1f-821a-2847edc6ba5b> | {
"date": "2013-05-18T06:42:52",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9037157893180847,
"score": 3.046875,
"token_count": 2533,
"url": "http://www.althofen.at/AvW-Museum/Englisch/biographie_e.htm"
} |
Ethics of dementia research
What are clinical trials and how are they controlled/governed?
A clinical trial is a biomedical/health-related study into the effects on humans of a new medical treatment (medicine/drug, medical device, vaccine or new therapy), sometimes called an investigational medicinal product (IMP). Before a new drug is authorised and can be marketed, it must pass through several phases of development including trial phases in which its safety, efficacy, risks, optimal use and/or benefits are tested on human beings. Existing drugs must also undergo clinical testing before they can be used to treat other conditions than that for which they were originally intended.
Organisations conducting clinical trials in the European Union must, if they wish to obtain marketing authorisation, respect the requirements for the conduct of clinical trials. These can be found in the Clinical Trials Directive (“Directive 2001/20/EC of the European Parliament and of the Council of 4 April 2001 on the approximation of the laws, regulations and administrative provisions of the Member States relating to the implementation of good clinical practice in the conduct of clinical trials on medicinal products for human use”).
There are also guidelines to ensure that clinical trials are carried out in accordance with good clinical practice. These are contained in the “Commission Directive 2005/28/EC of 8 April 2005 laying down principles and detailed guidelines for good clinical practice as regards investigational medicinal products for human use, as well as the requirements for authorisation of the manufacturing or importation of such products” (also known as the Good Clinical Practice or GCP for short). This document provides more concrete guidelines and lends further support to the Clinical Trials Directive.
The London-based European Medicines Agency (EMA) has published additional, more specific guidelines which must also be respected. These include guidelines on inspection procedures and requirements related to quality, safety and efficacy.
Copies of the above-mentioned documents in 22 languages can be found at: http://ec.europa.eu/enterprise/pharmaceuticals/clinicaltrials/clinicaltrials_en.htm
The protection of people participating in clinical trials (and in most cases in other types of research) is further promoted by provisions of:
- the European Convention on Human Rights and Biomedicine (Oviedo Convention, Act 2619/1998),
- the Additional protocol to the Oviedo Convention concerning Biomedical Research
- the Nuremberg Code of 1949,
- the revised Helsinki Declaration of the World Medical Association regarding Ethical Principles for Medical Research Involving Human Subjects,
- The Belmont Report of 18 April 1979 on the Ethical Principles and Guidelines for the Protection of Human Subjects of Research.
What are the different phases of trials?
Testing an experimental drug or medical procedure is usually an extremely lengthy process, sometimes lasting several years. The overall procedure is divided into a series of stages (known as phases) which are described below.
Clinical testing on humans can only begin after a pre-clinical phase, involving laboratory studies (in vitro) and tests on animals, which has shown that the experimental drug is considered safe and effective.
Whilst a certain amount of testing can be carried out by means of computer modelling and by isolating cells and tissue, it becomes necessary at some point in time to test the drug on a living creature. Animal testing is an obligatory stage in the process of obtaining regulatory approval for new drugs and medicines, and hence a legal requirement (EU Directive 2001/83/EC relating to Medicinal Products for Human Use). The necessity of carrying out prior testing on animals is also stated in the World Medical Association’s “Ethical Principles for Medical Research Involving Human Subjects”.
In order to protect the well-being of research animals, researchers are guided by three principles which are called the 3Rs:
Reduce the number of animals used to a minimum
Refine the way that experiments are carried out so that the effect on the animal is minimised and animal welfare is improved
Replace animal experiments with alternative (non-animal) techniques wherever possible.
In addition, most countries will have official regulatory bodies which control animal research. Most animals involved in research are mice. However, no animal is sufficiently similar to humans (even genetically modified ones) to make human testing unnecessary. For this reason, the experimental drug must also be tested on humans.
The main phases of clinical trials
Clinical trials on humans can be divided into three main phases (literally, phase I, II and III). Each phase has specific objectives (please see below) and the number of people involved increases as the trial progresses from one phase to the next.
Phase I trials
Phase 1 trials are usually the first step in testing a new drug or treatment on humans after successful laboratory and animal testing. They are usually quite small scale and usually involve healthy subjects or sub-groups of patients who share a particular characteristic. The aims of these trials are:
- to assess the safety of experimental drugs,
- to evaluate any possible side effects,
- to determine a safe dose range,
- to see how the body reacts to the drug (how it is absorbed, distributed and eliminated from the body, the effects that it has on the body and the effects it has on biomarkers).
Dose ranging, sometimes called dose escalation, studies may be used as a means to determine the most appropriate dosage, but the doses administered to the subjects should only be a fraction of those which were found to cause harm to animals in the pre-clinical studies.
The process of determining an optimal dose in phase I involves quite a high degree of risk because this is the first time that the experimental treatment or drug has been administered to humans. Moreover, healthy people’s reactions to drugs may be different to those of the target patient group. For this reason, drugs which are considered to have a potentially high toxicity are usually tested on people from the target patient group.
There are a few sequential approaches to phase I trials e.g. single ascending dose studies, multiple ascending dose studies and food effect.
In single ascending dose studies (SAD), a small group of subjects receive a very low dose of the experimental drug and are then observed in order to see whether that dose results in side effects. For this reason, trials are usually conducted in hospital settings. If no adverse side effects are observed, a second group of subjects are given a slightly higher dose of the same drug and also monitored for side-effects. This process is repeated until a dose is reached which results in intolerable side effects. This is defined as the maximum tolerated dose (MTD).
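The escalation logic of a single ascending dose study can be sketched in a few lines of code. The example below is a deliberately simplified illustration of the process just described (successive cohorts, each at a higher dose, stopping once intolerable side effects appear); real trials follow pre-specified statistical designs with safety review and clinical judgement rather than a bare loop, and the dose ladder and tolerance rule here are invented.

```python
def single_ascending_dose(dose_ladder_mg, cohort_tolerates_dose):
    """Walk up a pre-defined dose ladder, giving each dose to a new cohort.

    `cohort_tolerates_dose(dose)` stands in for the clinical assessment of a
    cohort at that dose: True if no intolerable side effects are observed.
    Escalation stops at the first dose that is not tolerated; the maximum
    tolerated dose (MTD) is the highest dose that was tolerated, or None if
    even the lowest dose caused intolerable side effects.
    """
    mtd = None
    for dose in dose_ladder_mg:
        if cohort_tolerates_dose(dose):
            mtd = dose   # this dose was tolerated, so try the next one
        else:
            break        # intolerable side effects: stop escalating
    return mtd

# Toy example: assume, hypothetically, that doses of 40 mg and above are not tolerated.
ladder_mg = [5, 10, 20, 40, 80]
mtd = single_ascending_dose(ladder_mg, lambda dose: dose < 40)
print(f"Maximum tolerated dose in this simulation: {mtd} mg")  # 20 mg
```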
Multiple ascending dose studies (MAD) are designed to test the pharmacokinetics and pharmacodynamics of multiple doses of the experimental drug. A group of subjects receives multiple doses of the drug, starting at the lowest dose and working up to a pre-determined level. At various times during the period of administration of the drug, and particularly whenever the dose is increased, samples of blood and other bodily fluids are taken. These samples are analysed in order to determine how the drug is processed within the body and how well it is tolerated by the body.
Food effect studies are investigations into the effect of food intake on the absorption of the drug into the body. This involves two groups of subjects being given the same dose of the experimental drug but for one of the groups when fasting and for the other after a meal. Alternatively, this could be done in a cross-over design whereby both groups receive the experimental drug in both conditions in sequence (e.g. when fasting and on another occasion after a meal). Food effect studies allow researchers to see whether eating before the drug is given has any effect on the absorption of the drug by the body.
Phase II trials
Once the initial safety of the drug has been demonstrated (often in a relatively small sample of healthy individuals), phase II clinical trials can begin. Phase II studies are designed to explore the therapeutic efficacy of a treatment or drug in people who have the condition that the drug is intended to treat. They are sometimes called therapeutic exploratory trials and tend to be larger in scale than Phase I trials.
Phase II trials can be divided into Phase IIA and Phase IIB although sometimes they are combined.
Phase IIA is designed to assess dosing requirements, i.e. how much of the drug patients should receive and up to what dose is considered safe. The safety assessments carried out in Phase I can be repeated on a larger subject group. As more subjects are involved, some may experience side effects which none of the subjects in Phase I experienced. The researchers aim to find out more about safety, side effects and how to manage them.
Phase IIB studies focus on the efficacy of the drug i.e. how well it works at the prescribed doses. Researchers may also be interested in finding out which types of a specific disease or condition would be most suitable for treatment.
Phase II trials can be randomised clinical trials which involve one group of subjects being given the experimental drug and others receiving a placebo and/or standard treatment. Alternatively, they may be case series which means that the drug’s safety and efficacy is tested in a selected group of patients. If the researchers have adequately demonstrated that the experimental drug (or device) is effective against the condition for which it is being tested, they can proceed to Phase III.
Phase III trials
Phase III trials are the last stage before clinical approval for a new drug or device. By this stage, there will be convincing evidence of the safety of the drug or device and its efficacy in treating people who have the condition for which it was developed. Such studies are carried out on a much larger scale than for the two previous phases and are often multinational. Several years may have passed since the original laboratory and animal testing.
The main aims of Phase III trials are:
- to demonstrate that the treatment or drug is safe and effective for use in patients in the target group (i.e. in people for whom it is intended),
- to monitor side effects,
- to test different doses or different ways of administering the drug,
- to determine whether the drug could be used at different stages of the disease,
- to provide sufficient information as a basis for marketing approval.
Researchers may also be interested in showing that the experimental drug works for additional groups of people with conditions other than that for which the drug was initially developed. For example, they may be interested in testing a drug for inflammation on people with Alzheimer’s disease. The drug would already have been proven safe and have obtained marketing approval, but for a different condition, hence the need for additional clinical testing.
Open label extension trials
Open label extension studies are often carried out immediately after a double blind randomised clinical trial of an unlicensed drug. The aim of the extension study is to determine the safety and tolerability of the experimental drug over a period of time which is generally longer than the initial trial and may extend up until the drug is licensed. Participants all receive the experimental drug irrespective of which arm of the previous trial they were in. Consequently, the study is no longer blind in that everybody knows that each participant is receiving the experimental drug, but the participants and researchers still do not know which group participants were in during the initial trial.
Post-marketing surveillance studies (phase IV)
After the three phases of clinical testing and after the treatment has been approved for marketing, there may be a fourth phase to study the long-term effects of drugs or treatment or to study the impact of another factor in combination with the treatment (e.g. whether a particular drug reduces agitation).
Usually, such trials are sponsored by pharmaceutical companies and described as pharmacovigilance. They are not as common as the other types of trials (as they are not necessary for marketing permission). However, in some cases, the EMA grants restricted or provisional marketing authorisation, which is dependent on additional phase IV trials being conducted.
Expanded access to a trial
Sometimes, a person might be likely to benefit from a drug which is at various stages of testing but does not fulfil the conditions necessary for participation in the trial (e.g. s/he may have other health problems). In such cases and if the person has a life-threatening or serious condition for which there is no effective treatment, s/he may benefit from “expanded access” use of the drug. There must, however, be evidence that the drug under investigation has some likelihood of being effective for that patient and that taking it would not constitute an unreasonable risk.
The use of placebo and other forms of comparison
The main purpose of clinical drug studies is to distinguish the effect of the trial drug from other influences such as spontaneous change in the course of the disease, placebo effect, or biased observation. A valid comparison must be made with a control. The American Food and Drug Administration recognises different types of control, namely:
- active treatment with a known effective therapy,
- no treatment, or
- historical treatment (which could be an adequately documented natural history of the disease or condition, or the results of active treatment in comparable patients or populations).
The EMA considers three-armed trials (including the experimental medicine, a placebo and an active control) as a scientific gold standard and holds that there are multiple reasons to support their use in drug development (Committee for Medicinal Products for Human Use, EMA/759784/2010, November 2010).
Participants in clinical trials are usually divided into two or more groups. One group receives the active treatment with the experimental substance and the other group receives a placebo, a different drug or another intervention. The active treatment is expected to have a positive curative effect whereas the placebo is expected to have zero effect. With regard to the aim to develop more effective treatments, there are two possibilities:
1. the experimental substance is more effective than the current treatment or
2. it is more effective than no treatment at all.
According to article 11 of the International Ethical Guidelines for Biomedical Research (IEGBR) of 2002, participants allocated to the control group in a trial for a diagnostic, therapeutic or preventive intervention should receive an established effective intervention but it may in some circumstances be considered ethically acceptable to use a placebo (i.e. no treatment). In article 11 of the IEGBR, reasons for the use of placebo are:
1. that there is no established intervention
2. that withholding an established effective intervention would expose subjects to, at most, temporary discomfort or delay in relief of symptoms
3. that use of an established effective intervention as comparator would not yield scientifically reliable results and use of placebo would not add any risk of serious or irreversible harm to the subjects.
The use of placebo and the issue of irreversible harm
It has been suggested that clinical trials are only acceptable in ethical terms if there is uncertainty within the medical community as to which treatment is most suitable to cure or treat a disease (National Bioethics Commission of Greece, 2005). In the case of dementia, whilst there is no cure, there are a few drugs for the symptomatic treatment of dementia. Consequently, one could ask whether it is ethical to deprive a group of participants of treatment which would have most likely improved their condition for the purpose of testing a potentially better drug (National Bioethics Commission of Greece, 2005). Can they be expected to sacrifice their own best interests for those of other people in the future? It is also important to ask whether not taking an established effective intervention is likely to result in serious or irreversible harm.
In the 2008 amended version of the Helsinki Declaration (World Medical Association, 1964), the possible legitimate use of placebo and the need to protect subjects from harm are addressed.
“32. The benefits, risks, burdens and effectiveness of a new intervention must be tested against those of the best current proven intervention, except in the following circumstances:
The use of placebo, or no treatment, is acceptable in studies where no current proven intervention exists; or
Where for compelling and scientifically sound methodological reasons the use of placebo is necessary to determine the efficacy or safety of an intervention and the patients who receive placebo or no treatment will not be subject to any risk of serious or irreversible harm. Extreme care must be taken to avoid abuse of this option.” (WMA, 1964 with amendments up to 2008)
The above is also quite similar to the position supported by the Presidential Commission for the Study of Bioethical Issues (PCSBI) (2011). In its recently published report entitled “Moral science: protecting participants in human subjects research”, the Presidential Commission argues largely in favour of a “middle ground” for ethical research, citing the work of Emanuel and Miller (2001) who state:
“A placebo-controlled trial can sometimes be considered ethical if certain methodological and ethical standards are met. If these standards cannot be met, then the use of placebos in a clinical trial is unethical.” (Emanuel and Miller, 2001 cited in PCSBI, 2011, p. 89).
One of the standards mentioned is the condition that withholding proven effective treatment will not cause more than minimal harm.
The importance of placebo groups for drug development
The ethical necessity to include a placebo arm in a clinical trial may differ depending on the type of drug being developed and whether other comparable drugs exist. For example, a placebo arm would be absolutely necessary in the testing of a new compound for which no drug has yet been developed. This would be combined with comparative arms involving other alternative drugs which have already been proven effective. For studies involving the development of a drug based on an existing compound, a comparative trial would be necessary but not necessarily with a placebo arm, or at least with a smaller placebo arm. Nevertheless, the EMA emphasises the value of placebo-controlled trials in the development of new medicinal products even in cases where a proven effective drug exists:
“forbidding placebo-controlled trials in therapeutic areas where there are proven, therapeutic methods would preclude obtaining reliable scientific evidence for the evaluation of new medicinal products, and be contrary to public health interest as there is a need for both new products and alternatives to existing medicinal products.” (EMA, 2001).
In 2001, concerns were raised about the interpretation of paragraph 29 of the 2000 version of the Helsinki Declaration in which prudence was called for in the use of placebo in research trials and it was advised that placebo should only be used in cases where there was no proven therapy for the condition under investigation. A document clarifying the position of the WMA regarding the use of placebo was issued by the WMA in 2001 in which it was made clear that the use of placebo might be ethically acceptable even if proven therapy was available. The current version of this statement is article 32 of the 2008 revised Helsinki Declaration (quoted in sub-section 7.2.1).
The PCSBI (2011) highlights the importance of ensuring that the design of clinical trials enables the researchers to resolve controversy and uncertainty over the merits of the trial drug and whether the trial drug is better than an existing drug if there is one. It suggests that studies which cannot resolve such questions or uncertainty are likely to be ignored by the scientific community and this would be unethical as it would mean that people had been unnecessarily exposed to risk without there being any social benefit.
Reasons for participation
People with dementia who take part in clinical trials may do so for a variety of reasons. One possible reason is that they hope to receive some form of treatment that will improve their condition or even result in a cure. This is sometimes called the “therapeutic misconception”. In such cases, clinical trials may seem unethical in that advantage is being taken of the vulnerability of some of the participants. On the other hand, the possibility of participating in such a trial may help foster hope which may even enable a person to maintain their morale.
A review of 61 studies on attitudes to trials has shed some light on why people participate in clinical trials (Edwards, Lilford and Hewison, 1998). In this review, it was found that over 60% of participants in seven studies stated that they did or would participate in clinical trials for altruistic reasons. However, in 4 studies, over 70% of people stated that they participated out of self-interest and in two studies over 50% of people stated that they would participate in such a study out of self-interest. As far as informed consent is concerned, in two studies (which were also part of this review) 47% of responding doctors thought that few patients were actually aware that they were taking part in a clinical trial. On the other hand, an audit of four further studies revealed that at least 80% of participants felt that they had made an autonomous decision. There is no proof whether such perceptions were accurate or not. The authors conclude that self-interest was more common than altruism amongst the reasons given for participating in clinical trials but draw attention to the poor quality of some of the studies reviewed thereby suggesting the need for further research. It should not be necessary for people to justify why they are willing to participate in clinical trials. Reasons for participating in research are further discussed in section 3.2.4 insofar as they relate to end-of-life research.
In a series of focus groups organised in 8 European countries plus Israel and covering six conditions including dementia, helping others was seen as the main reason why people wanted to take part in clinical trials (Bartlam et al., 2010). In a US trial of anti-inflammatory medication in Alzheimer’s disease, in which 402 people were considered eligible, the main reasons given by the 359 who accepted for wanting to participate were altruism, personal benefit and a family history of Alzheimer’s disease.
Random assignment to study groups
As people are randomly assigned to the placebo or the active treatment group, everyone has an equal chance of receiving the active ingredient or of being in whichever other control groups are included in the study. There are possible advantages and drawbacks to being in each group and people are likely to have preferences for being in a particular study group, but randomisation means that allocation is not in any way linked to the best interests of each participant from a medical perspective. This is not an ethical issue provided that each participant fully understands that the purpose of research is not to provide a tailor-made response to an individual’s medical condition and that while some participants benefit from participation, others do not.
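To illustrate why allocation is unrelated to any individual participant’s clinical circumstances, simple 1:1 randomisation can be sketched in a few lines (a simplified example with hypothetical participant identifiers; real trials use pre-specified randomisation schemes, often blocked or stratified, generated and held independently of the investigators):

```python
# Minimal sketch of simple 1:1 randomisation to two study arms (illustration only).
import random

def randomise(participant_ids, arms=("experimental", "placebo"), seed=None):
    """Assign each participant to an arm purely at random."""
    rng = random.Random(seed)
    return {pid: rng.choice(arms) for pid in participant_ids}

# Hypothetical participants 1-10: each has the same chance of either arm,
# regardless of their individual medical situation.
allocation = randomise(range(1, 11), seed=42)
print(allocation)
```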
There are, however, medical issues to consider. In the case of double-blind studies, neither the participant nor the investigator knows to which group a participant has been allocated. Consequently, if a participant encounters medical problems during the study, it is not immediately known whether this is linked to the trial drug or another unrelated factor, but the problems must be addressed and possible contraindications avoided, which may necessitate “de-blinding” (DuBois, 2008).
Although many people would perhaps like to benefit from a new drug which is more effective than existing drugs, people have different ideas about what is an acceptable risk and different reasons for taking part in clinical trials. People who receive the placebo are not exposed to the same potential risks as those given the experimental drug. On the other hand, they have no possibility to benefit from the advantages the drug may offer. Those receiving a drug commonly considered as the standard therapy are not necessarily better off than those receiving a placebo as some participants may already know that they do not respond well to the accepted treatment (DuBois, 2008).
If people who participate in a clinical trial are not informed which arm of the trial they were in, valuable information is lost which might otherwise have contributed to treatment decisions made after the clinical trial. Taylor and Wainwright (2005) suggest that “unblinding” should occur at the end of all studies and, so as not to interfere with the analysis of data, this could be done by a person who is totally independent of the analysis. This would, however, have implications for open label extension trials as in that case participants, whilst better equipped to give informed consent, would have more information than the researchers and this might be conveyed to researchers in an ad hoc manner.
Open label extension trials
Open label extension studies (mentioned in sub-section 7.1.8) seem quite fair as they give each participant the opportunity to freely consent to continuing with the study in the full knowledge that s/he will receive the experimental drug. However, Taylor and Wainwright (2005) have highlighted a couple of ethical concerns linked to the consent process, the scientific value of such studies and issues linked to access to drugs at the end of the prior study.
With regard to consent, they argue that people may have had a positive or negative experience of the trial but do not know whether this was due to the experimental drug, another drug or a placebo. They may nevertheless base their decision whether to continue on their experience so far. For those who were not taking the experimental drug, their experience in the follow-up trial may turn out to be very different. Also, if they are told about the possibility of the open label extension trial when deciding whether or not to take part in the initial trial (i.e. with the implication that whatever group they are ascribed to, in the follow-up study they will be guaranteed the experimental drug), this might induce them to participate in the initial study which could be considered as a form of subtle coercion. Finally, researchers may be under pressure to recruit as they can only recruit people in an open label extended trial who took part in the initial study. This may lead them in turn to put pressure (even inadvertently) on participants to continue with the study.
The scientific validity of open label extension trials is questioned by Taylor and Wainwright (2005) on the grounds that people from the experimental arm of the first study who did not tolerate the drug would be unlikely to participate in the extension trial and this would lead to bias in the results. In addition, open-label trials often lack a precise duration other than “until the drug is licensed” which casts doubt on there being a valid research purpose.
The above authors suggest that open label extension studies are dressed up marketing activities which lack the ethical justification for biomedical research which is the prospect of finding new ways of benefiting people’s health. However, it could be argued that the aim of assessing long-term tolerability of a new drug is a worthwhile pursuit and if conducted in a scientific manner could be considered as research. Moreover, not all open label extension trials are open-ended with regard to their duration. The main problem in interpreting open label extension studies is that little is known about the natural course of the disease.
Protecting participants’ well-being at the end of the clinical trial
Some people who participate in a clinical trial and who receive the experimental drug experience an improvement in their condition. This is to be hoped for, even if benefit to the health of individuals is not the aim of the study. However, at the end of the study, the drug is not yet licensed and there is no legal right to continue taking it. This could be psychologically disturbing to the participants in the trial and also to their families, who may have seen a marked improvement in their condition.
Taylor and Wainwright (2005) suggest that the open label trials may serve the purpose of prescribing an unlicensed drug on compassionate grounds, which whilst laudable, should not be camouflaged as scientific research. Rather governments should take responsibility and set up the appropriate legal mechanisms to make it possible for participants whose medical condition merits prolonged treatment with the experimental drug to have access to it.
Minimising pain and discomfort
Certain procedures to which people with dementia or their representatives consent may be burdensome, painful or simply worrying but, in accordance with the principles of autonomy and justice/equity, people with dementia have the right to participate. The fact that they have made an informed decision to participate and are willing to tolerate such pain or burden does not release researchers from the obligation to try to minimise it. For example, if repeated blood samples are going to be necessary, an indwelling catheter could be inserted under local anaesthetic to make the procedure easier, or medical staff could provide reassurance about the use of various scanning equipment which might be worrying, or enable the person’s carer to be present. In order to minimise fear, trained personnel are needed who have experience of dealing with people with dementia. The advice of the carer, if there is one, could also be sought.
Drug trials in countries with less developed safeguards
Clinical trials are sometimes carried out in countries where safeguards are not well developed and where the participants and even the general population are likely to have less possibility to benefit from the results of successful trials. For example, some countries have not signed the Convention for the Protection of Human Rights and Dignity of the Human Being with regard to the Application of Biology and Medicine (1997) (referred to earlier in this document). The participants in those countries may be exposed to possible risks but have little chance of future medical benefit if the trial is successful. Yet people in countries with stricter safeguards for participants (which are often richer countries) stand to benefit from their efforts and from the risks they take, as they are more likely to be able to afford the drugs once developed. This raises ethical issues linked to voluntariness because there may be, in addition to the less developed safeguards, factors which make participation in such trials more attractive to potential participants. Such practices also represent a lack of equity in the distribution of risk, burden and possible benefit within society and could be interpreted as using people as a means to an end.
Parallels can also be drawn to the situation whereby people in countries where stem cell research is banned profit from the results of studies carried out in countries where it is permitted, or to the results of studies carried out in countries where research ethics are slack or non-existent.
For a detailed discussion of the ethical issues linked to the involvement in research of people in other countries, particularly lower and middle income countries where standards of protection may be lower, please refer to the afore-mentioned report by the Presidential Commission for the Study of Bioethical Issues.
- Researchers should consider including a placebo arm in clinical trials when there are compelling and sound methodological reasons for doing so.
- Researchers should ensure that patients are aware that the aim of a randomised controlled trial is to test a hypothesis and provide generalizable knowledge leading to the development of a medical drug or procedure. They should explain how this differs from medical treatment and care which are aimed at enhancing the health and wellbeing of individual patients and where there is a reasonable expectation that this will be successful.
- Researchers should ensure that potential participants understand that they may be allocated to the placebo group.
- It should not be presumed that the treating doctor or contact person having proposed the participant for a trial has been successful in communicating the above information.
- Researchers conducting clinical trials may need training in how to ensure effective communication with people with dementia.
- Appropriate measures should be taken by researchers to minimize fear, pain and discomfort of participants.
- All participants should, where possible, have the option of receiving the experimental drug (if proven safe) after completion of the study.
- Pharmaceutical companies should not be discouraged from carrying out open-label extension studies but this should not be the sole possibility for participants to access the trial drug after the end of the study if it is proving beneficial to them.
- In multi-centre clinical trials, where data is transferred to another country in which data protection laws are perhaps less severe, the data should be treated as stated in the consent form signed by the participant.
Last Updated: Thursday 29 March 2012 | <urn:uuid:9d3f2101-f19c-4a5e-a7b4-dcd94a6d33f1> | {
"date": "2013-05-18T06:22:02",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.955619752407074,
"score": 3.640625,
"token_count": 6489,
"url": "http://www.alzheimer-europe.org/FR%20%20%20%20%20%20%20%20%20%20%20%20%EF%BF%BD%20%EF%BF%BD%C2%B3/Ethics/Ethical-issues-in-practice/Ethics-of-dementia-research/Clinical-trials"
} |
United Kingdom - Scotland
Restrictions of freedom
Mental Health (Care and Treatment) (Scotland) Act 2003
The Act was designed to modernise and improve the use of compulsory measures in mental health care. It reflects the general move over the last two decades towards care and treatment in the community rather than in hospitals or other residential settings. The title reflects the philosophy of the legislation with the focus on ‘care’ and ‘treatment’. In basic terms, the Act provides for the protection of people with a mental disorder in a hospital or community setting.
It contains mechanisms for dealing with offenders who have a mental disorder and so interacts with the criminal justice system.
The Act covers individuals who are defined as having a ‘mental disorder’. The term includes mental illness, personality disorder and learning disability. The majority of cases involving compulsory measures have been in relation to people diagnosed with a mental illness. However, the Mental Welfare Commission for Scotland monitors the use of compulsory measures and has found increasing use of emergency or short-term measures for people aged over 75 years with a diagnosis of dementia.
Detention (Involuntary internment)
The Act deals with several forms of compulsion in relation to a person with mental disorder where:
There is a significant risk to the person’s health, safety or welfare or the safety of any other person (what is a significant risk is a question of judgement for health and social care professionals. The tribunal will test this assessment during an appeal or on an application for a compulsory treatment order).
Treatment is available to prevent the person’s condition from deteriorating or to relieve its symptoms or effects
Compulsory admission is necessary because the person will not agree to admission and/or treatment; and
The person’s ability to make decisions about the provision of medical treatment is significantly impaired because of mental disorder.
Types of order
Emergency Detention (72 hours)
Short Term Detention (28 days and can be extended)
Compulsory Treatment Order (6 months – can be extended)
Mental Health Tribunals
The Act introduced a new system of mental health tribunals with a number of functions, including considering applications for orders and appeals against orders.
Emergency detention
This is detention in a psychiatric hospital for up to 72 hours if necessary. It does not authorise any medical treatment. In an emergency, common law powers might be used. A registered medical practitioner can sign an emergency detention certificate if s/he believes that a person’s ability to make decisions about medical treatment is significantly impaired because of mental disorder. This authorises the removal of the individual to a specific hospital. Before signing the certificate the medical practitioner must be satisfied that:
There is an urgent need to detain the person in hospital to access the medical treatment s/he needs
If the person was not detained, there would be a significant risk to his or her health, safety, or welfare or the safety of another person, and
Any delay caused by starting the short term detention procedure is undesirable.
If any treatment is needed the short-term detention procedure must generally be used.
Short term detention
This may be used where it is necessary to detain an individual with mental disorder who cannot be treated voluntarily and without the treatment the person would be at risk of significant harm. To obtain a certificate the approved medical practitioner must consult and gain the approval of a Mental Health Officer whatever the circumstances.
Compulsory Treatment Order
Compulsory Treatment Orders (CTOs) are granted by the Mental Health Tribunal. They last for 6 months, can be extended by the responsible medical officer for a further six months and then extended annually. The Tribunal reviews them at least every two years. Therefore, they can restrict or deprive liberty for long periods of time. The Mental Welfare Commission for Scotland looks at how these orders are used for people of different ages and genders to see if there are any trends. Over recent years, the number of new orders has come down. However, the use of CTOs for people with dementia aged 65 and over has increased.
‘De facto detention’
Practitioners must be careful that they are not using excessive coercion to prevent people from leaving hospital when they wish to. They must take care to document situations where they have concerns if an informal patient wishes to leave. The Tribunal can, under section 291 of the 2003 Act, order that an informal patient is being unlawfully detained. People with dementia pose a difficult problem. The Tribunal has ruled that a person with dementia is unlawfully detained in a general hospital when prevented from leaving. It can be appropriate to redirect someone and dissuade him/her from leaving, but repeatedly thwarting a determined effort to leave is likely to amount to a significant deprivation of liberty, and the patient should be formally detained.
Adults with Incapacity (Scotland) Act 2000
Scottish incapacity laws were reformed with the introduction of the Adults with Incapacity (Scotland) Act in 2000. This Act covers people with a mental disorder who lack some or all capacity to make decisions or act in their own interests. It recognises that capacity is not all or nothing but is ‘decision specific’. The Act introduced a number of measures to authorise someone else to make decisions on behalf of the person with incapacity, on the basis of a set of principles on the face of the Act. These are fundamental. Any action or decision
- Must benefit the person
- Must be the least restrictive of the person’s liberty in order to gain that benefit
- Must take account of the person’s past and present wishes (s/he must be given assistance to communicate by whatever means is appropriate to the individual)
- Must follow consultation with relevant others as far as practicable
- Must encourage and support the person to maintain existing skills and develop new skills.
The individual may, whilst competent, appoint one or more persons to act as their financial (continuing) and/or welfare attorney. This must be registered with the Office of the Public Guardian. It does not allow the attorney to detain the grantor in a psychiatric hospital. If the person refuses to comply with the attorney, the attorney has no compulsory powers to detain. Where there is concern for the person’s safety, the attorney can apply to the court for a welfare guardianship order. Powers can be granted to allow the guardian to decide on the accommodation of the person and other powers such as who they can consort with. Where the welfare guardian has powers over accommodation, s/he is able to restrict the freedom of the person by placing them in a care home against their will. However, whether this amounts to deprivation of liberty under the European Court of Human Rights ruling will depend on a number of other circumstances, the cumulative impact of which would need to be considered (Patrick and Smith, 2009; Mental Welfare Commission for Scotland, 2011). With regard to the issue of non-compliance, if the person on guardianship, for example, runs away, the guardian can apply to the Court under s70 for an order to require the person to return.
Because there is no automatic review of welfare guardianship orders there is concern that the Adults with Incapacity (Scotland) Act 2000 may not be compliant with the European Convention on Human Rights. The Act states that the order should be for a standard 3 years but can be more or less at the discretion of the Court. However, there has been a practice of orders being granted for indefinite periods and this has given rise to concern in relation to certain groups. However, for people with dementia, who have a progressive brain disorder, an indefinite order may be deemed appropriate.
The Scottish Law Commission is currently undertaking a review of the Adults with Incapacity (Scotland) Act 2000 in relation to deprivation of liberty issues. It has established an advisory group of key stakeholders, including Alzheimer Scotland, and will be reporting in due course.
Driving
The Road Traffic Act of 1991 contains a few articles relating to offences involving driving when unfit to do so, e.g.:
- A person who causes the death of another person by driving a mechanically propelled vehicle dangerously on a road or other public place is guilty of an offence.
- A person who drives a mechanically propelled vehicle dangerously on a road or other public place is guilty of an offence.
- If a person drives a mechanically propelled vehicle on a road or other public place without due care and attention, or without reasonable consideration for other persons using the road or place, he (or she) is guilty of an offence.
- According to the provisions of this act, a person is regarded as driving dangerously if the way s/he drives falls far below what would be expected of a competent and careful driver and it would be obvious to a competent and careful driver that driving in that way would be dangerous.
A person who has been diagnosed with dementia must inform the Driver and Vehicle Licensing Agency (DVLA). Failure to do so could lead to a fine of up to £1,000. Moreover, a person who had an accident but had not previously informed the DVLA of his/her dementia might not be covered by his/her insurance company. Once the DVLA has been informed that someone has dementia, it sends a questionnaire to the person and requests a medical report. A driving assessment may also be required. The Medical Advisers at the DVLA then decide whether the person can continue driving (Alzheimer Scotland, 2003).
Patrick, H. and Smith, N. (2009),Adult Protection and the Law in Scotland, Bloomsbury Professional.
Mental Welfare Commission for Scotland Annual Report 2010 – 2011 www.mwcscot.org.uk
Last Updated: Wednesday 14 March 2012 | <urn:uuid:ca6ee1dd-37e5-4f09-89d9-c924539ff0e6> | {
"date": "2013-05-18T08:03:57",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9486571550369263,
"score": 2.75,
"token_count": 1987,
"url": "http://www.alzheimer-europe.org/FR%C2%AF/Policy-in-Practice2/Country-comparisons/Restrictions-of-freedom/United-Kingdom-Scotland"
} |
Our opinion on ...
- Executive Summary
- Necessity for a response
- Genetic testing
- General principles
- Other considerations
The present paper constitutes the input of Alzheimer Europe and its member organisations to the ongoing discussions within Europe about genetic testing (in the context of Alzheimer's disease and other forms of dementia).
Alzheimer Europe would like to recall some general principles which guide this present response:
- Having a gene associated with Alzheimer's disease or another form of dementia does not mean that a person has the disease.
- People who have a gene linked to Alzheimer's disease or another form of dementia have the same rights as anyone else.
- Genetic testing does not only affect the person taking the test. It may also reveal information about other relatives who might not want to know.
- No genetic test is 100% accurate.
- The extent to which health cover is provided to citizens by the State social security system and/or privately contracted by individuals differs from one country to the next.
On the basis of these principles, Alzheimer Europe has developed the following position with regard to genetic testing:
- Alzheimer Europe firmly believes that the use and/or possession of genetic information by insurance companies should be prohibited.
- Alzheimer Europe strongly supports research into the genetic factors linked to dementia which might further our understanding of the cause and development of the disease and possibly contribute to future treatment.
- Based on its current information, Alzheimer Europe does not encourage the use of any genetic test for dementia UNLESS such test has a high and proven success rate either in assessing the risk of developing the disease (or not as the case may be) or in detecting the existence of it in a particular individual.
- Alzheimer Europe requests further information on the accuracy, reliability and predictive value of any genetic tests for dementia.
- Genetic testing should always be accompanied by adequate pre- and post-test counselling.
- Anonymous testing should be possible so that individuals can ensure that such information does not remain in their medical files against their will.
It is extremely important for people with dementia to be diagnosed as soon as possible. In the case of Alzheimer’s disease, an early diagnosis may enable the person concerned to benefit from medication, which treats the global symptoms of the disease and is most effective in the early to mid stages of the disease. Most forms of dementia involve the gradual deterioration of mental faculties (e.g. memory, language and thinking etc.) but in the early stages, it is still possible for the person affected to make decisions concerning his/her finances and care etc. – hence the importance of an early diagnosis.
If it were possible to detect dementia before the first symptoms became obvious, this would give people a greater opportunity to make informed decisions about their future lives. This is one of the potential benefits of genetic testing.
On the other hand, such information could clearly be used in ways which would be contrary to their personal interests, perhaps resulting in employment discrimination, loss of opportunities, stigmatisation, increased health insurance costs or even loss of health insurance to name but a few examples.
The present discussion paper outlines some of the recommendations of Alzheimer Europe and its member organisations and raises a few points which deserve further clarification and discussion.
The necessity for a response by Alzheimer Europe
In the last few years, the issue of genetic testing has been increasingly debated. In certain European countries there are already companies offering such tests. Unfortunately, the general public do not always fully understand what the results of such tests imply and there are no regulations governing how they are carried out i.e. what kind of information people receive, how the results are presented, whether there is any kind of counselling afterwards and the issue of confidentiality etc.
In order to provide information to people with dementia and other people interested in knowing about their own state of health, and in order to protect them from the unscrupulous use of the results of genetic tests, Alzheimer Europe has developed the present Position Paper.
These general principles as well as the Convention of Human Rights and Biomedicine and the Universal Declaration on the Human Genome and Human Rights dictate Alzheimer Europe’s position with regard to genetic testing.
Alzheimer Europe would like to draw a distinction between tests which detect existing Alzheimer's disease and tests which assess the risk of developing Alzheimer's disease or another form of dementia at some time in the future:
- Diagnostic testing : Familial early onset Alzheimer’s disease (FAD) is associated with 3 genes. These are the amyloid precursor protein (APP), presenilin-1 and presenilin-2. These genetic mutations can be detected by genetic testing. However, it is important to note that the test only relates to those people with FAD (i.e. about 1% of all people with Alzheimer’s disease). In the extremely limited number of families with this dominant genetic disorder, family members inherit from one of their parents the part of the DNA (the genetic make-up), which causes the disease. On average, half the children of an affected parent will develop the disease. For those who do, the age of onset tends to be relatively low, usually between 35 and 60.
- Assessment for risk testing: Whether or not members of one’s family have Alzheimer’s disease, everyone risks developing the disease at some time. However, it is now known that there is a gene which can affect this risk. This gene is found on chromosome 19 and it is responsible for the production of a protein called apolipoprotein E (ApoE). There are three main types of this protein, one of which (ApoE4), although uncommon, makes it more likely that Alzheimer’s disease will occur. However, it does not cause the disease, but merely increases the likelihood. For example, a person of 50 would have a 2 in 1,000 chance of developing Alzheimer’s disease instead of the usual 1 in 1,000, but might never actually develop it. Only 50% of people with Alzheimer’s disease have ApoE4 and not everyone with ApoE4 suffers from it.
There is no way to accurately predict whether a particular person will develop the disease. It is possible to test for the ApoE4 gene mentioned above, but strictly speaking such a test does not predict whether a particular person will develop Alzheimer’s disease or not. It merely indicates that he or she is at greater risk. There are in fact people who have had the ApoE4 gene, lived well into old age and never developed Alzheimer’s disease, just as there are people who did not have ApoE4, who did develop the disease. Therefore taking such a test carries the risk of unduly alarming or comforting somebody.
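The gap between an increased risk and an individual prediction can be made concrete with a little arithmetic, using the illustrative 1-in-1,000 and 2-in-1,000 figures quoted above (these numbers come from the example itself, not from any particular study):

```python
# Relative vs absolute risk, using the illustrative figures quoted above.
baseline_risk = 1 / 1000        # example risk at age 50 without ApoE4
risk_with_apoe4 = 2 / 1000      # example risk at age 50 with ApoE4

relative_risk = risk_with_apoe4 / baseline_risk       # 2.0 -> "twice the risk"
absolute_increase = risk_with_apoe4 - baseline_risk   # 0.001 -> 0.1 percentage points

# Even with the raised risk, the chance of NOT developing the disease
# at that age remains 99.8% in this example.
chance_unaffected = 1 - risk_with_apoe4
print(relative_risk, absolute_increase, chance_unaffected)
```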
Alzheimer Europe agrees with diagnostic genetic testing provided that pre- and post-test counselling is provided, including a full discussion of the implications of the test and that the results remain confidential.
We do not actually encourage the use of genetic testing for assessing the risk of developing Alzheimer's disease. We feel that it is somewhat unethical as it does not entail any health benefit and the results cannot actually predict whether a person will develop dementia (irrespective of the particular form of ApoE s/he may have).
We are totally opposed to insurance companies having access to results from genetic tests for the following reasons:
- This would be in clear opposition to the fundamental principle of insurance which is the mutualisation of risk through large numbers (a kind of solidarity whereby the vast majority who have relatively good health share the cost with those who are less fortunate).
- Failure to respect this principle would create an uninsurable underclass and lead to a genetically inferior group.
- This in turn could entail the further stigmatisation of people with dementia and their carers.
- In some countries, insurance companies manage to reach decisions on risk and coverage without access to genetic data.
- We therefore urge governments and the relevant European bodies to take the necessary action to prohibit the use or possession of genetic data by insurance companies.
Alzheimer Europe recognises the importance of research into the genetic determinants of Alzheimer’s disease and other forms of dementia. Consequently,
- we support the use of genetic testing for the purposes of research provided that the person concerned has given informed consent and that the data is treated with utmost confidentiality; and
- we would also welcome further discussion about the problem of data management.
In our opinion, any individual who wishes to take a genetic test should be able to choose to do so anonymously in order to ensure that such information does not remain in his/her medical file.
At its Annual General Meeting in Munich on 15 October 2000, Alzheimer Europe adopted recommendations on how to improve the legal rights and protection of adults with incapacity due to dementia. This included a section on bioethical issues. These recommendations obviously need to guide any response of the organisation regarding genetic testing for people who suspect or fear they may have dementia and also those who have taken the test and did develop dementia.
- The adult with incapacity has the right to be informed about his/her state of health.
- Information should, where appropriate, cover the following: the diagnosis, the person's general state of health, treatment possibilities, potential risks and consequences of having or not having a particular treatment, side-effects, prognosis and alternative treatments.
- Such information should not be withheld solely on the grounds that the adult is suffering from dementia and/or has communication difficulties. Attempts should be made to provide information in such a way as to maximise his/her ability to understand, making use of technology and other available techniques to enhance communication. Attention should be paid to any possible difficulty understanding, retaining information and communicating, as well as his/her level of education, reasoning capacity and cultural background. Care should be taken to avoid causing unnecessary anxiety and suffering.
- Written as well as verbal information should always be provided as a back-up. The adult should be granted access to his/her medical file(s). S/he should also have the opportunity to discuss the contents of the medical file(s) with a person of his/her choice (e.g. a doctor) and/or to appoint someone to receive information on his/her behalf.
- Information should not be given against the will of the adult with incapacity.
- The confidentiality of information should extend beyond the lifetime of the adult with incapacity. If any information is used for research or statistical purposes, the identity of the adult with incapacity should remain anonymous and the information should not be traceable back to him/her (in accordance with the provisions of national laws on respect for the confidentiality of personal information). Consideration should be given to access to information where abuse is suspected.
- A clear refusal by the adult with incapacity to grant access to information to any third party should be respected regardless of the extent of his/her incapacity, unless this would be clearly against his/her best interests e.g. carers should have provided to them information on a need to know basis to enable them to care effectively for the adult with incapacity.
- People who receive information about an adult with incapacity in connection with their work (either voluntary or paid) should be obliged to treat such information with confidentiality.
People who take genetic tests and do not receive adequate pre and post test counselling may suffer adverse effects.
Fear of discrimination based on genetic information may deter people from taking genetic tests which could be useful for research into the role of genes in the development of dementia.
Certain tests may be relevant for more than one medical condition. For example, the ApoE test is used in certain countries as part of the diagnosis and treatment of heart disease. There is therefore a risk that a person might consent for one type of medical test and have the results used for a different reason.
Last Updated: Thursday 06 August 2009 | <urn:uuid:62210bfc-b709-4c59-93ac-36ec9784506d> | {
"date": "2013-05-18T05:52:22",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9431832432746887,
"score": 2.625,
"token_count": 2443,
"url": "http://www.alzheimer-europe.org/FR%C5%A0%C2%B7%C5%A0%20/Policy-in-Practice2/Our-opinion-on/Genetic-testing"
} |
About American Frontiers
About American Frontiers: A Public Lands Journey
America's public lands are a treasured part of our national heritage, representing its grandeur, bountiful promise, and vast natural resources. All citizens share in the rights and the responsibilities of seeing that our public lands are cared for and managed in a way that meets the current and future needs of the American people.
To highlight the beauty, the accessibility, and the benefits of our public lands, the Public Lands Interpretive Association (PLIA), an Albuquerque, New Mexico-based non-profit organization that provides interpretive and educational resources to the public, mapped out a Canada-to-Mexico trek exclusively on public lands, called American Frontiers: A Public Lands Journey.
The Journey, or Trek, involved two groups of travelers: one starting north from the Mexican border and the second headed south from the Canadian. Their route lay entirely on public lands, a feat that has never been accomplished before. The trek began on July 31, 2002 and ended two months later when the two teams met in Wasatch-Cache National Forest near Salt Lake City, Utah on September 27.
Inspired by American Frontiers: A Public Lands Journey, National Geographic Society has designed its Geography Action! 2002 curriculum around the theme of public lands. Aimed at teaching school-aged children the beauty and the benefits of America's public lands, Geography Action! 2002 followed the trekkers along the two-month journey, highlighting the diversity and grandeur of our nation's public lands.
To demonstrate the different ways people get about on our public lands--and to stay within the 60-day limit of the journey--trek participants utilized numerous modes of transportation for this historic border-to-border journey across America. They hiked and backpacked, rode horses, mountain bikes, ATVs and dual sport motorcycles; rafted, canoed, drove pickup trucks, motorboats and 4WD vehicles, and even spent a few leisurely days on a houseboat.
Along the route the two teams attended special events, round table discussions, visited schools and communities to learn about public land issues. And, of course, they saw some of the most spectacular scenery of the American West. Their journal entries eloquently describe the feelings public lands awoke in them and also the daily routine of the long trek. You'll enjoy reading them.
Three years in the making, American Frontiers: A Public Lands Journey has enlisted numerous partners and sponsors including the National Geographic Society, the Department of the Interior, the USDA Forest Service, Bureau of Land Management, USGS, National Environmental Education and Training Foundation, Fire Wise Communities, American Honda, Kodak, the Coleman Company, and many others. For a full list of our sponsors, please look under "Our Sponsors" on the home page.
To learn more about American Frontiers, please spend some time on this website, read the team members' journals, enjoy the photographs or follow their route on the maps. For more information about American Frontiers, please contact the Public Lands Interpretive Association, 6501 Fourth Street, NW, Albuquerque, NM 87107 or call our toll-free number 877-851-8946. You can also email SMaurer@plia.org
Everyone inspired by the Public Lands Journey should pay a visit to our public lands. To find out more about recreation opportunities on public lands, please visit the Public Lands Information Center online. There, you can find detailed recreation information, interactive recreation maps, and a large selection of guidebooks and maps.
All material copyright ©2002 - 2013, Public Lands Interpretive Association except photographs where ownership is otherwise indicated. All rights reserved. | <urn:uuid:b0a31cfd-73e9-4bc0-8d69-4691c2c822f9> | {
"date": "2013-05-18T06:29:35",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9157739281654358,
"score": 2.890625,
"token_count": 755,
"url": "http://www.americanfrontiers.net/about/"
} |
- Historic Sites
“To A Distant And Perilous Service”
Westward with the course of empire Colonel Jonathan Drake Stevenson took his way in 1846. With him went the denizens of New York’s Tammany wards, oyster cellars, and gin mills—the future leaders of California.
June/July 1979 | Volume 30, Issue 4
Three weeks later, in a millrace on the American River in northern California, John Marshall spotted the flicker of gold. By the end of summer, 1849, the Los Angeles garrison, like every other encampment of the New York volunteers, was abandoned, the harbinger of Anglo-Saxon civilization scattered to the hills, the coastal towns and villages of California half-deserted. The little port of San Francisco had become the focus of world migration. Captain Folsom, the staff quartermaster, having secured appointment as collector of the port, was on his way to becoming a millionaire. The Russ family, purveyors of Moroccan leather and holiday fireworks, had opened a jewelry shop and begun assembling an empire of hotels, beer gardens, office buildings, and residential blocks. Sergeant John C. Pulis, late of Lippitt’s monstrous Company F, had become the first sheriff of San Francisco. Lieutenant Edward Gilbert was editing the Alta California , the leading newspaper in the territory; Captain Naglee (he of the bathtime rebellion) had founded the territory’s first bank; Lieutenant Hewlett had opened a boardinghouse; Captain Frisbee had started a commission agency and was in prospect of marrying the eldest daughter of General Vallejo; Lieutenant Vermeule, the plague of Abel Stearns, had set himself up as a lawyer and would soon be elected a delegate to the California constitutional convention and a member of the state legislature; and the Reverend Mr. Thaddeus M. Leavenworth, chaplain to the regiment, had attained the quasi-judicial position of alcalde of San Francisco and was granting homesteads and auctioning public lands with a Christian generosity that scandalized even his former associates.
As for the colonel, he was well on his way to a second career (and a second marriage and a second family) as a legal counselor, politician, and founder of a grandiose ghost city called New-York-of-the-Pacific, which endures today only in the name of a slough on the edge of San Francisco Bay.
The former New York boys were scattered by then throughout California, styling themselves doctors, lawyers, judges, or capitalists. A few in San Francisco called themselves the “Hounds”—or, on formal occasions, the “Regulators.” They were the first recognizable New York—style drinking-and-marching society in the Far West, and their raucous behavior soon aroused the more orderly citizens of the town to form the prototype of San Francisco’s several committees of vigilance.
For better or worse, the Americanization of California had begun. | <urn:uuid:1dd05bbf-f83d-4605-ba5c-81c1a7767a1d> | {
"date": "2013-05-18T06:23:17",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9680232405662537,
"score": 3,
"token_count": 625,
"url": "http://www.americanheritage.com/content/%E2%80%9C-distant-and-perilous-service%E2%80%9D?page=10"
} |
- Historic Sites
George C. Marshall Museum
The museum dedicated to the life of General George C. Marshall also houses many of his papers in the research library.
This museum profiles one of the 20th century's greatest military and diplomatic leaders through exhibits related to the Marshall Era (1880-1960). Marshall led the Allied military through World War II and organized the European Recovery Program (Marshall Plan), which rebuilt Europe when the war was over. The general's papers are housed in the research library. | <urn:uuid:27cdbc2c-b70f-4a92-bc8b-4a053bc38ed6> | {
"date": "2013-05-18T06:31:08",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.933735728263855,
"score": 2.65625,
"token_count": 102,
"url": "http://www.americanheritage.com/content/george-c-marshall-museum"
} |
The scientific world is abuzz with news of the ratification of the existence of the subatomic particle called the Higgs boson - or more colloquially, the 'God particle.' This subatomic particle's existence - which was verified recently (with virtually near certainty) by experiments at the Large Hadron Collider in Switzerland - lends credence to several long-standing physical theories such as the so-called Standard Model and the Big Bang Theory.
The nickname God particle is ironic for two reasons. First, generally, the nuclear physicists who deal with these matters - postulating the fundamental physical laws of the universe and then setting about to either verify or refute them - tend not to be regular church-goers. While there are some highly prominent scientists who balance personal, religious beliefs with professional, scientific quests, most probably go along with the thoughts of the world-famous physicist, Stephen Hawking:
I regard the brain as a computer which will stop working when its components fail. There is no heaven or afterlife for broken down computers; that is a fairy story for people afraid of the dark. [Interview in The Guardian, 7/9/12]
Spontaneous creation is the reason there is something rather than nothing, why the universe exists, why we exist. It is not necessary to invoke God... [from his book; The Grand Design, 2010]
So it is a bit ironic that physics' most famous quest has resulted in the discovery of the 'God particle.' Most physicists are quite comfortable having their names associated with famous - even if dead - humans like Newton, Einstein or the afore-mentioned Hawking. One will find few, if any, attributions to deities in the objects that physicists discover and name or the theories they propose.
Second, and more importantly, the discovery that the God particle really exists does not - as the name suggests - imply that God played some role in the creation of the universe. In fact, quite the opposite. The matter is discussed at some length in the July 9 Daily Beast by Lawrence Krauss, a well-known physicist/cosmologist from Arizona State University:
This term [God particle] appeared first in the unfortunate title of a book written by physicist Leon Lederman two decades ago, and while to my knowledge it was never used by any scientist (including Lederman) before or since, it has captured the media's imagination.
What makes this term particularly unfortunate is that nothing could be further from the truth. Assuming the particle in question is indeed the Higgs, it validates an unprecedented revolution in our understanding of fundamental physics and brings science closer to dispensing with the need for any supernatural shenanigans all the way back to the beginning of the universe...If these bold, some would say arrogant, notions derive support from the remarkable results at the Large Hadron Collider, they may reinforce two potentially uncomfortable possibilities: first, that many features of our universe, including our existence, may be accidental consequences of conditions associated with the universe's birth; and second, that creating "stuff" from "no stuff" seems to be no problem at all-everything we see could have emerged as a purposeless quantum burp in space or perhaps a quantum burp of space itself. Humans, with their remarkable tools and their remarkable brains, may have just taken a giant step toward replacing metaphysical speculation with empirically verifiable knowledge. The Higgs particle is now arguably more relevant than God.
So the term God particle was first used by a scientist, but was picked up and popularized by the media. It's catchy and enhances interest in the subject among the public. But like so much else that the media promotes, it is misleading and inappropriate. | <urn:uuid:ed184b23-5659-4b91-97c0-fd818297d417> | {
"date": "2013-05-18T05:49:06",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9590326547622681,
"score": 2.546875,
"token_count": 743,
"url": "http://www.americanthinker.com/blog/2012/07/does_the_god_particle_prove_that_god_does_or_does_not_exist.html"
} |
Acrylic A synthetic fabric often used as a wool substitute. It is warm, soft, holds colors well and often is stain and wrinkle resistant.
Angora A soft fiber spun from the fur of the Angora rabbit. Angora wool is often combined with cashmere or another fiber to strengthen the delicate structure. Dry cleaning is recommended for Angora products.
Bedford A strong material that is a raised corded fabric (similar to corduroy). Bedford fabric wears well and is usually washable.
Boucle A fabric made with boucle yarn(s) in wool, rayon and or cotton causing the surface of the fabric to appear looped.
Brocade An all-over floral, raised pattern produced in a similar fashion to embroidery.
Burnout Process of printing a design on a fabric woven of paired yarns of different fibers. One kind of yarn is burned out or destroyed leaving the ground unharmed.
Cable Knit Patterns, typically used in sweaters, where flat knit columns otherwise known as cables are overlapped vertically.
Cashmere A soft, silky, lightweight wool spun from the Kashmir goat. Cashmere must be dry-cleaned due to its delicate fibers and is commonly used in sweaters, shawls, outerwear, gloves and scarves for its warmth and soft feel.
Chiffon A common evening wear fabric made from silk, cotton, rayon or nylon. It's delicate in nature and sheer.
Chintz A printed and glazed fabric made of cotton. Chintz is known for its bright colors and bold patterns.
Corduroy Cotton fibers twisted as they are woven to create long, parallel grooves, called wales, in the fabric. This is a very durable material and depending on the width of the wales, can be extremely soft.
Cotton A natural fiber that grows in the seed pod of the cotton plant. It is an inelastic fiber.
Cotton Cashmere A blend of cotton and cashmere fibers, typically 85% to 15% respectively, this combination produces an extremely soft yarn with a matte finish.
Crepe A term describing the surface of a fabric; it usually designates a fabric that is crimped or crinkled.
Crinoline A lightweight, plain weave, stiffened fabric with a low yarn count. Used to create volume beneath evening or wedding dresses.
Crochet Looping threads with a hooked needle that creates a wide, open knit. Typically used on sweaters for warm seasons.
Denim Cotton textile created with a twill weave to create a sturdy fabric. Used as the primary material of blue jeans.
Dobby Woven fabric where the weave of the fabric actually produces the garment's design.
Embroidery Detailed needlework, usually raised and created by yarn, silk, thread or embroidery floss.
Eyelet A form of lace in a thicker material that consists of cut-outs that are integrated and repeated into a pattern. Usually applied to garments for warmer seasons.
Faille A textured fabric with faint ribbing. Wears wonderfully for hours holding its shape due to the stiffness of the texture. Used in wedding dresses and women's clothes.
Fil'Coupe A small jacquard pattern on a light weight fabric, usually silk, in which the threads connecting each design are cut, creating a frayed look.
French Terry A knit cloth that contains loops and piles of yarn. The material is very soft, absorbent and has stretch.
Gabardine A tightly woven twill fabric, made of different fibers such as wool, cotton and silk.
Georgette A crinkly crepe type material usually made out of silk that consists of tightly twisted threads. Georgette is sheer and has a flowy feeling.
Gingham Two different color stripes "woven" in pattern to appear checked.
Glen Plaid Design of woven, broken checks. A form of traditional plaid.
Guipure Lace A lace without a mesh ground, the pattern in held in place by connecting threads.
Herringbone A pattern originating from masonry, consists of short rows of slanted parallel lines. The rows are formatted opposing each other to create the pattern. Herringbone patterns are used in tweeds and twills.
Hopsack A material created from cotton or wool that is loosely woven together to form a coarse fabric.
Houndstooth A classic design containing two colors in jagged/slanted checks. Similar to Glen Plaid.
Jacquard A fabric of intricate variegated weave or pattern. Typically shown on elegant and more expensive pieces.
Jersey A type of knit material usually made from cotton and known to be flexible, stretchy, soft and very warm. It is created using tight stitches.
Knit A knit fabric is made by interlocking loops of one or more yarns either by hand with knitting needles or by machine.
Linen An exquisite material created from the fibers of the flax plant. Some linen contain slubs or small knots on the fabric. The material wrinkles very easily and is a light fabric perfect for warm weather.
Lurex A metallic fiber woven into material to give the garment shine.
Lycra™ A type of stretch fabric in which the fibers are woven into cotton, silk or synthetic fiber blends. These materials are lightweight, comfortable and breathable, and the stretch will not wear away.
Madras Originating from Madras, India, this fabric is a lightweight, cotton material used for summer clothing. Madras usually has a checked pattern but also comes in plaid or with stripes. Typically made from 100% cotton.
Marled Typically found in sweaters, marled yarn occurs when two colored yards are twisted together.
Matelasse A compound fabric made of cotton, wool or other fibers with quilted character and raised patterns.
Matte A matte finish has a lusterless surface.
Merino Wool Wool sheered from the merino sheep and spun into yarn that is fine but strong.
Modal A type of rayon that is made from natural fibers but goes through a chemical treatment to give it a high resistance to breakage. Modal is soft and breathable, which is why it's used as a cotton replacement.
Non-iron A treated cotton that allows our Easy Care Shirts to stay crisp throughout the day and does not need ironing after washing/drying.
Nylon A synthetic fiber that is versatile, fast drying and strong. It has a high resistance to damage.
Ombre A color technique that shades a color from light to dark.
Ottoman A firm, lustrous plain weave fabric with horizontal cords that are larger and rounder than those of the faille. Made of wool, silk, cotton and other manufactured fibers.
Paisley A pattern that consists of crooked teardrop designs in a repetitive manner
Placket The piece of fabric or cloth that is used as a concealing flap to cover buttons, fasteners or attachments. Most commonly seen in the front of button-down shirts. Also used to reinforce openings or slits in garments.
Piping Binding a seam with decoration. Piping is similar to tipping or edging where a decorative material is sewn into the seams.
Pointelle An open-work knitting pattern used on garments to add texture. Typically a cooler and general knit sweater.
Polyester A fabric made from synthetic fibers. Polyester is quick drying, easy to wash and holds its shape well.
Ponte A knit fabric where the fibers are looped in an interlock. The material is very strong and firm.
Poplin A strong woven fabric, heavier in weight, with ribbing.
Rayon A manufactured fiber developed originally as an alternative for silk. Rayon drapes well and looks luxurious.
Sateen A cotton fabric with sheen that resembles satin.
Seersucker Slack-tension weave where yarn is bunched together in certain areas and then pulled taut in others to create this summery mainstay.
Shirring Similar to ruching, shirring gathers material to create folds.
Silk One of the most luxurious fabrics, silk is soft, warm and has shine. It is created from the fibers of the silkworm's cocoon.
Silk Shantung A rough plain weave fabric made of uneven yarns to produce a textured effect, made of fibers such silk in which all knots and lumps are retained.
Space dyed Technique of yarn dyeing to produce a multi-color effect on the yarn itself. Also known as dip dyed yarn.
Spandex Also known as Lycra™, this material is able to expand 600% and still snap back to its original shape and form. Spandex fibers are woven with cotton and other materials to make fabrics stretch.
Tipping Similar to edging, tipping includes embellishing a garment at the edges of the piece, hems, collars etc.
Tissue Linen A type of linen that is specifically made for blouses or shirts due to its thinness and sheerness.
Tweed A loose weave of heavy wool makes up tweed, which provides warmth and comfort.
Twill A fabric woven in a diagonal weave. Commonly used for chinos and denim.
Variegated Multi-colored fabrics where colors are splotched or in patches.
Velour A stretchy knit fabric that looks similar to velvet. Very soft to the touch.
Velvet A soft, silky woven fabric that is similar to velour. Velvet is much more expensive than velour due to the amount of thread and steps it takes to manufacture the material.
Velveteen A more modern adaptation of velvet, velveteen is made from cotton and has a little give. Also known as imitation velvet.
Viscose Created from both natural materials and man-made fibers, viscose is soft and supple but can wrinkle easily.
Wale Only found in woven fabrics like corduroy, wale is the long grooves that give the garment its texture.
Windowpane Dark stripes run horizontally and vertically across a light background to mimic window panes.
Woven A woven fabric is formed by interlacing threads, yarns, strands, or strips of some material. | <urn:uuid:04a048d3-152b-45eb-ac6e-e7717919a899> | {
"date": "2013-05-18T06:30:40",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.931941032409668,
"score": 2.53125,
"token_count": 2146,
"url": "http://www.anneklein.com/Matte-Jersey-Sleeveless-Drape-Top/90724376,default,pd.html?variantSizeClass=&variantColor=JJQ47XX&cgid=90316517&pmin=25&pmax=50&prefn1=catalog-id&prefv1=anneklein-catalog"
} |
Dante Alighieri was the greatest Italian poet and one of the most important European writers. Dante lived from 1265 to 1321. He had a very unique way of writing and started his works at the age of 35. Dante wrote La Divina Commedia based on the era he lived through, and all the knowledge of his lifetime was embedded in his works. In this specific work he writes about a journey through which he wishes to better understand the afterlife and receive his salvation. Throughout his journey Virgil was his guide and taught him all about the nine circles of hell and the punishments that the sinners received in each circle. Dante was a very powerful writer, and his writing attaches significant symbols to many different objects. This story is very complex and interesting to read and understand.
After Dante exits hell with his guide Virgil, he arrives in Purgatory. Purgatory is the in-between place where Dante sees sinners being punished; each sin has a different punishment. Purgatory is a place where sinners are temporarily punished in order to purify themselves and be ready for heaven. There these people learn from the mistakes they have made, realize the seriousness of their sins, and prepare to enter heaven/paradise. Virgil guides Dante throughout Purgatory and leads him to paradise, where Beatrice will be there to guide him.
A very important character of the Divine Comedy is Virgil. Virgil is Dante's guide throughout Purgatory. He is a very helpful guide and a poet whom Dante looked up to. Virgil symbolizes human reason and taught Dante everything he knows about the Inferno and Purgatory. Virgil is in the first realm of hell, Limbo, because he is a pagan and was never baptized. Virgil takes Dante through each circle, describing each circle, the reason people are there, and each punishment. Virgil protects Dante from the leaders of each circle as...
"date": "2013-05-18T06:21:03",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.97272789478302,
"score": 3.265625,
"token_count": 420,
"url": "http://www.antiessays.com/free-essays/140615.html"
} |
St. Anne, Patroness of Detroit
St. Anne was named by the Vatican as the patron saint of the Archdiocese of Detroit. We honor the mother of the Blessed Virgin Mary and prayerfully ask for her intercession.
One may pray to any saint for any intention, but a patron saint is seen as the particular advocate for a chosen place or activity.
St. Anne is the mother of the Blessed Virgin Mary. Though she is not mentioned by name in the Bible, we know of her through early Christian writings, the most important of which is the Protoevangelium of James, written in about 150 A.D.
We are told that Anne, the wife of Joachim, was advanced in years before her prayers for a child were answered. An angel appeared and told her she would conceive a child who "shall be spoken of in all the world."
St. Anne's feast day is celebrated on July 26. She is known as the patron saint of equestrians, housewives, women in labor, cabinet-makers, and miners.
Devotion to St. Anne became popular in the Christian East by the fourth century, and that tradition later spread to the Christian West. When the French began to colonize modern-day Quebec, they brought their devotion to St. Anne with them—asking for her protection in the New World.
This devotion was planted on the banks of the Detroit River by the original French-Canadian settlers. Two days after Antoine de la Mothe Cadillac landed with 51 others in what is now downtown Detroit on July 24, 1701, they celebrated Mass and began construction of a church named after Saint Anne.
Today, Ste. Anne de Detroit Church is the second oldest continually operating parish in the United States. As is now recognized by the Holy See, the church of Detroit was placed under St. Anne's protection from its very founding. | <urn:uuid:8e1cef78-a3fc-4eeb-b94f-f8ec251e2e20> | {
"date": "2013-05-18T08:01:57",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9797120094299316,
"score": 2.796875,
"token_count": 386,
"url": "http://www.aod.org/our-archdiocese/history-of-the-archdiocese/patron-saint/"
} |
Demands for lower cost manufacturing, lighter components, and recyclability are forcing manufacturers to switch from metal components to plastic. While the assembly processes are different, some of the same concerns apply. Finding a reliable assembly equipment supplier, defining part requirements, getting the supplier involved early, and choosing the right assembly process are the keys to success.
[Chart: the different characteristics, capabilities, and requirements of a variety of plastic welding processes.]
Appliance components come in all shapes and sizes, and each one has its own unique characteristics that demand an assembly process to fit. Ultrasonic, hot-plate, spin, thermal, laser, and vibration welding are the most common plastic assembly methods. Choosing the correct method can be difficult. A supplier who has technical knowledge of all the processes is the best choice: such a supplier will know the different process joint designs, can provide assistance in material selection, and can offer support once the process is in production.
Defining part requirements before the design of the plastic part is critical. This will save dollars in tooling costs and help assure that the correct process to achieve the requirements is chosen. All too often, the plastic molds are manufactured, the first parts are assembled, and then quality control determines that the parts will not pass a pressure test. This is too late; significant dollars will now need to be spent to correct the problem. Requirements such as a need for pressurization, exposure to extreme cold or heat, cosmetic-part status (requires no blemishes), and parts assembled per minute are all factors in determining the correct process and plastic.
Each process has unique plastic-joint design requirements to assure proper weld strength. Assembly equipment suppliers can help with the weld-area joint design. An example of joint design requirements for ultrasonic assembly is given by the Guide to Ultrasonics from Dukane (St. Charles, IL, U.S.): "Mating surfaces should be in intimate contact around the entire joint. The joint should be in one plane, if possible. A small initial contact area should be established between mating halves. A means of alignment is recommended so that mating halves do not misalign during the weld operation." Obviously, these joint requirements should all be designed into the part prior to machining of the injection molds.
What assembly process is correct for a part? As stated earlier, ultrasonic, hot-plate, spin, thermal, vibration, and laser welding are the most common methods used in production today. Each method has unique advantages.
Ultrasonic assembly is a fast, repeatable, and reliable process that allows for sophisticated process control. High-volume small parts that have very tight assembly tolerances lend themselves well to ultrasonic assembly. Ultrasonic systems have the capability of exporting relevant assembly process data for SPC documentation and FDA validation. Ultrasonic welding can be easily integrated into automated systems.
Hot-plate welding can accommodate a wide range of part sizes and configurations. These machines offer high-reliability hermetic seals and strong mechanical bonds on complex part geometries. The process is fairly simple; the two parts to be joined are brought in close proximity to a heated platen until the joint area is in a molten state. The platen is removed and the parts are clamped together until the joint cools off and returns to a solid state.
Spin welding is a very cost-effective method for joining large, medium, or small circular parts such as washing machine tubs to agitator components. Water purification filters, thermal mugs, and irrigation assemblies typically are joined using the spin welding process. Careful attention to joint design is critical for parts that require flash-free appearance.
Assemblies that require inserts at multiple points on multiple planes, like computer or vacuum cleaner housings, typically benefit from thermal insertion/staking. Thermal staking is ideal for attachment of non-plastic components to the plastic housing, such as circuit boards and metal brackets. Date coding, embossing, and degating are other uses for thermal presses. Thermal welding can be a slower assembly process than ultrasonic, so, depending on the volumes of assemblies required, ultrasonic may be a better choice.
Vibration welding physically moves one of the two parts horizontally under pressure to create heat through surface friction. Compared to ultrasonic welding, vibration welding operates at much lower frequencies, much higher amplitude, and with greater clamping force. The limitation to vibration welding is simply that the joint must be in a single plane in at least one axis in order to allow the vibration motion. Like hot-plate welding, vibration welding is a highly reliable process that can handle large parts in challenging materials or multiple parts per cycle with ease. Chain saw housings, blower and pump assemblies, and large refrigerator bins are examples of potential vibration welding applications. Cycle times for vibration welds are very short, thus they are ideal for high volume and are easily automated.
Laser welding is the newest technology of the processes available today. One benefit of laser welding is that the weld joints produce no flash or particulate outside of the joint. Assemblies that require absolutely no particulate contamination, like medical filters, are good candidates. A second benefit is that the assembly is not exposed to heat or vibration. Devices that have very sensitive internal electronic components that may be damaged by vibration can now be assembled effectively. Laser welding requires the parts to be transmissive and absorptive; what matters is how transparent each part appears to the laser beam. One material transmits the coherent laser light and the other material absorbs the light and converts it to heat. Parts that appear black to the human eye can be transparent or opaque at the wavelength of the laser light. Clear-to-clear joints and joints that are optically transparent can be readily achieved by use of special coatings. Depending on the part geometry, laser welding can be a slower process than vibration or ultrasonic welding.
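The selection logic described above can be summarized as a simple screening step. The sketch below is only an illustration of that idea: the requirement fields, the process names, and every rule and threshold in it are assumptions paraphrased from this article, not any supplier's actual selection software, and a real decision would still be made with the equipment supplier during part design.

```python
# Minimal sketch of a first-pass screen for plastic welding processes.
# All rules and thresholds are illustrative paraphrases of the article text.
from dataclasses import dataclass


@dataclass
class PartRequirements:
    largest_dimension_mm: float   # rough overall part size
    circular_joint: bool          # joint is circular (spin welding candidate)
    joint_single_plane: bool      # joint lies in one plane (needed for vibration welding)
    hermetic_seal: bool           # pressure-tight seal required
    vibration_sensitive: bool     # delicate internals that vibration could damage
    particulate_sensitive: bool   # no flash or particulate allowed (e.g., medical filters)


def candidate_processes(req: PartRequirements) -> list[str]:
    candidates = []
    # Ultrasonic: fast and repeatable, suited to smaller high-volume parts.
    if req.largest_dimension_mm <= 250 and not req.vibration_sensitive:
        candidates.append("ultrasonic")
    # Hot plate: wide size range, reliable hermetic seals on complex geometries.
    if req.hermetic_seal or req.largest_dimension_mm > 250:
        candidates.append("hot plate")
    # Spin welding: cost-effective for circular joints of any size.
    if req.circular_joint:
        candidates.append("spin")
    # Vibration: short cycles on large parts, but the joint must sit in one plane.
    if req.joint_single_plane and not req.vibration_sensitive:
        candidates.append("vibration")
    # Laser: no flash, no heat or vibration load on internals, but often slower,
    # so flag it when cleanliness or delicate internals demand it.
    if req.particulate_sensitive or req.vibration_sensitive:
        candidates.append("laser")
    # Thermal staking targets multi-point inserts rather than seam welds,
    # so it is not screened here.
    return candidates


if __name__ == "__main__":
    filter_housing = PartRequirements(
        largest_dimension_mm=80,
        circular_joint=True,
        joint_single_plane=True,
        hermetic_seal=True,
        vibration_sensitive=False,
        particulate_sensitive=True,
    )
    # Prints ['ultrasonic', 'hot plate', 'spin', 'vibration', 'laser'];
    # the final choice still depends on joint design, resin, and supplier trials.
    print(candidate_processes(filter_housing))
```

A screen like this only narrows the field; as the article stresses, joint design, resin choice, and production volume must be settled with the equipment supplier before the injection molds are cut.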
Plastic appliance components are the direction of the future: they can be assembled economically and produce functional products.
This information is provided by Michael Johnston, national sales and marketing manager, Dukane (St. Charles, IL, U.S.).
"date": "2013-05-18T08:08:06",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9172310829162598,
"score": 2.640625,
"token_count": 1282,
"url": "http://www.appliancemagazine.com/editorial.php?article=235&zone=1&first=1"
} |
| ||How to Identify and Control Water Weeds and Algae|
"How to Identify and Control Water Weeds and Algae" is the complete guide to identification, treatment options and measuring lakes and ponds to apply product to common aquatic plants. Identifying the type or species of plant(s) to be controlled is the first step in implementing a management strategy. This book is intended for government, commercial and private concerns responsible for maintaining the recreational, aesthetic and functional value of water resources. Recommendations are based upon decades of research in aquatic plant control and water management by industry, universities and government agencies.
Literature files are in PDF file format and require a PDF reader to view. Need to find a hard copy of this book? Please contact us. | <urn:uuid:95cb4cb4-f885-463c-98a0-ad7ebe6588d9> | {
"date": "2013-05-18T05:29:47",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9024631381034851,
"score": 2.75,
"token_count": 155,
"url": "http://www.appliedbiochemists.com/weedbook.htm"
} |
On Friday, the Department of Health
and Human Services (HHS) released a new report identifying interventions that can help increase physical activity
in youth aged 3-17 years across a variety of settings. The primary audiences
for the report are policymakers, health care providers, and public health
professionals. APTA submitted comments in December 2012 on the draft report.
Physical Activity Guidelines for
Americans Midcourse Report: Strategies to Increase Physical Activity Among
Youth summarizes intervention strategies based on the evidence from
literature reviews and is organized into 5 settings where youth live, learn,
and play: school, preschool and childcare, community, family and home, and
primary health care.
Key findings of the report suggest that:
Other materials released by HHS include an infographic
highlighting opportunities to increase physical activity throughout the day and
a youth fact sheet summarizing the report's
recommendations for youth aged 6-17 years. More information can be found at www.health.gov/paguidelines/midcourse/.
APTA has long supported HHS' efforts to increase awareness about the
benefits of physical activity. It provided input on the 2008
Physical Activity Guidelines for Americans related to the importance of considering physical activity needs
and barriers for people with disabilities. It also served on the Physical
Activity Guidelines Reaction Group. The association also contributes to the Be Active Your Way Blog.
Physical therapists (PTs) and physical therapist assistants (PTAs), especially those who
have patients with wounds, are encouraged to take steps to protect their most
vulnerable patients from carbapenem-resistant Enterobacteriaceae (CRE), a
family of germs that have become difficult to treat because they have high
levels of resistance to antibiotics. In addition to patients at high risks, PTs
and PTAs should take all necessary precautions to prevent the spread of CRE to
According to the Centers for Disease Control and Prevention
(CDC), CRE are resistant to all, or nearly all, antibiotics—even the most
powerful drugs of last-resort. CRE also have high mortality rates, killing 1 in
2 patients who get bloodstream infections from them. Additionally, CRE easily
transfer their antibiotic resistance to other bacteria. For example,
carbapenem-resistant Klebsiella can spread its drug-destroying properties to a
normal E. coli bacteria, which makes the E. coli resistant to
antibiotics also. "That could create a nightmare scenario since E. coli
is the most common cause of urinary tract infections in healthy people,"
CRE are usually transmitted
person-to-person, often on the hands of health care workers. Currently,
almost all CRE infections occur in people receiving significant medical
care. However, their ability to spread and their resistance raises the
concern that potentially untreatable infections could appear in otherwise
healthy people, including health care providers.
CDC's website includes resources for patients, providers, and
facilities. The agency's CRE prevention toolkit has in-depth recommendations to
control CRE transmission in hospitals, long-term acute care facilities, and other care settings.
APTA is in the process of updating its Infectious Disease Control webpage to ensure that
PTs and PTAs have the information they need to understand their critical role
in helping to halt the spread of CRE. Look for a follow-up article in News
Now when the webpage is launched.
APTA has selected 9 association members to serve on the PTA Education Feasibility
Study Work Group: Wendy Bircher, PT, EdD (NM), Derek Brandes (WA), Barbara
Carter, PTA (WI), Martha Hinman, PT, EdD (TX), Mary Lou Romanello, PT, PhD, ATC
(MD), Steven Skinner, PT, EdD (NY), Lisa Stejskal, PTA, MAEd (IL), Jennifer
Whitney, PT, DPT, KEMG (CA), and Geneva Johnson, PT, PhD, FAPTA (LA). The work group is addressing the motion Feasibility Study
for Transitioning to an Entry-Level Baccalaureate Physical Therapist Assistant
Degree (RC 20-12) from the 2012 House of Delegates. The work group will address
the first phase of the study, finalizing the study plan and identifying
relevant data sources for exploring the feasibility of transitioning the
entry-level degree for the PTA to a bachelor's degree.
APTA supporting staff members
are Janet Crosier, PT, DPT, MEd, lead PTA services specialist; Janet Bezner,
PT, PhD, vice president of education and governance and administration; Doug
Clarke, accreditation PTA programs manager; and Libby Ross, director of
More than 200 individuals volunteered to serve on the work group by submitting their
names to the Volunteer Interest Pool (VIP). APTA expects to engage additional
members in the data collection process. | <urn:uuid:984179e4-db53-480f-9340-d6d19260bb41> | {
"date": "2013-05-18T08:03:54",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9075648188591003,
"score": 3.328125,
"token_count": 1025,
"url": "http://www.apta.org/PTinMotion/NewsNow/2012/8/28/MidlifeFitnessStudy/?blogmonth=3&blogday=11&blogyear=2013&blogid=10737418615"
} |
Overview of content related to 'java'
This page provides an overview of 1 article related to 'perl'.
Perl is a high-level, general-purpose, interpreted, dynamic programming language. Perl was originally developed by Larry Wall in 1987 as a general-purpose Unix scripting language to make report processing easier. Since then, it has undergone many changes and revisions and become widely popular amongst programmers. Larry Wall continues to oversee development of the core language, and its upcoming version, Perl 6. Perl borrows features from other programming languages including C, shell scripting (sh), AWK, and sed. The language provides powerful text processing facilities without the arbitrary data length limits of many contemporary Unix tools, facilitating easy manipulation of text files. Perl gained widespread popularity in the late 1990s as a CGI scripting language, in part due to its parsing abilities. (Excerpt from Wikipedia article: Perl)
See our 'perl' overview for more data and comparisons with other tags. For visualisations of metadata related to timelines, bands of recency, top authors, and overall distribution of authors using this term, see our 'perl' usage charts.
Ariadne contributors most frequently referring to 'perl': | <urn:uuid:3467d50d-60b1-442f-8fbb-77a31bf0642b> | {
"date": "2013-05-18T04:58:36",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.886810302734375,
"score": 2.8125,
"token_count": 282,
"url": "http://www.ariadne.ac.uk/category/buzz/java?article-type=&term=&organisation=&project=&author=clare%20mcclean"
} |
Outmaneuvering Foodborne Pathogens
At various locations, ARS scientists are doing research to make leafy greens and other fresh produce safer for consumers. Produce and leafy greens in the photo are (clockwise from top): romaine lettuce, cabbage, cilantro in a bed of broccoli sprouts, spinach and other leafy greens, green onions, tomatoes, and green leaf lettuce.
If pathogens like E. coli O157:H7 or Salmonella had a motto for survival, it might be: “Find! Bind! Multiply!”
That pretty much sums up what these food-poisoning bacteria do in nature, moving through our environment to find a host they can bind to and use as a staging area for multiplying and spreading.
But ARS food-safety scientists in California are determined to find out how to stop these and other foodborne pathogenic bacteria in their tracks, before the microbes can make their way to leafy greens and other favorite salad ingredients like tomatoes and sprouts.
The research is needed to help prevent the pathogens from turning up in fresh produce that we typically eat uncooked. That’s according to Robert E. Mandrell, who leads the ARS Produce Safety and Microbiology Research Unit. His team is based at the agency’s Western Regional Research Center in Albany, California.
The team is pulling apart the lives of these microbes to uncover the secrets of their success. It’s a complex challenge, in part because the microbes seem to effortlessly switch from one persona to the next. They are perhaps best known as residents of the intestines of warm-blooded animals, including humans. For another role, the pathogens have somehow learned to find, bind, and multiply in the world of green plants.
Sometimes the pathogenic microbes need the help of other microbial species to make the jump from animal inhabitant to plant resident. Surprisingly little is known about these powerful partnerships, Mandrell says. That’s why such alliances among microbes are one of several specific aspects of the pathogens’ lifestyles that the Albany scientists are investigating. In all, knowledge gleaned from these and other laboratory, greenhouse, and outdoor studies should lead to new, effective, environmentally friendly ways to thwart the pathogens before they have a chance to make us ill.
In a greenhouse, microbiologist Maria Brandl examines cilantro that she uses as a model plant to investigate the behavior of foodborne pathogens on leaf surfaces.
A Pathogen Targets Youngest Leaves
Knowing pathogens’ preferences is essential to any well-planned counter-attack. So microbiologist Maria T. Brandl is scrutinizing the little-understood ability of E. coli O157:H7 and Salmonella enterica to contaminate the elongated, slightly sweet leaves of romaine lettuce. With a University of California-Berkeley colleague, Brandl has shown that, if given a choice, E. coli has a strong preference for the young, inner leaves. The researchers exposed romaine lettuce leaves to E. coli and found that the microbe multiplied about 10 times more on the young leaves than on the older, middle ones. One explanation: The young leaves are a better nutrition “buy” for E. coli. “These leaves exude about three times more nitrogen and about one-and-one-half times more carbon than do the middle leaves,” says Brandl.
Scientists have known for decades that plants exude compounds from their leaves and roots that bacteria and fungi can use as food. But the romaine lettuce study, published earlier this year in Applied and Environmental Microbiology, is the first to document the different exudate levels among leaves of the two age classes. It’s also the first to show that E. coli can do more than just bind to lettuce leaves: It can multiply and spread on them.
Research assistant Danielle Goudeau inoculates a lettuce leaf with E. coli O157:H7 in a biological safety cabinet to study the biology of the human pathogen on leafy greens.
Adding nitrogen to the middle leaves boosted E. coli growth, Brandl found. “In view of the key role of nitrogen in helping E. coli multiply on young leaves,” she says, “a strategy that minimizes use of nitrogen fertilizer in romaine lettuce fields may be worth investigating.”
In other studies using romaine lettuce and the popular herb cilantro as models, Brandl documented the extent to which E. coli and Salmonella are aided by Erwinia chrysanthemi, an organism that causes fresh produce to rot.
“When compared to plant pathogens, E. coli and Salmonella are not as ‘fit’ on plants,” Brandl says. But the presence of the rot-producing microbe helped E. coli and Salmonella grow on lettuce and cilantro leaves.
“Soft rot promoted formation of large aggregates, called ‘biofilms,’ of E. coli and Salmonella and increased their numbers by up to 100-fold,” she notes.
The study uncovered new details about genes that the food-poisoning pathogens kick into action when teamed up with plant pathogens such as soft rot microbes.
Brandl, in collaboration with Albany microbiologist Craig Parker, used a technique known as “microarray analysis” to spy on the genes. “The assays showed that Salmonella cells—living in soft rot lesions on lettuce and cilantro—had turned on some of the exact same genes that Salmonella uses when it infects humans or colonizes the intestines of animals,” she says. Some of these activated genes were ones that Salmonella uses to get energy from several natural compounds common to both green plants and to the animal intestines that Salmonella calls home.
Using a confocal laser scanning microscope, microbiologist Maria Brandl examines a mixed biofilm of Salmonella enterica (pink) and Erwinia chrysanthemi (green) in soft rot lesions on cilantro leaves (blue).
A One-Two Punch to Tomatoes
Salmonella also benefits from the presence of another plant pathogen, specifically, Xanthomonas campestris, the culprit in a disease known as “bacterial leaf spot of tomato.” But the relationship between Salmonella and X. campestris may be different than the relation of Salmonella to the soft rot pathogen. Notably, Salmonella benefits even if the bacterial spot pathogen is at very low levels—so low that the plant doesn’t have the disease or any visible symptoms of it.
That’s among the first-of-a-kind findings that microbiologist Jeri D. Barak found in her tests with tomato seeds exposed to the bacterial spot microbe and then planted in soil that had been irrigated with water contaminated with S. enterica.
In a recent article in PLoS ONE, Barak reported that S. enterica populations were significantly higher in tomato plants that had also been colonized by X. campestris. In some cases, Salmonella couldn’t bind to and grow on—or in—tomato plants without the presence of X. campestris, she found.
Listeria monocytogenes on this broccoli sprout shows up as green fluorescence. The bacteria are mainly associated with the root hairs.
“We think that X. campestris may disable the plant immune response—a feat that allows both it and Salmonella to multiply,” she says.
The study was the first to report that even as long as 6 weeks after soil was flooded with Salmonella-contaminated water, the microbe was capable of binding to tomato seeds planted in the tainted soil and, later, of spreading to the plant.
“These results suggest that any contamination that introduces Salmonella from any source into the environment—whether that source is irrigation water, improperly composted manure, or even insects—could lead to subsequent crop contamination,” Barak says. “That’s true even if substantial time has passed since the soil was first contaminated.”
Crop debris can also serve as a reservoir of viable Salmonella for at least a week, Barak’s study showed. For her investigation, the debris was composed of mulched, Salmonella-contaminated tomato plants mixed with uncontaminated soil.
“Replanting fields shortly after harvesting the previous crop is a common practice in farming of lettuce and tomatoes,” she says. The schedule allows only a very short time for crop debris to decompose. “Our results suggest that fields known to have been contaminated with S. enterica could benefit from an extended fallow period, perhaps of at least a few weeks.”
Ordinary Microbe Foils E. coli
While the bacterial spot and soft rot microbes make life easier for certain foodborne pathogens, other microbes may make the pathogens’ existence more difficult. Geneticist Michael B. Cooley and microbiologist William G. Miller at Albany have shown the remarkable effects of one such microbe, Enterobacter asburiae. This common, farm-and-garden-friendly microorganism lives peaceably on beans, cotton, and cucumbers.
In one experiment, E. asburiae significantly reduced levels of E. coli and Salmonella when all three species of microbes were inoculated on seeds of thale cress, a small plant often chosen for laboratory tests.
The study, published in Applied and Environmental Microbiology in 2003, led to followup experiments with green leaf lettuce. In that battle of the microbes, another rather ordinary bacterium, Wausteria paucula, turned out to be E. coli’s new best friend, enhancing the pathogen’s survival sixfold on lettuce leaves.
“It was the first clear example of a microbe’s supporting a human pathogen on a plant,” notes Cooley, who documented the findings in the Journal of Food Protection in 2006.
But E. asburiae more than evened the score, decreasing E. coli survival 20- to 30-fold on lettuce leaves exposed to those two species of microbes.
The mechanisms underlying the competition between E. asburiae and E. coli are still a mystery, says Cooley, “especially the competition that takes place on leaves or other plant surfaces.”
Nevertheless, E. asburiae shows initial promise of becoming a notable biological control agent to protect fresh salad greens or other crops from pathogen invaders. With further work, the approach could become one of several science-based solutions that will help keep our salads safe.—By Marcia Wood, Agricultural Research Service Information Staff.
This research is part of Food Safety, an ARS national program (#108) described on the World Wide Web at www.nps.ars.usda.gov.
To reach scientists mentioned in this article, contact Marcia Wood, USDA-ARS Information Staff, 5601 Sunnyside Ave., Beltsville, MD 20705-5129; phone (301) 504-1662, fax (301) 504-1486.
Listeria monocytogenes on this radish sprout shows up as green fluorescence. The bacteria are mainly associated with the root hairs.
What Genes Help Microbes Invade Leafy Greens?
When unwanted microbes form an attachment, the consequences—for us—can be serious.
That’s if the microbes happen to be human pathogens like Listeria monocytogenes or Salmonella enterica and if the target of their attentions happens to be fresh vegetables often served raw, such as cabbage or the sprouted seeds of alfalfa.
Scientists don’t yet fully understand how the malevolent microbes form colonies that cling stubbornly to and spread across plant surfaces, such as the bumpy leaves of a cabbage or the ultra-fine root hairs of a tender alfalfa sprout.
But food safety researchers at the ARS Western Regional Research Center in Albany, California, are putting together pieces of the pathogen puzzle.
A 1981 food-poisoning incident in Canada, caused by L. monocytogenes in coleslaw, led microbiologist Lisa A. Gorski to study the microbe’s interactions with cabbage. Gorski, with the center’s Produce Safety and Microbiology Research Unit, used advanced techniques not widely available at the time of the cabbage contamination.
“Very little is known about interactions between Listeria and plants,” says Gorski, whose study revealed the genes that Listeria uses during a successful cabbage-patch invasion.
The result was the first-ever documentation of Listeria genes in action on cabbage leaves. Gorski, along with coinvestigator Jeffrey D. Palumbo—now with the center’s Plant Mycotoxin Research Unit—and others, documented the investigation in a 2005 article in Applied and Environmental Microbiology.
Listeria, Behaving Badly
“People had looked at genes that Listeria turns on, or ‘expresses,’ when it’s grown on agar gel in a laboratory,” says Gorski. “But no one had looked at genes that Listeria expresses when it grows on a vegetable.
“We were surprised to find that when invading cabbage, Listeria calls into play some of the same genes routinely used by microbes that are conventionally associated with plants. Listeria is usually thought of as a pathogen of humans. We hadn’t really expected to see it behaving like a traditional, benign inhabitant of a green plant.
“It’s still a relatively new face for Listeria, and requires a whole new way of thinking about it.”
In related work, Gorski is homing in on genetic differences that may explain the widely varying ability of eight different Listeria strains to successfully colonize root hairs of alfalfa sprouts—and to resist being washed off by water.
In a 2004 article in the Journal of Food Protection, Gorski, Palumbo, and former Albany associate Kimanh D. Nguyen reported those differences. Poorly attaching strains formed fewer than 10 Listeria cells per sprout during the lab experiment, while the more adept colonizers formed more than 100,000 cells per sprout.
Salmonella’s Cling Genes
Colleague Jeri D. Barak, a microbiologist at Albany, led another sprout investigation, this time probing the ability of S. enterica to attach to alfalfa sprouts. From a pool of 6,000 genetically different Salmonella samples, Barak, Gorski, and coinvestigators found 20 that were unable to attach strongly to sprouts.
Scientists elsewhere had already identified some genes as necessary for Salmonella to successfully invade and attach to the guts of animals such as cows and chickens. In the Albany experiments, some of those same genes were disrupted in the Salmonella specimens that couldn’t cling to alfalfa sprouts.
Their 2005 article in Applied and Environmental Microbiology helped set the stage for followup studies to tease out other genes that Salmonella uses when it is living on and in plants.
A deeper understanding of those and other genes may lead to sophisticated defense strategies to protect tomorrow’s salad greens—and us.—By Marcia Wood, Agricultural Research Service Information Staff.
Geneticist Michael Cooley collects a sediment sample to test for E. coli O157:H7. The pathogen was found near fields implicated in the 2006 outbreak of E. coli O157:H7 on baby spinach.
Environmental Surveillance Exposes a Killer
It started as a manhunt for a microbe, but it became one of the nation’s most intensive farmscape searches for the rogue pathogen E. coli O157:H7.
ARS microbiologist Robert E. Mandrell and geneticist Michael B. Cooley of the Produce Safety and Microbiology Research Unit in Albany, California, had already been collaborating in their own small-scale study of potential sources of E. coli O157:H7 in the state’s produce-rich Salinas Valley when, in 2005, they were asked to join another one. The new investigation became a 19-month surveillance—by the two scientists and other federal and state experts—of E. coli in Salinas Valley watersheds.
“It may seem like an obvious concept today,” says Mandrell, “but at the time, there was little proof that E. coli contamination of produce before harvest could be a major cause of food-poisoning outbreaks.”
Mandrell and Cooley aided the California Food Emergency Response Team, as this food-detective squad was named, in tracing movement of E. coli through the fertile valley. This surveillance showed that E. coli O157:H7 can travel long distances in streamwater and floodwater.
In 2006, E. coli O157:H7 strains indistinguishable from those causing human illness associated with baby spinach were discovered in environmental samples—including water—taken from a Salinas Valley ranch.
Wild pigs were added to the list of animal carriers of the pathogen when one of the so-called “outbreak strains” of E. coli O157:H7 was discovered in their dung. The team documented its work in 2007 in PLoS ONE and Emerging Infectious Diseases.
The Albany scientists used a relatively new technique to detect E. coli O157:H7 in water. Developed at the ARS Meat Animal Research Center in Clay Center, Nebraska, for animal hides, the method was adapted by the Albany team for the outdoor reconnaissance.
Because of their colleagues’ work, says Cooley, “We had the right method at the right time.”—By Marcia Wood, Agricultural Research Service Information Staff.
"Outmaneuvering Foodborne Pathogens" was published in the July 2008 issue of Agricultural Research magazine. | <urn:uuid:b366eb1a-184b-4885-b5a6-92a6d0c0f67f> | {
"date": "2013-05-18T05:53:05",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9299995303153992,
"score": 3.078125,
"token_count": 3794,
"url": "http://www.ars.usda.gov/is/AR/archive/jul08/pathogen0708.htm"
} |
PROTECTION OF SUBTROPICAL AND TROPICAL AGRICULTURE COMMODITIES AND ORNAMENTALS FROM EXOTIC INSECTS
Location: Subtropical Horticulture Research
Title: Laurel wilt: A global threat to avocado production
Authors: Ploetz, R.; Smith, J.; Inch, S.; Pena, J.; Evans, E.; Crane, J.; Hulcr, J.; Stelinski, L.; Schnell, R.
Submitted to: World Avocado Congress
Publication Type: Proceedings
Publication Acceptance Date: November 28, 2011
Publication Date: February 1, 2012
Citation: Ploetz, R.C., Smith, J.A., Inch, S.A., Pena, J.E., Evans, E.A., Crane, J.H., Kendra, P.E., Hulcr, J., Stelinski, L., Schnell, R. 2012. Laurel wilt: A global threat to avocado production. World Avocado Congress. 186-197 In Proceedings VII World Avocado Congress. 5-9 September 2011, Cairns, Australia.
Interpretive Summary: Laurel wilt is a lethal vascular disease of trees in the plant family Lauraceae, including avocado. It is caused by a fungal pathogen (Raffaelea lauricola) that is introduced into host trees by an exotic wood-boring beetle, the redbay ambrosia beetle (Xyleborus glabratus). The beetle was first detected in Georgia in 2002, and since has spread to six states in the southeastern U.S. Laurel wilt poses an imminent threat to commercial avocado production in south Florida, and a future threat to avocado in California, Mexico, Central and South America. Scientists at the USDA-ARS Subtropical Horticulture Research Station, in collaboration with the University of Florida, are conducting multidisciplinary research on the pest complex, including (1) evaluation of fungicides for laurel wilt, (2) screening for disease resistant avocado varieties, (3) determination of pathways for disease transmission, (4) identification of beetle attractants, repellents, and insecticides, and (5) assessment of host preferences. Information from these studies will be used by avocado growers and by state and federal action agencies engaged in monitoring programs for redbay ambrosia beetle.
Laurel wilt kills members of the Lauraceae plant family, including avocado. The disease has invaded much of the southeastern USA, and threatens avocado commerce and homeowner production in Florida, valuable germplasm in Miami (USDA-ARS), and major production and germplasm in California and Mesoamerica. Laurel wilt is caused by a recently described fungus, Raffaelea lauricola, which is vectored by an invasive ambrosia beetle, Xyleborus glabratus. Current research topics include: disease management with fungicides; identifying host resistance; vector mitigation with insecticides and repellents; host ranges of, and interactions with, the pathogen and vector; and transmission of R. lauricola via avocado seed, scion material, root grafts and pruning tools. Although highly resistant avocado cultivars have not been identified, screening work continues on additional cultivars and new germplasm. Effective fungicides (e.g. triazoles) have been identified, but cost-effective disease management will depend on improved measures for xylem loading and retention of these chemicals. Insecticides have been identified that reduce boring activity of X. glabratus and its attraction to avocado and other hosts, but much remains to be learned about their impact on disease management. Although the disease's host range is generally restricted to American members of the Lauraceae, nonhosts that attract the beetle are known. Raffaelea lauricola rapidly colonizes avocado after infection, but to low levels; tylose and gel induction in the host, rather than xylem obstruction by fungal biomass, are associated with impeded water transport and symptom development. Seed and fruit from laurel wilt-affected avocado trees do not appear to be infected by R. lauricola.
"date": "2013-05-18T08:03:11",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8892325162887573,
"score": 3,
"token_count": 905,
"url": "http://www.ars.usda.gov/research/publications/publications.htm?seq_no_115=280283"
} |
From The Art and Popular Culture Encyclopedia
The fact-value distinction is a concept used to distinguish between arguments that can be claimed through reason alone and those in which rationality is limited to describing a collective opinion. In another formulation, it is the distinction between what is (can be discovered by science, philosophy or reason) and what ought to be (a judgment which can be agreed upon by consensus). The terms positive and normative represent another manner of expressing this, as do the terms descriptive and prescriptive, respectively. Positive statements make the implicit claim to facts (e.g. water molecules are made up of two hydrogen atoms and one oxygen atom), whereas normative statements make a claim to values or to norms (e.g. water ought to be protected from environmental pollution). | <urn:uuid:b5c9bea2-4439-4246-a5bd-da1fb437c875> | {
"date": "2013-05-18T06:50:08",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9296513199806213,
"score": 3.140625,
"token_count": 155,
"url": "http://www.artandpopularculture.com/Fact-value_distinction"
} |
The Nature Elephant
The Karen people have always lived naturally in the forest, and, for many generations have relied on elephants to help them.
Because elephants are ideal for carrying heavy loads they are essential for transportation through rural areas, and, more recently, for carrying tourists. The Karen people simply would not survive without them. The Karen people have always used elephants to help carry them through dense parts of the jungle which would be difficult on foot, such as down steep hills to fetch water from the creek, or carrying heavy bags of rice from the fields to the barn. What is little effort for an elephant would be a huge amount of labour for humans. Because they are so important to the Karen people, elephants are their friends, and are treated with respect.
To manage an elephant and gain its trust requires knowledge, love and understanding. This is why the Karen people look after their elephants so well, and only certain members of the Karen family are trained enough to do this. Some of them call elephant-care a kind of black magic, and this black magic is passed down through families.
Part of the skill of caring for elephants is to ensure the elephant is listened to. Karen legend has it that if a female elephant is ignored, it is likely that her eggs will become infected, and therefore she will not be able to continue the elephant family. This serious consequence acts as a grave warning to those handling elephants. A sense of duty, honor and patience are as important to the elephant as they are to the Karen people as a whole.
The legend of Chang Karen
This is a story about how elephants became so important in the life of the Karen hill tribe. The legend goes that once upon a time, there were two brothers living in the forest. One day, their mother needed to leave home on business, so she instructed the two boys to look after the house, be good, and by no means split open the bamboo tree, as it contained many flies. Being the mischievous boys they were, as soon as their mother was out of sight, they crept up to the forbidden bamboo and cracked it open, curious to see what would happen.
Immediately, the room was filled with flies, two of which flew up into each of the boys' noses. Panicking, the boys didn't know what to do. Soon, they felt their bodies changing. Their legs began to itch, and grow longer and wider. Their heads began to swell, until they felt the size and shape of footballs. Their noses grew longer and their bodies became heavier and more clumsy.
When their mother returned home, she was shocked to see what had happened to her sons. She offered them cooked rice, but they turned it down with a slow shake of their large heads, their noses swinging from side to side. They were still growing, and were too ashamed of their bad behaviour to eat. The mother offered them water, but they did not want to drink it. Soon, when the sons had grown too big for the house, and could now only walk on four legs, they left the house to find grass. This was all they felt like eating.
Very soon the word spread, and people came from all over the valley to see the mutated boys. Their tongues had become too big for them to speak, so the sons had stopped talking. As if to compensate, their ears grew large so they could hear very, very well. They had become elephant-boys.
One day, some workers came to see if the elephant-boys could help them carry heavy loads. They gave them wood and led them to their workshops, and the elephant-boys were calm and obedient. The workers realised that what was a huge job for them was little effort for these giant elephant boys. And life continued this way for many generations. This is the remarkable story of how elephants and humans came to work together in harmony, explaining how they can exist together in the forest.
Elephants and the Karen Hill Tribe people
Deep in the rich forests of northern Thailand, in the bowl of a green valley, lies the Karen hill tribe community. Making the most of their natural surroundings, this tribe has managed to forge an incredibly simple life in the forest using no modern machinery or medicine. They need only the trees, plants, animals, and are especially reliant on the mighty elephant.
The Karen people have a strong bond with elephants: their self-sufficient lifestyles are surprisingly similar, and intertwining. Wild elephants play a very important role in the Karen way of life, as well as the relationships of valley inhabitants, and the magic of the valley. | <urn:uuid:e54319d9-029a-48b0-8d3d-42adb0690b50> | {
"date": "2013-05-18T07:20:06",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9856395721435547,
"score": 3.046875,
"token_count": 937,
"url": "http://www.asianelephantsafari.com/news.php"
} |
Why Man U?
Visiting Manchester the other day, I was driving down a nondescript road past dreary shops and offices when I saw the top of a sports stadium poking into the gray sky. It was Old Trafford. Team buses carrying soccer players from more glamorous cities such as Barcelona have been known to echo with cries of disgust as they pull in here.
The home of Manchester United is rainy and underwhelming. The estimated 333 million humans who consider themselves United fans don’t all know that Manchester is a city in England, but many of those who do would probably be surprised to find just how mid-ranking a city it is. Yet when United’s American ruling family, the Glazers, sold club shares in August, United was valued at $2.3 billion. That made it the world’s most valuable sports franchise, ahead of Real Madrid and baseball’s New York Yankees, according to Forbes. In short, United is bigger than Manchester. So why on earth did this global behemoth arise precisely here? And how, in the last 134 years, has United shaped soccer, in England and now the world?
When a soccer club was created just by the newish railway line in 1878, the Manchester location actually helped. The city was then growing like no other on earth. In 1800 it had been a tranquil little place of 84,000 inhabitants, so insignificant that as late as 1832 it didn’t have a member of parliament. The Industrial Revolution changed everything. Workers poured in from English villages, from Ireland, from feeble economies everywhere (my own great-grandparents arrived on the boat from Lithuania). By 1900, Manchester was Europe’s sixth-biggest city, with 1.25 million inhabitants.
The club by the railway line was initially called Newton Heath, because the players worked at the Newton Heath carriage works of the Lancashire and Yorkshire Railway Company. They played in work clogs against other work teams. Jim White’s Manchester United: The Biography nicely describes the L&YR workers as “sucked in from all over the country to service the growing need for locomotives and carriages.” Life in Manchester then was neither fun nor healthy, White writes. In some neighborhoods, average male life expectancy was just 17. This was still the same brutal city where a few decades before, Karl Marx’s pal Friedrich Engels had run his father’s factory. The conditions of the industrial city were so awful it inspired communism. (My own great-grandparents lost two of their children to scarlet fever in Manchester before moving on to much healthier southern Africa.)
Inevitably, most of these desperate early Mancunians were rootless migrants. Unmoored in their new home, many embraced the local soccer clubs. Gathering together at Old Trafford must have given these people something of the sense of community that they had previously known in their villages. That’s how the world’s first great industrial city engendered the world’s greatest soccer brand. | <urn:uuid:223e0d4e-5454-48af-94fe-79e339cdb888> | {
"date": "2013-05-18T05:14:45",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9740930199623108,
"score": 2.609375,
"token_count": 667,
"url": "http://www.askmen.com/sports/fanatic/why-man-u.html"
} |
The CIA, the NSA, the FBI and all other three-letter, intelligence-gathering, secret-keeping agencies mimic and are modeled after secret societies. They gather and filter information by compartmentalizing the organization in a pyramid-like hierarchical structure keeping everyone but the elite on a need-to-know basis. The CIA was born from the WWII intelligence arm, the OSS (Office of Strategic Services), and was funded into permanence by the Rockefeller and Carnegie Foundations, which donated $34 million 1945-48 alone. Nearly every person instrumental in the creation of the CIA was already a member of the CFR, including the Rockefellers and Dulles brothers.
In 1945 when the CIA was still the OSS, they began Operation Paperclip which brought over 700 Nazi scientists directly into the forming CIA, NSA, and other high-level government organizations. Since it was illegal to even allow these Nazis into the US, let alone into top-secret government agencies, the CIA convinced the Vatican to issue American passports for these 700+ Nazi scientists under the pretense that it was to keep them out of the hands of the Russians.
“After WWII ended in 1945, victorious Russian and American intelligence teams began a treasure hunt throughout occupied Germany for military and scientific booty. They were looking for things like new rocket and aircraft designs, medicines, and electronics. But they were also hunting down the most precious ‘spoils’ of all: the scientists whose work had nearly won the war for Germany. The engineers and intelligence officers of the Nazi War Machine. Following the discovery of flying discs (foo-fighters), particle/laser beam weaponry in German military bases, the War Department decided that NASA and the CIA must control this technology, and the Nazi engineers that had worked on this technology. There was only one problem: it was illegal. U.S. law explicitly prohibited Nazi officials from immigrating to America--and as many as three-quarters of the scientists in question had been committed Nazis.” -Operation Paperclip Casefile: New World Order and Nazi Germany
Hundreds of Nazi mind-control specialists and doctors who performed horrific experiments on prisoners instantly had their atrocious German histories erased and were promoted into high-level American jobs. Kurt Blome, for instance, was a high-ranking Nazi scientist who experimented with plague vaccines on concentration camp prisoners. He was hired by the U.S. Army Chemical Corps to work on chemical warfare projects. Major General Walter Schreiber was a head doctor during Nazi concentration camp prisoner experiments in which they starved, and otherwise tortured the inmates. He was hired by the Air Force School of Medicine in Texas. Werner Von Braun was technical director of the Nazi Peenemunde Rocket Research Center, where the Germans developed the V2 rocket. He was hired by the U.S. Army to develop guided missiles and then made the first director of NASA!
“Military Intelligence ‘cleansed’ the files of Nazi references. By 1955, more than 760 German scientists had been granted citizenship in the U.S. and given prominent positions in the American scientific community. Many had been longtime members of the Nazi party and the Gestapo, had conducted experiments on humans at concentration camps, had used slave labor, and had committed other war crimes. In a 1985 expose in the Bulletin of the Atomic Scientists Linda Hunt wrote that she had examined more than 130 reports on Project Paperclip subjects - and every one ‘had been changed to eliminate the security threat classification.’ A good example of how these dossiers were changed is the case of Werner von Braun. A September 18, 1947, report on the German rocket scientist stated, ‘Subject is regarded as a potential security threat by the Military Governor.’ The following February, a new security evaluation of Von Braun said, ‘No derogatory information is available on the subject … It is the opinion of the Military Governor that he may not constitute a security threat to the United States.’” -Operation Paperclip Casefile: New World Order and Nazi Germany
Shortly after Operation Paperclip came Operation Mockingbird, during which the CIA trained reporters and created media outlets to disseminate their propaganda. One of Project Mockingbird’s lead roles was played by Philip Graham who would become publisher of The Washington Post. Declassified documents admit that over 25 organizations and 400 journalists became CIA assets which now include major names like ABC, NBC, CBS, AP, Reuters, Time, Newsweek and more.
In 1953 the Iranian coup classified as Operation AJAX was the CIA’s first successful overthrow of a foreign government. In 1951 Iran Parliament and Prime Minister Dr. Mohammed Mosaddeq voted for nationalizing their oil industry which upset western oil barons like the Rockefellers. On April 4th, 1953, CIA director Allen Dulles transferred $1 million to Iranian General Fazlollah Zahedi to be used “in any way that would bring about the fall of Mosaddeq.” Coup leaders first planted anti-Mosaddeq propaganda throughout the Iranian press, held demonstrations, and bribed officials. Then they began committing terror attacks to blame on Mosaddeq hoping to bring public sentiment away from their hero. They machine-gunned civilians, bombed mosques, and then passed out pamphlets saying, “Up with Mosaddeq, up with Communism, down with Allah.” Zahedi’s coup took place between August 15th and 19th after which the CIA sent $5 million more for helping their new government consolidate power. Soon America controlled half of Iran’s oil production and American weapons merchants moved in making almost $20 billion off Iran in the next 20 years.
“In 1953 the Central Intelligence Agency working in tandem with MI6 overthrew the democratically-elected leader of Iran Dr. Mohammed Mosaddeq. Mosaddeq had been educated in the west, was pro-America, and had driven communist forces out of the north of his country shortly after being elected in 1951. Mosaddeq then nationalized the oil fields and denied British Petroleum a monopoly. The CIA’s own history department at cia.gov details how U.S. and British intelligence agents carried out terror attacks and then subsequently blamed them on Mosaddeq … The provocations included propaganda, demonstrations, bribery, agents of influence, and false flag operations. They bombed the home of a prominent religious leader and blamed it on Moseddeq. They attacked mosques, machine-gunned crowds, and then handed out thousands of handbills claiming that Moseddeq had done it … Dr. Mohammed Moseddeq, who was incarcerated for the duration of his life, fared better than any of his ministers who were executed just days after the successful coup for crimes that MI6 and the CIA had committed.” -Alex Jones, “Terrorstorm” DVD
In 1954 the CIA performed its second coup d’etat overthrow of a foreign democracy; this time it was Guatemala, whose popular leader Jacobo Arbenz Guzman had recently nationalized 1.5 million acres of land for the peasants. Before this, only 2.2% of Guatemala’s land-owners owned 70% of the land, which included that of United Fruit Co. whose board of directors were friends with the Dulles brothers and wanted to keep Guatemala a banana republic. So once again the CIA sent in propagandists and mercenaries, trained militia groups, bombed the capital, and installed their puppet dictator Castillo Armas, who then gave United Fruit Co. and the other 2.2% land-owners everything back. Military dictators ruled Guatemala for the next 30 years killing over 100,000 citizens. Guatemalan coroners were reported saying they could not keep up with the bodies. The CIA called it Operation Success.
“The CIA has overthrown functioning democracies in over twenty countries.” -John Stockwell, former CIA official
They always follow the same strategy. First, globalist interests are threatened by a popular or democratically elected foreign leader; leaders who help their populations nationalize foreign-owned industries, protect workers, redistribute wealth/land and other such actions loved by the lower and middle-class majority, hated by the super-rich minority. Next, the CIA identifies and co-operates with opposition militia groups within the country, promising them political power in trade for American business freedom. Then they are hired, trained and funded to overthrow the current administration through propaganda, rigged elections, blackmail, infiltration/disruption of opposition parties, intimidation, torture, economic sabotage, death squads and assassinations. Eventually the CIA-backed militia group stages a coup and installs their corporate sympathizer-dictator and the former leaders are propagated as having been radicals or communists and the rest of the world is taught to shrug and view American imperialism as necessary world policing. The CIA has now evolved this whole racket into a careful science which they teach at the infamous “School of the Americas.” They also publish books like “The Freedom Fighter’s Manual” and “The Human Resource Exploitation Training Manual” teaching methods of torture, blackmail, interrogation, propaganda and sabotage to foreign military officials.
Starting in 1954 the CIA ran operations attempting to overthrow the communist North Vietnamese government, while supporting the Ngo Dinh Diem regime in South Vietnam. From 1957-1973 the CIA conducted what has been termed “The Secret War” in Laos during which they carried out almost one coup per year in an effort to overthrow their democracy. After several unsuccessful attempts, the US began a bombing campaign, dropping more explosives and planting more landmines on Laos during this Secret War than during all of World War II. Untold thousands died and a quarter of the Laotian people became refugees often living in caves. Right up to the present, Laotians are killed/maimed almost daily from unexploded landmines. In 1959 the US helped install “Papa Doc” Duvalier, the Haitian dictator whose factions killed over 100,000. In 1961 CIA Operation Mongoose attempted and failed to overthrow Fidel Castro. Also in 1961 the CIA assassinated the Dominican Republic’s leader Rafael Trujillo, assassinated Zaire’s democratically-elected Patrice Lumumba, and staged a coup against Ecuador’s President Jose Velasco, after which US President JFK fired CIA director Allen Dulles. In 1963 the CIA was back in the Dominican Republic and Ecuador performing military coups overthrowing Juan Bosch and President Arosemana. In 1964 another CIA-funded/armed coup overthrew Brazil’s democratically-elected Joao Goulart replacing him with Dictator General Castelo Branco, CIA-trained secret police, and marauding death squads. In 1965 the CIA performed coups in Indonesia and Zaire and installed oppressive military dictators; General Suharto in Indonesia would then go on to slaughter nearly a million of his countrymen. In 1967 a CIA-backed coup overthrew the government of Greece. In 1968 they helped capture Che Guevara in Bolivia. In 1970 they overthrew Cambodia’s popular Prince Sahounek, an action that greatly strengthened the once minor opposition Khmer Rouge party who went on to murder millions. In 1971 they backed a coup in Bolivia and installed Dictator Hugo Banzer who went on torture and murder over 2000 of his political opponents. In 1973 they assassinated Chile’s democratically-elected Salvador Allende and replaced him with General Augusto Pinochet who murdered thousands of his civilians. On and on it goes; The Association for Responsible Dissent put out a report estimating that by 1987, 6 million people worldwide had died resulting from CIA covert ops. Since then there have been many untold millions more.
“Throughout the world, on any given day, a man, woman or child is likely to be displaced, tortured, killed or disappeared, at the hands of governments or armed political groups. More often than not, the United States shares the blame.” -Amnesty International annual report on U.S. Military aid and human rights, 1996
1979-1989 CIA Operation Cyclone, with joint funding from Britain’s MI6, heavily armed and trained over 100,000 Afghani Mujahideen (“holy warriors”) during the Soviet war in Afghanistan. With the help of the Pakistani ISI (Inter-Services Intelligence), billions of dollars were given to create this Islamic army. Selig Harrison from the Woodrow Wilson International Centre for Scholars stated, “The CIA made a historic mistake in encouraging Islamic groups from all over the world to come to Afghanistan. The US provided $3 billion [now many more billion] for building up these Islamic groups, and it accepted Pakistan’s demand that they should decide how this money should be spent … Today that money and those weapons have helped build up the Taliban … [who] are now making a living out of terrorism.”
“The United States has been part and parcel to supporting the Taliban all along, and still is let me add … You have a military government in Pakistan now that is arming the Taliban to the teeth … Let me note; that [US] aid has always gone to Taliban areas … And when people from the outside try to put aid into areas not controlled by the Taliban, they are thwarted by our own State Department … Pakistan [has] initiated a major resupply effort, which eventually saw the defeat, and caused the defeat, of almost all of the anti-Taliban forces in Afghanistan.” -Congressional Rep. Dana Rohrbacher, the House International Relations Committee on Global Terrorism and South Asia, 2000
British Foreign Secretary Robin Cook stated before the House of Commons that “Al Qaeda” is not actually a terrorist group, but a database of international Mujahadden and arms dealers/smugglers used by the CIA to funnel arms, money, and guerrillas. The word “Al Qaeda” itself literally translates to “the database.” Not only did the CIA create the Taliban and Al-Qaeda, they continued funding them right up to the 9/11 attacks blamed on them. For example, four months prior to 9/11, in May, 2001, Colin Powell gave another $43 million in aid to the Taliban.
“Not even the corporate US media could whitewash these facts and so explained it away by alleging that US officials had sought cooperation from Pakistan because it was the original backer of the Taliban, the hard-line Islamic leadership of Afghanistan accused by Washington of harboring Bin Laden. Then the so called ‘missing link’ came when it was revealed that the head of the ISI was the principal financier of the 9/11 hijackers ... Pakistan and the ISI is the go between of the global terror explosion. Pakistan's military-intelligence apparatus, which literally created and sponsored the Taliban and Al Qaeda, is directly upheld and funded by the CIA. These facts are not even in dispute, neither in the media nor in government. Therefore when we are told by the neocon heads of the new world order that they are doing everything in their power to dismantle the global terror network what we are hearing is the exact opposite of the truth. They assembled it, they sponsored it and they continue to fund it. As any good criminal should, they have a middleman to provide plausible deniability, that middleman is the ISI and the military dictatorship of Pakistan.” -Steve Watson, “U.S. Intel Officer: Al Qaeda Leadership Allowed to Operate Freely” (http://www.infowars.net/articles/july2007/160707ISI.htm)
In a late 1980’s Newsweek article, outspoken opponent of President Bush and recently assassinated Pakistani Prime Minister Benazir Bhutto, told George Bush Sr., “you are creating a Frankenstein,” concerning the growing Islamist movement. She also came out in 2007 to say that Osama Bin Laden was already long dead having been murdered by Omar Sheikh. She was murdered herself a month after the interview, only two weeks before the Pakistani 2008 general elections. | <urn:uuid:96c4ba0b-5575-4365-9b5c-1164b7bca6d5> | {
"date": "2013-05-18T07:00:01",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9628726243972778,
"score": 2.859375,
"token_count": 3320,
"url": "http://www.atlanteanconspiracy.com/2008/09/history-of-cia.html?showComment=1358287968011"
} |
Deciphering the function and regulation of AUTS2
University of California, San Francisco
Autism is a neurodevelopmental disorder with complex genetic and environmental causes. Many gene mutations have been associated with autism; however, they explain only a small part of the genetic cause for this disorder. 98% of our genome does not encode for protein and is thus termed noncoding. In this noncoding space are gene regulatory sequences that tell genes when, where and at what amount to turn on or off. Mutations in these gene regulatory elements could thus be an important cause of autism. Despite the potential importance of these noncoding gene regulatory regions in autism susceptibility, very few studies have been performed trying to implicate them in this disorder. This pilot study focuses on a strong autism candidate gene, the autism susceptibility candidate 2 (AUTS2) gene. Mutations in its regulatory elements have been associated with autism and its function is not well known. Using both zebrafish and mice as the model organisms, the project aims to identify noncoding gene regulatory sequences of AUTS2. The fellow will then look to see if any individuals with autism have mutations in the regulatory regions identified. They will also reduce the expression of this gene in zebrafish and look for abnormalities to further clarify its function. This study promises to further our understanding of how differences in the noncoding region of the genome can lead to autism. It also aims to advance our understanding of a gene of unknown function that has been implicated in autism. | <urn:uuid:b119f23a-e8b4-478b-8b57-2472864187b9> | {
"date": "2013-05-18T05:50:54",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9440978765487671,
"score": 2.53125,
"token_count": 307,
"url": "http://www.autismspeaks.org/genetics/grants/deciphering-function-and-regulation-auts2"
} |
By Mike Bennighof, Ph.D.
During the 1700s, European armies grew enormously in size. The Seven Years’ War of 1756–1763 heightened the trend, and by the end of the Napoleonic Wars field armies had become enormous. Forces of 100,000 or even more, unheard of a century before, were not at all unusual by 1815.
The French army introduced the concept of a corps d’armee, a body of infantry, cavalry and artillery plus essential services. The corps could fight alone or in cooperation with other corps, and included all necessary combat and administrative elements. By the end of the Napoleonic Wars, all participants had organized their troops into corps, usually made up of varying numbers of divisions and brigades.
During the years after 1815, some nations kept their corps structure in place during peacetime, using them to administer recruiting, training and other non-combat functions. This would speed mobilization and keep the staffs employed. The size and composition of corps also became regularized, with each usually having the same number and types of subordinate units.
By the middle of the 19th century, an army corps had become defined as the number of troops that could be deployed from a single road in less than two hours: roughly 20,000 men. That rule of thumb had been badly exceeded as extra troops were added: cavalry, engineers, artillerymen, light infantry, medical services, supply columns and more.
The Prussian corps organization used in the 1866 Austro-Prussian War had been introduced as part of War Minister Albrecht von Roon’s reforms starting in 1860. In 1859, the Prussian Army mobilized its four army corps for war on the side of Austria against France. The mobilization found many troops untrained, officers of poor quality and supply services either insufficient or non-existent.
It also showed just how unwieldy the army’s corps organization would prove in action. The German Confederation, which included both Austria and Prussia along with 36 other German states, had adopted a corps of four divisions. Each division consisted in turn of two or three brigades, each brigade with two regiments of two battalions each plus one of light infantry. All told, a German division would go to war with 10 or 15 battalions, a corps with between 40 and 60.
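To make that arithmetic concrete, here is a minimal Python sketch (my own illustration — the function names and structure come neither from the article nor from the game) tallying battalions under the Confederation scheme just described; it reproduces the 10 or 15 battalions per division and 40 to 60 per corps cited above.
```python
# Battalion counts for the old German Confederation organization:
# each brigade = 2 regiments x 2 battalions + 1 light infantry battalion,
# each division = 2 or 3 such brigades, each corps = 4 divisions.

def battalions_per_brigade(regiments=2, per_regiment=2, light=1):
    return regiments * per_regiment + light            # 5 battalions

def battalions_per_division(brigades):
    return brigades * battalions_per_brigade()         # 10 or 15 battalions

def battalions_per_corps(brigades_per_division, divisions=4):
    return divisions * battalions_per_division(brigades_per_division)

for brigades in (2, 3):
    print(brigades, "brigades/division ->",
          battalions_per_division(brigades), "per division,",
          battalions_per_corps(brigades), "per corps")
# 2 brigades/division -> 10 per division, 40 per corps
# 3 brigades/division -> 15 per division, 60 per corps
```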
Roon rationalized this organization; in battle, he believed, a general was most efficient with fewer maneuver elements to command. A new-model Prussian infantry corps would have two divisions. Each division in turn had two brigades, and each of them had two regiments. The regiments would be larger, with three battalions rather than the former two, as a regimental colonel was expected to control all three by line of sight.
A brigade commander only had to control the two regiments under his command. At the division level, things got more complex. The division controlled two brigades, plus an artillery detachment of four six-gun batteries. These would usually be parceled out to the brigades in action. During peacetime the division was responsible for either a pioneer battalion or a light infantry battalion; during wartime these would be held in the corps reserve.
The corps controlled the two infantry divisions, plus attachments of artillery and cavalry. This varied from four to seven batteries (six guns each) and two to five cavalry regiments.
Austria also reformed its corps organization in 1860, based on the lessons of the 1859 war. An Austrian corps had included two or three divisions, each in turn of two or three five-battalion brigades. Each brigade included the four field battalions of a single regiment plus a light infantry battalion: usually jägers but in a few cases grenzers (Croatian border troops) or volunteer student battalions.
Austrian generals performed poorly in the 1859 war, and the reform commission appointed after the war recommended using fewer of them. In particular, it pointed out that the small brigades made regimental colonels superfluous. A peacetime regiment had contained four field battalions and a grenadier battalion; now they would have three field battalions, a fourth reserve battalion and in wartime a fifth training battalion. Two of these three-battalion regiments would be grouped in a brigade along with a light infantry battalion and an eight-gun artillery battery. It was a powerful and flexible organization, led by a major general (Austria did not have a “brigadier general” rank and this was the imperial army’s equivalent). The larger brigades required fewer light infantry battalions, allowing the role to be filled exclusively by jägers.
The organization became less flexible at the larger echelons. An Austrian corps included four infantry brigades, a cavalry regiment and a brigade-sized artillery reserve as well as engineer, supply and medical units. The new arrangement required fewer general officers, which had been the goal. But handling six maneuver elements proved beyond the capability of most Austrian corps staffs in 1866, and the intermediate stage of division headquarters gave Austria’s Prussian opponents a decided advantage in flexibility and reaction speed. Though the Prussian staff was undoubtedly better organized and more efficient than their Austrian counterparts, their organization also gave them a lighter workload.
In Battles of 1866: Frontier Battles the units are infantry brigades, cavalry regiments and artillery batteries, but players maneuver their units by corps. The corps are activated by the army command, or through the initiative of the corps commander. The Prussians generally activate in a much more predictable fashion, and can get all of their units into action thanks to the division commanders. An Austrian corps is much more difficult to handle and often only gets into action piece by piece.
Click here to order Battles of 1866: Frontier Battles now. | <urn:uuid:7d84fe71-cb98-4aca-863d-b31baad541c3> | {
"date": "2013-05-18T07:12:17",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9506396651268005,
"score": 3.5625,
"token_count": 1343,
"url": "http://www.avalanchepress.com/PrussianOrganization.php"
} |
Viewed against a dangerous and costly backdrop, clear team communication is obviously essential to create a safe, productive and effective work environment on the ramp. Tractor operators and wing walkers need to warn each other of impending dangers. The tractor operator needs to keep the flight deck informed of ground movement. And all ground personnel should at least be able to hear the flight deck and each other during a pushback.
Ramp workers can do much more without the wire, the shouting or the hand signals.
A typical wireless pushback and towing configuration uses a portable transceiver for continuous two-way communication among one or more wing walkers and the tractor operator during aircraft movement. The tractor operator is free to concentrate on correct maneuvering, and all crew members can warn others instantly of impending dangers.
To optimize the flow of communication and minimize chatter, the system is configured so that all team members can hear the pilot, but only the tractor operator can talk directly to the flight deck. Because wireless communication increases coordination and enables real-time verbal warnings, it decreases the risk of accidents, shortens turn-around times, and increases the likelihood of hitting flight slots.
In addition to pushbacks and towing, wireless team communication systems can also be used to improve safety and efficiency during deicing, cargo and maintenance operations. In a deicing configuration, a wireless system connects the driver and the basket, and the system itself can be connected to two-way radios enabling communication with remote users.
Communication between the driver and the basket takes place on open microphone over a 1.9GHz (1.8GHz in the EU) encrypted frequency while also allowing radio monitoring and transmitting with a push-to-talk button on the headsets. Systems can be configured to enable multiple deicing crews to communicate while working on the same aircraft - further improving efficiency. Additional configurations are available for maintenance teams and are scalable to almost any size.
Choosing a Wireless Communication System
Wireless headset systems are available in a wide variety of configurations and price ranges. To ensure a system meets the diverse needs of ground support, consider the following factors carefully:
Is the system truly wireless? A number of so-called “wireless” systems actually require a wire from the headset to a radio or belt pack. While these systems allow freedom of movement, a belt pack or radio wire creates many of the same problems inherent in hardwire systems, particularly tangled cords. Moreover, belt packs generally have less than half the transmission range of self-contained systems worn on the head.
Does the system use DECT or Bluetooth technology? Transmission technology can dramatically affect how well wireless systems perform in the field. Systems that employ Bluetooth technology generally have a limited range and are subject to radio frequency interference from nearby devices.
Look for systems that use Digital Enhanced Cordless Telecommunications technology. DECT units generally offer up to 30 times more coverage and are less subject to interference than Bluetooth. DECT transmissions also have multipath capability, meaning the signal will bounce up, over and around objects in order to establish the best possible connection. DECT signals are also digitally encoded to ensure privacy.
Is the system full-duplex or half-duplex? Half-duplex systems allow communication in both directions, but only one direction at a time. That’s a walkie-talkie. On the other hand, full-duplex systems allow communication in both directions simultaneously. Full-duplex capabilities are an important safety consideration because they allow the parties to speak and hear others at the same time.
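To illustrate the difference in behavior (a toy sketch only — the class names are invented for this example and do not describe any particular headset product), the half-duplex model below refuses a second talker while the channel is busy, whereas the full-duplex model lets everyone transmit and listen at once.
```python
# Toy comparison of half-duplex vs. full-duplex voice channels.

class HalfDuplexChannel:
    """Walkie-talkie style: only one party may transmit at a time."""
    def __init__(self):
        self.active_talker = None

    def start_talking(self, who):
        if self.active_talker not in (None, who):
            return False               # channel busy -- caller must wait
        self.active_talker = who
        return True

    def stop_talking(self, who):
        if self.active_talker == who:
            self.active_talker = None


class FullDuplexChannel:
    """All parties may transmit and hear each other simultaneously."""
    def __init__(self):
        self.active_talkers = set()

    def start_talking(self, who):
        self.active_talkers.add(who)   # never blocks
        return True

    def stop_talking(self, who):
        self.active_talkers.discard(who)


half, full = HalfDuplexChannel(), FullDuplexChannel()
print(half.start_talking("tractor operator"))   # True
print(half.start_talking("wing walker"))        # False: must wait to give a warning
print(full.start_talking("tractor operator"))   # True
print(full.start_talking("wing walker"))        # True: warning heard immediately
```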
Is the system radio-compatible? Communication during pushback and towing is generally confined to the flight deck, wing walkers and tractor operator; however, other ground support functions may benefit from the ability to communicate with remote users over a two-way radio. Look for a system with maximum radio-interface flexibility. | <urn:uuid:64b6024d-747e-42b4-94a5-ce98e783c943> | {
"date": "2013-05-18T06:32:07",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9268864989280701,
"score": 2.578125,
"token_count": 784,
"url": "http://www.aviationpros.com/article/10603970/wireless-headsets-for-team-communication-improve-ramp-safety?page=2"
} |
Issued from the woods of the Loess Hills a few miles east of
NATCHEZ, MISSISSIPPI, USA
April 29, 2012
CATTLE EGRETS AMONG CATTLE
As in Mexico, around here if you pass by a pasture you're likely to see Cattle Egrets standing among or on the cows, as shown at http://www.backyardnature.net/n/12/120429eg.jpg.
Cattle Egrets in their breeding plumage, like the ones in the picture, can be distinguished from other white egrets and herons by the patches of light orange-brown on their crests and chests. Nonbreeding Cattle Egrets can be all white, and then their relatively thick, yellow beaks and thicker, shorter necks separate them from similar-sized, white herons and egrets found here, such as Snowy Egrets and juvenile Little Blue Herons.
I remember the first time Cattle Egrets were spotted in the rural part of western Kentucky where I grew up, possibly in 1963. Their appearance was so unusual that a farmer not particularly interested in Nature called my parents and said that a whole flock of big white birds had appeared in his pasture, and we went up to take a look. I was in college before I learned that they were Cattle Egrets, BUBULCUS IBIS.
My ornithology teacher told how the birds were undergoing one of the fastest and most widely ranging expansions of distribution ever seen among birds. Originally Cattle Egrets were native to southern Spain and Portugal, tropical and subtropical Africa and humid tropical and subtropical Asia. In the late 1800s they began expanding their range into southern Africa, and were first sighted in the Americas, on the boundary of Guiana and Suriname, in 1877, apparently having flown across the Atlantic Ocean. They didn't get permanently established there until the 1930s, though, but then they began expanding into much of the rest of the Americas, reaching western Kentucky around the early 60s. The species appears still to be expanding northward in western North America, but in the Northeast it seems to be in decline. Though they can turn up as far north as southern Canada, coast to coast, mostly they breed in the US Southeast.
The Wikipedia expert says that Cattle Egrets eat ticks and flies from cattle. They do that, but anyone who watches our birds awhile sees that mainly as the cattle move around they stir up creatures in the grass, which the egrets prey on. The cows' fresh manure also attracts flies for them.
MATING BOX TURTLES
It's interesting to see how turtles manage it, but for many readers familiar with box turtles in other parts of North America the picture may raise the question of why those in our picture bear different colors and patterns than theirs. What's happening is that Box Turtles are represented by six intergrading subspecies.
Hillary's Gulf Coast location is supposed to be home to the Gulf Coast subspecies, Terrapene carolina ssp. major. However, that subspecies is described as having a brownish top shell, or carapace, sometimes with a few dull spots or rays, but nothing like these bright, yellow lines. I can't say what's going on. Apparently Box Turtle taxonomy is a bit tricky.
RESTING CRANE FLY
That looks like a mosquito but you can see from how much of the leaf he covers that he's far too large to be any mosquito species found here. Also, he lacks the hypodermic-like proboscis mosquitoes use to suck blood. No conspicuous mouthparts are visible on our crane fly because adult crane flies generally hardly eat at all, only occasionally lapping up a bit of pollen or sugar-rich flower nectar. Their maggot-like larvae feed on plant roots. Some species can damage crops.
Oosterbroek's monumental 2012 Catalogue of the Craneflies of the World -- free and online at http://ip30.eti.uva.nl/ccw/ -- recognizes 15,345 cranefly species, 1630 of them just in our Nearctic ecozone, which embraces the US, Canada, Greenland, and most of northern Mexico.
That's why when I shipped the picture to volunteer identifier Bea in Ontario it took more time than usual for her verdict to come in, and she was comfortable only with calling it the genus TIPULA.
Whatever our species, it's a pleasure to take the close-up shown at http://www.backyardnature.net/n/12/120429cg.jpg.
What are those things below the wings looking like needles with droplets of water at their ends? Those are "halteres," which commonly occur among the Fly Order of Insects, the Diptera. Though their purpose isn't known with certainty, it's assumed that they help control flight, enabling flies to make sudden mid-air changes in direction. From the evolutionary perspective, halteres are modified back wings. Most insects have two pairs, or four, wings, but not the Diptera, as the name implies -- di-ptera, as they say "two-wings" in classical Greek.
ADMIRING THE WHITE OAK
In that picture I'm holding a leaf so you can see its underside, much paler than other leaves' topsides. The tree's gray bark of narrow, vertical blocks of scaly plates is shown at http://www.backyardnature.net/n/12/120429qc.jpg.
I'm accustomed to seeing White Oaks on relatively dry upland soils so I was a little surprised when the tree in the picture showed up on a stream bank growing among Sycamores. In fact, White Oaks are fairly rare around here, completely absent in many upland forests where I'd expect them to be. Years ago I mentioned this in a Newsletter and a local reader responded that in this region White Oaks were wiped out many years ago by people cutting them as lumber and, more importantly, using them in the whisky distilling business. The online Flora of North America says that "In the past Quercus alba was considered to be the source of the finest and most durable oak lumber in America for furniture and shipbuilding."
There beside the stream, last year's crop of our White Oak's acorns had been washed away, but this season's were there in their first stages of growth, as seen at http://www.backyardnature.net/n/12/120429qb.jpg.
Traditionally early North Americans regarded the inner bark of White Oaks as highly medicinal. Extracts made from soaking the inner bark in water are astringent (puckery) and were used for gargling, and the old herbals describe the extract as tonic, stimulating and antiseptic. Other listed uses include for "putrid sore throat," diphtheria, hemorrhages, spongy or bleeding gums, and hemorrhoids. Many applications suggest adding a bit of capsicum, or hot pepper, to the extract.
Basically the notion seems to be that the bark's tannin -- the puckery element -- does the main medicinal service. Other oaks actually have more tannin than White Oak, but medicines made with them can be too harsh. White Oak extracts seem to have just the right amount.
The same tannin situation exists with regard to the edibility of acorns. The acorns of other oaks contain more tannin so they require more time and effort to make them edible. White Oak acorns have much less tannin, but even still there's enough to make them too bitter for humans to eat without treatment, which traditionally has been leaching acorn pulp in running water.
By the way, instructions for the kitchen leaching of acorn pulp appear at http://www.ehow.com/how_8427141_leach-acorns.html.
AMERICAN HOLLY FLOWERING
American Hollies are a different species from the English Holly often planted as ornamentals. American Holly bears larger leaves and produces fewer fruits. Hollies come in male or female trees (they're dioecious), and you can tell from the flowers in the upper, left of the above picture that here we have a male tree. A close-up of a male flower with its four out-thrusting stamens is at http://www.backyardnature.net/n/12/120429hp.jpg.
On a female flower the stamens would be rudimentary and there'd be an ovary -- the future fruit -- in the blossom's center.
Maybe because people are so used to seeing English Hollies planted up north often it's assumed that they're northern trees. In fact, American Holly is mainly native to the US Southeast, though along the Coastal Plain it reaches as far north as southern Connecticut. Around here it's strictly an understory tree.
The fruits are mildly toxic but you must eat a lot of them to get sick. Birds, deer, squirrels and other animals eat the fruits, which are drupes bearing several hard "stones." No critter seems to relish them, though, saving them mostly to serve as "emergency food" when other foods run out. That might explain why we see hollies holding their red fruits deep into the winter.
"BEGGAR'S LICE" ON MY SOCKS
Several kinds of plants produce stickery little fruits like that and they all can be called Beggar's Lice. When I tracked down the plant attaching its fruits to me, it was what's shown at http://www.backyardnature.net/n/12/120429my.jpg.
Several beggar's-lice-producing plants are similar to that, so before being sure what I really had I had to "do the botany." Here are details I focused on:
Leaves and stems were hairy, and leaves were rounded toward the base, sometimes clasping the stem, as shown at http://www.backyardnature.net/n/12/120429mw.jpg.
A close-up of a "beggar's louse" is shown stuck in my arm hairs at http://www.backyardnature.net/n/12/120429mx.jpg.
That last picture is sort of tricky. For, you expect the thing stuck to you to be a fruit with hooked spines, but the thing in the picture isn't a fruit. It's actually a baglike calyx surrounding much smaller fruit-like things. I crumbled some calyxes between my fingers and part of what resulted is shown at http://www.backyardnature.net/n/12/120429mv.jpg.
The four shiny things are not seeds. Maybe you've seen that the ovary of most mint flowers is divided into four more-or-less distinct parts. Each of those parts is called a nutlet, and that's what you're seeing. But other plant families beside the Mint produce nutlets.
Our beggar's-louse-producing plant is MYOSOTIS DISCOLOR, a member of the Borage Family, the Boraginaceae, which on the phylogenetic Tree of Life is adjacent to the Mint Family. Myosotis discolor is an invasive from Europe that so far has set up residence here and there in eastern and western North America, but so far seems to be absent in the center.
The English name is often given as Changing Forget-me-not, because Myosotis is the Forget-me-not genus, and in Latin dis-color says "two-colored," apparently referring to the fact that the flowers can be white or blue, though all I've seen here are white. But, this rangy little plant you never notice until its calyxes stick to you seems to have nothing to do with Forget-me-nots, unless you look at technical features. I think some editor must have made up the name "Changing Forget-me-not." Our plant very clearly is one of several "Beggar's Lice."
OATS ALONG THE ROAD
A spikelet plucked from the panicle is shown at http://www.backyardnature.net/n/12/120429ov.jpg.
The same spikelet opened to show the florets inside the glumes at http://www.backyardnature.net/n/12/120429ou.jpg.
This is Oat grass, AVENA SATIVA, the same species producing the oats of oatmeal. Oat spikelets differ from those of the vast majority of other grasses by the very large, boat-shaped glumes subtending the florets.
Glumes are analogous to a regular flower's calyx, so in that last picture of a spikelet, the glumes are the two large, green-and-white striped items at the left in the photograph. The vast majority of grass spikelets bear glumes much shorter than the florets above them. Also, notice that the slender, stiff, needlelike item, the awn, arises from a floret inside the spikelet and not from a glume.
Remember that you can review grass flower terminology at http://www.backyardnature.net/fl_grass.htm.
The spikelets of most Oat plants don't bear needlelike awns. You're likely to see both awned and awnless kinds growing as weeds in our area. When I first saw the awns I thought this might be one of the "Wild Oat" species, for several species reside in the Oat genus Avena, and one of those grows wild in the US Southeast. However, florets of the other species bear long, brownish hairs, and you can see that ours are hairless, or "glabrous." The other species' awns also are twisted, but regular Oat awns, when present, are rigid and straight. Both Oat species are native to Eurasia.
How did that Oat plant make its way to the side of our isolated Mississippi backroad? Near where the grass grew there was a large game farm where exotic animals are kept so hunters can pay high fees to kill them. I'm betting that the animals are fed oats. Our plant was in an often-flooded spot downstream from the farm, so maybe an oat grain had washed there.
That's a roadcut through a special kind of very fine-grained clay called loess. The word loess derives from the German Löß. A deep mantle of loess was deposited here at the end of the last Ice Age about 10,000 years ago. Deep loess deposits occur in a narrow band of upland immediately east of the Mississippi River over most of its entire course. The loess region sometimes is called the Loess Hills. Loess profoundly affects the area's ecology. For one thing, the farther east you go from the Mississippi River, the thinner the loess is, the poorer and more acidic the soil becomes, and the more pines you get instead of broadleaf deciduous trees.
Loess is so important here, and so interesting, that years ago I developed a web portal called "Loess Hills of the Lower Mississippi Valley," at http://www.backyardnature.net/loess/loess.html.
I had hoped to engage local folks in an effort to recognize the Loess Hills as a very interesting, scenic and biologically important, distinct region with ecotourism potential, but nothing ever came from it. At that site you can learn how "loess" can be pronounced, how it came to exist here, what's special about it, and much more.
One thing special about loess is that it erodes into vertical-sided roadcuts as in the picture. People such as road engineers who try to create gentle slopes are doomed to failure. I wish my farming Maya friends in the Yucatan, who must deal with very thin, rocky soil, could see the thick mantle of rich loess we have here.
NO MORE EMAILED NEWSLETTERS
From now on, to read the Newsletters you'll just have to remember to check out the most recently issued edition at http://www.backyardnature.net/n/.
Today's Newsletter is there now waiting for readers, with stories about Cattle Egrets, mating Box Turtles, craneflies, flowering holly trees, Beggar's Lice and more.
If you're on Facebook you can find the Facebook Newsletter page by searching for "Jim Conrad's Naturalist Newsletter." The weekly message left there will link to individual pages with images embedded in text. In today's message, for instance, you can click on "Cattle Egrets" and see a regular web page with text and a photo. I've configured my Facebook page to have a subscribe tab but so far one hasn't appeared. My impression is that if you "like" the Newsletter page, each week you'll receive a message with its link. Maybe not. I'm still figuring it out.
So, this is the end of eleven years of weekly delivered emails.
At first I was upset and annoyed, and thought of writing the 2,158 subscribers suggesting that complaints be made to FatCow at email@example.com. However, something interesting has happened.
Last week about a dozen subscribers accepted my invitation to check out the Newsletter's Facebook page. When they "liked" the page, I got to see their pictures, or at least their avatars. There were all kinds of folks, old and young, skinny and fat, white and brown, serious and joking, one fellow on a boat in Maine, a lady in India with a dot, or Bindi, in the middle of her forehead, someone's baby picture... What an amazing thing that all these people were interested in what I'd written!
So, in a way, FatCow.com's treatment has been a gift. It's resensitized me to my readership. Also, it's nudged me into a mental space where now I'm mentally prepared for the whole BackyardNature.net site to be removed permanently, for whatever reason they come up with. That extra sense of independence means a lot to me. Now if need be I'm ready to write Newsletters and just keep them in my computer, or write them in a notebook hidden in my trailer, or write them on leaves that I let float down the Mississippi River. I've already learned how to make ink from oak galls.
So, we're evolving here. I'm yielding when it's clear that the forces against us control critical resources, but I'm ready to experiment with new possibilities as they appear, and I continue to think, feel and write about the world around us, and share when I'm allowed to.
Good luck in your own evolutions. And thanks for these years of weekly inviting me into your lives.
Best wishes to all Newsletter readers,
To subscribe OR unsubscribe to this Newsletter, go to www.backyardnature.net/news/natnat.php.
Post your own backyard-nature observations and thoughts at http://groups.google.com/group/backyard-nature/
All previous Newsletters are archived at www.backyardnature.net/n/.
Visit Jim's backyard nature site at www.backyardnature.net | <urn:uuid:ed524a66-2327-4328-a7fc-f00a0d14d470> | {
"date": "2013-05-18T05:33:05",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9592416882514954,
"score": 3.59375,
"token_count": 4092,
"url": "http://www.backyardnature.net/n/12/120429.htm"
} |
The town's history
Although written differently, Tölz was mentioned for the first time in the records of 1180. In 1331, it was granted extensive "Marktrechte", i.e. the right to hold a market. In the 13th and 14th centuries many workshops (e.g. limeburners and raftsmen) settled in this area. A great fire destroyed large parts of the town in 1453, but with generous noble support reconstruction soon began. Thanks to its location on the river Isar and to the rafting and brewery trades, the town soon flourished; 22 breweries could be counted in 1721. Tölz also became famous for arts and crafts, especially its beautifully coloured chests, cases and beds. In 1845, iodine was found close to Tölz. Therefore, the market town of Markt Tölz became a "Bad" (the German word for a spa resort) in 1899. In 1906, it was recognized as a town, and in 1969 it received the rating "Heilklimatischer Kurort", which means that its climate is beneficial to health. In 2005, it also received the title "Moorheilbad", which means it is recognized as a mineral and medicinal mud-bath spa.
The town's coat of arms
The town's flag with the colours black and gold.
"date": "2013-05-18T05:48:50",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9755674004554749,
"score": 2.5625,
"token_count": 288,
"url": "http://www.bad-toelz.de/en/bad-toelz/stadt-bad-toelz/the-towns-history.html?refID=95"
} |
Yarrow: A Tough Groundcover Option
Image by Flickr / entheos
The white flower heads of the yarrow plant
Get acquainted with this hardy and handy groundcover
People are always looking to cover open rough spots in the lawn and around the property. Our hardy native yarrow (Achillea millefolium) establishes quickly and binds the soil to provide a soft, scented carpet. In the wilds of Western Canada, white flat-topped flower heads rise to 75 cm (30 in.) tall from a mass of ferny leaves.
Buy yarrow plants from a local garden centre or grow them from rooted stem fragments planted in fall or early spring. Many colour varieties are easy to raise from seed, and you can mow the spreading yarrow patch along with the grass.
Yarrow leaves rubbed on the skin or burned in a campfire keep away mosquitoes. A poultice from steeped leaves helps heal sores and reduce muscle pain. | <urn:uuid:e76953cb-2bd1-4921-8818-d81bd78db030> | {
"date": "2013-05-18T05:25:31",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.908327043056488,
"score": 2.734375,
"token_count": 203,
"url": "http://www.bcliving.ca/garden/yarrow-a-tough-groundcover-option"
} |
Convective heat flux
Convective heat flux is a flux depending on the temperature difference between the body and the adjacent fluid (liquid or gas) and is triggered by the *FILM card. It takes the form

q = h (T - T_0),

where q is the flux normal to the surface, h is the film coefficient, T is the body temperature and T_0 is the environment fluid temperature (also called sink temperature). Generally, the sink temperature is known. If it is not, it is an unknown in the system. Physically, the convection along the surface can be forced or free. Forced convection means that the mass flow rate of the adjacent fluid (gas or liquid) is known and its temperature is the result of heat exchange between body and fluid. This case can be simulated by CalculiX by defining network elements and using the *BOUNDARY card for the first degree of freedom in the midside node of the element. Free convection, for which the mass flow rate is an unknown too and a result of temperature differences, cannot be simulated.
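For concreteness, below is a minimal sketch of how a film condition with a known sink temperature might be specified in a CalculiX input deck. Only the *FILM keyword itself is taken from the text above; the element number, face label, sink temperature and film coefficient are illustrative assumptions, and the exact card layout should be checked against the *FILM keyword description.

*STEP
*HEAT TRANSFER, STEADY STATE
** Convective (film) flux on face F1 of element 20 (illustrative values):
** element or element set, face label, sink temperature, film coefficient
*FILM
20, F1, 293., 25.
*END STEP

Here the sink temperature (293.) is given directly, i.e. the common case in which it is known in advance.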
guido dhondt 2012-10-06
"date": "2013-05-18T05:29:49",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9330995082855225,
"score": 3.3125,
"token_count": 249,
"url": "http://www.bconverged.com/calculix/doc/ccx/html/node145.html"
} |
Guest Author - Heather Thomas
In most wild species, birds display dominance within the flock to determine their place or role within it. Males display and vocalize to win their choice of female. The term “pecking order” comes from natural bird behavior. Dominant behavior is natural and expected in the wild. We must be cautious, as humans, not to project our sense of right and wrong onto our feathered companions.
Similar to birds that still live in the wild, your bird may be only a few generations removed from the wild and will display natural behavior tendencies. Understanding these basic characteristics helps you understand how to interact with your bird and creates a happier flock experience for all.
The Lunging or Biting Bird
One must be cautious to label “bad” behavior as dominant behavior. Lunging and biting are most often caused by fear. Changing a bird’s environment too drastically such as cage placement, a new cage, rearranging the furniture in a room or even something as simple as changing the drapery on the window next to their cage can cause even a friendly bird to become seemingly aggressive. It is important to consider environmental changes before labeling your bird’s behavior as dominant. Typical dominant behavior that manifests itself as biting or lunging would be a bird that loves to sit only on your shoulder and bites you when you attempt to remove them. Protecting a favorite person or place in your home by lunging or biting at others who happen to come too close would also be correctly labeled as dominant behavior.
The Bully Bird
You may have a situation where you keep more than one bird in a cage. If you have a flock that works well together you will be able to tell who the dominant male is but it will not adversely affect your flock. If you have a dominant male that is a bully, you may observe him forcing other birds off of their perch or guarding a food dish and not letting anyone else eat from it. In worst-case scenarios, your bully will single out a victim and force that bird to dwell at the bottom of the cage or even injure and possibly kill this weaker bird. If you have a dominant bully, it is best to remove him from the cage and keep him in a cage by himself or with his mate.
The Anti-Social Bird
As much as you try, your bird has no interest in becoming your friend. This bird often flutters around its cage to avoid your hand or takes flight to escape your reach. I find this behavior is often typical of a bird with unclipped wings. Some people who keep birds believe it is cruel to clip a bird’s wings. I will cover this topic in depth at a later time, but here I will touch on the effect that not clipping your bird’s wings has on dominance. By the very act of keeping a bird as a pet, you are choosing to take this wonderful winged creature and transform it into your friend or companion. If you take this action, it changes the purpose of the animal. If you allow your bird to retain the ability to fly, you are permitting it to escape your reach and do whatever it wants. This may be fine if you want a wild bird as a pet. However, if you want a friendly bird, you want a bird dependent on you, a bird that does not fly away just because it wants to.
For a well-mannered bird, keep the flock mentality where you are the dominant bird. If you want a bird that respects you, you must maintain dominance. This is not achieved by cruel discipline. Observe your bird and be consistent with your expectations and interaction within your flock. Do not allow unacceptable dominant behavior to take root in your avian friend. | <urn:uuid:41e51805-6ce5-4e0e-812d-f25764378578> | {
"date": "2013-05-18T06:33:47",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9609537720680237,
"score": 3.40625,
"token_count": 762,
"url": "http://www.bellaonline.com/articles/art18014.asp"
} |
(BPT) - Everyone, including moms and doctors, can agree that a good night of sleep is necessary for good health, high energy, and an individual’s overall well-being. Not getting enough good sleep – or rapid eye movement sleep – can affect the mind and body’s ability to react appropriately to outside factors, the National Sleep Foundation reports.
Creating the perfect sleep environment is the first step toward ensuring a good night of sleep. March, the first month of spring, is also National Sleep Awareness Month, and it aims to remind everyone why a good night of zzz’s is so important. One in four adults in the United States experience occasional sleepiness, difficulty falling asleep, or waking up feeling un-refreshed at least a few times per week, according to the National Sleep Foundation.
Fortunately, you can implement these tips this spring, and create a comfortable sleeping environment in your home for both you and your family.
* Eliminate distractions – Electronics. Noises. Lights. Many items, such as laptops, TVs and cellphones, commonly found in bedrooms can cause distractions and prevent a person from entering REM sleep. Remove these items from the room. Also, consider running a fan or white noise machine to create a soft sound barrier, which will help muffle unexpected sounds like a person flushing the toilet or an engine rumbling loudly on the street outside.
* Establish comfort – Creating a sleep-conducive environment is an important factor in making the most out of every minute you sleep. Cuddle up each night with soft linens and create a calming atmosphere in the bedroom. To do this, try adding Downy Infusions Lavender Serenity liquid fabric softener when washing your sheets and sleepwear this season, to make your linens and sleepwear silky, soft and soothing. It will help lull you right into bed. With Downy you can wake up to a great scent and start the day off on the right side of the bed.
* Be routine – The human body reacts favorably to familiar and repeated movements. So consider following a routine every night, whether it’s taking a warm bath, reading a chapter in a book or journaling. The National Sleep Foundation advises against watching TV or using electronics as part of this routine because electronics can hinder quality sleep.
* Stay active – Sleep is needed to give the body energy to get through its daily activities. Conversely, daily activities are needed to tire the body out for a good night of sleep. Consider adding physical activities into your daily schedule so you can settle into bed between the covers each night, tired and ready for a good night of sleep.
Nobody enjoys walking around in a mental fog or having no energy during the day, so be sure to create the perfect sleep environment in your home for you and your family this spring. It will help you get the most out of every minute of your zzz’s. | <urn:uuid:83500816-72c7-4a5a-8edd-7e72cbd48886> | {
"date": "2013-05-18T08:03:04",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9243288636207581,
"score": 2.609375,
"token_count": 601,
"url": "http://www.bensoncountynews.com/21745/1979/onlinefeature/17716/creating-the-perfect-sleep-environment-this-spring"
} |
Clinical complications associated with diabetes may include the following:
* Cardiovascular disease, in many cases, is caused by atherosclerosis – an excess build-up of plaque on the inner wall of a large blood vessel, which restricts the flow of blood. Heart disease is the leading cause of diabetes-related deaths, and heart disease and stroke are two to four times more common in persons with diabetes.
* High blood pressure affects 73 percent of persons with diabetes.
* Periodontal (gum) disease occurs with greater frequency in persons with diabetes.
* Retinopathy or glaucoma (eye disease or blindness) – Blindness due to diabetic retinopathy is a more important cause of visual impairment in younger-onset people than in older-onset people. Males with younger-onset diabetes generally develop retinopathy more rapidly than females with younger-onset diabetes. Diabetes is the leading cause of blindness among adults ages 20 to 74.
"date": "2013-05-18T06:55:50",
"dump": "CC-MAIN-2013-20",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9248546957969666,
"score": 2.90625,
"token_count": 194,
"url": "http://www.bettermedicine.com/topic/diabetes/living-with-diabetes"
} |